
llmling_agent

Class info

Classes

Name (module): description

- Agent (llmling_agent.agent.agent): Agent for AI-powered interaction with LLMling resources and tools.
- AgentConfig (llmling_agent.models.agents): Configuration for a single agent in the system.
- AgentContext (llmling_agent.agent.context): Runtime context for agent execution.
- AgentPool (llmling_agent.delegation.pool): Pool managing message processing nodes (agents and teams).
- AgentsManifest (llmling_agent.models.manifest): Complete agent configuration manifest defining all available agents.
- ChatMessage (llmling_agent.messaging.messages): Common message format for all UI types.
- StructuredAgent (llmling_agent.agent.structured): Wrapper for Agent that enforces a specific result type.
- Team (llmling_agent.delegation.team): Group of agents that can execute together.
- TeamRun (llmling_agent.delegation.teamrun): Handles team operations with monitoring.
- ToolInfo (llmling_agent.tools.base): Information about a registered tool.

                      🛈 DocStrings

                      Agent configuration and creation.

                      Agent

                      Bases: MessageNode[TDeps, str], TaskManagerMixin

                      Agent for AI-powered interaction with LLMling resources and tools.

                      Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

This agent integrates LLMling's resource system with PydanticAI's agent capabilities. It provides:

- Access to resources through RuntimeConfig
- Tool registration for resource operations
- System prompt customization
- Signals
- Message history management
- Database logging
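The "generically typed" note above means an agent carries its dependency type and result type as type parameters. A minimal sketch of that pattern, assuming nothing about the library's internals (AgentSketch and its parse parameter are invented for illustration):

```python
from typing import Callable, Generic, TypeVar

TDeps = TypeVar("TDeps")
TResult = TypeVar("TResult")


class AgentSketch(Generic[TDeps, TResult]):
    """Illustrative: an agent parameterized by dependency and result types."""

    def __init__(self, deps: TDeps, parse: Callable[[str], TResult]) -> None:
        self.deps = deps  # injected dependencies, typed as TDeps
        self._parse = parse  # converts raw model output to TResult

    def run(self, prompt: str) -> TResult:
        # A real agent would call a model here; we just parse the prompt,
        # so the example stays self-contained.
        return self._parse(prompt)


# Usage: dependencies are a dict, results are enforced to be int --
# the same idea StructuredAgent applies to an Agent's result type.
agent = AgentSketch[dict, int]({"db": "conn"}, parse=int)
print(agent.run("42"))  # -> 42
```

Type checkers can then verify both what the agent needs (TDeps) and what callers get back (TResult) at every call site.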

                      Source code in src/llmling_agent/agent/agent.py
                      @track_agent("Agent")
                      class Agent[TDeps](MessageNode[TDeps, str], TaskManagerMixin):
                          """Agent for AI-powered interaction with LLMling resources and tools.
                      
                          Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]
                      
                          This agent integrates LLMling's resource system with PydanticAI's agent capabilities.
                          It provides:
                          - Access to resources through RuntimeConfig
                          - Tool registration for resource operations
                          - System prompt customization
                          - Signals
                          - Message history management
                          - Database logging
                          """
                      
                          @dataclass(frozen=True)
                          class AgentReset:
                              """Emitted when agent is reset."""
                      
                              agent_name: AgentName
                              previous_tools: dict[str, bool]
                              new_tools: dict[str, bool]
                              timestamp: datetime = field(default_factory=datetime.now)
                      
                           # Class-level annotations to work around a mypy inference issue
                          conversation: ConversationManager
                          talk: Interactions
                          model_changed = Signal(object)  # Model | None
                          chunk_streamed = Signal(str, str)  # (chunk, message_id)
                          run_failed = Signal(str, Exception)
                          agent_reset = Signal(AgentReset)
                      
                          def __init__(
                               # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                              self,
                              name: str = "llmling-agent",
                              provider: AgentType = "pydantic_ai",
                              *,
                              model: ModelType = None,
                              runtime: RuntimeConfig | Config | StrPath | None = None,
                              context: AgentContext[TDeps] | None = None,
                              session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                              system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                              description: str | None = None,
                              tools: Sequence[ToolType] | None = None,
                              capabilities: Capabilities | None = None,
                              mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                              resources: Sequence[Resource | PromptType | str] = (),
                              retries: int = 1,
                              result_retries: int | None = None,
                              end_strategy: EndStrategy = "early",
                              defer_model_check: bool = False,
                              input_provider: InputProvider | None = None,
                              parallel_init: bool = True,
                              debug: bool = False,
                          ):
                              """Initialize agent with runtime configuration.
                      
                              Args:
                                  runtime: Runtime configuration providing access to resources/tools
                                  context: Agent context with capabilities and configuration
                                   provider: Agent type to use ("pydantic_ai", "human", "litellm",
                                       a callable, or an AgentProvider instance)
                                  session: Memory configuration.
                                      - None: Default memory config
                                      - False: Disable message history (max_messages=0)
                                      - int: Max tokens for memory
                                      - str/UUID: Session identifier
                                      - SessionQuery: Query to recover conversation
                                      - MemoryConfig: Complete memory configuration
                                  model: The default model to use (defaults to GPT-4)
                                  system_prompt: Static system prompts to use for this agent
                                  name: Name of the agent for logging
                                  description: Description of the Agent ("what it can do")
                                  tools: List of tools to register with the agent
                                  capabilities: Capabilities for the agent
                                  mcp_servers: MCP servers to connect to
                                  resources: Additional resources to load
                                  retries: Default number of retries for failed operations
                                  result_retries: Max retries for result validation (defaults to retries)
                                  end_strategy: Strategy for handling tool calls that are requested alongside
                                                a final result
                                  defer_model_check: Whether to defer model evaluation until first run
                                  input_provider: Provider for human input (tool confirmation / HumanProviders)
                                  parallel_init: Whether to initialize resources in parallel
                                  debug: Whether to enable debug mode
                              """
                              from llmling_agent.agent import AgentContext
                              from llmling_agent.agent.conversation import ConversationManager
                              from llmling_agent.agent.interactions import Interactions
                              from llmling_agent.agent.sys_prompts import SystemPrompts
                              from llmling_agent.resource_providers.capability_provider import (
                                  CapabilitiesResourceProvider,
                              )
                              from llmling_agent_providers.base import AgentProvider
                      
                              self._infinite = False
                               # save some stuff for async init
                              self._owns_runtime = False
                              # prepare context
                              ctx = context or AgentContext[TDeps].create_default(
                                  name,
                                  input_provider=input_provider,
                                  capabilities=capabilities,
                              )
                              self._context = ctx
                              memory_cfg = (
                                  session
                                  if isinstance(session, MemoryConfig)
                                  else MemoryConfig.from_value(session)
                              )
                              super().__init__(
                                  name=name,
                                  context=ctx,
                                  description=description,
                                  enable_logging=memory_cfg.enable,
                              )
                              # Initialize runtime
                              match runtime:
                                  case None:
                                      ctx.runtime = RuntimeConfig.from_config(Config())
                                  case Config() | str() | PathLike():
                                      ctx.runtime = RuntimeConfig.from_config(runtime)
                                  case RuntimeConfig():
                                      ctx.runtime = runtime
                      
                              runtime_provider = RuntimePromptProvider(ctx.runtime)
                              ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                              # Initialize tool manager
                              all_tools = list(tools or [])
                              self.tools = ToolManager(all_tools)
                              self.tools.add_provider(self.mcp)
                              if builtin_tools := ctx.config.get_tool_provider():
                                  self.tools.add_provider(builtin_tools)
                      
                              # Initialize conversation manager
                              resources = list(resources)
                              if ctx.config.knowledge:
                                  resources.extend(ctx.config.knowledge.get_resources())
                              self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                              # Initialize provider
                              match provider:
                                  case "pydantic_ai":
                                      validate_import("pydantic_ai", "pydantic_ai")
                                      from llmling_agent_providers.pydanticai import PydanticAIProvider
                      
                                      if model and not isinstance(model, str):
                                          from pydantic_ai import models
                      
                                          assert isinstance(model, models.Model)
                                      self._provider: AgentProvider = PydanticAIProvider(
                                          model=model,
                                          retries=retries,
                                          end_strategy=end_strategy,
                                          result_retries=result_retries,
                                          defer_model_check=defer_model_check,
                                          debug=debug,
                                          context=ctx,
                                      )
                                  case "human":
                                      from llmling_agent_providers.human import HumanProvider
                      
                                      self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                  case Callable():
                                      from llmling_agent_providers.callback import CallbackProvider
                      
                                      self._provider = CallbackProvider(
                                          provider, name=name, debug=debug, context=ctx
                                      )
                                  case "litellm":
                                      validate_import("litellm", "litellm")
                                      from llmling_agent_providers.litellm_provider import LiteLLMProvider
                      
                                      self._provider = LiteLLMProvider(
                                          name=name,
                                          debug=debug,
                                          retries=retries,
                                          context=ctx,
                                          model=model,
                                      )
                                  case AgentProvider():
                                      self._provider = provider
                                      self._provider.context = ctx
                                  case _:
                                       msg = f"Invalid agent type: {type(provider)}"
                                      raise ValueError(msg)
                              self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                      
                              if ctx and ctx.definition:
                                  from llmling_agent.observability import registry
                      
                                  registry.register_providers(ctx.definition.observability)
                      
                              # init variables
                              self._debug = debug
                              self._result_type: type | None = None
                              self.parallel_init = parallel_init
                              self.name = name
                              self._background_task: asyncio.Task[Any] | None = None
                      
                              # Forward provider signals
                               self._provider.chunk_streamed.connect(self.chunk_streamed)
                               self._provider.model_changed.connect(self.model_changed)
                               self._provider.tool_used.connect(self.tool_used)
                      
                              self.talk = Interactions(self)
                      
                              # Set up system prompts
                              config_prompts = ctx.config.system_prompts if ctx else []
                              all_prompts: list[AnyPromptType] = list(config_prompts)
                              if isinstance(system_prompt, list):
                                  all_prompts.extend(system_prompt)
                              else:
                                  all_prompts.append(system_prompt)
                              self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
                      
                          def __repr__(self) -> str:
                              desc = f", {self.description!r}" if self.description else ""
                              tools = f", tools={len(self.tools)}" if self.tools else ""
                              return f"Agent({self.name!r}, provider={self._provider.NAME!r}{desc}{tools})"
                      
                          def __prompt__(self) -> str:
                              typ = self._provider.__class__.__name__
                              model = self.model_name or "default"
                              parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                              if self.description:
                                  parts.append(f"Description: {self.description}")
                              parts.extend([self.tools.__prompt__(), self.conversation.__prompt__()])
                      
                              return "\n".join(parts)
                      
                          async def __aenter__(self) -> Self:
                              """Enter async context and set up MCP servers."""
                              try:
                                  # Collect all coroutines that need to be run
                                  coros: list[Coroutine[Any, Any, Any]] = []
                      
                                  # Runtime initialization if needed
                                  runtime_ref = self.context.runtime
                                  if runtime_ref and not runtime_ref._initialized:
                                      self._owns_runtime = True
                                      coros.append(runtime_ref.__aenter__())
                      
                                  # Events initialization
                                  coros.append(super().__aenter__())
                      
                                  # Get conversation init tasks directly
                                  coros.extend(self.conversation.get_initialization_tasks())
                      
                                  # Execute coroutines either in parallel or sequentially
                                  if self.parallel_init and coros:
                                      await asyncio.gather(*coros)
                                  else:
                                      for coro in coros:
                                          await coro
                                  if runtime_ref:
                                      self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                  for provider in await self.context.config.get_toolsets():
                                      self.tools.add_provider(provider)
                              except Exception as e:
                                  # Clean up in reverse order
                                  if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                      await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                  msg = "Failed to initialize agent"
                                  raise RuntimeError(msg) from e
                              else:
                                  return self
                      
                          async def __aexit__(
                              self,
                              exc_type: type[BaseException] | None,
                              exc_val: BaseException | None,
                              exc_tb: TracebackType | None,
                          ):
                              """Exit async context."""
                              await super().__aexit__(exc_type, exc_val, exc_tb)
                              try:
                                  await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                              finally:
                                  if self._owns_runtime and self.context.runtime:
                                      self.tools.remove_provider("runtime")
                                      await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                  # for provider in await self.context.config.get_toolsets():
                                  #     self.tools.remove_provider(provider.name)
                      
                          @overload
                          def __and__(
                              self, other: Agent[TDeps] | StructuredAgent[TDeps, Any]
                          ) -> Team[TDeps]: ...
                      
                          @overload
                          def __and__(self, other: Team[TDeps]) -> Team[TDeps]: ...
                      
                          @overload
                          def __and__(self, other: ProcessorCallback[Any]) -> Team[TDeps]: ...
                      
                          def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                               """Create agent group using & operator.
                      
                              Example:
                                  group = analyzer & planner & executor  # Create group of 3
                                  group = analyzer & existing_group  # Add to existing group
                              """
                              from llmling_agent.agent import StructuredAgent
                              from llmling_agent.delegation.team import Team
                      
                              match other:
                                  case Team():
                                      return Team([self, *other.agents])
                                   case Callable():
                                       if has_return_type(other, str):
                                           agent_2 = Agent.from_callback(other)
                                       else:
                                           agent_2 = StructuredAgent.from_callback(other)
                                       agent_2.context.pool = self.context.pool
                                       return Team([self, agent_2])
                                  case MessageNode():
                                      return Team([self, other])
                                  case _:
                                      msg = f"Invalid agent type: {type(other)}"
                                      raise ValueError(msg)
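                           # Illustrative sketch, not part of the source: `&` builds a parallel
                           # Team, flattening an existing Team operand:
                           #     team = analyzer & planner           # Team([analyzer, planner])
                           #     bigger = analyzer & Team([a, b])    # Team([analyzer, a, b])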
                      
                          @overload
                          def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
                      
                          @overload
                          def __or__[TOtherDeps](
                              self,
                              other: MessageNode[TOtherDeps, Any],
                          ) -> TeamRun[Any, Any]: ...
                      
                          @overload
                          def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
                      
                          def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun:
                              # Create new execution with sequential mode (for piping)
                              from llmling_agent import StructuredAgent, TeamRun
                      
                              if callable(other):
                                  if has_return_type(other, str):
                                      other = Agent.from_callback(other)
                                  else:
                                      other = StructuredAgent.from_callback(other)
                                  other.context.pool = self.context.pool
                      
                              return TeamRun([self, other])
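                           # Illustrative sketch, not part of the source: `|` chains nodes into a
                           # sequential TeamRun pipeline, wrapping a plain callable as an agent
                           # first (str-returning callables become Agent, others StructuredAgent):
                           #     pipeline = agent | str.upper | reviewer  # TeamRun of three nodes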
                      
                          @classmethod
                          def from_callback(
                              cls,
                              callback: ProcessorCallback[str],
                              *,
                              name: str | None = None,
                              debug: bool = False,
                              **kwargs: Any,
                          ) -> Agent[None]:
                              """Create an agent from a processing callback.
                      
                              Args:
                                  callback: Function to process messages. Can be:
                                      - sync or async
                                      - with or without context
                                      - must return str for pipeline compatibility
                                  name: Optional name for the agent
                                  debug: Whether to enable debug mode
                                  kwargs: Additional arguments for agent
                              """
                              from llmling_agent_providers.callback import CallbackProvider
                      
                              name = name or callback.__name__ or "processor"
                              provider = CallbackProvider(callback, name=name)
                              return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
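                           # Illustrative usage (names are assumptions, not from the source):
                           #     def summarize(text: str) -> str:
                           #         return text[:100]
                           #
                           #     agent = Agent.from_callback(summarize, name="summarizer")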
                      
                          @property
                          def name(self) -> str:
                              """Get agent name."""
                              return self._name or "llmling-agent"
                      
                          @name.setter
                          def name(self, value: str):
                              self._provider.name = value
                              self._name = value
                      
                          @property
                          def context(self) -> AgentContext[TDeps]:
                              """Get agent context."""
                              return self._context
                      
                          @context.setter
                          def context(self, value: AgentContext[TDeps]):
                              """Set agent context and propagate to provider."""
                              self._provider.context = value
                              self.mcp.context = value
                              self._context = value
                      
                          def set_result_type(
                              self,
                              result_type: type[TResult] | str | ResponseDefinition | None,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ):
                              """Set or update the result type for this agent.
                      
                              Args:
                                  result_type: New result type, can be:
                                      - A Python type for validation
                                      - Name of a response definition
                                      - Response definition instance
                                      - None to reset to unstructured mode
                                  tool_name: Optional override for tool name
                                  tool_description: Optional override for tool description
                              """
                              logger.debug("Setting result type to: %s for %r", result_type, self.name)
                              self._result_type = to_type(result_type)
                      
                          @property
                          def provider(self) -> AgentProvider:
                              """Get the underlying provider."""
                              return self._provider
                      
                          @provider.setter
                           def provider(self, value: AgentProvider | AgentType, model: ModelType = None):
                              """Set the underlying provider."""
                              from llmling_agent_providers.base import AgentProvider
                      
                              name = self.name
                              debug = self._debug
                               self._provider.chunk_streamed.disconnect(self.chunk_streamed)
                               self._provider.model_changed.disconnect(self.model_changed)
                               self._provider.tool_used.disconnect(self.tool_used)
                              match value:
                                  case AgentProvider():
                                      self._provider = value
                                  case "pydantic_ai":
                                      validate_import("pydantic_ai", "pydantic_ai")
                                      from llmling_agent_providers.pydanticai import PydanticAIProvider
                      
                                      self._provider = PydanticAIProvider(model=model, name=name, debug=debug)
                                  case "human":
                                      from llmling_agent_providers.human import HumanProvider
                      
                                      self._provider = HumanProvider(name=name, debug=debug)
                                  case "litellm":
                                      validate_import("litellm", "litellm")
                                      from llmling_agent_providers.litellm_provider import LiteLLMProvider
                      
                                      self._provider = LiteLLMProvider(model=model, name=name, debug=debug)
                                  case Callable():
                                      from llmling_agent_providers.callback import CallbackProvider
                      
                                      self._provider = CallbackProvider(value, name=name, debug=debug)
                                  case _:
                                       msg = f"Invalid agent type: {type(value)}"
                                      raise ValueError(msg)
                               self._provider.chunk_streamed.connect(self.chunk_streamed)
                               self._provider.model_changed.connect(self.model_changed)
                               self._provider.tool_used.connect(self.tool_used)
                              self._provider.context = self._context
                      
                          @overload
                          def to_structured(
                              self,
                              result_type: None,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> Self: ...
                      
                          @overload
                          def to_structured[TResult](
                              self,
                              result_type: type[TResult] | str | ResponseDefinition,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> StructuredAgent[TDeps, TResult]: ...
                      
                          def to_structured[TResult](
                              self,
                              result_type: type[TResult] | str | ResponseDefinition | None,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> StructuredAgent[TDeps, TResult] | Self:
                              """Convert this agent to a structured agent.
                      
                              If result_type is None, returns self unchanged (no wrapping).
                              Otherwise creates a StructuredAgent wrapper.
                      
                              Args:
                                  result_type: Type for structured responses. Can be:
                                      - A Python type (Pydantic model)
                                      - Name of response definition from context
                                      - Complete response definition
                                      - None to skip wrapping
                                  tool_name: Optional override for result tool name
                                  tool_description: Optional override for result tool description
                      
                              Returns:
                                  Either StructuredAgent wrapper or self unchanged
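
                               Example:
                                   A hypothetical conversion; `Sentiment` and `agent` are
                                   illustrative names, not part of the library:

                                   ```python
                                   from pydantic import BaseModel

                                   class Sentiment(BaseModel):
                                       label: str
                                       score: float

                                   structured = agent.to_structured(Sentiment)
                                   ```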
                              """
                              if result_type is None:
                                  return self
                      
                              from llmling_agent.agent import StructuredAgent
                      
                              return StructuredAgent(
                                  self,
                                  result_type=result_type,
                                  tool_name=tool_name,
                                  tool_description=tool_description,
                              )
                      
                          def is_busy(self) -> bool:
                              """Check if agent is currently processing tasks."""
                              return bool(self._pending_tasks or self._background_task)
                      
                          @property
                          def model_name(self) -> str | None:
                              """Get the model name in a consistent format."""
                              return self._provider.model_name
                      
                          def to_tool(
                              self,
                              *,
                              name: str | None = None,
                              reset_history_on_run: bool = True,
                              pass_message_history: bool = False,
                              share_context: bool = False,
                              parent: AnyAgent[Any, Any] | None = None,
                          ) -> ToolInfo:
                              """Create a tool from this agent.
                      
                              Args:
                                  name: Optional tool name override
                                  reset_history_on_run: Clear agent's history before each run
                                  pass_message_history: Pass parent's message history to agent
                                  share_context: Whether to pass parent's context/deps
                                  parent: Optional parent agent for history/context sharing
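
                               Example:
                                   The generated tool is named after the agent; the naming and
                                   normalization steps alone are plain string operations:

                                   ```python
                                   agent_name = "data_analyst"  # illustrative agent name
                                   tool_name = f"ask_{agent_name}"
                                   normalized = agent_name.replace("_", " ").title()
                                   # tool_name == "ask_data_analyst", normalized == "Data Analyst"
                                   ```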
                              """
                               tool_name = name or f"ask_{self.name}"
                      
                              async def wrapped_tool(prompt: str) -> str:
                                  if pass_message_history and not parent:
                                      msg = "Parent agent required for message history sharing"
                                      raise ToolError(msg)
                      
                                  if reset_history_on_run:
                                      self.conversation.clear()
                      
                                  history = None
                                  if pass_message_history and parent:
                                      history = parent.conversation.get_history()
                                      old = self.conversation.get_history()
                                      self.conversation.set_history(history)
                                  result = await self.run(prompt, result_type=self._result_type)
                                   if history is not None:
                                      self.conversation.set_history(old)
                                  return result.data
                      
                              normalized_name = self.name.replace("_", " ").title()
                              docstring = f"Get expert answer from specialized agent: {normalized_name}"
                              if self.description:
                                  docstring = f"{docstring}\n\n{self.description}"
                      
                              wrapped_tool.__doc__ = docstring
                              wrapped_tool.__name__ = tool_name
                      
                              return ToolInfo.from_callable(
                                  wrapped_tool,
                                  name_override=tool_name,
                                  description_override=docstring,
                              )
                      
                          @track_action("Calling Agent.run: {prompts}:")
                          async def _run(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                              result_type: type[TResult] | None = None,
                              model: ModelType = None,
                              store_history: bool = True,
                              tool_choice: bool | str | list[str] = True,
                              usage_limits: UsageLimits | None = None,
                              message_id: str | None = None,
                              conversation_id: str | None = None,
                              wait_for_connections: bool | None = None,
                          ) -> ChatMessage[TResult]:
                              """Run agent with prompt and get response.
                      
                              Args:
                                  prompts: User query or instruction
                                  result_type: Optional type for structured responses
                                  model: Optional model override
                                  store_history: Whether the message exchange should be added to the
                                                  context window
                                  tool_choice: Control tool usage:
                                      - True: Allow all tools
                                      - False: No tools
                                      - str: Use specific tool
                                      - list[str]: Allow specific tools
                                  usage_limits: Optional usage limits for the model
                                  message_id: Optional message id for the returned message.
                                              Automatically generated if not provided.
                                  conversation_id: Optional conversation id for the returned message.
                                  wait_for_connections: Whether to wait for connected agents to complete
                      
                              Returns:
                                  Result containing response and run information
                      
                              Raises:
                                  UnexpectedModelBehavior: If the model fails or behaves unexpectedly
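
                               Example:
                                   The `tool_choice` filtering is a plain match over tool names; a
                                   self-contained sketch of the selection logic (`select_tools` is an
                                   illustrative helper, not part of the API):

                                   ```python
                                   def select_tools(
                                       names: list[str], tool_choice: bool | str | list[str]
                                   ) -> list[str]:
                                       # Mirrors the match statement used internally
                                       match tool_choice:
                                           case str():
                                               return [n for n in names if n == tool_choice]
                                           case list():
                                               return [n for n in names if n in tool_choice]
                                           case False:
                                               return []
                                           case _:  # True: keep all tools
                                               return names

                                   selected = select_tools(["search", "calc"], "search")  # ["search"]
                                   ```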
                              """
                              message_id = message_id or str(uuid4())
                      
                              tools = await self.tools.get_tools(state="enabled")
                              match tool_choice:
                                  case str():
                                      tools = [t for t in tools if t.name == tool_choice]
                                  case list():
                                      tools = [t for t in tools if t.name in tool_choice]
                                  case False:
                                      tools = []
                                  case True | None:
                                      pass  # Keep all tools
                              self.set_result_type(result_type)
                              start_time = time.perf_counter()
                              sys_prompt = await self.sys_prompts.format_system_prompt(self)
                              message_history = self.conversation.get_history()
                              try:
                                  result = await self._provider.generate_response(
                                      *await convert_prompts(prompts),
                                      message_id=message_id,
                                      message_history=message_history,
                                      tools=tools,
                                      result_type=result_type,
                                      usage_limits=usage_limits,
                                      model=model,
                                      system_prompt=sys_prompt,
                                  )
                              except Exception as e:
                                  logger.exception("Agent run failed")
                                  self.run_failed.emit("Agent run failed", e)
                                  raise
                              else:
                                  response_msg = ChatMessage[TResult](
                                      content=result.content,
                                      role="assistant",
                                      name=self.name,
                                      model=result.model_name,
                                      message_id=message_id,
                                      conversation_id=conversation_id,
                                      tool_calls=result.tool_calls,
                                      cost_info=result.cost_and_usage,
                                      response_time=time.perf_counter() - start_time,
                                      provider_extra=result.provider_extra or {},
                                  )
                                  if self._debug:
                                      import devtools
                      
                                      devtools.debug(response_msg)
                                  return response_msg
                      
                          @asynccontextmanager
                          async def run_stream(
                              self,
                              *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              result_type: type[TResult] | None = None,
                              model: ModelType = None,
                              tool_choice: bool | str | list[str] = True,
                              store_history: bool = True,
                              usage_limits: UsageLimits | None = None,
                              message_id: str | None = None,
                              conversation_id: str | None = None,
                              wait_for_connections: bool | None = None,
                          ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                              """Run agent with prompt and get a streaming response.
                      
                              Args:
                                  prompt: User query or instruction
                                  result_type: Optional type for structured responses
                                  model: Optional model override
                                  tool_choice: Control tool usage:
                                      - True: Allow all tools
                                      - False: No tools
                                      - str: Use specific tool
                                      - list[str]: Allow specific tools
                                  store_history: Whether the message exchange should be added to the
                                                 context window
                                  usage_limits: Optional usage limits for the model
                                  message_id: Optional message id for the returned message.
                                              Automatically generated if not provided.
                                  conversation_id: Optional conversation id for the returned message.
                                  wait_for_connections: Whether to wait for connected agents to complete
                      
                              Returns:
                                  A streaming result to iterate over.
                      
                              Raises:
                                  UnexpectedModelBehavior: If the model fails or behaves unexpectedly
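
                               Example:
                                   A hypothetical streaming loop; the exact chunk interface is
                                   whatever `StreamingResponseProtocol` exposes (assumed here to be
                                   an async `stream()` iterator, as in pydantic-ai):

                                   ```python
                                   async with agent.run_stream("Summarize the report") as stream:
                                       async for chunk in stream.stream():
                                           print(chunk)
                                   ```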
                              """
                              message_id = message_id or str(uuid4())
                              user_msg, prompts = await self.pre_run(*prompt)
                              self.set_result_type(result_type)
                              start_time = time.perf_counter()
                              sys_prompt = await self.sys_prompts.format_system_prompt(self)
                              tools = await self.tools.get_tools(state="enabled")
                              match tool_choice:
                                  case str():
                                      tools = [t for t in tools if t.name == tool_choice]
                                  case list():
                                      tools = [t for t in tools if t.name in tool_choice]
                                  case False:
                                      tools = []
                                  case True | None:
                                      pass  # Keep all tools
                              try:
                                  message_history = self.conversation.get_history()
                                  async with self._provider.stream_response(
                                      *prompts,
                                      message_id=message_id,
                                      message_history=message_history,
                                      result_type=result_type,
                                      model=model,
                                      store_history=store_history,
                                      tools=tools,
                                      usage_limits=usage_limits,
                                      system_prompt=sys_prompt,
                                  ) as stream:
                                      yield stream
                                      usage = stream.usage()
                                      cost_info = None
                                      model_name = stream.model_name  # type: ignore
                                      if model_name:
                                          cost_info = await TokenCost.from_usage(
                                              usage,
                                              model_name,
                                              str(user_msg.content),
                                              str(stream.formatted_content),  # type: ignore
                                          )
                                      response_msg = ChatMessage[TResult](
                                          content=cast(TResult, stream.formatted_content),  # type: ignore
                                          role="assistant",
                                          name=self.name,
                                          model=model_name,
                                          message_id=message_id,
                                          conversation_id=conversation_id,
                                          cost_info=cost_info,
                                          response_time=time.perf_counter() - start_time,
                                          # provider_extra=stream.provider_extra or {},
                                      )
                                      self.message_sent.emit(response_msg)
                                      if store_history:
                                          self.conversation.add_chat_messages([user_msg, response_msg])
                                      await self.connections.route_message(
                                          response_msg,
                                          wait=wait_for_connections,
                                      )
                      
                              except Exception as e:
                                  logger.exception("Agent stream failed")
                                  self.run_failed.emit("Agent stream failed", e)
                                  raise
                      
                          async def run_iter(
                              self,
                              *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                              result_type: type[TResult] | None = None,
                              model: ModelType = None,
                              store_history: bool = True,
                              wait_for_connections: bool | None = None,
                          ) -> AsyncIterator[ChatMessage[TResult]]:
                              """Run agent sequentially on multiple prompt groups.
                      
                              Args:
                                  prompt_groups: Groups of prompts to process sequentially
                                  result_type: Optional type for structured responses
                                  model: Optional model override
                                  store_history: Whether to store in conversation history
                                  wait_for_connections: Whether to wait for connected agents
                      
                              Yields:
                                  Response messages in sequence
                      
                              Example:
                                  questions = [
                                      ["What is your name?"],
                                      ["How old are you?", image1],
                                      ["Describe this image", image2],
                                  ]
                                  async for response in agent.run_iter(*questions):
                                      print(response.content)
                              """
                              for prompts in prompt_groups:
                                  response = await self.run(
                                      *prompts,
                                      result_type=result_type,
                                      model=model,
                                      store_history=store_history,
                                      wait_for_connections=wait_for_connections,
                                  )
                                  yield response  # pyright: ignore
                      
                          def run_sync(
                              self,
                              *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              result_type: type[TResult] | None = None,
                              deps: TDeps | None = None,
                              model: ModelType = None,
                              store_history: bool = True,
                          ) -> ChatMessage[TResult]:
                              """Run agent synchronously (convenience wrapper).
                      
                              Args:
                                  prompt: User query or instruction
                                  result_type: Optional type for structured responses
                                  deps: Optional dependencies for the agent
                                  model: Optional model override
                                  store_history: Whether the message exchange should be added to the
                                                 context window

                               Returns:
                                  Result containing response and run information
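
                               Example:
                                   A minimal sketch for scripts without a running event loop
                                   (`agent` is an already-configured Agent):

                                   ```python
                                   message = agent.run_sync("What is the capital of France?")
                                   print(message.content)
                                   ```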
                              """
                              coro = self.run(
                                  *prompt,
                                  model=model,
                                  store_history=store_history,
                                  result_type=result_type,
                              )
                              return self.run_task_sync(coro)  # type: ignore
                      
                          async def run_job(
                              self,
                              job: Job[TDeps, str | None],
                              *,
                              store_history: bool = True,
                              include_agent_tools: bool = True,
                          ) -> ChatMessage[str]:
                              """Execute a pre-defined task.
                      
                              Args:
                                  job: Job configuration to execute
                                  store_history: Whether the message exchange should be added to the
                                                 context window
                                  include_agent_tools: Whether to include agent tools

                               Returns:
                                  Job execution result
                      
                              Raises:
                                  JobError: If task execution fails
                                  ValueError: If task configuration is invalid
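
                               Example:
                                   A hypothetical invocation; `deploy_job` is an illustrative
                                   pre-defined Job, not part of the library:

                                   ```python
                                   result = await agent.run_job(deploy_job, include_agent_tools=False)
                                   print(result.content)
                                   ```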
                              """
                              from llmling_agent.tasks import JobError
                      
                              if job.required_dependency is not None:  # noqa: SIM102
                                  if not isinstance(self.context.data, job.required_dependency):
                                      msg = (
                                          f"Agent dependencies ({type(self.context.data)}) "
                                          f"don't match job requirement ({job.required_dependency})"
                                      )
                                      raise JobError(msg)
                      
                              # Load task knowledge
                              if job.knowledge:
                                  # Add knowledge sources to context
                                  resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                      job.knowledge.resources
                                  )
                                  for source in resources:
                                      await self.conversation.load_context_source(source)
                                  for prompt in job.knowledge.prompts:
                                      await self.conversation.load_context_source(prompt)
                              try:
                                  # Register task tools temporarily
                                  tools = job.get_tools()
                                  with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                      # Execute job with job-specific tools
                                      return await self.run(await job.get_prompt(), store_history=store_history)
                      
                              except Exception as e:
                                  msg = f"Task execution failed: {e}"
                                  logger.exception(msg)
                                  raise JobError(msg) from e
                      
                          async def run_in_background(
                              self,
                              *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              max_count: int | None = None,
                              interval: float = 1.0,
                              block: bool = False,
                              **kwargs: Any,
                          ) -> ChatMessage[TResult] | None:
                              """Run agent continuously in background with prompt or dynamic prompt function.
                      
                              Args:
                                  prompt: Static prompt or function that generates prompts
                                  max_count: Maximum number of runs (None = infinite)
                                  interval: Seconds between runs
                                  block: Whether to block until completion
                                  **kwargs: Arguments passed to run()
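
                               Example:
                                   A sketch of a finite periodic run, then awaiting the final
                                   result (`agent` is an already-configured Agent):

                                   ```python
                                   await agent.run_in_background(
                                       "Check system status.",
                                       max_count=5,
                                       interval=60.0,
                                   )
                                   final = await agent.wait()
                                   ```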
                              """
                              self._infinite = max_count is None
                      
                              async def _continuous():
                                  count = 0
                                  msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                                  logger.debug(msg, self.name, max_count, interval, self.name)
                                  latest = None
                                  while max_count is None or count < max_count:
                                      try:
                                          current_prompts = [
                                              call_with_context(p, self.context, **kwargs) if callable(p) else p
                                              for p in prompt
                                          ]
                                          msg = "%s: Generated prompt #%d: %s"
                                          logger.debug(msg, self.name, count, current_prompts)
                      
                                           latest = await self.run(*current_prompts, **kwargs)
                                           msg = "%s: Continuous run result #%d"
                                           logger.debug(msg, self.name, count)
                      
                                          count += 1
                                          await asyncio.sleep(interval)
                                      except asyncio.CancelledError:
                                          logger.debug("%s: Continuous run cancelled", self.name)
                                          break
                                      except Exception:
                                          logger.exception("%s: Background run failed", self.name)
                                          await asyncio.sleep(interval)
                                  msg = "%s: Continuous run completed after %d iterations"
                                  logger.debug(msg, self.name, count)
                                  return latest
                      
                              # Cancel any existing background task
                              await self.stop()
                              task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                              if block:
                                  try:
                                      return await task  # type: ignore
                                  finally:
                                      if not task.done():
                                          task.cancel()
                              else:
                                  logger.debug("%s: Started background task %s", self.name, task.get_name())
                                  self._background_task = task
                                  return None
                      
                          async def stop(self):
                              """Stop continuous execution if running."""
                               if self._background_task and not self._background_task.done():
                                   self._background_task.cancel()
                                   try:
                                       await self._background_task
                                   except asyncio.CancelledError:
                                       pass
                                   self._background_task = None
                      
                          async def wait(self) -> ChatMessage[TResult]:
                              """Wait for background execution to complete."""
                              if not self._background_task:
                                  msg = "No background task running"
                                  raise RuntimeError(msg)
                              if self._infinite:
                                  msg = "Cannot wait on infinite execution"
                                  raise RuntimeError(msg)
                              try:
                                  return await self._background_task
                              finally:
                                  self._background_task = None
                      
                          def clear_history(self):
                              """Clear both internal and pydantic-ai history."""
                              self._logger.clear_state()
                              self.conversation.clear()
                              logger.debug("Cleared history and reset tool state")
                      
                          async def share(
                              self,
                              target: AnyAgent[TDeps, Any],
                              *,
                              tools: list[str] | None = None,
                              resources: list[str] | None = None,
                              history: bool | int | None = None,  # bool or number of messages
                              token_limit: int | None = None,
                          ):
                              """Share capabilities and knowledge with another agent.
                      
                              Args:
                                  target: Agent to share with
                                  tools: List of tool names to share
                                  resources: List of resource names to share
                                  history: Share conversation history:
                                          - True: Share full history
                                          - int: Number of most recent messages to share
                                          - None: Don't share history
                                  token_limit: Optional max tokens for history
                      
                              Raises:
                                  ValueError: If requested items don't exist
                                  RuntimeError: If runtime not available for resources
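
                               Example:
                                   A hypothetical sharing setup; `reviewer`, "search" and
                                   "style_guide" are illustrative names:

                                   ```python
                                   await agent.share(
                                       reviewer,
                                       tools=["search"],
                                       resources=["style_guide"],
                                       history=5,  # share the five most recent messages
                                       token_limit=2000,
                                   )
                                   ```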
                              """
                              # Share tools if requested
                              for name in tools or []:
                                  if tool := self.tools.get(name):
                                      meta = {"shared_from": self.name}
                                      target.tools.register_tool(tool.callable, metadata=meta)
                                  else:
                                      msg = f"Tool not found: {name}"
                                      raise ValueError(msg)
                      
                              # Share resources if requested
                              if resources:
                                  if not self.runtime:
                                      msg = "No runtime available for sharing resources"
                                      raise RuntimeError(msg)
                                  for name in resources:
                                      if resource := self.runtime.get_resource(name):
                                          await target.conversation.load_context_source(resource)  # type: ignore
                                      else:
                                          msg = f"Resource not found: {name}"
                                          raise ValueError(msg)
                      
                              # Share history if requested
                              if history:
                                  history_text = await self.conversation.format_history(
                                      max_tokens=token_limit,
                                      num_messages=history if isinstance(history, int) else None,
                                  )
                                  target.conversation.add_context_message(
                                      history_text, source=self.name, metadata={"type": "shared_history"}
                                  )
                      
                          def register_worker(
                              self,
                              worker: AnyAgent[Any, Any],
                              *,
                              name: str | None = None,
                              reset_history_on_run: bool = True,
                              pass_message_history: bool = False,
                              share_context: bool = False,
                          ) -> ToolInfo:
                              """Register another agent as a worker tool."""
                              return self.tools.register_worker(
                                  worker,
                                  name=name,
                                  reset_history_on_run=reset_history_on_run,
                                  pass_message_history=pass_message_history,
                                  share_context=share_context,
                                  parent=self if (pass_message_history or share_context) else None,
                              )
                      
                          def set_model(self, model: ModelType):
                              """Set the model for this agent.
                      
                              Args:
                                  model: New model to use (name or instance)
                      
                              Emits:
                                  model_changed signal with the new model
                              """
                              self._provider.set_model(model)
                      
                          async def reset(self):
                              """Reset agent state (conversation history and tool states)."""
                              old_tools = await self.tools.list_tools()
                              self.conversation.clear()
                              self.tools.reset_states()
                              new_tools = await self.tools.list_tools()
                      
                              event = self.AgentReset(
                                  agent_name=self.name,
                                  previous_tools=old_tools,
                                  new_tools=new_tools,
                              )
                              self.agent_reset.emit(event)
                      
                          @property
                          def runtime(self) -> RuntimeConfig:
                              """Get runtime configuration from context."""
                              assert self.context.runtime
                              return self.context.runtime
                      
                          @runtime.setter
                          def runtime(self, value: RuntimeConfig):
                              """Set runtime configuration and update context."""
                              self.context.runtime = value
                      
                          @property
                          def stats(self) -> MessageStats:
                              return MessageStats(messages=self._logger.message_history)
                      
                          @asynccontextmanager
                          async def temporary_state(
                              self,
                              *,
                              system_prompts: list[AnyPromptType] | None = None,
                              replace_prompts: bool = False,
                              tools: list[ToolType] | None = None,
                              replace_tools: bool = False,
                              history: list[AnyPromptType] | SessionQuery | None = None,
                              replace_history: bool = False,
                              pause_routing: bool = False,
                              model: ModelType | None = None,
                              provider: AgentProvider | None = None,
                          ) -> AsyncIterator[Self]:
                              """Temporarily modify agent state.
                      
                              Args:
                                  system_prompts: Temporary system prompts to use
                                  replace_prompts: Whether to replace existing prompts
                                  tools: Temporary tools to make available
                                  replace_tools: Whether to replace existing tools
                                  history: Conversation history (prompts or query)
                                  replace_history: Whether to replace existing history
                                  pause_routing: Whether to pause message routing
                                  model: Temporary model override
                                  provider: Temporary provider override
                              """
                              old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                              old_provider = self._provider
                      
                              async with AsyncExitStack() as stack:
                                  # System prompts (async)
                                  if system_prompts is not None:
                                      await stack.enter_async_context(
                                          self.sys_prompts.temporary_prompt(
                                              system_prompts, exclusive=replace_prompts
                                          )
                                      )
                      
                                  # Tools (sync)
                                  if tools is not None:
                                      stack.enter_context(
                                          self.tools.temporary_tools(tools, exclusive=replace_tools)
                                      )
                      
                                  # History (async)
                                  if history is not None:
                                      await stack.enter_async_context(
                                          self.conversation.temporary_state(
                                              history, replace_history=replace_history
                                          )
                                      )
                      
                                  # Routing (async)
                                  if pause_routing:
                                      await stack.enter_async_context(self.connections.paused_routing())
                      
                                  # Model/Provider
                                  if provider is not None:
                                      self._provider = provider
                                  elif model is not None:
                                      self._provider.set_model(model)
                      
                                  try:
                                      yield self
                                  finally:
                                      # Restore model/provider
                                      if provider is not None:
                                          self._provider = old_provider
                                      elif model is not None and old_model:
                                          self._provider.set_model(old_model)
                      

                      context property writable

                      context: AgentContext[TDeps]
                      

                      Get agent context.

                      model_name property

                      model_name: str | None
                      

                      Get the model name in a consistent format.

                      name property writable

                      name: str
                      

                      Get agent name.

                      provider property writable

                      provider: AgentProvider
                      

                      Get the underlying provider.

                      runtime property writable

                      runtime: RuntimeConfig
                      

                      Get runtime configuration from context.

                      AgentReset dataclass

                      Emitted when agent is reset.

                      Source code in src/llmling_agent/agent/agent.py
                      @dataclass(frozen=True)
                      class AgentReset:
                          """Emitted when agent is reset."""
                      
                          agent_name: AgentName
                          previous_tools: dict[str, bool]
                          new_tools: dict[str, bool]
                          timestamp: datetime = field(default_factory=datetime.now)
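Because the event carries before/after snapshots of tool enablement, a listener can diff them to see exactly what a reset changed. A hedged sketch (the `changed_tools` helper is illustrative, not part of the library):

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass(frozen=True)
class AgentReset:
    """Mirror of the event shown above, for illustration."""

    agent_name: str
    previous_tools: dict[str, bool]
    new_tools: dict[str, bool]
    timestamp: datetime = field(default_factory=datetime.now)


def changed_tools(event: AgentReset) -> dict[str, bool]:
    """Tools whose enabled state differs after the reset."""
    return {
        name: enabled
        for name, enabled in event.new_tools.items()
        if event.previous_tools.get(name) != enabled
    }


event = AgentReset(
    agent_name="demo",
    previous_tools={"search": False, "open_url": True},
    new_tools={"search": True, "open_url": True},
)
print(changed_tools(event))  # {'search': True}
```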
                      

                      __aenter__ async

                      __aenter__() -> Self
                      

                      Enter async context and set up MCP servers.

                      Source code in src/llmling_agent/agent/agent.py
                      async def __aenter__(self) -> Self:
                          """Enter async context and set up MCP servers."""
                          try:
                              # Collect all coroutines that need to be run
                              coros: list[Coroutine[Any, Any, Any]] = []
                      
                              # Runtime initialization if needed
                              runtime_ref = self.context.runtime
                              if runtime_ref and not runtime_ref._initialized:
                                  self._owns_runtime = True
                                  coros.append(runtime_ref.__aenter__())
                      
                              # Events initialization
                              coros.append(super().__aenter__())
                      
                              # Get conversation init tasks directly
                              coros.extend(self.conversation.get_initialization_tasks())
                      
                              # Execute coroutines either in parallel or sequentially
                              if self.parallel_init and coros:
                                  await asyncio.gather(*coros)
                              else:
                                  for coro in coros:
                                      await coro
                              if runtime_ref:
                                  self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                              for provider in await self.context.config.get_toolsets():
                                  self.tools.add_provider(provider)
                          except Exception as e:
                              # Clean up in reverse order
                              if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                  await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                              msg = "Failed to initialize agent"
                              raise RuntimeError(msg) from e
                          else:
                              return self
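`__aenter__` collects every setup coroutine first, then either awaits them concurrently via `asyncio.gather` (`parallel_init=True`) or strictly in order. The same dispatch in isolation, with toy init steps:

```python
import asyncio
from collections.abc import Coroutine
from typing import Any

initialized: list[str] = []


async def init_step(name: str) -> None:
    await asyncio.sleep(0)  # stand-in for real async setup work
    initialized.append(name)


async def startup(parallel_init: bool) -> None:
    coros: list[Coroutine[Any, Any, Any]] = [
        init_step("runtime"),
        init_step("events"),
        init_step("conversation"),
    ]
    if parallel_init and coros:
        await asyncio.gather(*coros)  # run setup steps concurrently
    else:
        for coro in coros:  # strict sequential order
            await coro


asyncio.run(startup(parallel_init=True))
print(sorted(initialized))  # ['conversation', 'events', 'runtime']
```

Parallel init helps when steps are independent (runtime, events, conversation sources); sequential mode preserves ordering when one step depends on another.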
                      

                      __aexit__ async

                      __aexit__(
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      )
                      

                      Exit async context.

                      Source code in src/llmling_agent/agent/agent.py
                      async def __aexit__(
                          self,
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      ):
                          """Exit async context."""
                          await super().__aexit__(exc_type, exc_val, exc_tb)
                          try:
                              await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                          finally:
                              if self._owns_runtime and self.context.runtime:
                                  self.tools.remove_provider("runtime")
                                  await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                      

                      __and__

                      __and__(other: Agent[TDeps] | StructuredAgent[TDeps, Any]) -> Team[TDeps]
                      
                      __and__(other: Team[TDeps]) -> Team[TDeps]
                      
                      __and__(other: ProcessorCallback[Any]) -> Team[TDeps]
                      
                      __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                      

                      Create agent group using the & operator.

                      Example

                      group = analyzer & planner & executor  # Create group of 3
                      group = analyzer & existing_group      # Add to existing group

                      Source code in src/llmling_agent/agent/agent.py
                      def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                          """Create agent group using the & operator.
                      
                          Example:
                              group = analyzer & planner & executor  # Create group of 3
                              group = analyzer & existing_group  # Add to existing group
                          """
                          from llmling_agent.agent import StructuredAgent
                          from llmling_agent.delegation.team import Team
                      
                          match other:
                              case Team():
                                  return Team([self, *other.agents])
                              case Callable():
                                  # the match arm already guarantees other is callable
                                  if has_return_type(other, str):
                                      agent_2 = Agent.from_callback(other)
                                  else:
                                      agent_2 = StructuredAgent.from_callback(other)
                                  agent_2.context.pool = self.context.pool
                                  return Team([self, agent_2])
                              case MessageNode():
                                  return Team([self, other])
                              case _:
                                  msg = f"Invalid agent type: {type(other)}"
                                  raise ValueError(msg)
                      

                      __init__

                      __init__(
                          name: str = "llmling-agent",
                          provider: AgentType = "pydantic_ai",
                          *,
                          model: ModelType = None,
                          runtime: RuntimeConfig | Config | StrPath | None = None,
                          context: AgentContext[TDeps] | None = None,
                          session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                          system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                          description: str | None = None,
                          tools: Sequence[ToolType] | None = None,
                          capabilities: Capabilities | None = None,
                          mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                          resources: Sequence[Resource | PromptType | str] = (),
                          retries: int = 1,
                          result_retries: int | None = None,
                          end_strategy: EndStrategy = "early",
                          defer_model_check: bool = False,
                          input_provider: InputProvider | None = None,
                          parallel_init: bool = True,
                          debug: bool = False,
                      )
                      

                      Initialize agent with runtime configuration.

                      Parameters:

                      Name Type Description Default
                      runtime RuntimeConfig | Config | StrPath | None

                      Runtime configuration providing access to resources/tools

                      None
                      context AgentContext[TDeps] | None

                      Agent context with capabilities and configuration

                      None
                      provider AgentType

                      Agent type to use (ai: PydanticAIProvider, human: HumanProvider)

                      'pydantic_ai'
                      session SessionIdType | SessionQuery | MemoryConfig | bool | int

                      Memory configuration.
                      - None: Default memory config
                      - False: Disable message history (max_messages=0)
                      - int: Max tokens for memory
                      - str/UUID: Session identifier
                      - SessionQuery: Query to recover conversation
                      - MemoryConfig: Complete memory configuration

                      None
                      model ModelType

                      The default model to use (defaults to GPT-4)

                      None
                      system_prompt AnyPromptType | Sequence[AnyPromptType]

                      Static system prompts to use for this agent

                      ()
                      name str

                      Name of the agent for logging

                      'llmling-agent'
                      description str | None

                      Description of the Agent ("what it can do")

                      None
                      tools Sequence[ToolType] | None

                      List of tools to register with the agent

                      None
                      capabilities Capabilities | None

                      Capabilities for the agent

                      None
                      mcp_servers Sequence[str | MCPServerConfig] | None

                      MCP servers to connect to

                      None
                      resources Sequence[Resource | PromptType | str]

                      Additional resources to load

                      ()
                      retries int

                      Default number of retries for failed operations

                      1
                      result_retries int | None

                      Max retries for result validation (defaults to retries)

                      None
                      end_strategy EndStrategy

                      Strategy for handling tool calls that are requested alongside a final result

                      'early'
                      defer_model_check bool

                      Whether to defer model evaluation until first run

                      False
                      input_provider InputProvider | None

                      Provider for human input (tool confirmation / HumanProviders)

                      None
                      parallel_init bool

                      Whether to initialize resources in parallel

                      True
                      debug bool

                      Whether to enable debug mode

                      False
                      Source code in src/llmling_agent/agent/agent.py
                      def __init__(
                          # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                          self,
                          name: str = "llmling-agent",
                          provider: AgentType = "pydantic_ai",
                          *,
                          model: ModelType = None,
                          runtime: RuntimeConfig | Config | StrPath | None = None,
                          context: AgentContext[TDeps] | None = None,
                          session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                          system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                          description: str | None = None,
                          tools: Sequence[ToolType] | None = None,
                          capabilities: Capabilities | None = None,
                          mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                          resources: Sequence[Resource | PromptType | str] = (),
                          retries: int = 1,
                          result_retries: int | None = None,
                          end_strategy: EndStrategy = "early",
                          defer_model_check: bool = False,
                          input_provider: InputProvider | None = None,
                          parallel_init: bool = True,
                          debug: bool = False,
                      ):
                          """Initialize agent with runtime configuration.
                      
                          Args:
                              runtime: Runtime configuration providing access to resources/tools
                              context: Agent context with capabilities and configuration
                              provider: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                              session: Memory configuration.
                                  - None: Default memory config
                                  - False: Disable message history (max_messages=0)
                                  - int: Max tokens for memory
                                  - str/UUID: Session identifier
                                  - SessionQuery: Query to recover conversation
                                  - MemoryConfig: Complete memory configuration
                              model: The default model to use (defaults to GPT-4)
                              system_prompt: Static system prompts to use for this agent
                              name: Name of the agent for logging
                              description: Description of the Agent ("what it can do")
                              tools: List of tools to register with the agent
                              capabilities: Capabilities for the agent
                              mcp_servers: MCP servers to connect to
                              resources: Additional resources to load
                              retries: Default number of retries for failed operations
                              result_retries: Max retries for result validation (defaults to retries)
                              end_strategy: Strategy for handling tool calls that are requested alongside
                                            a final result
                              defer_model_check: Whether to defer model evaluation until first run
                              input_provider: Provider for human input (tool confirmation / HumanProviders)
                              parallel_init: Whether to initialize resources in parallel
                              debug: Whether to enable debug mode
                          """
                          from llmling_agent.agent import AgentContext
                          from llmling_agent.agent.conversation import ConversationManager
                          from llmling_agent.agent.interactions import Interactions
                          from llmling_agent.agent.sys_prompts import SystemPrompts
                          from llmling_agent.resource_providers.capability_provider import (
                              CapabilitiesResourceProvider,
                          )
                          from llmling_agent_providers.base import AgentProvider
                      
                          self._infinite = False
                          # save some state for async init
                          self._owns_runtime = False
                          # prepare context
                          ctx = context or AgentContext[TDeps].create_default(
                              name,
                              input_provider=input_provider,
                              capabilities=capabilities,
                          )
                          self._context = ctx
                          memory_cfg = (
                              session
                              if isinstance(session, MemoryConfig)
                              else MemoryConfig.from_value(session)
                          )
                          super().__init__(
                              name=name,
                              context=ctx,
                              description=description,
                              enable_logging=memory_cfg.enable,
                          )
                          # Initialize runtime
                          match runtime:
                              case None:
                                  ctx.runtime = RuntimeConfig.from_config(Config())
                              case Config() | str() | PathLike():
                                  ctx.runtime = RuntimeConfig.from_config(runtime)
                              case RuntimeConfig():
                                  ctx.runtime = runtime
                      
                          runtime_provider = RuntimePromptProvider(ctx.runtime)
                          ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                          # Initialize tool manager
                          all_tools = list(tools or [])
                          self.tools = ToolManager(all_tools)
                          self.tools.add_provider(self.mcp)
                          if builtin_tools := ctx.config.get_tool_provider():
                              self.tools.add_provider(builtin_tools)
                      
                          # Initialize conversation manager
                          resources = list(resources)
                          if ctx.config.knowledge:
                              resources.extend(ctx.config.knowledge.get_resources())
                          self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                          # Initialize provider
                          match provider:
                              case "pydantic_ai":
                                  validate_import("pydantic_ai", "pydantic_ai")
                                  from llmling_agent_providers.pydanticai import PydanticAIProvider
                      
                                  if model and not isinstance(model, str):
                                      from pydantic_ai import models
                      
                                      assert isinstance(model, models.Model)
                                  self._provider: AgentProvider = PydanticAIProvider(
                                      model=model,
                                      retries=retries,
                                      end_strategy=end_strategy,
                                      result_retries=result_retries,
                                      defer_model_check=defer_model_check,
                                      debug=debug,
                                      context=ctx,
                                  )
                              case "human":
                                  from llmling_agent_providers.human import HumanProvider
                      
                                  self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                              case Callable():
                                  from llmling_agent_providers.callback import CallbackProvider
                      
                                  self._provider = CallbackProvider(
                                      provider, name=name, debug=debug, context=ctx
                                  )
                              case "litellm":
                                  validate_import("litellm", "litellm")
                                  from llmling_agent_providers.litellm_provider import LiteLLMProvider
                      
                                  self._provider = LiteLLMProvider(
                                      name=name,
                                      debug=debug,
                                      retries=retries,
                                      context=ctx,
                                      model=model,
                                  )
                              case AgentProvider():
                                  self._provider = provider
                                  self._provider.context = ctx
                              case _:
                msg = f"Invalid provider type: {type(provider)}"
                                  raise ValueError(msg)
                          self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                      
                          if ctx and ctx.definition:
                              from llmling_agent.observability import registry
                      
                              registry.register_providers(ctx.definition.observability)
                      
                          # init variables
                          self._debug = debug
                          self._result_type: type | None = None
                          self.parallel_init = parallel_init
                          self.name = name
                          self._background_task: asyncio.Task[Any] | None = None
                      
                          # Forward provider signals
                          self._provider.chunk_streamed.connect(self.chunk_streamed)
                          self._provider.model_changed.connect(self.model_changed)
                          self._provider.tool_used.connect(self.tool_used)
                      
                          self.talk = Interactions(self)
                      
                          # Set up system prompts
                          config_prompts = ctx.config.system_prompts if ctx else []
                          all_prompts: list[AnyPromptType] = list(config_prompts)
                          if isinstance(system_prompt, list):
                              all_prompts.extend(system_prompt)
                          else:
                              all_prompts.append(system_prompt)
                          self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
                      

                      _run async

                      _run(
                          *prompts: AnyPromptType | Image | PathLike[str] | ChatMessage[Any],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                          tool_choice: bool | str | list[str] = True,
                          usage_limits: UsageLimits | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> ChatMessage[TResult]
                      

                      Run agent with prompt and get response.

                      Parameters:

                      Name Type Description Default
                      prompts AnyPromptType | Image | PathLike[str] | ChatMessage[Any]

                      User query or instruction

                      ()
                      result_type type[TResult] | None

                      Optional type for structured responses

                      None
                      model ModelType

                      Optional model override

                      None
                      store_history bool

                      Whether the message exchange should be added to the context window

                      True
                      tool_choice bool | str | list[str]

Control tool usage:
- True: allow all tools
- False: no tools
- str: use only the named tool
- list[str]: allow the listed tools

                      True
                      usage_limits UsageLimits | None

                      Optional usage limits for the model

                      None
                      message_id str | None

                      Optional message id for the returned message. Automatically generated if not provided.

                      None
                      conversation_id str | None

                      Optional conversation id for the returned message.

                      None
                      wait_for_connections bool | None

                      Whether to wait for connected agents to complete

                      None

                      Returns:

                      Type Description
                      ChatMessage[TResult]

                      Result containing response and run information

                      Raises:

                      Type Description
                      UnexpectedModelBehavior

                      If the model fails or behaves unexpectedly

                      Source code in src/llmling_agent/agent/agent.py
                      @track_action("Calling Agent.run: {prompts}:")
                      async def _run(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                          tool_choice: bool | str | list[str] = True,
                          usage_limits: UsageLimits | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> ChatMessage[TResult]:
                          """Run agent with prompt and get response.
                      
                          Args:
                              prompts: User query or instruction
                              result_type: Optional type for structured responses
                              model: Optional model override
                              store_history: Whether the message exchange should be added to the
                                              context window
                              tool_choice: Control tool usage:
                                  - True: Allow all tools
                                  - False: No tools
                                  - str: Use specific tool
                                  - list[str]: Allow specific tools
                              usage_limits: Optional usage limits for the model
                              message_id: Optional message id for the returned message.
                                          Automatically generated if not provided.
                              conversation_id: Optional conversation id for the returned message.
                              wait_for_connections: Whether to wait for connected agents to complete
                      
                          Returns:
                              Result containing response and run information
                      
                          Raises:
                              UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                          """
                          """Run agent with prompt and get response."""
                          message_id = message_id or str(uuid4())
                      
                          tools = await self.tools.get_tools(state="enabled")
                          match tool_choice:
                              case str():
                                  tools = [t for t in tools if t.name == tool_choice]
                              case list():
                                  tools = [t for t in tools if t.name in tool_choice]
                              case False:
                                  tools = []
                              case True | None:
                                  pass  # Keep all tools
                          self.set_result_type(result_type)
                          start_time = time.perf_counter()
                          sys_prompt = await self.sys_prompts.format_system_prompt(self)
                          message_history = self.conversation.get_history()
                          try:
                              result = await self._provider.generate_response(
                                  *await convert_prompts(prompts),
                                  message_id=message_id,
                                  message_history=message_history,
                                  tools=tools,
                                  result_type=result_type,
                                  usage_limits=usage_limits,
                                  model=model,
                                  system_prompt=sys_prompt,
                              )
                          except Exception as e:
                              logger.exception("Agent run failed")
                              self.run_failed.emit("Agent run failed", e)
                              raise
                          else:
                              response_msg = ChatMessage[TResult](
                                  content=result.content,
                                  role="assistant",
                                  name=self.name,
                                  model=result.model_name,
                                  message_id=message_id,
                                  conversation_id=conversation_id,
                                  tool_calls=result.tool_calls,
                                  cost_info=result.cost_and_usage,
                                  response_time=time.perf_counter() - start_time,
                                  provider_extra=result.provider_extra or {},
                              )
                              if self._debug:
                                  import devtools
                      
                                  devtools.debug(response_msg)
                              return response_msg
                      

                      clear_history

                      clear_history()
                      

                      Clear both internal and pydantic-ai history.

                      Source code in src/llmling_agent/agent/agent.py
                      def clear_history(self):
                          """Clear both internal and pydantic-ai history."""
                          self._logger.clear_state()
                          self.conversation.clear()
                          logger.debug("Cleared history and reset tool state")
                      

                      from_callback classmethod

                      from_callback(
                          callback: ProcessorCallback[str],
                          *,
                          name: str | None = None,
                          debug: bool = False,
                          **kwargs: Any,
                      ) -> Agent[None]
                      

                      Create an agent from a processing callback.

                      Parameters:

                      Name Type Description Default
                      callback ProcessorCallback[str]

Function to process messages. Can be:
- sync or async
- with or without context
- must return str for pipeline compatibility

                      required
                      name str | None

                      Optional name for the agent

                      None
                      debug bool

                      Whether to enable debug mode

                      False
                      kwargs Any

                      Additional arguments for agent

                      {}
                      Source code in src/llmling_agent/agent/agent.py
                      @classmethod
                      def from_callback(
                          cls,
                          callback: ProcessorCallback[str],
                          *,
                          name: str | None = None,
                          debug: bool = False,
                          **kwargs: Any,
                      ) -> Agent[None]:
                          """Create an agent from a processing callback.
                      
                          Args:
                              callback: Function to process messages. Can be:
                                  - sync or async
                                  - with or without context
                                  - must return str for pipeline compatibility
                              name: Optional name for the agent
                              debug: Whether to enable debug mode
                              kwargs: Additional arguments for agent
                          """
                          from llmling_agent_providers.callback import CallbackProvider
                      
                          name = name or callback.__name__ or "processor"
                          provider = CallbackProvider(callback, name=name)
                          return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
                      

                      is_busy

                      is_busy() -> bool
                      

                      Check if agent is currently processing tasks.

                      Source code in src/llmling_agent/agent/agent.py
                      def is_busy(self) -> bool:
                          """Check if agent is currently processing tasks."""
                          return bool(self._pending_tasks or self._background_task)
                      

                      register_worker

                      register_worker(
                          worker: AnyAgent[Any, Any],
                          *,
                          name: str | None = None,
                          reset_history_on_run: bool = True,
                          pass_message_history: bool = False,
                          share_context: bool = False,
                      ) -> ToolInfo
                      

                      Register another agent as a worker tool.

                      Source code in src/llmling_agent/agent/agent.py
                      def register_worker(
                          self,
                          worker: AnyAgent[Any, Any],
                          *,
                          name: str | None = None,
                          reset_history_on_run: bool = True,
                          pass_message_history: bool = False,
                          share_context: bool = False,
                      ) -> ToolInfo:
                          """Register another agent as a worker tool."""
                          return self.tools.register_worker(
                              worker,
                              name=name,
                              reset_history_on_run=reset_history_on_run,
                              pass_message_history=pass_message_history,
                              share_context=share_context,
                              parent=self if (pass_message_history or share_context) else None,
                          )
                      

                      reset async

                      reset()
                      

                      Reset agent state (conversation history and tool states).

                      Source code in src/llmling_agent/agent/agent.py
                      async def reset(self):
                          """Reset agent state (conversation history and tool states)."""
                          old_tools = await self.tools.list_tools()
                          self.conversation.clear()
                          self.tools.reset_states()
                          new_tools = await self.tools.list_tools()
                      
                          event = self.AgentReset(
                              agent_name=self.name,
                              previous_tools=old_tools,
                              new_tools=new_tools,
                          )
                          self.agent_reset.emit(event)
                      

                      run_in_background async

                      run_in_background(
                          *prompt: AnyPromptType | Image | PathLike[str],
                          max_count: int | None = None,
                          interval: float = 1.0,
                          block: bool = False,
                          **kwargs: Any,
                      ) -> ChatMessage[TResult] | None
                      

                      Run agent continuously in background with prompt or dynamic prompt function.

                      Parameters:

                      Name Type Description Default
                      prompt AnyPromptType | Image | PathLike[str]

                      Static prompt or function that generates prompts

                      ()
                      max_count int | None

                      Maximum number of runs (None = infinite)

                      None
                      interval float

                      Seconds between runs

                      1.0
                      block bool

                      Whether to block until completion

                      False
                      **kwargs Any

                      Arguments passed to run()

                      {}
                      Source code in src/llmling_agent/agent/agent.py
                      async def run_in_background(
                          self,
                          *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                          max_count: int | None = None,
                          interval: float = 1.0,
                          block: bool = False,
                          **kwargs: Any,
                      ) -> ChatMessage[TResult] | None:
                          """Run agent continuously in background with prompt or dynamic prompt function.
                      
                          Args:
                              prompt: Static prompt or function that generates prompts
                              max_count: Maximum number of runs (None = infinite)
                              interval: Seconds between runs
                              block: Whether to block until completion
                              **kwargs: Arguments passed to run()
                          """
                          self._infinite = max_count is None
                      
                          async def _continuous():
                              count = 0
                              msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                              logger.debug(msg, self.name, max_count, interval, self.name)
                              latest = None
                              while max_count is None or count < max_count:
                                  try:
                                      current_prompts = [
                                          call_with_context(p, self.context, **kwargs) if callable(p) else p
                                          for p in prompt
                                      ]
                                      msg = "%s: Generated prompt #%d: %s"
                                      logger.debug(msg, self.name, count, current_prompts)
                      
                                      latest = await self.run(current_prompts, **kwargs)
                                      msg = "%s: Run continous result #%d"
                                      logger.debug(msg, self.name, count)
                      
                                      count += 1
                                      await asyncio.sleep(interval)
                                  except asyncio.CancelledError:
                                      logger.debug("%s: Continuous run cancelled", self.name)
                                      break
                                  except Exception:
                                      logger.exception("%s: Background run failed", self.name)
                                      await asyncio.sleep(interval)
                              msg = "%s: Continuous run completed after %d iterations"
                              logger.debug(msg, self.name, count)
                              return latest
                      
                          # Cancel any existing background task
                          await self.stop()
                          task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                          if block:
                              try:
                                  return await task  # type: ignore
                              finally:
                                  if not task.done():
                                      task.cancel()
                          else:
                              logger.debug("%s: Started background task %s", self.name, task.get_name())
                              self._background_task = task
                              return None
                      

                      run_iter async

                      run_iter(
                          *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                          wait_for_connections: bool | None = None,
                      ) -> AsyncIterator[ChatMessage[TResult]]
                      

                      Run agent sequentially on multiple prompt groups.

                      Parameters:

                      - `prompt_groups` (`Sequence[AnyPromptType | Image | PathLike[str]]`, default `()`): Groups of prompts to process sequentially
                      - `result_type` (`type[TResult] | None`, default `None`): Optional type for structured responses
                      - `model` (`ModelType`, default `None`): Optional model override
                      - `store_history` (`bool`, default `True`): Whether to store in conversation history
                      - `wait_for_connections` (`bool | None`, default `None`): Whether to wait for connected agents

                      Yields:

                      - `AsyncIterator[ChatMessage[TResult]]`: Response messages in sequence

                      Example:

                          questions = [
                              ["What is your name?"],
                              ["How old are you?", image1],
                              ["Describe this image", image2],
                          ]
                          async for response in agent.run_iter(*questions):
                              print(response.content)

                      Source code in src/llmling_agent/agent/agent.py
                      async def run_iter(
                          self,
                          *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                          wait_for_connections: bool | None = None,
                      ) -> AsyncIterator[ChatMessage[TResult]]:
                          """Run agent sequentially on multiple prompt groups.
                      
                          Args:
                              prompt_groups: Groups of prompts to process sequentially
                              result_type: Optional type for structured responses
                              model: Optional model override
                              store_history: Whether to store in conversation history
                              wait_for_connections: Whether to wait for connected agents
                      
                          Yields:
                              Response messages in sequence
                      
                          Example:
                              questions = [
                                  ["What is your name?"],
                                  ["How old are you?", image1],
                                  ["Describe this image", image2],
                              ]
                              async for response in agent.run_iter(*questions):
                                  print(response.content)
                          """
                          for prompts in prompt_groups:
                              response = await self.run(
                                  *prompts,
                                  result_type=result_type,
                                  model=model,
                                  store_history=store_history,
                                  wait_for_connections=wait_for_connections,
                              )
                              yield response  # pyright: ignore
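
                      The sequential pattern above is straightforward to sketch in isolation. The following is a minimal stand-in, not the library's implementation: `fake_run` is a hypothetical replacement for `agent.run`, and each prompt group is awaited in order before the next begins.

                      ```python
                      import asyncio
                      from collections.abc import AsyncIterator


                      async def fake_run(*prompts: str) -> str:
                          # Hypothetical stand-in for agent.run: just joins the prompts.
                          return " | ".join(prompts)


                      async def run_iter(*prompt_groups: list[str]) -> AsyncIterator[str]:
                          # Mirrors the run_iter loop: await each group fully, then yield.
                          for prompts in prompt_groups:
                              yield await fake_run(*prompts)


                      async def main() -> list[str]:
                          responses = []
                          async for response in run_iter(
                              ["What is your name?"],
                              ["Describe this image", "img.png"],
                          ):
                              responses.append(response)
                          return responses


                      results = asyncio.run(main())
                      ```

                      Because each group is awaited before the next starts, responses arrive strictly in submission order, unlike a gather-based fan-out.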
                      

                      run_job async

                      run_job(
                          job: Job[TDeps, str | None],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> ChatMessage[str]
                      

                      Execute a pre-defined task.

                      Parameters:

                      - `job` (`Job[TDeps, str | None]`, required): Job configuration to execute
                      - `store_history` (`bool`, default `True`): Whether the message exchange should be added to the context window
                      - `include_agent_tools` (`bool`, default `True`): Whether to include agent tools

                      Returns:

                      - Job execution result

                      Raises:

                      - `JobError`: If task execution fails
                      - `ValueError`: If task configuration is invalid

                      Source code in src/llmling_agent/agent/agent.py
                      async def run_job(
                          self,
                          job: Job[TDeps, str | None],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> ChatMessage[str]:
                          """Execute a pre-defined task.
                      
                          Args:
                              job: Job configuration to execute
                              store_history: Whether the message exchange should be added to the
                                             context window
        include_agent_tools: Whether to include agent tools

    Returns:
                              Job execution result
                      
                          Raises:
                              JobError: If task execution fails
                              ValueError: If task configuration is invalid
                          """
                          from llmling_agent.tasks import JobError
                      
                          if job.required_dependency is not None:  # noqa: SIM102
                              if not isinstance(self.context.data, job.required_dependency):
                                  msg = (
                                      f"Agent dependencies ({type(self.context.data)}) "
                                      f"don't match job requirement ({job.required_dependency})"
                                  )
                                  raise JobError(msg)
                      
                          # Load task knowledge
                          if job.knowledge:
                              # Add knowledge sources to context
                              resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                  job.knowledge.resources
                              )
                              for source in resources:
                                  await self.conversation.load_context_source(source)
                              for prompt in job.knowledge.prompts:
                                  await self.conversation.load_context_source(prompt)
                          try:
                              # Register task tools temporarily
                              tools = job.get_tools()
                              with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                  # Execute job with job-specific tools
                                  return await self.run(await job.get_prompt(), store_history=store_history)
                      
                          except Exception as e:
                              msg = f"Task execution failed: {e}"
                              logger.exception(msg)
                              raise JobError(msg) from e
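
                      The key mechanism here is `temporary_tools(..., exclusive=not include_agent_tools)`: job tools are swapped in for the duration of the run and the previous registry is restored afterwards. A minimal sketch of that pattern, using a hypothetical `ToolRegistry` rather than the library's own class:

                      ```python
                      from contextlib import contextmanager


                      class ToolRegistry:
                          def __init__(self) -> None:
                              self._tools: dict[str, object] = {}

                          def register(self, name: str, fn: object) -> None:
                              self._tools[name] = fn

                          @contextmanager
                          def temporary_tools(self, tools: dict, *, exclusive: bool = False):
                              # Swap in job tools; restore the original set on exit.
                              saved = dict(self._tools)
                              if exclusive:
                                  self._tools.clear()
                              self._tools.update(tools)
                              try:
                                  yield
                              finally:
                                  self._tools = saved


                      registry = ToolRegistry()
                      registry.register("base", lambda: "base")
                      with registry.temporary_tools({"job_tool": lambda: "job"}, exclusive=True):
                          inside = sorted(registry._tools)   # only job tools visible
                      after = sorted(registry._tools)        # originals restored
                      ```

                      With `exclusive=True` (i.e. `include_agent_tools=False`), only the job's tools are visible during execution; the agent's own tools reappear once the block exits, even if the run raises.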
                      

                      run_stream async

                      run_stream(
                          *prompt: AnyPromptType | Image | PathLike[str],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          tool_choice: bool | str | list[str] = True,
                          store_history: bool = True,
                          usage_limits: UsageLimits | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> AsyncIterator[StreamingResponseProtocol[TResult]]
                      

                      Run agent with prompt and get a streaming response.

                      Parameters:

                      - `prompt` (`AnyPromptType | Image | PathLike[str]`, default `()`): User query or instruction
                      - `result_type` (`type[TResult] | None`, default `None`): Optional type for structured responses
                      - `model` (`ModelType`, default `None`): Optional model override
                      - `tool_choice` (`bool | str | list[str]`, default `True`): Control tool usage: `True` allows all tools, `False` disables tools, a `str` selects one specific tool, a `list[str]` allows only the named tools
                      - `store_history` (`bool`, default `True`): Whether the message exchange should be added to the context window
                      - `usage_limits` (`UsageLimits | None`, default `None`): Optional usage limits for the model
                      - `message_id` (`str | None`, default `None`): Optional message id for the returned message; generated automatically if not provided
                      - `conversation_id` (`str | None`, default `None`): Optional conversation id for the returned message
                      - `wait_for_connections` (`bool | None`, default `None`): Whether to wait for connected agents to complete

                      Returns:

                      - `AsyncIterator[StreamingResponseProtocol[TResult]]`: A streaming result to iterate over

                      Raises:

                      - `UnexpectedModelBehavior`: If the model fails or behaves unexpectedly

                      Source code in src/llmling_agent/agent/agent.py
                      @asynccontextmanager
                      async def run_stream(
                          self,
                          *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          tool_choice: bool | str | list[str] = True,
                          store_history: bool = True,
                          usage_limits: UsageLimits | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                          """Run agent with prompt and get a streaming response.
                      
                          Args:
                              prompt: User query or instruction
                              result_type: Optional type for structured responses
                              model: Optional model override
                              tool_choice: Control tool usage:
                                  - True: Allow all tools
                                  - False: No tools
                                  - str: Use specific tool
                                  - list[str]: Allow specific tools
                              store_history: Whether the message exchange should be added to the
                                             context window
                              usage_limits: Optional usage limits for the model
                              message_id: Optional message id for the returned message.
                                          Automatically generated if not provided.
                              conversation_id: Optional conversation id for the returned message.
                              wait_for_connections: Whether to wait for connected agents to complete
                      
                          Returns:
                              A streaming result to iterate over.
                      
                          Raises:
                              UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                          """
                          message_id = message_id or str(uuid4())
                          user_msg, prompts = await self.pre_run(*prompt)
                          self.set_result_type(result_type)
                          start_time = time.perf_counter()
                          sys_prompt = await self.sys_prompts.format_system_prompt(self)
                          tools = await self.tools.get_tools(state="enabled")
                          match tool_choice:
                              case str():
                                  tools = [t for t in tools if t.name == tool_choice]
                              case list():
                                  tools = [t for t in tools if t.name in tool_choice]
                              case False:
                                  tools = []
                              case True | None:
                                  pass  # Keep all tools
                          try:
                              message_history = self.conversation.get_history()
                              async with self._provider.stream_response(
                                  *prompts,
                                  message_id=message_id,
                                  message_history=message_history,
                                  result_type=result_type,
                                  model=model,
                                  store_history=store_history,
                                  tools=tools,
                                  usage_limits=usage_limits,
                                  system_prompt=sys_prompt,
                              ) as stream:
                                  yield stream
                                  usage = stream.usage()
                                  cost_info = None
                                  model_name = stream.model_name  # type: ignore
                                  if model_name:
                                      cost_info = await TokenCost.from_usage(
                                          usage,
                                          model_name,
                                          str(user_msg.content),
                                          str(stream.formatted_content),  # type: ignore
                                      )
                                  response_msg = ChatMessage[TResult](
                                      content=cast(TResult, stream.formatted_content),  # type: ignore
                                      role="assistant",
                                      name=self.name,
                                      model=model_name,
                                      message_id=message_id,
                                      conversation_id=conversation_id,
                                      cost_info=cost_info,
                                      response_time=time.perf_counter() - start_time,
                                      # provider_extra=stream.provider_extra or {},
                                  )
                                  self.message_sent.emit(response_msg)
                                  if store_history:
                                      self.conversation.add_chat_messages([user_msg, response_msg])
                                  await self.connections.route_message(
                                      response_msg,
                                      wait=wait_for_connections,
                                  )
                      
                          except Exception as e:
                              logger.exception("Agent stream failed")
                              self.run_failed.emit("Agent stream failed", e)
                              raise
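
                      Because `run_stream` is an `asynccontextmanager`, the caller iterates the stream *inside* the `async with` block; the bookkeeping after `yield stream` (cost info, history, routing) only runs once the caller's block exits. A minimal sketch of that shape, with `FakeStream` as a hypothetical stand-in for `StreamingResponseProtocol`:

                      ```python
                      import asyncio
                      from contextlib import asynccontextmanager


                      class FakeStream:
                          # Hypothetical stand-in: yields chunks, accumulates the final text.
                          def __init__(self, chunks: list[str]) -> None:
                              self._chunks = chunks
                              self.formatted_content = ""

                          def __aiter__(self):
                              return self._iterate()

                          async def _iterate(self):
                              for chunk in self._chunks:
                                  self.formatted_content += chunk
                                  yield chunk


                      @asynccontextmanager
                      async def run_stream(prompt: str):
                          stream = FakeStream(["Hello", ", ", "world"])
                          yield stream
                          # Code here runs after the caller finishes iterating:
                          # the full text is now available for history and cost accounting.


                      async def main() -> tuple[list[str], str]:
                          chunks = []
                          async with run_stream("greet me") as stream:
                              async for chunk in stream:
                                  chunks.append(chunk)
                          return chunks, stream.formatted_content


                      chunks, final = asyncio.run(main())
                      ```

                      This is why history storage and `message_sent` emission happen after the `yield` in the real method: the full response text does not exist until the consumer has drained the stream.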
                      

                      run_sync

                      run_sync(
                          *prompt: AnyPromptType | Image | PathLike[str],
                          result_type: type[TResult] | None = None,
                          deps: TDeps | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                      ) -> ChatMessage[TResult]
                      

                      Run agent synchronously (convenience wrapper).

                      Parameters:

                      - `prompt` (`AnyPromptType | Image | PathLike[str]`, default `()`): User query or instruction
                      - `result_type` (`type[TResult] | None`, default `None`): Optional type for structured responses
                      - `deps` (`TDeps | None`, default `None`): Optional dependencies for the agent
                      - `model` (`ModelType`, default `None`): Optional model override
                      - `store_history` (`bool`, default `True`): Whether the message exchange should be added to the context window

                      Returns:

                      - Result containing response and run information

                      Source code in src/llmling_agent/agent/agent.py
                      def run_sync(
                          self,
                          *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                          result_type: type[TResult] | None = None,
                          deps: TDeps | None = None,
                          model: ModelType = None,
                          store_history: bool = True,
                      ) -> ChatMessage[TResult]:
                          """Run agent synchronously (convenience wrapper).
                      
                          Args:
                              prompt: User query or instruction
                              result_type: Optional type for structured responses
                              deps: Optional dependencies for the agent
                              model: Optional model override
                              store_history: Whether the message exchange should be added to the
                                             context window
                          Returns:
                              Result containing response and run information
                          """
                          coro = self.run(
                              *prompt,
                              model=model,
                              store_history=store_history,
                              result_type=result_type,
                          )
                          return self.run_task_sync(coro)  # type: ignore
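
                      `run_sync` simply builds the `run` coroutine and drives it to completion on behalf of the caller. The core pattern, sketched with a hypothetical `run` stand-in and plain `asyncio.run` in place of the mixin's `run_task_sync`:

                      ```python
                      import asyncio


                      async def run(prompt: str) -> str:
                          # Hypothetical stand-in for Agent.run.
                          return prompt.upper()


                      def run_sync(prompt: str) -> str:
                          # Build the coroutine, then block until it completes.
                          return asyncio.run(run(prompt))


                      result = run_sync("hello")
                      ```

                      Note that a blocking wrapper like this cannot be called from inside a running event loop; it is a convenience for synchronous entry points such as scripts.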
                      

                      set_model

                      set_model(model: ModelType)
                      

                      Set the model for this agent.

                      Parameters:

                      - `model` (`ModelType`, required): New model to use (name or instance)

                      Emits:

                      - `model_changed` signal with the new model

                      Source code in src/llmling_agent/agent/agent.py
                      def set_model(self, model: ModelType):
                          """Set the model for this agent.
                      
                          Args:
                              model: New model to use (name or instance)
                      
                          Emits:
                              model_changed signal with the new model
                          """
                          self._provider.set_model(model)
                      

                      set_result_type

                      set_result_type(
                          result_type: type[TResult] | str | ResponseDefinition | None,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      )
                      

                      Set or update the result type for this agent.

                      Parameters:

                      - `result_type` (`type[TResult] | str | ResponseDefinition | None`, required): New result type; can be a Python type for validation, the name of a response definition, a response definition instance, or `None` to reset to unstructured mode
                      - `tool_name` (`str | None`, default `None`): Optional override for tool name
                      - `tool_description` (`str | None`, default `None`): Optional override for tool description
                      Source code in src/llmling_agent/agent/agent.py
                      def set_result_type(
                          self,
                          result_type: type[TResult] | str | ResponseDefinition | None,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ):
                          """Set or update the result type for this agent.
                      
                          Args:
                              result_type: New result type, can be:
                                  - A Python type for validation
                                  - Name of a response definition
                                  - Response definition instance
                                  - None to reset to unstructured mode
                              tool_name: Optional override for tool name
                              tool_description: Optional override for tool description
                          """
                          logger.debug("Setting result type to: %s for %r", result_type, self.name)
                          self._result_type = to_type(result_type)
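
                      The interesting part is `to_type`, which normalizes the four accepted input forms into a single resolved type. A simplified sketch of that dispatch, assuming a hypothetical name-to-type registry for the string case:

                      ```python
                      def to_type(result_type):
                          # Hypothetical registry mapping response-definition names to types.
                          registry = {"number": int, "text": str}
                          if result_type is None:
                              return None  # reset to unstructured mode
                          if isinstance(result_type, str):
                              return registry[result_type]  # look up a named definition
                          return result_type  # already a type (or definition instance)


                      resolved = [to_type(None), to_type("number"), to_type(float)]
                      ```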
                      

                      share async

                      share(
                          target: AnyAgent[TDeps, Any],
                          *,
                          tools: list[str] | None = None,
                          resources: list[str] | None = None,
                          history: bool | int | None = None,
                          token_limit: int | None = None,
                      )
                      

                      Share capabilities and knowledge with another agent.

                      Parameters:

                      - `target` (`AnyAgent[TDeps, Any]`, required): Agent to share with
                      - `tools` (`list[str] | None`, default `None`): List of tool names to share
                      - `resources` (`list[str] | None`, default `None`): List of resource names to share
                      - `history` (`bool | int | None`, default `None`): Share conversation history: `True` shares the full history, an `int` shares that many most recent messages, `None` shares nothing
                      - `token_limit` (`int | None`, default `None`): Optional max tokens for history

                      Raises:

                      - `ValueError`: If requested items don't exist
                      - `RuntimeError`: If runtime not available for resources

                      Source code in src/llmling_agent/agent/agent.py
                      async def share(
                          self,
                          target: AnyAgent[TDeps, Any],
                          *,
                          tools: list[str] | None = None,
                          resources: list[str] | None = None,
                          history: bool | int | None = None,  # bool or number of messages
                          token_limit: int | None = None,
                      ):
                          """Share capabilities and knowledge with another agent.
                      
                          Args:
                              target: Agent to share with
                              tools: List of tool names to share
                              resources: List of resource names to share
                              history: Share conversation history:
                                      - True: Share full history
                                      - int: Number of most recent messages to share
                                      - None: Don't share history
                              token_limit: Optional max tokens for history
                      
                          Raises:
                              ValueError: If requested items don't exist
                              RuntimeError: If runtime not available for resources
                          """
                          # Share tools if requested
                          for name in tools or []:
                              if tool := self.tools.get(name):
                                  meta = {"shared_from": self.name}
                                  target.tools.register_tool(tool.callable, metadata=meta)
                              else:
                                  msg = f"Tool not found: {name}"
                                  raise ValueError(msg)
                      
                          # Share resources if requested
                          if resources:
                              if not self.runtime:
                                  msg = "No runtime available for sharing resources"
                                  raise RuntimeError(msg)
                              for name in resources:
                                  if resource := self.runtime.get_resource(name):
                                      await target.conversation.load_context_source(resource)  # type: ignore
                                  else:
                                      msg = f"Resource not found: {name}"
                                      raise ValueError(msg)
                      
                          # Share history if requested
                          if history:
                              history_text = await self.conversation.format_history(
                                  max_tokens=token_limit,
                                  num_messages=history if isinstance(history, int) else None,
                              )
                              target.conversation.add_context_message(
                                  history_text, source=self.name, metadata={"type": "shared_history"}
                              )
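The history branch above normalizes `history` into a message count before formatting. A stdlib-only sketch of that normalization (`share_history`, `log_a`, `log_b` are hypothetical stand-ins, not llmling_agent API):

```python
import asyncio

async def share_history(source_log, target_log, history, token_limit=None):
    """Sketch of share()'s history branch: True shares everything,
    an int shares that many recent messages, None/False shares nothing."""
    if not history:
        return
    # Guard against bool: isinstance(True, int) is True in Python,
    # so check bool explicitly before treating history as a count.
    num = history if isinstance(history, int) and not isinstance(history, bool) else None
    messages = source_log if num is None else source_log[-num:]
    text = "\n".join(messages)
    target_log.append({"content": text, "type": "shared_history"})

log_a = ["hello", "how are you?", "fine, thanks"]
log_b = []
asyncio.run(share_history(log_a, log_b, history=2))
print(log_b[0]["content"])  # last two messages joined
```

The real method delegates the formatting (and `token_limit` trimming) to `conversation.format_history` and tags the injected message with `shared_from` metadata.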
                      

                      stop async

                      stop()
                      

                      Stop continuous execution if running.

                      Source code in src/llmling_agent/agent/agent.py
                      async def stop(self):
                          """Stop continuous execution if running."""
                          if self._background_task and not self._background_task.done():
                              self._background_task.cancel()
                              await self._background_task
                              self._background_task = None
                      

                      temporary_state async

                      temporary_state(
                          *,
                          system_prompts: list[AnyPromptType] | None = None,
                          replace_prompts: bool = False,
                          tools: list[ToolType] | None = None,
                          replace_tools: bool = False,
                          history: list[AnyPromptType] | SessionQuery | None = None,
                          replace_history: bool = False,
                          pause_routing: bool = False,
                          model: ModelType | None = None,
                          provider: AgentProvider | None = None,
                      ) -> AsyncIterator[Self]
                      

                      Temporarily modify agent state.

Parameters:

system_prompts (list[AnyPromptType] | None, default None): Temporary system prompts to use
replace_prompts (bool, default False): Whether to replace existing prompts
tools (list[ToolType] | None, default None): Temporary tools to make available
replace_tools (bool, default False): Whether to replace existing tools
history (list[AnyPromptType] | SessionQuery | None, default None): Conversation history (prompts or query)
replace_history (bool, default False): Whether to replace existing history
pause_routing (bool, default False): Whether to pause message routing
model (ModelType | None, default None): Temporary model override
provider (AgentProvider | None, default None): Temporary provider override
                      Source code in src/llmling_agent/agent/agent.py
                      @asynccontextmanager
                      async def temporary_state(
                          self,
                          *,
                          system_prompts: list[AnyPromptType] | None = None,
                          replace_prompts: bool = False,
                          tools: list[ToolType] | None = None,
                          replace_tools: bool = False,
                          history: list[AnyPromptType] | SessionQuery | None = None,
                          replace_history: bool = False,
                          pause_routing: bool = False,
                          model: ModelType | None = None,
                          provider: AgentProvider | None = None,
                      ) -> AsyncIterator[Self]:
                          """Temporarily modify agent state.
                      
                          Args:
                              system_prompts: Temporary system prompts to use
                              replace_prompts: Whether to replace existing prompts
                              tools: Temporary tools to make available
                              replace_tools: Whether to replace existing tools
                              history: Conversation history (prompts or query)
                              replace_history: Whether to replace existing history
                              pause_routing: Whether to pause message routing
                              model: Temporary model override
                              provider: Temporary provider override
                          """
                          old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                          old_provider = self._provider
                      
                          async with AsyncExitStack() as stack:
                              # System prompts (async)
                              if system_prompts is not None:
                                  await stack.enter_async_context(
                                      self.sys_prompts.temporary_prompt(
                                          system_prompts, exclusive=replace_prompts
                                      )
                                  )
                      
                              # Tools (sync)
                              if tools is not None:
                                  stack.enter_context(
                                      self.tools.temporary_tools(tools, exclusive=replace_tools)
                                  )
                      
                              # History (async)
                              if history is not None:
                                  await stack.enter_async_context(
                                      self.conversation.temporary_state(
                                          history, replace_history=replace_history
                                      )
                                  )
                      
                              # Routing (async)
                              if pause_routing:
                                  await stack.enter_async_context(self.connections.paused_routing())
                      
                              # Model/Provider
                              if provider is not None:
                                  self._provider = provider
                              elif model is not None:
                                  self._provider.set_model(model)
                      
                              try:
                                  yield self
                              finally:
                                  # Restore model/provider
                                  if provider is not None:
                                      self._provider = old_provider
                                  elif model is not None and old_model:
                                      self._provider.set_model(old_model)
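The method's core pattern is snapshot, override, restore-in-finally, with an AsyncExitStack unwinding the nested temporary contexts. A minimal runnable sketch of that pattern, assuming a hypothetical `MiniAgent` with only a model override:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

class MiniAgent:
    """Hypothetical stand-in illustrating the save/override/restore pattern."""
    def __init__(self, model):
        self.model = model

    @asynccontextmanager
    async def temporary_state(self, *, model=None):
        old_model = self.model  # snapshot before overriding
        async with AsyncExitStack():  # the real method stacks prompt/tool/history contexts here
            if model is not None:
                self.model = model
            try:
                yield self
            finally:
                if model is not None:
                    self.model = old_model  # restored on exit, even after errors

async def main():
    agent = MiniAgent("base-model")
    async with agent.temporary_state(model="override-model"):
        inside = agent.model
    return inside, agent.model

inside, after = asyncio.run(main())
print(inside, after)
```

Because each sub-context is registered on the stack, exiting the `async with` block restores prompts, tools, history, and routing in reverse order even if the body raises.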
                      

                      to_structured

                      to_structured(
                          result_type: None,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ) -> Self
                      
                      to_structured(
                          result_type: type[TResult] | str | ResponseDefinition,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ) -> StructuredAgent[TDeps, TResult]
                      
                      to_structured(
                          result_type: type[TResult] | str | ResponseDefinition | None,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ) -> StructuredAgent[TDeps, TResult] | Self
                      

                      Convert this agent to a structured agent.

                      If result_type is None, returns self unchanged (no wrapping). Otherwise creates a StructuredAgent wrapper.

Parameters:

result_type (type[TResult] | str | ResponseDefinition | None, required): Type for structured responses. Can be a Python type (Pydantic model), the name of a response definition from the context, a complete response definition, or None to skip wrapping.
tool_name (str | None, default None): Optional override for the result tool name
tool_description (str | None, default None): Optional override for the result tool description

Returns:

StructuredAgent[TDeps, TResult] | Self: Either a StructuredAgent wrapper or self unchanged


                      Source code in src/llmling_agent/agent/agent.py
                      def to_structured[TResult](
                          self,
                          result_type: type[TResult] | str | ResponseDefinition | None,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ) -> StructuredAgent[TDeps, TResult] | Self:
                          """Convert this agent to a structured agent.
                      
                          If result_type is None, returns self unchanged (no wrapping).
                          Otherwise creates a StructuredAgent wrapper.
                      
                          Args:
                              result_type: Type for structured responses. Can be:
                                  - A Python type (Pydantic model)
                                  - Name of response definition from context
                                  - Complete response definition
                                  - None to skip wrapping
                              tool_name: Optional override for result tool name
                              tool_description: Optional override for result tool description
                      
        Returns:
            Either StructuredAgent wrapper or self unchanged
        """
                          if result_type is None:
                              return self
                      
                          from llmling_agent.agent import StructuredAgent
                      
                          return StructuredAgent(
                              self,
                              result_type=result_type,
                              tool_name=tool_name,
                              tool_description=tool_description,
                          )
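The dispatch is simple: None means identity, anything else means wrap. A sketch with hypothetical `Mini*` stand-ins (not the real Agent/StructuredAgent classes):

```python
class MiniStructuredAgent:
    """Stand-in for StructuredAgent: remembers the enforced result type."""
    def __init__(self, agent, result_type, tool_name=None, tool_description=None):
        self.agent = agent
        self.result_type = result_type
        self.tool_name = tool_name
        self.tool_description = tool_description

class MiniAgent:
    def to_structured(self, result_type=None, *, tool_name=None, tool_description=None):
        if result_type is None:
            return self  # no wrapping needed
        return MiniStructuredAgent(self, result_type, tool_name, tool_description)

agent = MiniAgent()
unwrapped = agent.to_structured(None)
wrapped = agent.to_structured(dict, tool_name="make_dict")
print(unwrapped is agent, wrapped.result_type is dict)
```

The identity shortcut matters for generic call sites: code can pass an optional result type straight through without branching on None itself.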
                      

                      to_tool

                      to_tool(
                          *,
                          name: str | None = None,
                          reset_history_on_run: bool = True,
                          pass_message_history: bool = False,
                          share_context: bool = False,
                          parent: AnyAgent[Any, Any] | None = None,
                      ) -> ToolInfo
                      

                      Create a tool from this agent.

Parameters:

name (str | None, default None): Optional tool name override
reset_history_on_run (bool, default True): Clear agent's history before each run
pass_message_history (bool, default False): Pass parent's message history to agent
share_context (bool, default False): Whether to pass parent's context/deps
parent (AnyAgent[Any, Any] | None, default None): Optional parent agent for history/context sharing
                      Source code in src/llmling_agent/agent/agent.py
                      def to_tool(
                          self,
                          *,
                          name: str | None = None,
                          reset_history_on_run: bool = True,
                          pass_message_history: bool = False,
                          share_context: bool = False,
                          parent: AnyAgent[Any, Any] | None = None,
                      ) -> ToolInfo:
                          """Create a tool from this agent.
                      
                          Args:
                              name: Optional tool name override
                              reset_history_on_run: Clear agent's history before each run
                              pass_message_history: Pass parent's message history to agent
                              share_context: Whether to pass parent's context/deps
                              parent: Optional parent agent for history/context sharing
                          """
                          tool_name = f"ask_{self.name}"
                      
                          async def wrapped_tool(prompt: str) -> str:
                              if pass_message_history and not parent:
                                  msg = "Parent agent required for message history sharing"
                                  raise ToolError(msg)
                      
                              if reset_history_on_run:
                                  self.conversation.clear()
                      
                              history = None
                              if pass_message_history and parent:
                                  history = parent.conversation.get_history()
                                  old = self.conversation.get_history()
                                  self.conversation.set_history(history)
                              result = await self.run(prompt, result_type=self._result_type)
                              if history:
                                  self.conversation.set_history(old)
                              return result.data
                      
                          normalized_name = self.name.replace("_", " ").title()
                          docstring = f"Get expert answer from specialized agent: {normalized_name}"
                          if self.description:
                              docstring = f"{docstring}\n\n{self.description}"
                      
                          wrapped_tool.__doc__ = docstring
                          wrapped_tool.__name__ = tool_name
                      
                          return ToolInfo.from_callable(
                              wrapped_tool,
                              name_override=tool_name,
                              description_override=docstring,
                          )
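The wrapper closure pattern, an `ask_<name>` coroutine whose `__name__` and `__doc__` are rewritten so tool registries pick up a meaningful description, can be sketched standalone (`make_agent_tool` and `fake_run` are hypothetical stand-ins for the method and for `Agent.run(...).data`):

```python
import asyncio

def make_agent_tool(agent_name, run, description=None):
    """Sketch of to_tool(): wrap an agent's run() as an ask_<name> callable
    with a generated docstring."""
    tool_name = f"ask_{agent_name}"

    async def wrapped_tool(prompt: str) -> str:
        return await run(prompt)

    normalized = agent_name.replace("_", " ").title()
    doc = f"Get expert answer from specialized agent: {normalized}"
    if description:
        doc = f"{doc}\n\n{description}"
    # Tool registries introspect these attributes for name and description
    wrapped_tool.__doc__ = doc
    wrapped_tool.__name__ = tool_name
    return wrapped_tool

async def fake_run(prompt):  # stand-in for awaiting the wrapped agent
    return prompt.upper()

tool = make_agent_tool("code_reviewer", fake_run, "Reviews Python code.")
answer = asyncio.run(tool("hi"))
print(tool.__name__, answer)
```

Note that in the source shown above, the generated name is always `f"ask_{self.name}"`; the `name` parameter is accepted but not applied to `tool_name` in this version.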
                      

                      wait async

                      wait() -> ChatMessage[TResult]
                      

                      Wait for background execution to complete.

                      Source code in src/llmling_agent/agent/agent.py
                      async def wait(self) -> ChatMessage[TResult]:
                          """Wait for background execution to complete."""
                          if not self._background_task:
                              msg = "No background task running"
                              raise RuntimeError(msg)
                          if self._infinite:
                              msg = "Cannot wait on infinite execution"
                              raise RuntimeError(msg)
                          try:
                              return await self._background_task
                          finally:
                              self._background_task = None
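The task bookkeeping behind `wait()` (guard, await, then reset the slot so a finished task can't be awaited twice) is plain asyncio. A stdlib sketch, where `MiniRunner` is a hypothetical stand-in, not llmling_agent API:

```python
import asyncio

class MiniRunner:
    """Sketch of the background-task slot behind wait()/stop()."""
    def __init__(self):
        self._background_task = None

    def start(self, coro):
        # Must be called from within a running event loop
        self._background_task = asyncio.ensure_future(coro)

    async def wait(self):
        if not self._background_task:
            raise RuntimeError("No background task running")
        try:
            return await self._background_task
        finally:
            self._background_task = None  # reset even if the task raised

async def produce():
    await asyncio.sleep(0.01)
    return "done"

async def main():
    runner = MiniRunner()
    runner.start(produce())
    return await runner.wait()

result = asyncio.run(main())
print(result)
```

The `_infinite` guard in the real method exists because awaiting a continuous-execution task would never return; callers must use `stop()` for that case instead.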
                      

                      AgentConfig

                      Bases: NodeConfig

                      Configuration for a single agent in the system.

Defines an agent's complete configuration including its model, environment, capabilities, and behavior settings. Each agent can have its own:

- Language model configuration
- Environment setup (tools and resources)
- Response type definitions
- System prompts and default user prompts
- Role-based capabilities

                      The configuration can be loaded from YAML or created programmatically.

                      Source code in src/llmling_agent/models/agents.py
                      class AgentConfig(NodeConfig):
                          """Configuration for a single agent in the system.
                      
                          Defines an agent's complete configuration including its model, environment,
                          capabilities, and behavior settings. Each agent can have its own:
                          - Language model configuration
                          - Environment setup (tools and resources)
                          - Response type definitions
                          - System prompts and default user prompts
                          - Role-based capabilities
                      
                          The configuration can be loaded from YAML or created programmatically.
                          """
                      
                          provider: ProviderConfig | Literal["pydantic_ai", "human", "litellm"] = "pydantic_ai"
                          """Provider configuration or shorthand type"""
                      
                          inherits: str | None = None
                          """Name of agent config to inherit from"""
                      
                          model: str | AnyModelConfig | None = None
                          """The model to use for this agent. Can be either a simple model name
                          string (e.g. 'openai:gpt-4') or a structured model definition."""
                      
                          tools: list[ToolConfig | str] = Field(default_factory=list)
                          """A list of tools to register with this agent."""
                      
                          toolsets: list[ToolsetConfig] = Field(default_factory=list)
                          """Toolset configurations for extensible tool collections."""
                      
    environment: str | AgentEnvironment | None = None
    """Environment configuration (path or object)"""
                      
                          capabilities: Capabilities = Field(default_factory=Capabilities)
                          """Current agent's capabilities."""
                      
                          session: str | SessionQuery | MemoryConfig | None = None
                          """Session configuration for conversation recovery."""
                      
                          result_type: str | ResponseDefinition | None = None
                          """Name of the response definition to use"""
                      
                          retries: int = 1
                          """Number of retries for failed operations (maps to pydantic-ai's retries)"""
                      
                          result_tool_name: str = "final_result"
                          """Name of the tool used for structured responses"""
                      
                          result_tool_description: str | None = None
                          """Custom description for the result tool"""
                      
                          result_retries: int | None = None
                          """Max retries for result validation"""
                      
                          end_strategy: EndStrategy = "early"
                          """The strategy for handling multiple tool calls when a final result is found"""
                      
                          avatar: str | None = None
                          """URL or path to agent's avatar image"""
                      
                          system_prompts: list[str] = Field(default_factory=list)
                          """System prompts for the agent"""
                      
                          library_system_prompts: list[str] = Field(default_factory=list)
                          """System prompts for the agent from the library"""
                      
                          user_prompts: list[str] = Field(default_factory=list)
                          """Default user prompts for the agent"""
                      
                          # context_sources: list[ContextSource] = Field(default_factory=list)
                          # """Initial context sources to load"""
                      
                          config_file_path: str | None = None
                          """Config file path for resolving environment."""
                      
                          knowledge: Knowledge | None = None
                          """Knowledge sources for this agent."""
                      
                          workers: list[WorkerConfig] = Field(default_factory=list)
                          """Worker agents which will be available as tools."""
                      
                          requires_tool_confirmation: ToolConfirmationMode = "per_tool"
                          """How to handle tool confirmation:
                          - "always": Always require confirmation for all tools
                          - "never": Never require confirmation (ignore tool settings)
                          - "per_tool": Use individual tool settings
                          """
                      
                          debug: bool = False
                          """Enable debug output for this agent."""
                      
                          def is_structured(self) -> bool:
                              """Check if this config defines a structured agent."""
                              return self.result_type is not None
                      
                          @model_validator(mode="before")
                          @classmethod
                          def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                              """Convert string workers to WorkerConfig."""
                              if workers := data.get("workers"):
                                  data["workers"] = [
                                      WorkerConfig.from_str(w)
                                      if isinstance(w, str)
                                      else w
                                      if isinstance(w, WorkerConfig)  # Keep existing WorkerConfig
                                      else WorkerConfig(**w)  # Convert dict to WorkerConfig
                                      for w in workers
                                  ]
                              return data
                      
                          @model_validator(mode="before")
                          @classmethod
                          def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                              """Convert result type and apply its settings."""
                              result_type = data.get("result_type")
                              if isinstance(result_type, dict):
                                  # Extract response-specific settings
                                  tool_name = result_type.pop("result_tool_name", None)
                                  tool_description = result_type.pop("result_tool_description", None)
                                  retries = result_type.pop("result_retries", None)
                      
                                  # Convert remaining dict to ResponseDefinition
                                  if "type" not in result_type:
                                      result_type["type"] = "inline"
                                  data["result_type"] = InlineResponseDefinition(**result_type)
                      
                                  # Apply extracted settings to agent config
                                  if tool_name:
                                      data["result_tool_name"] = tool_name
                                  if tool_description:
                                      data["result_tool_description"] = tool_description
                                  if retries is not None:
                                      data["result_retries"] = retries
                      
                              return data
                      
                          @model_validator(mode="before")
                          @classmethod
                          def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                              """Convert model inputs to appropriate format."""
                              model = data.get("model")
                              match model:
                                  case str():
                                      data["model"] = {"type": "string", "identifier": model}
                              return data
                      
                          async def get_toolsets(self) -> list[ResourceProvider]:
                              """Get all resource providers for this agent."""
                              providers: list[ResourceProvider] = []
                      
                              # Add providers from toolsets
                              for toolset_config in self.toolsets:
                                  try:
                                      provider = toolset_config.get_provider()
                                      providers.append(provider)
                                  except Exception as e:
                                      logger.exception(
                                          "Failed to create provider for toolset: %r", toolset_config
                                      )
                                      msg = f"Failed to create provider for toolset: {e}"
                                      raise ValueError(msg) from e
                      
                              return providers
                      
                          def get_tool_provider(self) -> ResourceProvider | None:
                              """Get tool provider for this agent."""
                              from llmling_agent.tools.base import ToolInfo
                      
                              # Create provider for static tools
                              if not self.tools:
                                  return None
                              static_tools: list[ToolInfo] = []
                              for tool_config in self.tools:
                                  try:
                                      match tool_config:
                                          case str():
                                              if tool_config.startswith("crewai_tools"):
                                                  obj = import_class(tool_config)()
                                                  static_tools.append(ToolInfo.from_crewai_tool(obj))
                                              elif tool_config.startswith("langchain"):
                                                  obj = import_class(tool_config)()
                                                  static_tools.append(ToolInfo.from_langchain_tool(obj))
                                              else:
                                                  tool = ToolInfo.from_callable(tool_config)
                                                  static_tools.append(tool)
                                          case BaseToolConfig():
                                              static_tools.append(tool_config.get_tool())
                                  except Exception:
                                      logger.exception("Failed to load tool %r", tool_config)
                                      continue
                      
                              return StaticResourceProvider(name="builtin", tools=static_tools)
                      
                          def get_session_config(self) -> MemoryConfig:
                              """Get resolved memory configuration."""
                              match self.session:
                                  case str() | UUID():
                                      return MemoryConfig(session=SessionQuery(name=str(self.session)))
                                  case SessionQuery():
                                      return MemoryConfig(session=self.session)
                                  case MemoryConfig():
                                      return self.session
                                  case None:
                                      return MemoryConfig()
                      
                          def get_system_prompts(self) -> list[BasePrompt]:
                              """Get all system prompts as BasePrompts."""
                              prompts: list[BasePrompt] = []
                              for prompt in self.system_prompts:
                                  match prompt:
                                      case str():
                                          # Convert string to StaticPrompt
                                          static_prompt = StaticPrompt(
                                              name="system",
                                              description="System prompt",
                                              messages=[PromptMessage(role="system", content=prompt)],
                                          )
                                          prompts.append(static_prompt)
                                      case BasePrompt():
                                          prompts.append(prompt)
                              return prompts
                      
                          def get_provider(self) -> AgentProvider:
                              """Get resolved provider instance.
                      
                              Creates provider instance based on configuration:
                              - Full provider config: Use as-is
                              - Shorthand type: Create default provider config
                              """
                              # If string shorthand is used, convert to default provider config
                              from llmling_agent.models.providers import (
                                  CallbackProviderConfig,
                                  HumanProviderConfig,
                                  LiteLLMProviderConfig,
                                  PydanticAIProviderConfig,
                              )
                      
                              provider_config = self.provider
                              if isinstance(provider_config, str):
                                  match provider_config:
                                      case "pydantic_ai":
                                          provider_config = PydanticAIProviderConfig(model=self.model)
                                      case "human":
                                          provider_config = HumanProviderConfig()
                                      case "litellm":
                                          provider_config = LiteLLMProviderConfig(
                                              model=self.model if isinstance(self.model, str) else None
                                          )
                                      case _:
                                          try:
                                              fn = import_callable(provider_config)
                                              provider_config = CallbackProviderConfig(fn=fn)
                                          except Exception:  # noqa: BLE001
                                              msg = f"Invalid provider type: {provider_config}"
                                              raise ValueError(msg)  # noqa: B904
                      
                              # Create provider instance from config
                              return provider_config.get_provider()
                      
                          def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                              """Render system prompts with context."""
                              if not context:
                                  # Default context
                                  context = {"name": self.name, "id": 1, "model": self.model}
                              return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
                      
                          def get_config(self) -> Config:
                              """Get configuration for this agent."""
                              match self.environment:
                                  case None:
                                      # Create minimal config
                                      caps = LLMCapabilitiesConfig()
                                      global_settings = GlobalSettings(llm_capabilities=caps)
                                      return Config(global_settings=global_settings)
                                  case str() as path:
                                      # Backward compatibility: treat as file path
                                      resolved = self._resolve_environment_path(path, self.config_file_path)
                                      return Config.from_file(resolved)
                                  case FileEnvironment(uri=uri) as env:
                                      # Handle FileEnvironment instance
                                      resolved = env.get_file_path()
                                      return Config.from_file(resolved)
                                  case {"type": "file", "uri": uri}:
                                      # Handle raw dict matching file environment structure
                                      return Config.from_file(uri)
                                  case {"type": "inline", "config": config}:
                                      return config
                                  case InlineEnvironment() as config:
                                      return config
                                  case _:
                                      msg = f"Invalid environment configuration: {self.environment}"
                                      raise ValueError(msg)
                      
                          def get_environment_path(self) -> str | None:
                              """Get environment file path if available."""
                              match self.environment:
                                  case str() as path:
                                      return self._resolve_environment_path(path, self.config_file_path)
                                  case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                      return uri
                                  case _:
                                      return None
                      
                          @staticmethod
                          def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                              """Resolve environment path from config store or relative path."""
                              from upath import UPath
                      
                              try:
                                  config_store = ConfigStore()
                                  return config_store.get_config(env)
                              except KeyError:
                                  if config_file_path:
                                      base_dir = UPath(config_file_path).parent
                                      return str(base_dir / env)
                                  return env
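
The class docstring notes that this configuration can be loaded from YAML. A hypothetical fragment exercising several of the fields above; the top-level `agents:` key and all values are illustrative, not taken from the project's documentation:

```yaml
agents:
  analyst:
    model: openai:gpt-4          # plain string, normalized by handle_model_types
    provider: pydantic_ai         # shorthand, resolved by get_provider()
    system_prompts:
      - "You analyze data and report concisely."
    tools:
      - mypkg.tools.fetch_data    # import path, loaded via ToolInfo.from_callable
    retries: 2
    end_strategy: early
    requires_tool_confirmation: per_tool
```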
                      

                      avatar class-attribute instance-attribute

                      avatar: str | None = None
                      

                      URL or path to agent's avatar image

                      capabilities class-attribute instance-attribute

                      capabilities: Capabilities = Field(default_factory=Capabilities)
                      

                      Current agent's capabilities.

                      config_file_path class-attribute instance-attribute

                      config_file_path: str | None = None
                      

                      Config file path for resolving environment.

                      debug class-attribute instance-attribute

                      debug: bool = False
                      

                      Enable debug output for this agent.

                      end_strategy class-attribute instance-attribute

                      end_strategy: EndStrategy = 'early'
                      

                      The strategy for handling multiple tool calls when a final result is found

                      environment class-attribute instance-attribute

                      environment: str | AgentEnvironment | None = None
                      

                      Environment configuration (path or object)

                      inherits class-attribute instance-attribute

                      inherits: str | None = None
                      

                      Name of agent config to inherit from

                      knowledge class-attribute instance-attribute

                      knowledge: Knowledge | None = None
                      

                      Knowledge sources for this agent.

                      library_system_prompts class-attribute instance-attribute

                      library_system_prompts: list[str] = Field(default_factory=list)
                      

                      System prompts for the agent from the library

                      model class-attribute instance-attribute

                      model: str | AnyModelConfig | None = None
                      

                      The model to use for this agent. Can be either a simple model name string (e.g. 'openai:gpt-4') or a structured model definition.

                      provider class-attribute instance-attribute

                      provider: ProviderConfig | Literal['pydantic_ai', 'human', 'litellm'] = 'pydantic_ai'
                      

                      Provider configuration or shorthand type

                      requires_tool_confirmation class-attribute instance-attribute

                      requires_tool_confirmation: ToolConfirmationMode = 'per_tool'
                      

                      How to handle tool confirmation:
                      - "always": Always require confirmation for all tools
                      - "never": Never require confirmation (ignore tool settings)
                      - "per_tool": Use individual tool settings

                      result_retries class-attribute instance-attribute

                      result_retries: int | None = None
                      

                      Max retries for result validation

                      result_tool_description class-attribute instance-attribute

                      result_tool_description: str | None = None
                      

                      Custom description for the result tool

                      result_tool_name class-attribute instance-attribute

                      result_tool_name: str = 'final_result'
                      

                      Name of the tool used for structured responses

                      result_type class-attribute instance-attribute

                      result_type: str | ResponseDefinition | None = None
                      

                      Name of the response definition to use

                      retries class-attribute instance-attribute

                      retries: int = 1
                      

                      Number of retries for failed operations (maps to pydantic-ai's retries)

                      session class-attribute instance-attribute

                      session: str | SessionQuery | MemoryConfig | None = None
                      

                      Session configuration for conversation recovery.

                      system_prompts class-attribute instance-attribute

                      system_prompts: list[str] = Field(default_factory=list)
                      

                      System prompts for the agent

                      tools class-attribute instance-attribute

                      tools: list[ToolConfig | str] = Field(default_factory=list)
                      

                      A list of tools to register with this agent.

                      toolsets class-attribute instance-attribute

                      toolsets: list[ToolsetConfig] = Field(default_factory=list)
                      

                      Toolset configurations for extensible tool collections.

                      user_prompts class-attribute instance-attribute

                      user_prompts: list[str] = Field(default_factory=list)
                      

                      Default user prompts for the agent

                      workers class-attribute instance-attribute

                      workers: list[WorkerConfig] = Field(default_factory=list)
                      

                      Worker agents which will be available as tools.
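
The `normalize_workers` validator in the class source above accepts plain strings, dicts, or ready `WorkerConfig` instances. A self-contained sketch of that normalization pattern, with a minimal stand-in for `WorkerConfig` (the stand-in class and its single `name` field are hypothetical, for illustration only):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class WorkerStandIn:
    """Minimal stand-in for WorkerConfig (illustration only)."""

    name: str

    @classmethod
    def from_str(cls, spec: str) -> "WorkerStandIn":
        # The real WorkerConfig.from_str may parse more than a name.
        return cls(name=spec)


def normalize_workers(workers: list[Any]) -> list[WorkerStandIn]:
    """Mirror the validator's branch order: str -> from_str,
    existing instance kept as-is, dict expanded to keyword args."""
    return [
        WorkerStandIn.from_str(w)
        if isinstance(w, str)
        else w
        if isinstance(w, WorkerStandIn)
        else WorkerStandIn(**w)
        for w in workers
    ]


result = normalize_workers(["summarizer", {"name": "critic"}, WorkerStandIn("planner")])
print([w.name for w in result])  # all three inputs end up as WorkerStandIn
```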

                      _resolve_environment_path staticmethod

                      _resolve_environment_path(env: str, config_file_path: str | None = None) -> str
                      

                      Resolve environment path from config store or relative path.

                      Source code in src/llmling_agent/models/agents.py
                      @staticmethod
                      def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                          """Resolve environment path from config store or relative path."""
                          from upath import UPath
                      
                          try:
                              config_store = ConfigStore()
                              return config_store.get_config(env)
                          except KeyError:
                              if config_file_path:
                                  base_dir = UPath(config_file_path).parent
                                  return str(base_dir / env)
                              return env
                      

                      get_config

                      get_config() -> Config
                      

                      Get configuration for this agent.

                      Source code in src/llmling_agent/models/agents.py
                      def get_config(self) -> Config:
                          """Get configuration for this agent."""
                          match self.environment:
                              case None:
                                  # Create minimal config
                                  caps = LLMCapabilitiesConfig()
                                  global_settings = GlobalSettings(llm_capabilities=caps)
                                  return Config(global_settings=global_settings)
                              case str() as path:
                                  # Backward compatibility: treat as file path
                                  resolved = self._resolve_environment_path(path, self.config_file_path)
                                  return Config.from_file(resolved)
                              case FileEnvironment(uri=uri) as env:
                                  # Handle FileEnvironment instance
                                  resolved = env.get_file_path()
                                  return Config.from_file(resolved)
                              case {"type": "file", "uri": uri}:
                                  # Handle raw dict matching file environment structure
                                  return Config.from_file(uri)
                              case {"type": "inline", "config": config}:
                                  return config
                              case InlineEnvironment() as config:
                                  return config
                              case _:
                                  msg = f"Invalid environment configuration: {self.environment}"
                                  raise ValueError(msg)
                      

                      get_environment_path

                      get_environment_path() -> str | None
                      

                      Get environment file path if available.

                      Source code in src/llmling_agent/models/agents.py
                      def get_environment_path(self) -> str | None:
                          """Get environment file path if available."""
                          match self.environment:
                              case str() as path:
                                  return self._resolve_environment_path(path, self.config_file_path)
                              case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                  return uri
                              case _:
                                  return None
                      

                      get_provider

                      get_provider() -> AgentProvider
                      

                      Get resolved provider instance.

                      Creates provider instance based on configuration:
                      - Full provider config: Use as-is
                      - Shorthand type: Create default provider config

                      Source code in src/llmling_agent/models/agents.py
                      def get_provider(self) -> AgentProvider:
                          """Get resolved provider instance.
                      
                          Creates provider instance based on configuration:
                          - Full provider config: Use as-is
                          - Shorthand type: Create default provider config
                          """
                          # If string shorthand is used, convert to default provider config
                          from llmling_agent.models.providers import (
                              CallbackProviderConfig,
                              HumanProviderConfig,
                              LiteLLMProviderConfig,
                              PydanticAIProviderConfig,
                          )
                      
                          provider_config = self.provider
                          if isinstance(provider_config, str):
                              match provider_config:
                                  case "pydantic_ai":
                                      provider_config = PydanticAIProviderConfig(model=self.model)
                                  case "human":
                                      provider_config = HumanProviderConfig()
                                  case "litellm":
                                      provider_config = LiteLLMProviderConfig(
                                          model=self.model if isinstance(self.model, str) else None
                                      )
                                  case _:
                                      try:
                                          fn = import_callable(provider_config)
                                          provider_config = CallbackProviderConfig(fn=fn)
                                      except Exception:  # noqa: BLE001
                                          msg = f"Invalid provider type: {provider_config}"
                                          raise ValueError(msg)  # noqa: B904
                      
                          # Create provider instance from config
                          return provider_config.get_provider()
                      

                      get_session_config

                      get_session_config() -> MemoryConfig
                      

                      Get resolved memory configuration.

                      Source code in src/llmling_agent/models/agents.py
                      def get_session_config(self) -> MemoryConfig:
                          """Get resolved memory configuration."""
                          match self.session:
                              case str() | UUID():
                                  return MemoryConfig(session=SessionQuery(name=str(self.session)))
                              case SessionQuery():
                                  return MemoryConfig(session=self.session)
                              case MemoryConfig():
                                  return self.session
                              case None:
                                  return MemoryConfig()
                      

                      get_system_prompts

                      get_system_prompts() -> list[BasePrompt]
                      

                      Get all system prompts as BasePrompts.

                      Source code in src/llmling_agent/models/agents.py
                      def get_system_prompts(self) -> list[BasePrompt]:
                          """Get all system prompts as BasePrompts."""
                          prompts: list[BasePrompt] = []
                          for prompt in self.system_prompts:
                              match prompt:
                                  case str():
                                      # Convert string to StaticPrompt
                                      static_prompt = StaticPrompt(
                                          name="system",
                                          description="System prompt",
                                          messages=[PromptMessage(role="system", content=prompt)],
                                      )
                                      prompts.append(static_prompt)
                                  case BasePrompt():
                                      prompts.append(prompt)
                          return prompts
                      

                      get_tool_provider

                      get_tool_provider() -> ResourceProvider | None
                      

                      Get tool provider for this agent.

                      Source code in src/llmling_agent/models/agents.py
                      def get_tool_provider(self) -> ResourceProvider | None:
                          """Get tool provider for this agent."""
                          from llmling_agent.tools.base import ToolInfo
                      
                          # Create provider for static tools
                          if not self.tools:
                              return None
                          static_tools: list[ToolInfo] = []
                          for tool_config in self.tools:
                              try:
                                  match tool_config:
                                      case str():
                                          if tool_config.startswith("crewai_tools"):
                                              obj = import_class(tool_config)()
                                              static_tools.append(ToolInfo.from_crewai_tool(obj))
                                          elif tool_config.startswith("langchain"):
                                              obj = import_class(tool_config)()
                                              static_tools.append(ToolInfo.from_langchain_tool(obj))
                                          else:
                                              tool = ToolInfo.from_callable(tool_config)
                                              static_tools.append(tool)
                                      case BaseToolConfig():
                                          static_tools.append(tool_config.get_tool())
                              except Exception:
                                  logger.exception("Failed to load tool %r", tool_config)
                                  continue
                      
                          return StaticResourceProvider(name="builtin", tools=static_tools)
                      

                      get_toolsets async

                      get_toolsets() -> list[ResourceProvider]
                      

                      Get all resource providers for this agent.

                      Source code in src/llmling_agent/models/agents.py
                      async def get_toolsets(self) -> list[ResourceProvider]:
                          """Get all resource providers for this agent."""
                          providers: list[ResourceProvider] = []
                      
                          # Add providers from toolsets
                          for toolset_config in self.toolsets:
                              try:
                                  provider = toolset_config.get_provider()
                                  providers.append(provider)
                              except Exception as e:
                                  logger.exception(
                                      "Failed to create provider for toolset: %r", toolset_config
                                  )
                                  msg = f"Failed to create provider for toolset: {e}"
                                  raise ValueError(msg) from e
                      
                          return providers
                      

                      handle_model_types classmethod

                      handle_model_types(data: dict[str, Any]) -> dict[str, Any]
                      

                      Convert model inputs to appropriate format.

                      Source code in src/llmling_agent/models/agents.py
                      @model_validator(mode="before")
                      @classmethod
                      def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                          """Convert model inputs to appropriate format."""
                          model = data.get("model")
                          match model:
                              case str():
                                  data["model"] = {"type": "string", "identifier": model}
                          return data
                      

                      is_structured

                      is_structured() -> bool
                      

                      Check if this config defines a structured agent.

                      Source code in src/llmling_agent/models/agents.py
                      def is_structured(self) -> bool:
                          """Check if this config defines a structured agent."""
                          return self.result_type is not None
                      

                      normalize_workers classmethod

                      normalize_workers(data: dict[str, Any]) -> dict[str, Any]
                      

                      Convert string workers to WorkerConfig.

                      Source code in src/llmling_agent/models/agents.py
                      @model_validator(mode="before")
                      @classmethod
                      def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                          """Convert string workers to WorkerConfig."""
                          if workers := data.get("workers"):
                              data["workers"] = [
                                  WorkerConfig.from_str(w)
                                  if isinstance(w, str)
                                  else w
                                  if isinstance(w, WorkerConfig)  # Keep existing WorkerConfig
                                  else WorkerConfig(**w)  # Convert dict to WorkerConfig
                                  for w in workers
                              ]
                          return data
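The three-way normalization in the validator can be isolated as a plain function. The `WorkerConfig` below is a hypothetical minimal dataclass standing in for the real model:

```python
from dataclasses import dataclass


@dataclass
class WorkerConfig:
    """Hypothetical stand-in for the real worker model."""
    name: str

    @classmethod
    def from_str(cls, name: str) -> "WorkerConfig":
        return cls(name=name)


def normalize_workers(data: dict) -> dict:
    """Accept strings, dicts, or WorkerConfig instances; emit only WorkerConfig."""
    if workers := data.get("workers"):
        data["workers"] = [
            WorkerConfig.from_str(w) if isinstance(w, str)
            else w if isinstance(w, WorkerConfig)   # keep existing instances
            else WorkerConfig(**w)                  # expand dicts into the model
            for w in workers
        ]
    return data
```

Because the validator runs with `mode="before"`, this normalization happens before field validation, so YAML authors can freely mix the three forms.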
                      

                      render_system_prompts

                      render_system_prompts(context: dict[str, Any] | None = None) -> list[str]
                      

                      Render system prompts with context.

                      Source code in src/llmling_agent/models/agents.py
                      def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                          """Render system prompts with context."""
                          if not context:
                              # Default context
                              context = {"name": self.name, "id": 1, "model": self.model}
                          return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
                      

                      validate_result_type classmethod

                      validate_result_type(data: dict[str, Any]) -> dict[str, Any]
                      

                      Convert result type and apply its settings.

                      Source code in src/llmling_agent/models/agents.py
                      @model_validator(mode="before")
                      @classmethod
                      def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                          """Convert result type and apply its settings."""
                          result_type = data.get("result_type")
                          if isinstance(result_type, dict):
                              # Extract response-specific settings
                              tool_name = result_type.pop("result_tool_name", None)
                              tool_description = result_type.pop("result_tool_description", None)
                              retries = result_type.pop("result_retries", None)
                      
                              # Convert remaining dict to ResponseDefinition
                              if "type" not in result_type:
                                  result_type["type"] = "inline"
                              data["result_type"] = InlineResponseDefinition(**result_type)
                      
                              # Apply extracted settings to agent config
                              if tool_name:
                                  data["result_tool_name"] = tool_name
                              if tool_description:
                                  data["result_tool_description"] = tool_description
                              if retries is not None:
                                  data["result_retries"] = retries
                      
                          return data
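The pop-and-promote step of this validator can be sketched on its own (operating on plain dicts, without the real `InlineResponseDefinition` conversion):

```python
def split_result_settings(data: dict) -> dict:
    """Move response-tool settings from a nested result_type dict to the top level."""
    result_type = data.get("result_type")
    if isinstance(result_type, dict):
        # Promote response-specific keys onto the agent config itself
        for key in ("result_tool_name", "result_tool_description", "result_retries"):
            if (value := result_type.pop(key, None)) is not None:
                data[key] = value
        # Default the remaining dict to an inline response definition
        result_type.setdefault("type", "inline")
    return data
```

Non-dict values (e.g. a named response-type string) pass through untouched, matching the guard in the real validator.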
                      

                      AgentContext dataclass

                      Bases: NodeContext[TDeps]

                      Runtime context for agent execution.

                      Generically typed with AgentContext[Type of Dependencies]

                      Source code in src/llmling_agent/agent/context.py
                      @dataclass(kw_only=True)
                      class AgentContext[TDeps](NodeContext[TDeps]):
                          """Runtime context for agent execution.
                      
                          Generically typed with AgentContext[Type of Dependencies]
                          """
                      
                          capabilities: Capabilities
                          """Current agent's capabilities."""
                      
                          config: AgentConfig
                          """Current agent's specific configuration."""
                      
                          model_settings: dict[str, Any] = field(default_factory=dict)
                          """Model-specific settings."""
                      
                          data: TDeps | None = None
                          """Custom context data."""
                      
                          runtime: RuntimeConfig | None = None
                          """Reference to the runtime configuration."""
                      
                          @classmethod
                          def create_default(
                              cls,
                              name: str,
                              capabilities: Capabilities | None = None,
                              deps: TDeps | None = None,
                              pool: AgentPool | None = None,
                              input_provider: InputProvider | None = None,
                          ) -> AgentContext[TDeps]:
                              """Create a default agent context with minimal privileges.
                      
                              Args:
                                  name: Name of the agent
                                  capabilities: Optional custom capabilities (defaults to minimal access)
                                  deps: Optional dependencies for the agent
                                  pool: Optional pool the agent is part of
                                  input_provider: Optional input provider for the agent
                              """
                              from llmling_agent.config.capabilities import Capabilities
                              from llmling_agent.models import AgentConfig, AgentsManifest
                      
                              caps = capabilities or Capabilities()
                              defn = AgentsManifest()
                              cfg = AgentConfig(name=name)
                              return cls(
                                  input_provider=input_provider,
                                  node_name=name,
                                  capabilities=caps,
                                  definition=defn,
                                  config=cfg,
                                  data=deps,
                                  pool=pool,
                              )
                      
                          @cached_property
                          def converter(self) -> ConversionManager:
                              """Get conversion manager from global config."""
                              return ConversionManager(self.definition.conversion)
                      
                          # TODO: perhaps add agent directly to context?
                          @property
                          def agent(self) -> AnyAgent[TDeps, Any]:
                              """Get the agent instance from the pool."""
                              assert self.pool, "No agent pool available"
                              assert self.node_name, "No agent name available"
                              return self.pool.agents[self.node_name]
                      
                          async def handle_confirmation(
                              self,
                              tool: ToolInfo,
                              args: dict[str, Any],
                          ) -> ConfirmationResult:
                              """Handle tool execution confirmation.
                      
                              Returns True if:
                              - No confirmation handler is set
                              - Handler confirms the execution
                              """
                              provider = self.get_input_provider()
                              mode = self.config.requires_tool_confirmation
                              if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                  return "allow"
                              history = self.agent.conversation.get_history() if self.pool else []
                              return await provider.get_tool_confirmation(self, tool, args, history)
                      

                      agent property

                      agent: AnyAgent[TDeps, Any]
                      

                      Get the agent instance from the pool.

                      capabilities instance-attribute

                      capabilities: Capabilities
                      

                      Current agent's capabilities.

                      config instance-attribute

                      config: AgentConfig
                      

                      Current agent's specific configuration.

                      converter cached property

                      converter: ConversionManager
                      

                      Get conversion manager from global config.

                      data class-attribute instance-attribute

                      data: TDeps | None = None
                      

                      Custom context data.

                      model_settings class-attribute instance-attribute

                      model_settings: dict[str, Any] = field(default_factory=dict)
                      

                      Model-specific settings.

                      runtime class-attribute instance-attribute

                      runtime: RuntimeConfig | None = None
                      

                      Reference to the runtime configuration.

                      create_default classmethod

                      create_default(
                          name: str,
                          capabilities: Capabilities | None = None,
                          deps: TDeps | None = None,
                          pool: AgentPool | None = None,
                          input_provider: InputProvider | None = None,
                      ) -> AgentContext[TDeps]
                      

                      Create a default agent context with minimal privileges.

Parameters:

- name (str): Name of the agent. Required.
- capabilities (Capabilities | None): Optional custom capabilities (defaults to minimal access). Default: None
- deps (TDeps | None): Optional dependencies for the agent. Default: None
- pool (AgentPool | None): Optional pool the agent is part of. Default: None
- input_provider (InputProvider | None): Optional input provider for the agent. Default: None
                      Source code in src/llmling_agent/agent/context.py
                      @classmethod
                      def create_default(
                          cls,
                          name: str,
                          capabilities: Capabilities | None = None,
                          deps: TDeps | None = None,
                          pool: AgentPool | None = None,
                          input_provider: InputProvider | None = None,
                      ) -> AgentContext[TDeps]:
                          """Create a default agent context with minimal privileges.
                      
                          Args:
                              name: Name of the agent
                              capabilities: Optional custom capabilities (defaults to minimal access)
                              deps: Optional dependencies for the agent
                              pool: Optional pool the agent is part of
                              input_provider: Optional input provider for the agent
                          """
                          from llmling_agent.config.capabilities import Capabilities
                          from llmling_agent.models import AgentConfig, AgentsManifest
                      
                          caps = capabilities or Capabilities()
                          defn = AgentsManifest()
                          cfg = AgentConfig(name=name)
                          return cls(
                              input_provider=input_provider,
                              node_name=name,
                              capabilities=caps,
                              definition=defn,
                              config=cfg,
                              data=deps,
                              pool=pool,
                          )
                      

                      handle_confirmation async

                      handle_confirmation(tool: ToolInfo, args: dict[str, Any]) -> ConfirmationResult
                      

                      Handle tool execution confirmation.

Returns True if:

- No confirmation handler is set
- Handler confirms the execution

                      Source code in src/llmling_agent/agent/context.py
                      async def handle_confirmation(
                          self,
                          tool: ToolInfo,
                          args: dict[str, Any],
                      ) -> ConfirmationResult:
                          """Handle tool execution confirmation.
                      
                          Returns True if:
                          - No confirmation handler is set
                          - Handler confirms the execution
                          """
                          provider = self.get_input_provider()
                          mode = self.config.requires_tool_confirmation
                          if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                              return "allow"
                          history = self.agent.conversation.get_history() if self.pool else []
                          return await provider.get_tool_confirmation(self, tool, args, history)
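The gating condition in this method is easy to misread; isolated as a pure function (a sketch of the logic, not the library's API), it reads:

```python
def needs_confirmation(mode: str, tool_requires_confirmation: bool) -> bool:
    """Decide whether the input provider should be asked before running a tool.

    Mirrors the guard above: 'never' skips confirmation entirely, 'per_tool'
    asks only for tools flagged as requiring it, anything else always asks.
    """
    if mode == "never":
        return False
    if mode == "per_tool":
        return tool_requires_confirmation
    return True  # e.g. "always"
```

Only when this returns True does the real method gather conversation history and delegate to `provider.get_tool_confirmation`; otherwise it short-circuits to `"allow"`.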
                      

                      AgentPool

                      Bases: BaseRegistry[NodeName, MessageEmitter[Any, Any]]

                      Pool managing message processing nodes (agents and teams).

Acts as a unified registry for all nodes, providing:

- Centralized node management and lookup
- Shared dependency injection
- Connection management
- Resource coordination

                      Nodes can be accessed through:
                      - nodes: All registered nodes (agents and teams)
                      - agents: Only Agent instances
                      - teams: Only Team instances

                      Source code in src/llmling_agent/delegation/pool.py
                      class AgentPool[TPoolDeps](BaseRegistry[NodeName, MessageEmitter[Any, Any]]):
                          """Pool managing message processing nodes (agents and teams).
                      
                          Acts as a unified registry for all nodes, providing:
                          - Centralized node management and lookup
                          - Shared dependency injection
                          - Connection management
                          - Resource coordination
                      
                          Nodes can be accessed through:
                          - nodes: All registered nodes (agents and teams)
                          - agents: Only Agent instances
                          - teams: Only Team instances
                          """
                      
                          def __init__(
                              self,
                              manifest: StrPath | AgentsManifest | None = None,
                              *,
                              shared_deps: TPoolDeps | None = None,
                              connect_nodes: bool = True,
                              input_provider: InputProvider | None = None,
                              parallel_load: bool = True,
                          ):
                              """Initialize agent pool with immediate agent creation.
                      
                              Args:
                                  manifest: Agent configuration manifest
                                  shared_deps: Dependencies to share across all nodes
                                  connect_nodes: Whether to set up forwarding connections
                                  input_provider: Input provider for tool / step confirmations / HumanAgents
                                  parallel_load: Whether to load nodes in parallel (async)
                      
                              Raises:
                                  ValueError: If manifest contains invalid node configurations
                                  RuntimeError: If node initialization fails
                              """
                              super().__init__()
                              from llmling_agent.models.manifest import AgentsManifest
                              from llmling_agent.storage import StorageManager
                      
                              match manifest:
                                  case None:
                                      self.manifest = AgentsManifest()
                                  case str():
                                      self.manifest = AgentsManifest.from_file(manifest)
                                  case AgentsManifest():
                                      self.manifest = manifest
                                  case _:
                                      msg = f"Invalid config path: {manifest}"
                                      raise ValueError(msg)
                              self.shared_deps = shared_deps
                              self._input_provider = input_provider
                              self.exit_stack = AsyncExitStack()
                              self.parallel_load = parallel_load
                              self.storage = StorageManager(self.manifest.storage)
                              self.connection_registry = ConnectionRegistry()
                              self.mcp = MCPManager(
                                  name="pool_mcp", servers=self.manifest.get_mcp_servers(), owner="pool"
                              )
                              self._tasks = TaskRegistry()
                              # Register tasks from manifest
                              for name, task in self.manifest.jobs.items():
                                  self._tasks.register(name, task)
                              self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                              if self.manifest.pool_server and self.manifest.pool_server.enabled:
                                  from llmling_agent.resource_providers.pool import PoolResourceProvider
                                  from llmling_agent_mcp.server import LLMLingServer
                      
                                  provider = PoolResourceProvider(
                                      self, zed_mode=self.manifest.pool_server.zed_mode
                                  )
                                  self.server: LLMLingServer | None = LLMLingServer(
                                      provider=provider,
                                      config=self.manifest.pool_server,
                                  )
                              else:
                                  self.server = None
                              # Create requested agents immediately
                              for name in self.manifest.agents:
                                  agent = self.manifest.get_agent(name, deps=shared_deps)
                                  self.register(name, agent)
                      
                              # Then set up worker relationships
                              for agent in self.agents.values():
                                  self.setup_agent_workers(agent)
                              self._create_teams()
                              # Set up forwarding connections
                              if connect_nodes:
                                  self._connect_nodes()
                      
                          async def __aenter__(self) -> Self:
                              """Enter async context and initialize all agents."""
                              try:
                                  # Add MCP tool provider to all agents
                                  agents = list(self.agents.values())
                                  teams = list(self.teams.values())
                                  for agent in agents:
                                      agent.tools.add_provider(self.mcp)
                      
                                  # Collect all components to initialize
                                  components: list[AbstractAsyncContextManager[Any]] = [
                                      self.mcp,
                                      *agents,
                                      *teams,
                                  ]
                      
                                  # Add MCP server if configured
                                  if self.server:
                                      components.append(self.server)
                      
                                  # Initialize all components
                                  if self.parallel_load:
                                      await asyncio.gather(
                                          *(self.exit_stack.enter_async_context(c) for c in components)
                                      )
                                  else:
                                      for component in components:
                                          await self.exit_stack.enter_async_context(component)
                      
                              except Exception as e:
                                  await self.cleanup()
                                  msg = "Failed to initialize agent pool"
                                  logger.exception(msg, exc_info=e)
                                  raise RuntimeError(msg) from e
                              return self
                      
                          async def __aexit__(
                              self,
                              exc_type: type[BaseException] | None,
                              exc_val: BaseException | None,
                              exc_tb: TracebackType | None,
                          ):
                              """Exit async context."""
                              # Remove MCP tool provider from all agents
                              for agent in self.agents.values():
                                  if self.mcp in agent.tools.providers:
                                      agent.tools.remove_provider(self.mcp)
                              await self.cleanup()
                      
                          async def cleanup(self):
                              """Clean up all agents."""
                              await self.exit_stack.aclose()
                              self.clear()
                      
                          @overload
                          def create_team_run(
                              self,
                              agents: Sequence[str],
                              validator: MessageNode[Any, TResult] | None = None,
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> TeamRun[TPoolDeps, TResult]: ...
                      
                          @overload
                          def create_team_run[TDeps, TResult](
                              self,
                              agents: Sequence[MessageNode[TDeps, Any]],
                              validator: MessageNode[Any, TResult] | None = None,
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> TeamRun[TDeps, TResult]: ...
                      
                          @overload
                          def create_team_run(
                              self,
                              agents: Sequence[AgentName | MessageNode[Any, Any]],
                              validator: MessageNode[Any, TResult] | None = None,
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> TeamRun[Any, TResult]: ...
                      
                          def create_team_run(
                              self,
                              agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                              validator: MessageNode[Any, TResult] | None = None,
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> TeamRun[Any, TResult]:
                              """Create a a sequential TeamRun from a list of Agents.
                      
                              Args:
                                  agents: List of agent names or team/agent instances (all if None)
                                  validator: Node to validate the results of the TeamRun
                                  name: Optional name for the team
                                  description: Optional description for the team
                                  shared_prompt: Optional prompt for all agents
                                  picker: Agent to use for picking agents
                                  num_picks: Number of agents to pick
                                  pick_prompt: Prompt to use for picking agents
                              """
                              from llmling_agent.delegation.teamrun import TeamRun
                      
                              if agents is None:
                                  agents = list(self.agents.keys())
                      
                              # First resolve/configure agents
                              resolved_agents: list[MessageNode[Any, Any]] = []
                              for agent in agents:
                                  if isinstance(agent, str):
                                      agent = self.get_agent(agent)
                                  resolved_agents.append(agent)
                              team = TeamRun(
                                  resolved_agents,
                                  name=name,
                                  description=description,
                                  validator=validator,
                                  shared_prompt=shared_prompt,
                                  picker=picker,
                                  num_picks=num_picks,
                                  pick_prompt=pick_prompt,
                              )
                              if name:
                                  self[name] = team
                              return team
                      
                          @overload
                          def create_team(self, agents: Sequence[str]) -> Team[TPoolDeps]: ...
                      
                          @overload
                          def create_team[TDeps](
                              self,
                              agents: Sequence[MessageNode[TDeps, Any]],
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> Team[TDeps]: ...
                      
                          @overload
                          def create_team(
                              self,
                              agents: Sequence[AgentName | MessageNode[Any, Any]],
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> Team[Any]: ...
                      
                          def create_team(
                              self,
                              agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                          ) -> Team[Any]:
                              """Create a group from agent names or instances.
                      
                              Args:
                                  agents: List of agent names or instances (all if None)
                                  name: Optional name for the team
                                  description: Optional description for the team
                                  shared_prompt: Optional prompt for all agents
                                  picker: Agent to use for picking agents
                                  num_picks: Number of agents to pick
                                  pick_prompt: Prompt to use for picking agents
                              """
                              from llmling_agent.delegation.team import Team
                      
                              if agents is None:
                                  agents = list(self.agents.keys())
                      
                              # First resolve/configure agents
                              resolved_agents: list[MessageNode[Any, Any]] = []
                              for agent in agents:
                                  if isinstance(agent, str):
                                      agent = self.get_agent(agent)
                                  resolved_agents.append(agent)
                      
                              team = Team(
                                  name=name,
                                  description=description,
                                  agents=resolved_agents,
                                  shared_prompt=shared_prompt,
                                  picker=picker,
                                  num_picks=num_picks,
                                  pick_prompt=pick_prompt,
                              )
                              if name:
                                  self[name] = team
                              return team
                      
                          @asynccontextmanager
                          async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                              """Track message flow during a context."""
                              tracker = MessageFlowTracker()
                              self.connection_registry.message_flow.connect(tracker.track)
                              try:
                                  yield tracker
                              finally:
                                  self.connection_registry.message_flow.disconnect(tracker.track)
                      
                          async def run_event_loop(self):
                              """Run pool in event-watching mode until interrupted."""
                              import sys
                      
                              print("Starting event watch mode...")
                              print("Active nodes: ", ", ".join(self.list_nodes()))
                              print("Press Ctrl+C to stop")
                      
                              stop_event = asyncio.Event()
                      
                              if sys.platform != "win32":
                                  # Unix: Use signal handlers
                                  loop = asyncio.get_running_loop()
                                  for sig in (signal.SIGINT, signal.SIGTERM):
                                      loop.add_signal_handler(sig, stop_event.set)
                                   await stop_event.wait()
                              else:
                                  # Windows: Use keyboard interrupt
                                  with suppress(KeyboardInterrupt):
                                      while True:
                                          await asyncio.sleep(1)
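On Unix the loop simply awaits the stop event that the signal handlers set. A runnable sketch of that wiring, using a timer in place of a real SIGINT so it terminates on its own:

```python
import asyncio


async def main():
    stop_event = asyncio.Event()
    loop = asyncio.get_running_loop()
    # In the pool this is wired via loop.add_signal_handler(sig, stop_event.set)
    # for SIGINT/SIGTERM; here a timer stands in for the signal.
    loop.call_later(0.05, stop_event.set)
    await stop_event.wait()
    return "stopped"


result = asyncio.run(main())
```

`Event.wait()` suspends the coroutine until `set()` fires, with no polling loop.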
                      
                          @property
                          def agents(self) -> dict[str, AnyAgent[Any, Any]]:
                              """Get agents dict (backward compatibility)."""
                              return {
                                  i.name: i
                                  for i in self._items.values()
                                  if isinstance(i, Agent | StructuredAgent)
                              }
                      
                          @property
                          def teams(self) -> dict[str, BaseTeam[Any, Any]]:
                              """Get agents dict (backward compatibility)."""
                              from llmling_agent.delegation.base_team import BaseTeam
                      
                              return {i.name: i for i in self._items.values() if isinstance(i, BaseTeam)}
                      
                          @property
                          def nodes(self) -> dict[str, MessageNode[Any, Any]]:
                              """Get agents dict (backward compatibility)."""
                              from llmling_agent.messaging.messagenode import MessageNode
                      
                              return {i.name: i for i in self._items.values() if isinstance(i, MessageNode)}
                      
                          @property
                          def event_nodes(self) -> dict[str, EventNode[Any]]:
                              """Get agents dict (backward compatibility)."""
                              from llmling_agent.messaging.eventnode import EventNode
                      
                              return {i.name: i for i in self._items.values() if isinstance(i, EventNode)}
                      
                          @property
                          def node_events(self) -> DictEvents:
                              """Get node events."""
                              return self._items.events
                      
                          @property
                          def _error_class(self) -> type[LLMLingError]:
                              """Error class for agent operations."""
                              return LLMLingError
                      
                          def _validate_item(
                              self, item: MessageEmitter[Any, Any] | Any
                          ) -> MessageEmitter[Any, Any]:
                              """Validate and convert items before registration.
                      
                              Args:
                                  item: Item to validate
                      
                              Returns:
                                  Validated Node
                      
                              Raises:
                                   LLMLingError: If item is not a valid node
                              """
                              if not isinstance(item, MessageEmitter):
                                  msg = f"Item must be Agent or Team, got {type(item)}"
                                  raise self._error_class(msg)
                              item.context.pool = self
                              return item
                      
                          def _create_teams(self):
                              """Create all teams in two phases to allow nesting."""
                              # Phase 1: Create empty teams
                      
                              empty_teams: dict[str, BaseTeam[Any, Any]] = {}
                              for name, config in self.manifest.teams.items():
                                  if config.mode == "parallel":
                                      empty_teams[name] = Team(
                                          [], name=name, shared_prompt=config.shared_prompt
                                      )
                                  else:
                                      empty_teams[name] = TeamRun(
                                          [], name=name, shared_prompt=config.shared_prompt
                                      )
                      
                              # Phase 2: Resolve members
                              for name, config in self.manifest.teams.items():
                                  team = empty_teams[name]
                                  members: list[MessageNode[Any, Any]] = []
                                  for member in config.members:
                                      if member in self.agents:
                                          members.append(self.agents[member])
                                      elif member in empty_teams:
                                          members.append(empty_teams[member])
                                      else:
                                          msg = f"Unknown team member: {member}"
                                          raise ValueError(msg)
                                  team.agents.extend(members)
                                  self[name] = team
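Creating empty teams first and filling members second is what lets a team reference another team defined later in the manifest. The two-phase idea in miniature, with dicts and lists standing in for the real config and team classes:

```python
# Hypothetical manifest: "outer" contains "inner", defined after it.
manifest = {"outer": ["inner", "a"], "inner": ["b"]}
agents = {"a": "agent-a", "b": "agent-b"}

# Phase 1: create empty teams so every team name is resolvable.
teams: dict[str, list] = {name: [] for name in manifest}

# Phase 2: resolve members, preferring agents, then teams.
for name, members in manifest.items():
    for member in members:
        if member in agents:
            teams[name].append(agents[member])
        elif member in teams:
            teams[name].append(teams[member])
        else:
            raise ValueError(f"Unknown team member: {member}")
```

Because "outer" holds a reference to the "inner" list object, "inner" is complete once phase 2 finishes, regardless of definition order.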
                      
                          def _connect_nodes(self):
                              """Set up connections defined in manifest."""
                              # Merge agent and team configs into one dict of nodes with connections
                              for name, config in self.manifest.nodes.items():
                                  source = self[name]
                                  for target in config.connections or []:
                                      match target:
                                          case NodeConnectionConfig():
                                              if target.name not in self:
                                                  msg = f"Forward target {target.name} not found for {name}"
                                                  raise ValueError(msg)
                                              target_node = self[target.name]
                                          case FileConnectionConfig() | CallableConnectionConfig():
                                              target_node = Agent(provider=target.get_provider())
                                          case _:
                                              msg = f"Invalid connection config: {target}"
                                              raise ValueError(msg)
                      
                                      source.connect_to(
                                          target_node,  # type: ignore  # recognized as "Any | BaseTeam[Any, Any]" by mypy?
                                          connection_type=target.connection_type,
                                          name=name,
                                          priority=target.priority,
                                          delay=target.delay,
                                          queued=target.queued,
                                          queue_strategy=target.queue_strategy,
                                          transform=target.transform,
                                          filter_condition=target.filter_condition.check
                                          if target.filter_condition
                                          else None,
                                          stop_condition=target.stop_condition.check
                                          if target.stop_condition
                                          else None,
                                          exit_condition=target.exit_condition.check
                                          if target.exit_condition
                                          else None,
                                      )
                                      source.connections.set_wait_state(
                                          target_node,
                                          wait=target.wait_for_completion,
                                      )
                      
                          @overload
                          async def clone_agent[TDeps](
                              self,
                              agent: AgentName | Agent[TDeps],
                              new_name: AgentName | None = None,
                              *,
                              system_prompts: list[str] | None = None,
                              template_context: dict[str, Any] | None = None,
                          ) -> Agent[TDeps]: ...
                      
                          @overload
                          async def clone_agent[TDeps, TResult](
                              self,
                              agent: StructuredAgent[TDeps, TResult],
                              new_name: AgentName | None = None,
                              *,
                              system_prompts: list[str] | None = None,
                              template_context: dict[str, Any] | None = None,
                          ) -> StructuredAgent[TDeps, TResult]: ...
                      
                          async def clone_agent[TDeps, TAgentResult](
                              self,
                              agent: AgentName | AnyAgent[TDeps, TAgentResult],
                              new_name: AgentName | None = None,
                              *,
                              system_prompts: list[str] | None = None,
                              template_context: dict[str, Any] | None = None,
                          ) -> AnyAgent[TDeps, TAgentResult]:
                              """Create a copy of an agent.
                      
                              Args:
                                  agent: Agent instance or name to clone
                                  new_name: Optional name for the clone
                                  system_prompts: Optional different prompts
                                  template_context: Variables for template rendering
                      
                              Returns:
                                  The new agent instance
                              """
                              from llmling_agent.agent import Agent, StructuredAgent
                      
                              # Get original config
                              if isinstance(agent, str):
                                  if agent not in self.manifest.agents:
                                      msg = f"Agent {agent} not found"
                                      raise KeyError(msg)
                                  config = self.manifest.agents[agent]
                                  original_agent: AnyAgent[Any, Any] = self.get_agent(agent)
                              else:
                                  config = agent.context.config  # type: ignore
                                  original_agent = agent
                      
                              # Create new config
                              new_config = config.model_copy(deep=True)
                      
                              # Apply overrides
                              if system_prompts:
                                  new_config.system_prompts = system_prompts
                      
                              # Handle template rendering
                              if template_context:
                                  new_config.system_prompts = new_config.render_system_prompts(template_context)
                      
                              # Create new agent with same runtime
                              new_agent = Agent[TDeps](
                                  runtime=original_agent.runtime,
                                  context=original_agent.context,
                                  # result_type=original_agent.actual_type,
                                  provider=new_config.get_provider(),
                                  system_prompt=new_config.system_prompts,
                                  name=new_name or f"{config.name}_copy_{len(self.agents)}",
                              )
                              if isinstance(original_agent, StructuredAgent):
                                  new_agent = new_agent.to_structured(original_agent.actual_type)
                      
                              # Register in pool
                              agent_name = new_agent.name
                              self.manifest.agents[agent_name] = new_config
                               self[agent_name] = new_agent
                              return await self.exit_stack.enter_async_context(new_agent)
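Cloning copies the original configuration deeply before applying overrides, so the source agent is never mutated. A sketch of the copy-then-override pattern, with a hypothetical dataclass in place of `AgentConfig` (whose `model_copy(deep=True)` plays the role of `deepcopy` here):

```python
from copy import deepcopy
from dataclasses import dataclass, field


@dataclass
class Config:
    """Hypothetical stand-in for AgentConfig."""

    name: str
    system_prompts: list[str] = field(default_factory=list)


original = Config("analyzer", ["You analyze data."])

# Deep copy first, then apply overrides to the copy only.
clone = deepcopy(original)
clone.name = "analyzer_copy_1"
clone.system_prompts = ["You summarize data."]
```

A shallow copy would share the `system_prompts` list, so in-place edits to the clone could leak back into the original; the deep copy avoids that.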
                      
                          @overload
                          async def create_agent(
                              self,
                              name: AgentName,
                              *,
                              session: SessionIdType | SessionQuery = None,
                              name_override: str | None = None,
                          ) -> Agent[TPoolDeps]: ...
                      
                          @overload
                          async def create_agent[TCustomDeps](
                              self,
                              name: AgentName,
                              *,
                              deps: TCustomDeps,
                              session: SessionIdType | SessionQuery = None,
                              name_override: str | None = None,
                          ) -> Agent[TCustomDeps]: ...
                      
                          @overload
                          async def create_agent[TResult](
                              self,
                              name: AgentName,
                              *,
                              return_type: type[TResult],
                              session: SessionIdType | SessionQuery = None,
                              name_override: str | None = None,
                          ) -> StructuredAgent[TPoolDeps, TResult]: ...
                      
                          @overload
                          async def create_agent[TCustomDeps, TResult](
                              self,
                              name: AgentName,
                              *,
                              deps: TCustomDeps,
                              return_type: type[TResult],
                              session: SessionIdType | SessionQuery = None,
                              name_override: str | None = None,
                          ) -> StructuredAgent[TCustomDeps, TResult]: ...
                      
                          async def create_agent(
                              self,
                              name: AgentName,
                              *,
                              deps: Any | None = None,
                              return_type: Any | None = None,
                              session: SessionIdType | SessionQuery = None,
                              name_override: str | None = None,
                          ) -> AnyAgent[Any, Any]:
                              """Create a new agent instance from configuration.
                      
                              Args:
                                  name: Name of the agent configuration to use
                                  deps: Optional custom dependencies (overrides pool deps)
                                  return_type: Optional type for structured responses
                                  session: Optional session ID or query to recover conversation
                                  name_override: Optional different name for this instance
                      
                              Returns:
                                  New agent instance with the specified configuration
                      
                              Raises:
                                  KeyError: If agent configuration not found
                                  ValueError: If configuration is invalid
                              """
                              if name not in self.manifest.agents:
                                  msg = f"Agent configuration {name!r} not found"
                                  raise KeyError(msg)
                      
                              # Use Manifest.get_agent for proper initialization
                              final_deps = deps if deps is not None else self.shared_deps
                              agent = self.manifest.get_agent(name, deps=final_deps)
                              # Override name if requested
                              if name_override:
                                  agent.name = name_override
                      
                              # Set pool reference
                              agent.context.pool = self
                      
                              # Handle session if provided
                              if session:
                                  agent.conversation.load_history_from_database(session=session)
                      
                              # Initialize agent through exit stack
                              agent = await self.exit_stack.enter_async_context(agent)
                      
                              # Override structured configuration if provided
                              if return_type is not None:
                                  return agent.to_structured(return_type)
                      
                              return agent
                      
                          def setup_agent_workers(self, agent: AnyAgent[Any, Any]):
                              """Set up workers for an agent from configuration."""
                              for worker_config in agent.context.config.workers:
                                  try:
                                      worker = self.get_agent(worker_config.name)
                                      agent.register_worker(
                                          worker,
                                          name=worker_config.name,
                                          reset_history_on_run=worker_config.reset_history_on_run,
                                          pass_message_history=worker_config.pass_message_history,
                                          share_context=worker_config.share_context,
                                      )
                                  except KeyError as e:
                                      msg = f"Worker agent {worker_config.name!r} not found"
                                      raise ValueError(msg) from e
                      
                          @overload
                          def get_agent(
                              self,
                              agent: AgentName | Agent[Any],
                              *,
                              model_override: str | None = None,
                              session: SessionIdType | SessionQuery = None,
                          ) -> Agent[TPoolDeps]: ...
                      
                          @overload
                          def get_agent[TResult](
                              self,
                              agent: AgentName | Agent[Any],
                              *,
                              return_type: type[TResult],
                              model_override: str | None = None,
                              session: SessionIdType | SessionQuery = None,
                          ) -> StructuredAgent[TPoolDeps, TResult]: ...
                      
                          @overload
                          def get_agent[TCustomDeps](
                              self,
                              agent: AgentName | Agent[Any],
                              *,
                              deps: TCustomDeps,
                              model_override: str | None = None,
                              session: SessionIdType | SessionQuery = None,
                          ) -> Agent[TCustomDeps]: ...
                      
                          @overload
                          def get_agent[TCustomDeps, TResult](
                              self,
                              agent: AgentName | Agent[Any],
                              *,
                              deps: TCustomDeps,
                              return_type: type[TResult],
                              model_override: str | None = None,
                              session: SessionIdType | SessionQuery = None,
                          ) -> StructuredAgent[TCustomDeps, TResult]: ...
                      
                          def get_agent(
                              self,
                              agent: AgentName | Agent[Any],
                              *,
                              deps: Any | None = None,
                              return_type: Any | None = None,
                              model_override: str | None = None,
                              session: SessionIdType | SessionQuery = None,
                          ) -> AnyAgent[Any, Any]:
                              """Get or configure an agent from the pool.
                      
                              This method provides flexible agent configuration with dependency injection:
                              - Without deps: Agent uses pool's shared dependencies
                              - With deps: Agent uses provided custom dependencies
                              - With return_type: Returns a StructuredAgent with type validation
                      
                              Args:
                                  agent: Either agent name or instance
                                  deps: Optional custom dependencies (overrides shared deps)
                                  return_type: Optional type for structured responses
                                  model_override: Optional model override
                                  session: Optional session ID or query to recover conversation
                      
                              Returns:
                                  Either:
                                  - Agent[TPoolDeps] when using pool's shared deps
                                  - Agent[TCustomDeps] when custom deps provided
                                  - StructuredAgent when return_type provided
                      
                              Raises:
                                  KeyError: If agent name not found
                                  ValueError: If configuration is invalid
                              """
                              from llmling_agent.agent import Agent
                              from llmling_agent.agent.context import AgentContext
                      
                              # Get base agent
                              base = agent if isinstance(agent, Agent) else self.agents[agent]
                      
                              # Setup context and dependencies
                              if base.context is None:
                                  base.context = AgentContext[Any].create_default(base.name)
                      
                              # Use custom deps if provided, otherwise use shared deps
                              base.context.data = deps if deps is not None else self.shared_deps
                              base.context.pool = self
                      
                              # Apply overrides
                              if model_override:
                                  base.set_model(model_override)
                      
                              if session:
                                  base.conversation.load_history_from_database(session=session)
                      
                              # Convert to structured if needed
                              if return_type is not None:
                                  return base.to_structured(return_type)
                      
                              return base
                      
                          def list_nodes(self) -> list[str]:
                              """List available agent names."""
                              return list(self.list_items())
                      
                           def get_job(self, name: str) -> Job[Any, Any]:
                               """Get a registered job by name."""
                               return self._tasks[name]
                       
                           def register_task(self, name: str, task: Job[Any, Any]):
                               """Register a job under the given name."""
                               self._tasks.register(name, task)
                      
                          @overload
                          async def add_agent(
                              self,
                              name: AgentName,
                              *,
                              result_type: None = None,
                              **kwargs: Unpack[AgentKwargs],
                          ) -> Agent[Any]: ...
                      
                          @overload
                          async def add_agent[TResult](
                              self,
                              name: AgentName,
                              *,
                              result_type: type[TResult] | str | ResponseDefinition,
                              **kwargs: Unpack[AgentKwargs],
                          ) -> StructuredAgent[Any, TResult]: ...
                      
                          async def add_agent(
                              self,
                              name: AgentName,
                              *,
                              result_type: type[Any] | str | ResponseDefinition | None = None,
                              **kwargs: Unpack[AgentKwargs],
                          ) -> Agent[Any] | StructuredAgent[Any, Any]:
                              """Add a new permanent agent to the pool.
                      
                              Args:
                                  name: Name for the new agent
                                  result_type: Optional type for structured responses:
                                      - None: Regular unstructured agent
                                      - type: Python type for validation
                                      - str: Name of response definition
                                      - ResponseDefinition: Complete response definition
                                  **kwargs: Additional agent configuration
                      
                              Returns:
                                  Either a regular Agent or StructuredAgent depending on result_type
                              """
                              from llmling_agent.agent import Agent
                      
                              agent: AnyAgent[Any, Any] = Agent(name=name, **kwargs)
                              agent.tools.add_provider(self.mcp)
                              agent = await self.exit_stack.enter_async_context(agent)
                              # Convert to structured if needed
                              if result_type is not None:
                                  agent = agent.to_structured(result_type)
                              self.register(name, agent)
                              return agent
                      
                          def get_mermaid_diagram(
                              self,
                              include_details: bool = True,
                          ) -> str:
                              """Generate mermaid flowchart of all agents and their connections.
                      
                              Args:
                                  include_details: Whether to show connection details (types, queues, etc)
                              """
                              lines = ["flowchart LR"]
                      
                              # Add all agents as nodes
                              for name in self.agents:
                                  lines.append(f"    {name}[{name}]")  # noqa: PERF401
                      
                              # Add all connections as edges
                              for agent in self.agents.values():
                                  connections = agent.connections.get_connections()
                                  for talk in connections:
                                      talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                      source = talk.source.name
                                      for target in talk.targets:
                                          if include_details:
                                              details: list[str] = []
                                              details.append(talk.connection_type)
                                              if talk.queued:
                                                  details.append(f"queued({talk.queue_strategy})")
                                              if fn := talk.filter_condition:  # type: ignore
                                                  details.append(f"filter:{fn.__name__}")
                                              if fn := talk.stop_condition:  # type: ignore
                                                  details.append(f"stop:{fn.__name__}")
                                              if fn := talk.exit_condition:  # type: ignore
                                                  details.append(f"exit:{fn.__name__}")
                      
                                              label = f"|{' '.join(details)}|" if details else ""
                                              lines.append(f"    {source}--{label}-->{target.name}")
                                          else:
                                              lines.append(f"    {source}-->{target.name}")
                      
                              return "\n".join(lines)
                      

                      _error_class property

                      _error_class: type[LLMLingError]
                      

                      Error class for agent operations.

                      agents property

                      agents: dict[str, AnyAgent[Any, Any]]
                      

                      Get agents dict (backward compatibility).

                      event_nodes property

                      event_nodes: dict[str, EventNode[Any]]
                      

                      Get event nodes dict.

                      node_events property

                      node_events: DictEvents
                      

                      Get node events.

                      nodes property

                      nodes: dict[str, MessageNode[Any, Any]]
                      

                      Get all message nodes (agents and teams).

                      teams property

                      teams: dict[str, BaseTeam[Any, Any]]
                      

                      Get teams dict.

                      __aenter__ async

                      __aenter__() -> Self
                      

                      Enter async context and initialize all agents.

                      Source code in src/llmling_agent/delegation/pool.py
                      async def __aenter__(self) -> Self:
                          """Enter async context and initialize all agents."""
                          try:
                              # Add MCP tool provider to all agents
                              agents = list(self.agents.values())
                              teams = list(self.teams.values())
                              for agent in agents:
                                  agent.tools.add_provider(self.mcp)
                      
                              # Collect all components to initialize
                              components: list[AbstractAsyncContextManager[Any]] = [
                                  self.mcp,
                                  *agents,
                                  *teams,
                              ]
                      
                              # Add MCP server if configured
                              if self.server:
                                  components.append(self.server)
                      
                              # Initialize all components
                              if self.parallel_load:
                                  await asyncio.gather(
                                      *(self.exit_stack.enter_async_context(c) for c in components)
                                  )
                              else:
                                  for component in components:
                                      await self.exit_stack.enter_async_context(component)
                      
                          except Exception as e:
                              await self.cleanup()
                              msg = "Failed to initialize agent pool"
                              logger.exception(msg, exc_info=e)
                              raise RuntimeError(msg) from e
                          return self
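`__aenter__` pushes every component onto an `AsyncExitStack`, either concurrently via `asyncio.gather` or one by one. A minimal, self-contained sketch of that strategy (the `Component` class is a stand-in for the pool's agents, teams, and MCP manager):

```python
# Sketch of the parallel-vs-sequential initialization used in __aenter__.
# All components enter one AsyncExitStack, which guarantees LIFO teardown.
import asyncio
from contextlib import AsyncExitStack

class Component:
    def __init__(self, name: str, log: list[str]):
        self.name = name
        self.log = log

    async def __aenter__(self):
        self.log.append(f"enter:{self.name}")
        return self

    async def __aexit__(self, *exc):
        self.log.append(f"exit:{self.name}")

async def init_pool(parallel: bool) -> list[str]:
    log: list[str] = []
    components = [Component("mcp", log), Component("agent_a", log), Component("agent_b", log)]
    async with AsyncExitStack() as stack:
        if parallel:
            await asyncio.gather(*(stack.enter_async_context(c) for c in components))
        else:
            for c in components:
                await stack.enter_async_context(c)
    return log

log = asyncio.run(init_pool(parallel=False))
# Sequential: components enter in order and exit in reverse order.
```

Either way, a failure during entry unwinds only the components that were already initialized, which is what makes the `cleanup()` call in the `except` branch safe.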
                      

                      __aexit__ async

                      __aexit__(
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      )
                      

                      Exit async context.

                      Source code in src/llmling_agent/delegation/pool.py
                      async def __aexit__(
                          self,
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      ):
                          """Exit async context."""
                          # Remove MCP tool provider from all agents
                          for agent in self.agents.values():
                              if self.mcp in agent.tools.providers:
                                  agent.tools.remove_provider(self.mcp)
                          await self.cleanup()
                      

                      __init__

                      __init__(
                          manifest: StrPath | AgentsManifest | None = None,
                          *,
                          shared_deps: TPoolDeps | None = None,
                          connect_nodes: bool = True,
                          input_provider: InputProvider | None = None,
                          parallel_load: bool = True,
                      )
                      

                      Initialize agent pool with immediate agent creation.

Parameters:

    manifest (StrPath | AgentsManifest | None, default: None)
        Agent configuration manifest
    shared_deps (TPoolDeps | None, default: None)
        Dependencies to share across all nodes
    connect_nodes (bool, default: True)
        Whether to set up forwarding connections
    input_provider (InputProvider | None, default: None)
        Input provider for tool / step confirmations / HumanAgents
    parallel_load (bool, default: True)
        Whether to load nodes in parallel (async)

Raises:

    ValueError: If manifest contains invalid node configurations
    RuntimeError: If node initialization fails

                      Source code in src/llmling_agent/delegation/pool.py
                      def __init__(
                          self,
                          manifest: StrPath | AgentsManifest | None = None,
                          *,
                          shared_deps: TPoolDeps | None = None,
                          connect_nodes: bool = True,
                          input_provider: InputProvider | None = None,
                          parallel_load: bool = True,
                      ):
                          """Initialize agent pool with immediate agent creation.
                      
                          Args:
                              manifest: Agent configuration manifest
                              shared_deps: Dependencies to share across all nodes
                              connect_nodes: Whether to set up forwarding connections
                              input_provider: Input provider for tool / step confirmations / HumanAgents
                              parallel_load: Whether to load nodes in parallel (async)
                      
                          Raises:
                              ValueError: If manifest contains invalid node configurations
                              RuntimeError: If node initialization fails
                          """
                          super().__init__()
                          from llmling_agent.models.manifest import AgentsManifest
                          from llmling_agent.storage import StorageManager
                      
                          match manifest:
                              case None:
                                  self.manifest = AgentsManifest()
                              case str():
                                  self.manifest = AgentsManifest.from_file(manifest)
                              case AgentsManifest():
                                  self.manifest = manifest
                              case _:
                                  msg = f"Invalid config path: {manifest}"
                                  raise ValueError(msg)
                          self.shared_deps = shared_deps
                          self._input_provider = input_provider
                          self.exit_stack = AsyncExitStack()
                          self.parallel_load = parallel_load
                          self.storage = StorageManager(self.manifest.storage)
                          self.connection_registry = ConnectionRegistry()
                          self.mcp = MCPManager(
                              name="pool_mcp", servers=self.manifest.get_mcp_servers(), owner="pool"
                          )
                          self._tasks = TaskRegistry()
                          # Register tasks from manifest
                          for name, task in self.manifest.jobs.items():
                              self._tasks.register(name, task)
                          self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                          if self.manifest.pool_server and self.manifest.pool_server.enabled:
                              from llmling_agent.resource_providers.pool import PoolResourceProvider
                              from llmling_agent_mcp.server import LLMLingServer
                      
                              provider = PoolResourceProvider(
                                  self, zed_mode=self.manifest.pool_server.zed_mode
                              )
                              self.server: LLMLingServer | None = LLMLingServer(
                                  provider=provider,
                                  config=self.manifest.pool_server,
                              )
                          else:
                              self.server = None
                          # Create requested agents immediately
                          for name in self.manifest.agents:
                              agent = self.manifest.get_agent(name, deps=shared_deps)
                              self.register(name, agent)
                      
                          # Then set up worker relationships
                          for agent in self.agents.values():
                              self.setup_agent_workers(agent)
                          self._create_teams()
                          # Set up forwarding connections
                          if connect_nodes:
                              self._connect_nodes()
                      

                      _connect_nodes

                      _connect_nodes()
                      

                      Set up connections defined in manifest.

                      Source code in src/llmling_agent/delegation/pool.py
                      def _connect_nodes(self):
                          """Set up connections defined in manifest."""
                          # Merge agent and team configs into one dict of nodes with connections
                          for name, config in self.manifest.nodes.items():
                              source = self[name]
                              for target in config.connections or []:
                                  match target:
                                      case NodeConnectionConfig():
                                          if target.name not in self:
                                              msg = f"Forward target {target.name} not found for {name}"
                                              raise ValueError(msg)
                                          target_node = self[target.name]
                                      case FileConnectionConfig() | CallableConnectionConfig():
                                          target_node = Agent(provider=target.get_provider())
                                      case _:
                                          msg = f"Invalid connection config: {target}"
                                          raise ValueError(msg)
                      
                                  source.connect_to(
                                      target_node,  # type: ignore  # recognized as "Any | BaseTeam[Any, Any]" by mypy?
                                      connection_type=target.connection_type,
                                      name=name,
                                      priority=target.priority,
                                      delay=target.delay,
                                      queued=target.queued,
                                      queue_strategy=target.queue_strategy,
                                      transform=target.transform,
                                      filter_condition=target.filter_condition.check
                                      if target.filter_condition
                                      else None,
                                      stop_condition=target.stop_condition.check
                                      if target.stop_condition
                                      else None,
                                      exit_condition=target.exit_condition.check
                                      if target.exit_condition
                                      else None,
                                  )
                                  source.connections.set_wait_state(
                                      target_node,
                                      wait=target.wait_for_completion,
                                  )
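Each connection configured above can carry filter, stop, and exit conditions. As a toy, self-contained sketch of the forwarding semantics being wired up (class and method names here are illustrative, not the library's actual API):

```python
# Toy model of a configured connection: messages flow to a target list,
# skipping those rejected by filter_condition and deactivating the
# connection once stop_condition fires.
from typing import Callable

class Connection:
    def __init__(
        self,
        target: list[str],
        filter_condition: "Callable[[str], bool] | None" = None,
        stop_condition: "Callable[[str], bool] | None" = None,
    ):
        self.target = target
        self.filter_condition = filter_condition
        self.stop_condition = stop_condition
        self.active = True

    def forward(self, message: str):
        if not self.active:
            return
        if self.stop_condition and self.stop_condition(message):
            self.active = False  # stop forwarding from here on
            return
        if self.filter_condition and not self.filter_condition(message):
            return               # silently drop filtered messages
        self.target.append(message)

inbox: list[str] = []
conn = Connection(
    inbox,
    filter_condition=lambda m: m.startswith("task:"),
    stop_condition=lambda m: m == "STOP",
)
for msg in ["task:a", "chatter", "task:b", "STOP", "task:c"]:
    conn.forward(msg)
# inbox is ["task:a", "task:b"]; "STOP" deactivated the connection.
```

The real implementation additionally supports transforms, priorities, delays, and queueing strategies, as the `connect_to` call above shows.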
                      

                      _create_teams

                      _create_teams()
                      

                      Create all teams in two phases to allow nesting.

                      Source code in src/llmling_agent/delegation/pool.py
                      def _create_teams(self):
                          """Create all teams in two phases to allow nesting."""
                          # Phase 1: Create empty teams
                      
                          empty_teams: dict[str, BaseTeam[Any, Any]] = {}
                          for name, config in self.manifest.teams.items():
                              if config.mode == "parallel":
                                  empty_teams[name] = Team(
                                      [], name=name, shared_prompt=config.shared_prompt
                                  )
                              else:
                                  empty_teams[name] = TeamRun(
                                      [], name=name, shared_prompt=config.shared_prompt
                                  )
                      
                          # Phase 2: Resolve members
                          for name, config in self.manifest.teams.items():
                              team = empty_teams[name]
                              members: list[MessageNode[Any, Any]] = []
                              for member in config.members:
                                  if member in self.agents:
                                      members.append(self.agents[member])
                                  elif member in empty_teams:
                                      members.append(empty_teams[member])
                                  else:
                                      msg = f"Unknown team member: {member}"
                                      raise ValueError(msg)
                              team.agents.extend(members)
                              self[name] = team
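The two-phase construction is what allows a team to contain another team: every team object exists (empty) before any member list is resolved. A self-contained sketch of the pattern, with plain lists standing in for `Team`/`TeamRun` objects:

```python
# Sketch of the two-phase team construction in _create_teams:
# Phase 1 creates empty teams so nested references resolve;
# Phase 2 fills in members, which may be agents or other teams.

def create_teams(
    agents: dict[str, str],
    team_configs: dict[str, list[str]],
) -> dict[str, list[object]]:
    # Phase 1: create all teams empty
    teams: dict[str, list[object]] = {name: [] for name in team_configs}
    # Phase 2: resolve members
    for name, members in team_configs.items():
        for member in members:
            if member in agents:
                teams[name].append(agents[member])
            elif member in teams:
                teams[name].append(teams[member])
            else:
                raise ValueError(f"Unknown team member: {member}")
    return teams

teams = create_teams(
    agents={"analyzer": "analyzer", "writer": "writer"},
    team_configs={"inner": ["analyzer"], "outer": ["inner", "writer"]},
)
# "outer" holds a reference to the "inner" team object itself, so later
# additions to "inner" are visible through "outer" — the point of phase 1.
```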
                      

                      _validate_item

                      _validate_item(item: MessageEmitter[Any, Any] | Any) -> MessageEmitter[Any, Any]
                      

                      Validate and convert items before registration.

Parameters:

    item (MessageEmitter[Any, Any] | Any, required)
        Item to validate

Returns:

    MessageEmitter[Any, Any]: Validated node

Raises:

    LLMlingError: If item is not a valid node

                      Source code in src/llmling_agent/delegation/pool.py
                      def _validate_item(
                          self, item: MessageEmitter[Any, Any] | Any
                      ) -> MessageEmitter[Any, Any]:
                          """Validate and convert items before registration.
                      
                          Args:
                              item: Item to validate
                      
                          Returns:
                              Validated Node
                      
                          Raises:
                              LLMlingError: If item is not a valid node
                          """
                          if not isinstance(item, MessageEmitter):
                              msg = f"Item must be Agent or Team, got {type(item)}"
                              raise self._error_class(msg)
                          item.context.pool = self
                          return item
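The guard above is a standard registry-validation pattern: reject anything that is not an instance of the shared base class before it enters the registry. A minimal sketch under assumed names (`MessageEmitter`, `DummyAgent`, and `Registry` here are illustrative stand-ins):

```python
# Sketch of the registration guard in _validate_item: items must share a
# common base class, otherwise registration fails with a clear error.

class MessageEmitter:
    """Stand-in for the library's common node base class."""

class DummyAgent(MessageEmitter):
    pass

class Registry:
    def __init__(self):
        self._items: dict[str, MessageEmitter] = {}

    def register(self, name: str, item: object) -> None:
        if not isinstance(item, MessageEmitter):
            raise TypeError(f"Item must be Agent or Team, got {type(item)}")
        self._items[name] = item

reg = Registry()
reg.register("worker", DummyAgent())  # accepted
# reg.register("bad", "not a node") would raise TypeError
```

Validating at registration time keeps the error close to its cause, instead of failing later when the pool tries to route messages through a non-node.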
                      

                      add_agent async

                      add_agent(
                          name: AgentName, *, result_type: None = None, **kwargs: Unpack[AgentKwargs]
                      ) -> Agent[Any]
                      
                      add_agent(
                          name: AgentName,
                          *,
                          result_type: type[TResult] | str | ResponseDefinition,
                          **kwargs: Unpack[AgentKwargs],
                      ) -> StructuredAgent[Any, TResult]
                      
                      add_agent(
                          name: AgentName,
                          *,
                          result_type: type[Any] | str | ResponseDefinition | None = None,
                          **kwargs: Unpack[AgentKwargs],
                      ) -> Agent[Any] | StructuredAgent[Any, Any]
                      

                      Add a new permanent agent to the pool.

Parameters:

    name (AgentName, required):
        Name for the new agent
    result_type (type[Any] | str | ResponseDefinition | None, default None):
        Optional type for structured responses:
        - None: Regular unstructured agent
        - type: Python type for validation
        - str: Name of response definition
        - ResponseDefinition: Complete response definition
    **kwargs (Unpack[AgentKwargs], default {}):
        Additional agent configuration

Returns:

    Agent[Any] | StructuredAgent[Any, Any]:
        Either a regular Agent or StructuredAgent depending on result_type

                      Source code in src/llmling_agent/delegation/pool.py
                      async def add_agent(
                          self,
                          name: AgentName,
                          *,
                          result_type: type[Any] | str | ResponseDefinition | None = None,
                          **kwargs: Unpack[AgentKwargs],
                      ) -> Agent[Any] | StructuredAgent[Any, Any]:
                          """Add a new permanent agent to the pool.
                      
                          Args:
                              name: Name for the new agent
                              result_type: Optional type for structured responses:
                                  - None: Regular unstructured agent
                                  - type: Python type for validation
                                  - str: Name of response definition
                                  - ResponseDefinition: Complete response definition
                              **kwargs: Additional agent configuration
                      
                          Returns:
                              Either a regular Agent or StructuredAgent depending on result_type
                          """
                          from llmling_agent.agent import Agent
                      
                          agent: AnyAgent[Any, Any] = Agent(name=name, **kwargs)
                          agent.tools.add_provider(self.mcp)
                          agent = await self.exit_stack.enter_async_context(agent)
                          # Convert to structured if needed
                          if result_type is not None:
                              agent = agent.to_structured(result_type)
                          self.register(name, agent)
                          return agent
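The flow above — construct the agent, enter it on the pool's AsyncExitStack so its lifetime is tied to the pool, then register it by name — can be sketched with stdlib pieces only (DummyAgent and MiniPool are hypothetical stand-ins, not llmling_agent classes):

```python
import asyncio
from contextlib import AsyncExitStack


class DummyAgent:
    """Hypothetical async-context-managed agent."""

    def __init__(self, name: str):
        self.name = name
        self.initialized = False

    async def __aenter__(self):
        self.initialized = True  # e.g. open connections, load tools
        return self

    async def __aexit__(self, *exc):
        self.initialized = False


class MiniPool:
    """Stand-in for AgentPool: owns an exit stack plus a name registry."""

    def __init__(self):
        self.exit_stack = AsyncExitStack()
        self.agents: dict[str, DummyAgent] = {}

    async def add_agent(self, name: str) -> DummyAgent:
        agent = DummyAgent(name)
        # Initialization is tied to the pool's lifetime via the exit stack
        agent = await self.exit_stack.enter_async_context(agent)
        self.agents[name] = agent
        return agent

    async def cleanup(self):
        # Mirrors AgentPool.cleanup: close the stack, then clear the registry
        await self.exit_stack.aclose()
        self.agents.clear()


async def main() -> tuple[bool, bool]:
    pool = MiniPool()
    agent = await pool.add_agent("helper")
    was_initialized = agent.initialized
    await pool.cleanup()
    return was_initialized, agent.initialized


print(asyncio.run(main()))  # → (True, False)
```

Because every agent is entered on the shared exit stack, `cleanup` (shown below) tears all of them down in one `aclose()` call, in reverse registration order.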
                      

                      cleanup async

                      cleanup()
                      

                      Clean up all agents.

                      Source code in src/llmling_agent/delegation/pool.py
                      async def cleanup(self):
                          """Clean up all agents."""
                          await self.exit_stack.aclose()
                          self.clear()
                      

                      clone_agent async

                      clone_agent(
                          agent: AgentName | Agent[TDeps],
                          new_name: AgentName | None = None,
                          *,
                          system_prompts: list[str] | None = None,
                          template_context: dict[str, Any] | None = None,
                      ) -> Agent[TDeps]
                      
                      clone_agent(
                          agent: StructuredAgent[TDeps, TResult],
                          new_name: AgentName | None = None,
                          *,
                          system_prompts: list[str] | None = None,
                          template_context: dict[str, Any] | None = None,
                      ) -> StructuredAgent[TDeps, TResult]
                      
                      clone_agent(
                          agent: AgentName | AnyAgent[TDeps, TAgentResult],
                          new_name: AgentName | None = None,
                          *,
                          system_prompts: list[str] | None = None,
                          template_context: dict[str, Any] | None = None,
                      ) -> AnyAgent[TDeps, TAgentResult]
                      

                      Create a copy of an agent.

Parameters:

    agent (AgentName | AnyAgent[TDeps, TAgentResult], required):
        Agent instance or name to clone
    new_name (AgentName | None, default None):
        Optional name for the clone
    system_prompts (list[str] | None, default None):
        Optional different prompts
    template_context (dict[str, Any] | None, default None):
        Variables for template rendering

Returns:

    AnyAgent[TDeps, TAgentResult]:
        The new agent instance

                      Source code in src/llmling_agent/delegation/pool.py
                      async def clone_agent[TDeps, TAgentResult](
                          self,
                          agent: AgentName | AnyAgent[TDeps, TAgentResult],
                          new_name: AgentName | None = None,
                          *,
                          system_prompts: list[str] | None = None,
                          template_context: dict[str, Any] | None = None,
                      ) -> AnyAgent[TDeps, TAgentResult]:
                          """Create a copy of an agent.
                      
                          Args:
                              agent: Agent instance or name to clone
                              new_name: Optional name for the clone
                              system_prompts: Optional different prompts
                              template_context: Variables for template rendering
                      
                          Returns:
                              The new agent instance
                          """
                          from llmling_agent.agent import Agent, StructuredAgent
                      
                          # Get original config
                          if isinstance(agent, str):
                              if agent not in self.manifest.agents:
                                  msg = f"Agent {agent} not found"
                                  raise KeyError(msg)
                              config = self.manifest.agents[agent]
                              original_agent: AnyAgent[Any, Any] = self.get_agent(agent)
                          else:
                              config = agent.context.config  # type: ignore
                              original_agent = agent
                      
                          # Create new config
                          new_config = config.model_copy(deep=True)
                      
                          # Apply overrides
                          if system_prompts:
                              new_config.system_prompts = system_prompts
                      
                          # Handle template rendering
                          if template_context:
                              new_config.system_prompts = new_config.render_system_prompts(template_context)
                      
                          # Create new agent with same runtime
                          new_agent = Agent[TDeps](
                              runtime=original_agent.runtime,
                              context=original_agent.context,
                              # result_type=original_agent.actual_type,
                              provider=new_config.get_provider(),
                              system_prompt=new_config.system_prompts,
                              name=new_name or f"{config.name}_copy_{len(self.agents)}",
                          )
                          if isinstance(original_agent, StructuredAgent):
                              new_agent = new_agent.to_structured(original_agent.actual_type)
                      
                          # Register in pool
                          agent_name = new_agent.name
                          self.manifest.agents[agent_name] = new_config
                          self.agents[agent_name] = new_agent
                          return await self.exit_stack.enter_async_context(new_agent)
                      

                      create_agent async

                      create_agent(
                          name: AgentName,
                          *,
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> Agent[TPoolDeps]
                      
                      create_agent(
                          name: AgentName,
                          *,
                          deps: TCustomDeps,
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> Agent[TCustomDeps]
                      
                      create_agent(
                          name: AgentName,
                          *,
                          return_type: type[TResult],
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> StructuredAgent[TPoolDeps, TResult]
                      
                      create_agent(
                          name: AgentName,
                          *,
                          deps: TCustomDeps,
                          return_type: type[TResult],
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> StructuredAgent[TCustomDeps, TResult]
                      
                      create_agent(
                          name: AgentName,
                          *,
                          deps: Any | None = None,
                          return_type: Any | None = None,
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> AnyAgent[Any, Any]
                      

                      Create a new agent instance from configuration.

Parameters:

    name (AgentName, required):
        Name of the agent configuration to use
    deps (Any | None, default None):
        Optional custom dependencies (overrides pool deps)
    return_type (Any | None, default None):
        Optional type for structured responses
    session (SessionIdType | SessionQuery, default None):
        Optional session ID or query to recover conversation
    name_override (str | None, default None):
        Optional different name for this instance

Returns:

    AnyAgent[Any, Any]:
        New agent instance with the specified configuration

Raises:

    KeyError: If the agent configuration is not found
    ValueError: If the configuration is invalid

                      Source code in src/llmling_agent/delegation/pool.py
                      async def create_agent(
                          self,
                          name: AgentName,
                          *,
                          deps: Any | None = None,
                          return_type: Any | None = None,
                          session: SessionIdType | SessionQuery = None,
                          name_override: str | None = None,
                      ) -> AnyAgent[Any, Any]:
                          """Create a new agent instance from configuration.
                      
                          Args:
                              name: Name of the agent configuration to use
                              deps: Optional custom dependencies (overrides pool deps)
                              return_type: Optional type for structured responses
                              session: Optional session ID or query to recover conversation
                              name_override: Optional different name for this instance
                      
                          Returns:
                              New agent instance with the specified configuration
                      
                          Raises:
                              KeyError: If agent configuration not found
                              ValueError: If configuration is invalid
                          """
                          if name not in self.manifest.agents:
                              msg = f"Agent configuration {name!r} not found"
                              raise KeyError(msg)
                      
                          # Use Manifest.get_agent for proper initialization
                          final_deps = deps if deps is not None else self.shared_deps
                          agent = self.manifest.get_agent(name, deps=final_deps)
                          # Override name if requested
                          if name_override:
                              agent.name = name_override
                      
                          # Set pool reference
                          agent.context.pool = self
                      
                          # Handle session if provided
                          if session:
                              agent.conversation.load_history_from_database(session=session)
                      
                          # Initialize agent through exit stack
                          agent = await self.exit_stack.enter_async_context(agent)
                      
                          # Override structured configuration if provided
                          if return_type is not None:
                              return agent.to_structured(return_type)
                      
                          return agent
                      

                      create_team

                      create_team(agents: Sequence[str]) -> Team[TPoolDeps]
                      
                      create_team(
                          agents: Sequence[MessageNode[TDeps, Any]],
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> Team[TDeps]
                      
                      create_team(
                          agents: Sequence[AgentName | MessageNode[Any, Any]],
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> Team[Any]
                      
                      create_team(
                          agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> Team[Any]
                      

                      Create a group from agent names or instances.

Parameters:

    agents (Sequence[AgentName | MessageNode[Any, Any]] | None, default None):
        List of agent names or instances (all if None)
    name (str | None, default None):
        Optional name for the team
    description (str | None, default None):
        Optional description for the team
    shared_prompt (str | None, default None):
        Optional prompt for all agents
    picker (AnyAgent[Any, Any] | None, default None):
        Agent to use for picking agents
    num_picks (int | None, default None):
        Number of agents to pick
    pick_prompt (str | None, default None):
        Prompt to use for picking agents
                      Source code in src/llmling_agent/delegation/pool.py
                      def create_team(
                          self,
                          agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> Team[Any]:
                          """Create a group from agent names or instances.
                      
                          Args:
                              agents: List of agent names or instances (all if None)
                              name: Optional name for the team
                              description: Optional description for the team
                              shared_prompt: Optional prompt for all agents
                              picker: Agent to use for picking agents
                              num_picks: Number of agents to pick
                              pick_prompt: Prompt to use for picking agents
                          """
                          from llmling_agent.delegation.team import Team
                      
                          if agents is None:
                              agents = list(self.agents.keys())
                      
                          # First resolve/configure agents
                          resolved_agents: list[MessageNode[Any, Any]] = []
                          for agent in agents:
                              if isinstance(agent, str):
                                  agent = self.get_agent(agent)
                              resolved_agents.append(agent)
                      
                          team = Team(
                              name=name,
                              description=description,
                              agents=resolved_agents,
                              shared_prompt=shared_prompt,
                              picker=picker,
                              num_picks=num_picks,
                              pick_prompt=pick_prompt,
                          )
                          if name:
                              self[name] = team
                          return team
                      

                      create_team_run

                      create_team_run(
                          agents: Sequence[str],
                          validator: MessageNode[Any, TResult] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> TeamRun[TPoolDeps, TResult]
                      
                      create_team_run(
                          agents: Sequence[MessageNode[TDeps, Any]],
                          validator: MessageNode[Any, TResult] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> TeamRun[TDeps, TResult]
                      
                      create_team_run(
                          agents: Sequence[AgentName | MessageNode[Any, Any]],
                          validator: MessageNode[Any, TResult] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> TeamRun[Any, TResult]
                      
                      create_team_run(
                          agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                          validator: MessageNode[Any, TResult] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> TeamRun[Any, TResult]
                      

Create a sequential TeamRun from a list of Agents.

Parameters:

    agents (Sequence[AgentName | MessageNode[Any, Any]] | None, default None):
        List of agent names or team/agent instances (all if None)
    validator (MessageNode[Any, TResult] | None, default None):
        Node to validate the results of the TeamRun
    name (str | None, default None):
        Optional name for the team
    description (str | None, default None):
        Optional description for the team
    shared_prompt (str | None, default None):
        Optional prompt for all agents
    picker (AnyAgent[Any, Any] | None, default None):
        Agent to use for picking agents
    num_picks (int | None, default None):
        Number of agents to pick
    pick_prompt (str | None, default None):
        Prompt to use for picking agents
                      Source code in src/llmling_agent/delegation/pool.py
                      def create_team_run(
                          self,
                          agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                          validator: MessageNode[Any, TResult] | None = None,
                          *,
                          name: str | None = None,
                          description: str | None = None,
                          shared_prompt: str | None = None,
                          picker: AnyAgent[Any, Any] | None = None,
                          num_picks: int | None = None,
                          pick_prompt: str | None = None,
                      ) -> TeamRun[Any, TResult]:
                          """Create a a sequential TeamRun from a list of Agents.
                      
                          Args:
                              agents: List of agent names or team/agent instances (all if None)
                              validator: Node to validate the results of the TeamRun
                              name: Optional name for the team
                              description: Optional description for the team
                              shared_prompt: Optional prompt for all agents
                              picker: Agent to use for picking agents
                              num_picks: Number of agents to pick
                              pick_prompt: Prompt to use for picking agents
                          """
                          from llmling_agent.delegation.teamrun import TeamRun
                      
                          if agents is None:
                              agents = list(self.agents.keys())
                      
                          # First resolve/configure agents
                          resolved_agents: list[MessageNode[Any, Any]] = []
                          for agent in agents:
                              if isinstance(agent, str):
                                  agent = self.get_agent(agent)
                              resolved_agents.append(agent)
                          team = TeamRun(
                              resolved_agents,
                              name=name,
                              description=description,
                              validator=validator,
                              shared_prompt=shared_prompt,
                              picker=picker,
                              num_picks=num_picks,
                              pick_prompt=pick_prompt,
                          )
                          if name:
                              self[name] = team
                          return team
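
                      As the loop above shows, `create_team_run` accepts a mix of agent names and node instances. The following standalone sketch illustrates that resolution step; `FakeAgent` and the registry are hypothetical stand-ins for illustration, not part of the real API:

                      ```python
                      # Hypothetical stand-in for a pool's agent registry.
                      class FakeAgent:
                          def __init__(self, name: str):
                              self.name = name

                      registry = {"analyzer": FakeAgent("analyzer"), "writer": FakeAgent("writer")}

                      def resolve(agents):
                          resolved = []
                          for agent in agents:
                              if isinstance(agent, str):  # a name: look it up in the registry
                                  agent = registry[agent]
                              resolved.append(agent)      # already an instance: use as-is
                          return resolved

                      team = resolve(["analyzer", FakeAgent("reviewer")])
                      print([a.name for a in team])  # → ['analyzer', 'reviewer']
                      ```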
                      

                      get_agent

                      get_agent(
                          agent: AgentName | Agent[Any],
                          *,
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> Agent[TPoolDeps]
                      
                      get_agent(
                          agent: AgentName | Agent[Any],
                          *,
                          return_type: type[TResult],
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> StructuredAgent[TPoolDeps, TResult]
                      
                      get_agent(
                          agent: AgentName | Agent[Any],
                          *,
                          deps: TCustomDeps,
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> Agent[TCustomDeps]
                      
                      get_agent(
                          agent: AgentName | Agent[Any],
                          *,
                          deps: TCustomDeps,
                          return_type: type[TResult],
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> StructuredAgent[TCustomDeps, TResult]
                      
                      get_agent(
                          agent: AgentName | Agent[Any],
                          *,
                          deps: Any | None = None,
                          return_type: Any | None = None,
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> AnyAgent[Any, Any]
                      

                      Get or configure an agent from the pool.

                      This method provides flexible agent configuration with dependency injection:

                      - Without deps: the agent uses the pool's shared dependencies
                      - With deps: the agent uses the provided custom dependencies
                      - With return_type: returns a StructuredAgent with type validation

                      Parameters:

                          agent (AgentName | Agent[Any], required): Either agent name or instance
                          deps (Any | None, default None): Optional custom dependencies (overrides shared deps)
                          return_type (Any | None, default None): Optional type for structured responses
                          model_override (str | None, default None): Optional model override
                          session (SessionIdType | SessionQuery, default None): Optional session ID or query to recover conversation

                      Returns:

                          AnyAgent[Any, Any], which is:

                          - Agent[TPoolDeps] when using pool's shared deps
                          - Agent[TCustomDeps] when custom deps provided
                          - StructuredAgent when return_type provided

                      Raises:

                          KeyError: If agent name not found
                          ValueError: If configuration is invalid

                      Source code in src/llmling_agent/delegation/pool.py
                      def get_agent(
                          self,
                          agent: AgentName | Agent[Any],
                          *,
                          deps: Any | None = None,
                          return_type: Any | None = None,
                          model_override: str | None = None,
                          session: SessionIdType | SessionQuery = None,
                      ) -> AnyAgent[Any, Any]:
                          """Get or configure an agent from the pool.
                      
                          This method provides flexible agent configuration with dependency injection:
                          - Without deps: Agent uses pool's shared dependencies
                          - With deps: Agent uses provided custom dependencies
                          - With return_type: Returns a StructuredAgent with type validation
                      
                          Args:
                              agent: Either agent name or instance
                              deps: Optional custom dependencies (overrides shared deps)
                              return_type: Optional type for structured responses
                              model_override: Optional model override
                              session: Optional session ID or query to recover conversation
                      
                          Returns:
                              Either:
                              - Agent[TPoolDeps] when using pool's shared deps
                              - Agent[TCustomDeps] when custom deps provided
                              - StructuredAgent when return_type provided
                      
                          Raises:
                              KeyError: If agent name not found
                              ValueError: If configuration is invalid
                          """
                          from llmling_agent.agent import Agent
                          from llmling_agent.agent.context import AgentContext
                      
                          # Get base agent
                          base = agent if isinstance(agent, Agent) else self.agents[agent]
                      
                          # Setup context and dependencies
                          if base.context is None:
                              base.context = AgentContext[Any].create_default(base.name)
                      
                          # Use custom deps if provided, otherwise use shared deps
                          base.context.data = deps if deps is not None else self.shared_deps
                          base.context.pool = self
                      
                          # Apply overrides
                          if model_override:
                              base.set_model(model_override)
                      
                          if session:
                              base.conversation.load_history_from_database(session=session)
                      
                          # Convert to structured if needed
                          if return_type is not None:
                              return base.to_structured(return_type)
                      
                          return base
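
                      The core dependency-injection rule above is that custom deps, when given, take precedence over the pool's shared deps. A minimal standalone sketch of that rule (the dicts here are hypothetical example dependencies):

                      ```python
                      # Hypothetical shared dependencies held by the pool.
                      shared_deps = {"db": "shared-connection"}

                      def resolve_deps(custom, shared):
                          # Mirrors `deps if deps is not None else self.shared_deps` above.
                          return custom if custom is not None else shared

                      print(resolve_deps(None, shared_deps))              # → {'db': 'shared-connection'}
                      print(resolve_deps({"db": "custom"}, shared_deps))  # → {'db': 'custom'}
                      ```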
                      

                      get_mermaid_diagram

                      get_mermaid_diagram(include_details: bool = True) -> str
                      

                      Generate mermaid flowchart of all agents and their connections.

                      Parameters:

                          include_details (bool, default True): Whether to show connection details (types, queues, etc.)
                      Source code in src/llmling_agent/delegation/pool.py
                      def get_mermaid_diagram(
                          self,
                          include_details: bool = True,
                      ) -> str:
                          """Generate mermaid flowchart of all agents and their connections.
                      
                          Args:
                              include_details: Whether to show connection details (types, queues, etc)
                          """
                          lines = ["flowchart LR"]
                      
                          # Add all agents as nodes
                          for name in self.agents:
                              lines.append(f"    {name}[{name}]")  # noqa: PERF401
                      
                          # Add all connections as edges
                          for agent in self.agents.values():
                              connections = agent.connections.get_connections()
                              for talk in connections:
                                  talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                  source = talk.source.name
                                  for target in talk.targets:
                                      if include_details:
                                          details: list[str] = []
                                          details.append(talk.connection_type)
                                          if talk.queued:
                                              details.append(f"queued({talk.queue_strategy})")
                                          if fn := talk.filter_condition:  # type: ignore
                                              details.append(f"filter:{fn.__name__}")
                                          if fn := talk.stop_condition:  # type: ignore
                                              details.append(f"stop:{fn.__name__}")
                                          if fn := talk.exit_condition:  # type: ignore
                                              details.append(f"exit:{fn.__name__}")
                      
                                          label = f"|{' '.join(details)}|" if details else ""
                                          lines.append(f"    {source}--{label}-->{target.name}")
                                      else:
                                          lines.append(f"    {source}-->{target.name}")
                      
                          return "\n".join(lines)
                      

                      list_nodes

                      list_nodes() -> list[str]
                      

                      List available agent names.

                      Source code in src/llmling_agent/delegation/pool.py
                      def list_nodes(self) -> list[str]:
                          """List available agent names."""
                          return list(self.list_items())
                      

                      run_event_loop async

                      run_event_loop()
                      

                      Run pool in event-watching mode until interrupted.

                      Source code in src/llmling_agent/delegation/pool.py
                      async def run_event_loop(self):
                          """Run pool in event-watching mode until interrupted."""
                          import sys
                      
                          print("Starting event watch mode...")
                          print("Active nodes: ", ", ".join(self.list_nodes()))
                          print("Press Ctrl+C to stop")
                      
                          stop_event = asyncio.Event()
                      
                          if sys.platform != "win32":
                              # Unix: Use signal handlers
                              loop = asyncio.get_running_loop()
                              for sig in (signal.SIGINT, signal.SIGTERM):
                                  loop.add_signal_handler(sig, stop_event.set)
                              while True:
                                  await asyncio.sleep(1)
                          else:
                              # Windows: Use keyboard interrupt
                              with suppress(KeyboardInterrupt):
                                  while True:
                                      await asyncio.sleep(1)
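
                      A condensed standalone variant of the Unix branch: register signal handlers that set an asyncio.Event, then wait on the event. Here the sketch triggers the stop itself with call_later so it terminates on its own (a stand-in for a real Ctrl+C):

                      ```python
                      import asyncio
                      import signal
                      import sys

                      async def wait_for_stop() -> str:
                          stop_event = asyncio.Event()
                          loop = asyncio.get_running_loop()
                          if sys.platform != "win32":
                              # Unix: signals set the event instead of raising KeyboardInterrupt
                              for sig in (signal.SIGINT, signal.SIGTERM):
                                  loop.add_signal_handler(sig, stop_event.set)
                          loop.call_later(0.05, stop_event.set)  # simulate an interrupt shortly
                          await stop_event.wait()
                          return "stopped"

                      print(asyncio.run(wait_for_stop()))  # → stopped
                      ```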
                      

                      setup_agent_workers

                      setup_agent_workers(agent: AnyAgent[Any, Any])
                      

                      Set up workers for an agent from configuration.

                      Source code in src/llmling_agent/delegation/pool.py
                      def setup_agent_workers(self, agent: AnyAgent[Any, Any]):
                          """Set up workers for an agent from configuration."""
                          for worker_config in agent.context.config.workers:
                              try:
                                  worker = self.get_agent(worker_config.name)
                                  agent.register_worker(
                                      worker,
                                      name=worker_config.name,
                                      reset_history_on_run=worker_config.reset_history_on_run,
                                      pass_message_history=worker_config.pass_message_history,
                                      share_context=worker_config.share_context,
                                  )
                              except KeyError as e:
                                  msg = f"Worker agent {worker_config.name!r} not found"
                                  raise ValueError(msg) from e
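
                      The except clause above translates a lookup failure into a configuration error: a missing worker name surfaces as a ValueError with context rather than a bare KeyError. A standalone sketch of that pattern (the registry is a hypothetical stand-in for the pool):

                      ```python
                      # Hypothetical stand-in for the pool's agent registry.
                      worker_registry = {"helper": object()}

                      def get_worker(name: str):
                          try:
                              return worker_registry[name]
                          except KeyError as e:
                              msg = f"Worker agent {name!r} not found"
                              raise ValueError(msg) from e  # keep original cause via `from e`

                      try:
                          get_worker("missing")
                      except ValueError as err:
                          print(err)  # → Worker agent 'missing' not found
                      ```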
                      

                      track_message_flow async

                      track_message_flow() -> AsyncIterator[MessageFlowTracker]
                      

                      Track message flow during a context.

                      Source code in src/llmling_agent/delegation/pool.py
                      @asynccontextmanager
                      async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                          """Track message flow during a context."""
                          tracker = MessageFlowTracker()
                          self.connection_registry.message_flow.connect(tracker.track)
                          try:
                              yield tracker
                          finally:
                              self.connection_registry.message_flow.disconnect(tracker.track)
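
                      The try/finally above guarantees the tracker is detached even if the context body raises. This standalone sketch mimics the connect/track/disconnect pattern; Signal is a simplified stand-in for the real connection registry's message_flow signal:

                      ```python
                      import asyncio
                      from contextlib import asynccontextmanager

                      class Signal:
                          """Minimal signal: holds handlers, calls each on emit."""
                          def __init__(self):
                              self.handlers = []
                          def connect(self, fn):
                              self.handlers.append(fn)
                          def disconnect(self, fn):
                              self.handlers.remove(fn)
                          def emit(self, msg):
                              for fn in list(self.handlers):
                                  fn(msg)

                      message_flow = Signal()

                      @asynccontextmanager
                      async def track():
                          events: list[str] = []
                          handler = events.append
                          message_flow.connect(handler)
                          try:
                              yield events
                          finally:
                              message_flow.disconnect(handler)  # always detach, even on error

                      async def main():
                          async with track() as events:
                              message_flow.emit("hello")
                          message_flow.emit("ignored")  # after the context: not tracked
                          return events

                      print(asyncio.run(main()))  # → ['hello']
                      ```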
                      

                      AgentsManifest

                      Bases: ConfigModel

                      Complete agent configuration manifest defining all available agents.

                      This is the root configuration that:

                      - Defines available response types (both inline and imported)
                      - Configures all agent instances and their settings
                      - Sets up custom role definitions and capabilities
                      - Manages environment configurations

                      A single manifest can define multiple agents that can work independently or collaborate through the orchestrator.

                      Source code in src/llmling_agent/models/manifest.py
                      class AgentsManifest(ConfigModel):
                          """Complete agent configuration manifest defining all available agents.
                      
                          This is the root configuration that:
                          - Defines available response types (both inline and imported)
                          - Configures all agent instances and their settings
                          - Sets up custom role definitions and capabilities
                          - Manages environment configurations
                      
                          A single manifest can define multiple agents that can work independently
                          or collaborate through the orchestrator.
                          """
                      
                          INHERIT: str | list[str] | None = None
                          """Inheritance references."""
                      
                          agents: dict[str, AgentConfig] = Field(default_factory=dict)
                          """Mapping of agent IDs to their configurations"""
                      
                          teams: dict[str, TeamConfig] = Field(default_factory=dict)
                          """Mapping of team IDs to their configurations"""
                      
                          storage: StorageConfig = Field(default_factory=StorageConfig)
                          """Storage provider configuration."""
                      
                          observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                          """Observability provider configuration."""
                      
                          conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                          """Document conversion configuration."""
                      
                          responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                          """Mapping of response names to their definitions"""
                      
                          jobs: dict[str, Job] = Field(default_factory=dict)
                          """Pre-defined jobs, ready to be used by nodes."""
                      
                          mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                          """List of MCP server configurations:
                      
                          These MCP servers are used to provide tools and other resources to the nodes.
                          """
                          pool_server: PoolServerConfig = Field(default_factory=PoolServerConfig)
                          """Pool server configuration.
                      
                          This MCP server configuration is used for the pool MCP server,
                          which exposes pool functionality to other applications / clients."""
                      
                          prompts: PromptConfig = Field(default_factory=PromptConfig)
                      
                          model_config = ConfigDict(use_attribute_docstrings=True, extra="forbid")
                      
                          def clone_agent_config(
                              self,
                              name: str,
                              new_name: str | None = None,
                              *,
                              template_context: dict[str, Any] | None = None,
                              **overrides: Any,
                          ) -> str:
                              """Create a copy of an agent configuration.
                      
                              Args:
                                  name: Name of agent to clone
                                  new_name: Optional new name (auto-generated if None)
                                  template_context: Variables for template rendering
                                  **overrides: Configuration overrides for the clone
                      
                              Returns:
                                  Name of the new agent
                      
                              Raises:
                                  KeyError: If original agent not found
                                  ValueError: If new name already exists or if overrides invalid
                              """
                              if name not in self.agents:
                                  msg = f"Agent {name} not found"
                                  raise KeyError(msg)
                      
                              actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                              if actual_name in self.agents:
                                  msg = f"Agent {actual_name} already exists"
                                  raise ValueError(msg)
                      
                              # Deep copy the configuration
                              config = self.agents[name].model_copy(deep=True)
                      
                              # Apply overrides
                              for key, value in overrides.items():
                                  if not hasattr(config, key):
                                      msg = f"Invalid override: {key}"
                                      raise ValueError(msg)
                                  setattr(config, key, value)
                      
                              # Handle template rendering if context provided
                              if template_context:
                                  # Apply name from context if not explicitly overridden
                                  if "name" in template_context and "name" not in overrides:
                                      config.name = template_context["name"]
                      
                                  # Render system prompts
                                  config.system_prompts = config.render_system_prompts(template_context)
                      
                              self.agents[actual_name] = config
                              return actual_name
                      
                          @model_validator(mode="before")
                          @classmethod
                          def resolve_inheritance(cls, data: dict) -> dict:
                              """Resolve agent inheritance chains."""
                              nodes = data.get("agents", {})
                              resolved: dict[str, dict] = {}
                              seen: set[str] = set()
                      
                              def resolve_node(name: str) -> dict:
                                  if name in resolved:
                                      return resolved[name]
                      
                                  if name in seen:
                                      msg = f"Circular inheritance detected: {name}"
                                      raise ValueError(msg)
                      
                                  seen.add(name)
                                  config = (
                                      nodes[name].model_copy()
                                      if hasattr(nodes[name], "model_copy")
                                      else nodes[name].copy()
                                  )
                                  inherit = (
                                      config.get("inherits") if isinstance(config, dict) else config.inherits
                                  )
                                  if inherit:
                                      if inherit not in nodes:
                                          msg = f"Parent agent {inherit} not found"
                                          raise ValueError(msg)
                      
                                      # Get resolved parent config
                                      parent = resolve_node(inherit)
                                      # Merge parent with child (child overrides parent)
                                      merged = parent.copy()
                                      merged.update(config)
                                      config = merged
                      
                                  seen.remove(name)
                                  resolved[name] = config
                                  return config
                      
                              # Resolve all nodes
                              for name in nodes:
                                  resolved[name] = resolve_node(name)
                      
                              # Update nodes with resolved configs
                              data["agents"] = resolved
                              return data
                      
                          @model_validator(mode="after")
                          def set_instrument_libraries(self) -> Self:
                              """Auto-set libraries to instrument based on used providers."""
                              if (
                                  not self.observability.enabled
                                  or self.observability.instrument_libraries is not None
                              ):
                                  return self
                              self.observability.instrument_libraries = list(self.get_used_providers())
                              return self
                      
                          @property
                          def node_names(self) -> list[str]:
                              """Get list of all agent and team names."""
                              return list(self.agents.keys()) + list(self.teams.keys())
                      
                          @property
                          def nodes(self) -> dict[str, Any]:
                              """Get all agent and team configurations."""
                              return {**self.agents, **self.teams}
                      
                          def get_mcp_servers(self) -> list[MCPServerConfig]:
                              """Get processed MCP server configurations.
                      
                              Converts string entries to StdioMCPServer configs by splitting
                              into command and arguments.
                      
                              Returns:
                                  List of MCPServerConfig instances
                      
                              Raises:
                                  ValueError: If string entry is empty
                              """
                              configs: list[MCPServerConfig] = []
                      
                              for server in self.mcp_servers:
                                  match server:
                                      case str():
                                          parts = server.split()
                                          if not parts:
                                              msg = "Empty MCP server command"
                                              raise ValueError(msg)
                      
                                          configs.append(StdioMCPServer(command=parts[0], args=parts[1:]))
                                      case MCPServerBase():
                                          configs.append(server)
                      
                              return configs
                      
                          @cached_property
                          def prompt_manager(self) -> PromptManager:
                              """Get prompt manager for this manifest."""
                              from llmling_agent.prompts.manager import PromptManager
                      
                              return PromptManager(self.prompts)
                      
                          # @model_validator(mode="after")
                          # def validate_response_types(self) -> AgentsManifest:
                          #     """Ensure all agent result_types exist in responses or are inline."""
                          #     for agent_id, agent in self.agents.items():
                          #         if (
                          #             isinstance(agent.result_type, str)
                          #             and agent.result_type not in self.responses
                          #         ):
                          #             msg = f"'{agent.result_type=}' for '{agent_id=}' not found in responses"
                          #             raise ValueError(msg)
                          #     return self
                      
                          def get_agent[TAgentDeps](
                              self, name: str, deps: TAgentDeps | None = None
                          ) -> AnyAgent[TAgentDeps, Any]:
                              from llmling import RuntimeConfig
                      
                              from llmling_agent import Agent, AgentContext
                      
                              config = self.agents[name]
                              # Create runtime without async context
                              cfg = config.get_config()
                              runtime = RuntimeConfig.from_config(cfg)
                      
                              # Create context with config path and capabilities
                              context = AgentContext[TAgentDeps](
                                  node_name=name,
                                  data=deps,
                                  capabilities=config.capabilities,
                                  definition=self,
                                  config=config,
                                  runtime=runtime,
                                  # pool=self,
                                  # confirmation_callback=confirmation_callback,
                              )
                      
                              sys_prompts = config.system_prompts.copy()
                              # Library prompts
                              if config.library_system_prompts:
                                  for prompt_ref in config.library_system_prompts:
                                      try:
                                          content = self.prompt_manager.get_sync(prompt_ref)
                                          sys_prompts.append(content)
                                      except Exception as e:
                                          msg = f"Failed to load library prompt {prompt_ref!r} for agent {name}"
                                          logger.exception(msg)
                                          raise ValueError(msg) from e
                              # Create agent with runtime and context
                              agent = Agent[Any](
                                  runtime=runtime,
                                  context=context,
                                  provider=config.get_provider(),
                                  system_prompt=sys_prompts,
                                  name=name,
                                  description=config.description,
                                  retries=config.retries,
                                  session=config.get_session_config(),
                                  result_retries=config.result_retries,
                                  end_strategy=config.end_strategy,
                                  capabilities=config.capabilities,
                                  debug=config.debug,
                                  # name=config.name or name,
                              )
                              if result_type := self.get_result_type(name):
                                  return agent.to_structured(result_type)
                              return agent
                      
                          def get_used_providers(self) -> set[str]:
                              """Get all providers configured in this manifest."""
                              providers = set[str]()
                      
                              for agent_config in self.agents.values():
                                  match agent_config.provider:
                                      case "pydantic_ai":
                                          providers.add("pydantic_ai")
                                      case "litellm":
                                          providers.add("litellm")
                                      case BaseProviderConfig():
                                          providers.add(agent_config.provider.type)
                              return providers
                      
                          @classmethod
                          def from_file(cls, path: StrPath) -> Self:
                              """Load agent configuration from YAML file.
                      
                              Args:
                                  path: Path to the configuration file
                      
                              Returns:
                                  Loaded agent definition
                      
                              Raises:
                                  ValueError: If loading fails
                              """
                              import yamling
                      
                              try:
                                  data = yamling.load_yaml_file(path, resolve_inherit=True)
                                  agent_def = cls.model_validate(data)
                                  # Update all agents with the config file path and ensure names
                                  agents = {
                                      name: config.model_copy(update={"config_file_path": str(path)})
                                      for name, config in agent_def.agents.items()
                                  }
                                  return agent_def.model_copy(update={"agents": agents})
                              except Exception as exc:
                                  msg = f"Failed to load agent config from {path}"
                                  raise ValueError(msg) from exc
                      
                          @cached_property
                          def pool(self) -> AgentPool:
                              """Create an agent pool from this manifest.
                      
                              Returns:
                                  Configured agent pool
                              """
                              from llmling_agent.delegation import AgentPool
                      
                              return AgentPool(manifest=self)
                      
                          def get_result_type(self, agent_name: str) -> type[Any] | None:
                              """Get the resolved result type for an agent.
                      
                              Returns None if no result type is configured.
                              """
                              agent_config = self.agents[agent_name]
                              if not agent_config.result_type:
                                  return None
                              logger.debug("Building response model for %r", agent_config.result_type)
                              if isinstance(agent_config.result_type, str):
                                  response_def = self.responses[agent_config.result_type]
                                  return response_def.create_model()  # type: ignore
                              return agent_config.result_type.create_model()  # type: ignore
                      

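The merge semantics of the `resolve_inheritance` validator above can be sketched independently of the Pydantic model. This is a simplified stand-alone version (plain dicts only, no `model_copy` branch); the merge rules mirror the validator: a child's keys override its parent's, missing parents and cycles raise `ValueError`.

```python
def resolve_inheritance(nodes: dict[str, dict]) -> dict[str, dict]:
    """Resolve "inherits" chains: child keys override parent keys."""
    resolved: dict[str, dict] = {}
    seen: set[str] = set()

    def resolve_node(name: str) -> dict:
        if name in resolved:
            return resolved[name]
        if name in seen:
            raise ValueError(f"Circular inheritance detected: {name}")
        seen.add(name)
        config = dict(nodes[name])
        if parent_name := config.get("inherits"):
            if parent_name not in nodes:
                raise ValueError(f"Parent agent {parent_name} not found")
            # Merge parent with child (child overrides parent)
            merged = dict(resolve_node(parent_name))
            merged.update(config)
            config = merged
        seen.remove(name)
        resolved[name] = config
        return config

    for name in nodes:
        resolve_node(name)
    return resolved


agents = {
    "base": {"model": "gpt-4", "retries": 2},
    "child": {"inherits": "base", "retries": 5},
}
print(resolve_inheritance(agents)["child"])
# {'model': 'gpt-4', 'retries': 5, 'inherits': 'base'}
```

Note that, as in the validator, the `inherits` key survives in the resolved config; only the field values are merged.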
                      INHERIT class-attribute instance-attribute

                      INHERIT: str | list[str] | None = None
                      

                      Inheritance references.

                      agents class-attribute instance-attribute

                      agents: dict[str, AgentConfig] = Field(default_factory=dict)
                      

                      Mapping of agent IDs to their configurations

                      conversion class-attribute instance-attribute

                      conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                      

                      Document conversion configuration.

                      jobs class-attribute instance-attribute

                      jobs: dict[str, Job] = Field(default_factory=dict)
                      

                      Pre-defined jobs, ready to be used by nodes.

                      mcp_servers class-attribute instance-attribute

                      mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                      

                      List of MCP server configurations:

                      These MCP servers are used to provide tools and other resources to the nodes.

                      node_names property

                      node_names: list[str]
                      

                      Get list of all agent and team names.

                      nodes property

                      nodes: dict[str, Any]
                      

                      Get all agent and team configurations.

                      observability class-attribute instance-attribute

                      observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                      

                      Observability provider configuration.

                      pool cached property

                      pool: AgentPool
                      

                      Create an agent pool from this manifest.

Returns:

    AgentPool: Configured agent pool.

                      pool_server class-attribute instance-attribute

                      pool_server: PoolServerConfig = Field(default_factory=PoolServerConfig)
                      

                      Pool server configuration.

                      This MCP server configuration is used for the pool MCP server, which exposes pool functionality to other applications / clients.

                      prompt_manager cached property

                      prompt_manager: PromptManager
                      

                      Get prompt manager for this manifest.

                      responses class-attribute instance-attribute

                      responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                      

                      Mapping of response names to their definitions

                      storage class-attribute instance-attribute

                      storage: StorageConfig = Field(default_factory=StorageConfig)
                      

                      Storage provider configuration.

                      teams class-attribute instance-attribute

                      teams: dict[str, TeamConfig] = Field(default_factory=dict)
                      

                      Mapping of team IDs to their configurations

                      clone_agent_config

                      clone_agent_config(
                          name: str,
                          new_name: str | None = None,
                          *,
                          template_context: dict[str, Any] | None = None,
                          **overrides: Any,
                      ) -> str
                      

                      Create a copy of an agent configuration.

Parameters:

    name (str): Name of agent to clone. Required.
    new_name (str | None): Optional new name (auto-generated if None). Default: None.
    template_context (dict[str, Any] | None): Variables for template rendering. Default: None.
    **overrides (Any): Configuration overrides for the clone. Default: {}.

Returns:

    str: Name of the new agent.

Raises:

    KeyError: If original agent not found.
    ValueError: If new name already exists or if overrides invalid.

                      Source code in src/llmling_agent/models/manifest.py
                      def clone_agent_config(
                          self,
                          name: str,
                          new_name: str | None = None,
                          *,
                          template_context: dict[str, Any] | None = None,
                          **overrides: Any,
                      ) -> str:
                          """Create a copy of an agent configuration.
                      
                          Args:
                              name: Name of agent to clone
                              new_name: Optional new name (auto-generated if None)
                              template_context: Variables for template rendering
                              **overrides: Configuration overrides for the clone
                      
                          Returns:
                              Name of the new agent
                      
                          Raises:
                              KeyError: If original agent not found
                              ValueError: If new name already exists or if overrides invalid
                          """
                          if name not in self.agents:
                              msg = f"Agent {name} not found"
                              raise KeyError(msg)
                      
                          actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                          if actual_name in self.agents:
                              msg = f"Agent {actual_name} already exists"
                              raise ValueError(msg)
                      
                          # Deep copy the configuration
                          config = self.agents[name].model_copy(deep=True)
                      
                          # Apply overrides
                          for key, value in overrides.items():
                              if not hasattr(config, key):
                                  msg = f"Invalid override: {key}"
                                  raise ValueError(msg)
                              setattr(config, key, value)
                      
                          # Handle template rendering if context provided
                          if template_context:
                              # Apply name from context if not explicitly overridden
                              if "name" in template_context and "name" not in overrides:
                                  config.name = template_context["name"]
                      
                              # Render system prompts
                              config.system_prompts = config.render_system_prompts(template_context)
                      
                          self.agents[actual_name] = config
                          return actual_name
                      
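The override handling in `clone_agent_config` follows a validate-then-set pattern: reject unknown attributes before mutating the copy. A minimal stand-alone sketch of that pattern, using a plain dataclass in place of `AgentConfig` (fields are illustrative):

```python
from dataclasses import dataclass, replace


@dataclass
class AgentConfig:
    # Illustrative stand-in for the real AgentConfig model.
    model: str = "gpt-4"
    retries: int = 1


def clone_with_overrides(config: AgentConfig, **overrides) -> AgentConfig:
    # Reject unknown fields before copying, as clone_agent_config does.
    for key in overrides:
        if not hasattr(config, key):
            raise ValueError(f"Invalid override: {key}")
    return replace(config, **overrides)


clone = clone_with_overrides(AgentConfig(), retries=3)
print(clone)  # AgentConfig(model='gpt-4', retries=3)
```

The real method additionally deep-copies the Pydantic model (`model_copy(deep=True)`) and renders templated system prompts before registering the clone.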

                      from_file classmethod

                      from_file(path: StrPath) -> Self
                      

                      Load agent configuration from YAML file.

Parameters:

    path (StrPath): Path to the configuration file. Required.

Returns:

    Self: Loaded agent definition.

Raises:

    ValueError: If loading fails.

                      Source code in src/llmling_agent/models/manifest.py
                      @classmethod
                      def from_file(cls, path: StrPath) -> Self:
                          """Load agent configuration from YAML file.
                      
                          Args:
                              path: Path to the configuration file
                      
                          Returns:
                              Loaded agent definition
                      
                          Raises:
                              ValueError: If loading fails
                          """
                          import yamling
                      
                          try:
                              data = yamling.load_yaml_file(path, resolve_inherit=True)
                              agent_def = cls.model_validate(data)
                              # Update all agents with the config file path and ensure names
                              agents = {
                                  name: config.model_copy(update={"config_file_path": str(path)})
                                  for name, config in agent_def.agents.items()
                              }
                              return agent_def.model_copy(update={"agents": agents})
                          except Exception as exc:
                              msg = f"Failed to load agent config from {path}"
                              raise ValueError(msg) from exc
                      
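The load-validate-stamp flow of `from_file` can be outlined without the library: parse the file, stamp each agent config with its source path, and wrap any failure in `ValueError`. The sketch below substitutes stdlib `json` for `yamling` purely to stay self-contained; `load_manifest` and the dict handling are illustrative, not the real API.

```python
import json
from pathlib import Path


def load_manifest(path: str) -> dict:
    """Outline of from_file: parse, stamp config_file_path, wrap failures."""
    try:
        data = json.loads(Path(path).read_text())
        # Stamp each agent config with the file it came from,
        # mirroring the config_file_path update in from_file.
        for config in data.get("agents", {}).values():
            config["config_file_path"] = str(path)
        return data
    except Exception as exc:
        raise ValueError(f"Failed to load agent config from {path}") from exc
```

The real method also runs `cls.model_validate(data)`, so schema errors surface as `ValueError` through the same `except` clause.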

                      get_mcp_servers

                      get_mcp_servers() -> list[MCPServerConfig]
                      

                      Get processed MCP server configurations.

                      Converts string entries to StdioMCPServer configs by splitting into command and arguments.

Returns:

    list[MCPServerConfig]: List of MCPServerConfig instances.

Raises:

    ValueError: If string entry is empty.

                      Source code in src/llmling_agent/models/manifest.py
                      def get_mcp_servers(self) -> list[MCPServerConfig]:
                          """Get processed MCP server configurations.
                      
                          Converts string entries to StdioMCPServer configs by splitting
                          into command and arguments.
                      
                          Returns:
                              List of MCPServerConfig instances
                      
                          Raises:
                              ValueError: If string entry is empty
                          """
                          configs: list[MCPServerConfig] = []
                      
                          for server in self.mcp_servers:
                              match server:
                                  case str():
                                      parts = server.split()
                                      if not parts:
                                          msg = "Empty MCP server command"
                                          raise ValueError(msg)
                      
                                      configs.append(StdioMCPServer(command=parts[0], args=parts[1:]))
                                  case MCPServerBase():
                                      configs.append(server)
                      
                          return configs
                      
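The string-to-config conversion in `get_mcp_servers` is a simple whitespace split into command and arguments. It can be illustrated in isolation; `StdioSpec` below is a stand-in for `StdioMCPServer`, and the example command string is illustrative:

```python
from typing import NamedTuple


class StdioSpec(NamedTuple):
    # Stand-in for StdioMCPServer: command plus argument list.
    command: str
    args: list[str]


def parse_server_string(entry: str) -> StdioSpec:
    # Whitespace-split into command + args, as get_mcp_servers does.
    parts = entry.split()
    if not parts:
        raise ValueError("Empty MCP server command")
    return StdioSpec(command=parts[0], args=parts[1:])


spec = parse_server_string("uvx mcp-server-git --repository .")
print(spec.command, spec.args)  # uvx ['mcp-server-git', '--repository', '.']
```

Because the split is on bare whitespace, arguments containing spaces cannot be expressed in the string form; use a full `MCPServerConfig` entry for those.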

                      get_result_type

                      get_result_type(agent_name: str) -> type[Any] | None
                      

                      Get the resolved result type for an agent.

                      Returns None if no result type is configured.

                      Source code in src/llmling_agent/models/manifest.py
                      def get_result_type(self, agent_name: str) -> type[Any] | None:
                          """Get the resolved result type for an agent.
                      
                          Returns None if no result type is configured.
                          """
                          agent_config = self.agents[agent_name]
                          if not agent_config.result_type:
                              return None
                          logger.debug("Building response model for %r", agent_config.result_type)
                          if isinstance(agent_config.result_type, str):
                              response_def = self.responses[agent_config.result_type]
                              return response_def.create_model()  # type: ignore
                          return agent_config.result_type.create_model()  # type: ignore
                      

                      get_used_providers

                      get_used_providers() -> set[str]
                      

                      Get all providers configured in this manifest.

                      Source code in src/llmling_agent/models/manifest.py
                      def get_used_providers(self) -> set[str]:
                          """Get all providers configured in this manifest."""
                          providers = set[str]()
                      
                          for agent_config in self.agents.values():
                              match agent_config.provider:
                                  case "pydantic_ai":
                                      providers.add("pydantic_ai")
                                  case "litellm":
                                      providers.add("litellm")
                                  case BaseProviderConfig():
                                      providers.add(agent_config.provider.type)
                          return providers
                      

                      resolve_inheritance classmethod

                      resolve_inheritance(data: dict) -> dict
                      

                      Resolve agent inheritance chains.

                      Source code in src/llmling_agent/models/manifest.py
                      @model_validator(mode="before")
                      @classmethod
                      def resolve_inheritance(cls, data: dict) -> dict:
                          """Resolve agent inheritance chains."""
                          nodes = data.get("agents", {})
                          resolved: dict[str, dict] = {}
                          seen: set[str] = set()
                      
                          def resolve_node(name: str) -> dict:
                              if name in resolved:
                                  return resolved[name]
                      
                              if name in seen:
                                  msg = f"Circular inheritance detected: {name}"
                                  raise ValueError(msg)
                      
                              seen.add(name)
                              config = (
                                  nodes[name].model_copy()
                                  if hasattr(nodes[name], "model_copy")
                                  else nodes[name].copy()
                              )
                              inherit = (
                                  config.get("inherits") if isinstance(config, dict) else config.inherits
                              )
                              if inherit:
                                  if inherit not in nodes:
                                      msg = f"Parent agent {inherit} not found"
                                      raise ValueError(msg)
                      
                                  # Get resolved parent config
                                  parent = resolve_node(inherit)
                                  # Merge parent with child (child overrides parent)
                                  merged = parent.copy()
                                  merged.update(config)
                                  config = merged
                      
                              seen.remove(name)
                              resolved[name] = config
                              return config
                      
                          # Resolve all nodes
                          for name in nodes:
                              resolved[name] = resolve_node(name)
                      
                          # Update nodes with resolved configs
                          data["agents"] = resolved
                          return data
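The merge semantics above (child keys override parent keys, cycles rejected) can be sketched for the plain-dict case; the real validator additionally handles pydantic model instances:

```python
# Minimal sketch of the inheritance resolution above, restricted to
# plain dicts. Agent names and settings are illustrative.
def resolve_inheritance(agents: dict[str, dict]) -> dict[str, dict]:
    resolved: dict[str, dict] = {}
    seen: set[str] = set()

    def resolve(name: str) -> dict:
        if name in resolved:
            return resolved[name]
        if name in seen:
            raise ValueError(f"Circular inheritance detected: {name}")
        seen.add(name)
        config = dict(agents[name])
        if parent := config.get("inherits"):
            if parent not in agents:
                raise ValueError(f"Parent agent {parent} not found")
            merged = dict(resolve(parent))
            merged.update(config)  # child keys override parent keys
            config = merged
        seen.remove(name)
        resolved[name] = config
        return config

    return {name: resolve(name) for name in agents}


agents = {
    "base": {"model": "gpt-4o", "retries": 2},
    "child": {"inherits": "base", "model": "gpt-4o-mini"},
}
result = resolve_inheritance(agents)
# "child" inherits retries=2 from "base"; its own model wins
```

The `seen` set tracks only the current resolution path, so diamond-shaped inheritance resolves fine while true cycles raise.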
                      

                      set_instrument_libraries

                      set_instrument_libraries() -> Self
                      

                      Auto-set libraries to instrument based on used providers.

                      Source code in src/llmling_agent/models/manifest.py
                      @model_validator(mode="after")
                      def set_instrument_libraries(self) -> Self:
                          """Auto-set libraries to instrument based on used providers."""
                          if (
                              not self.observability.enabled
                              or self.observability.instrument_libraries is not None
                          ):
                              return self
                          self.observability.instrument_libraries = list(self.get_used_providers())
                          return self
                      

                      ChatMessage dataclass

                      Common message format for all UI types.

Generically typed with: ChatMessage[Type of Content]. The type can either be str or a BaseModel subclass.

                      Source code in src/llmling_agent/messaging/messages.py
                      @dataclass
                      class ChatMessage[TContent]:
                          """Common message format for all UI types.
                      
                          Generically typed with: ChatMessage[Type of Content]
                          The type can either be str or a BaseModel subclass.
                          """
                      
                          content: TContent
                          """Message content, typed as TContent (either str or BaseModel)."""
                      
                          role: MessageRole
                          """Role of the message sender (user/assistant/system)."""
                      
                          model: str | None = None
                          """Name of the model that generated this message."""
                      
                          metadata: JsonObject = field(default_factory=dict)
                          """Additional metadata about the message."""
                      
                          timestamp: datetime = field(default_factory=datetime.now)
                          """When this message was created."""
                      
                          cost_info: TokenCost | None = None
                          """Token usage and costs for this specific message if available."""
                      
                          message_id: str = field(default_factory=lambda: str(uuid4()))
                          """Unique identifier for this message."""
                      
                          conversation_id: str | None = None
                          """ID of the conversation this message belongs to."""
                      
                          response_time: float | None = None
                          """Time it took the LLM to respond."""
                      
                          tool_calls: list[ToolCallInfo] = field(default_factory=list)
                          """List of tool calls made during message generation."""
                      
                          associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                          """List of messages which were generated during the the creation of this messsage."""
                      
                          name: str | None = None
                          """Display name for the message sender in UI."""
                      
                          forwarded_from: list[str] = field(default_factory=list)
                          """List of agent names (the chain) that forwarded this message to the sender."""
                      
                          provider_extra: dict[str, Any] = field(default_factory=dict)
                          """Provider specific metadata / extra information."""
                      
                          def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                              """Create new message showing it was forwarded from another message.
                      
                              Args:
                                  previous_message: The message that led to this one's creation
                      
                              Returns:
                                  New message with updated chain showing the path through previous message
                              """
                              from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                              return replace(self, forwarded_from=from_)
                      
                          def to_text_message(self) -> ChatMessage[str]:
                              """Convert this message to a text-only version."""
                              return dataclasses.replace(self, content=str(self.content))  # type: ignore
                      
                          def _get_content_str(self) -> str:
                              """Get string representation of content."""
                              match self.content:
                                  case str():
                                      return self.content
                                  case BaseModel():
                                      return self.content.model_dump_json(indent=2)
                                  case _:
                                      msg = f"Unexpected content type: {type(self.content)}"
                                      raise ValueError(msg)
                      
                          @property
                          def data(self) -> TContent:
                              """Get content as typed data. Provides compat to RunResult."""
                              return self.content
                      
                          def format(
                              self,
                              style: FormatStyle = "simple",
                              *,
                              template: str | None = None,
                              variables: dict[str, Any] | None = None,
                              show_metadata: bool = False,
                              show_costs: bool = False,
                          ) -> str:
                              """Format message with configurable style.
                      
                              Args:
                                  style: Predefined style or "custom" for custom template
                                  template: Custom Jinja template (required if style="custom")
                                  variables: Additional variables for template rendering
                                  show_metadata: Whether to include metadata
                                  show_costs: Whether to include cost information
                      
                              Raises:
                                  ValueError: If style is "custom" but no template provided
                                          or if style is invalid
                              """
                              from jinja2 import Environment
                              import yamling
                      
                              env = Environment(trim_blocks=True, lstrip_blocks=True)
                              env.filters["to_yaml"] = yamling.dump_yaml
                      
                              match style:
                                  case "custom":
                                      if not template:
                                          msg = "Custom style requires a template"
                                          raise ValueError(msg)
                                      template_str = template
                                  case _ if style in MESSAGE_TEMPLATES:
                                      template_str = MESSAGE_TEMPLATES[style]
                                  case _:
                                      msg = f"Invalid style: {style}"
                                      raise ValueError(msg)
                      
                              template_obj = env.from_string(template_str)
                              vars_ = {**asdict(self), "show_metadata": show_metadata, "show_costs": show_costs}
                              if variables:
                                  vars_.update(variables)
                      
                              return template_obj.render(**vars_)
                      

                      associated_messages class-attribute instance-attribute

                      associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                      

List of messages which were generated during the creation of this message.

                      content instance-attribute

                      content: TContent
                      

                      Message content, typed as TContent (either str or BaseModel).

                      conversation_id class-attribute instance-attribute

                      conversation_id: str | None = None
                      

                      ID of the conversation this message belongs to.

                      cost_info class-attribute instance-attribute

                      cost_info: TokenCost | None = None
                      

                      Token usage and costs for this specific message if available.

                      data property

                      data: TContent
                      

                      Get content as typed data. Provides compat to RunResult.

                      forwarded_from class-attribute instance-attribute

                      forwarded_from: list[str] = field(default_factory=list)
                      

                      List of agent names (the chain) that forwarded this message to the sender.

                      message_id class-attribute instance-attribute

                      message_id: str = field(default_factory=lambda: str(uuid4()))
                      

                      Unique identifier for this message.

                      metadata class-attribute instance-attribute

                      metadata: JsonObject = field(default_factory=dict)
                      

                      Additional metadata about the message.

                      model class-attribute instance-attribute

                      model: str | None = None
                      

                      Name of the model that generated this message.

                      name class-attribute instance-attribute

                      name: str | None = None
                      

                      Display name for the message sender in UI.

                      provider_extra class-attribute instance-attribute

                      provider_extra: dict[str, Any] = field(default_factory=dict)
                      

                      Provider specific metadata / extra information.

                      response_time class-attribute instance-attribute

                      response_time: float | None = None
                      

                      Time it took the LLM to respond.

                      role instance-attribute

                      role: MessageRole
                      

                      Role of the message sender (user/assistant/system).

                      timestamp class-attribute instance-attribute

timestamp: datetime = field(default_factory=datetime.now)
                      

                      When this message was created.

                      tool_calls class-attribute instance-attribute

                      tool_calls: list[ToolCallInfo] = field(default_factory=list)
                      

                      List of tool calls made during message generation.

                      _get_content_str

                      _get_content_str() -> str
                      

                      Get string representation of content.

                      Source code in src/llmling_agent/messaging/messages.py
                      def _get_content_str(self) -> str:
                          """Get string representation of content."""
                          match self.content:
                              case str():
                                  return self.content
                              case BaseModel():
                                  return self.content.model_dump_json(indent=2)
                              case _:
                                  msg = f"Unexpected content type: {type(self.content)}"
                                  raise ValueError(msg)
                      

                      format

                      format(
                          style: FormatStyle = "simple",
                          *,
                          template: str | None = None,
                          variables: dict[str, Any] | None = None,
                          show_metadata: bool = False,
                          show_costs: bool = False,
                      ) -> str
                      

                      Format message with configurable style.

                      Parameters:

                      Name Type Description Default
                      style FormatStyle

                      Predefined style or "custom" for custom template

                      'simple'
                      template str | None

                      Custom Jinja template (required if style="custom")

                      None
                      variables dict[str, Any] | None

                      Additional variables for template rendering

                      None
                      show_metadata bool

                      Whether to include metadata

                      False
                      show_costs bool

                      Whether to include cost information

                      False

                      Raises:

                      Type Description
                      ValueError

                      If style is "custom" but no template provided or if style is invalid

                      Source code in src/llmling_agent/messaging/messages.py
                      def format(
                          self,
                          style: FormatStyle = "simple",
                          *,
                          template: str | None = None,
                          variables: dict[str, Any] | None = None,
                          show_metadata: bool = False,
                          show_costs: bool = False,
                      ) -> str:
                          """Format message with configurable style.
                      
                          Args:
                              style: Predefined style or "custom" for custom template
                              template: Custom Jinja template (required if style="custom")
                              variables: Additional variables for template rendering
                              show_metadata: Whether to include metadata
                              show_costs: Whether to include cost information
                      
                          Raises:
                              ValueError: If style is "custom" but no template provided
                                      or if style is invalid
                          """
                          from jinja2 import Environment
                          import yamling
                      
                          env = Environment(trim_blocks=True, lstrip_blocks=True)
                          env.filters["to_yaml"] = yamling.dump_yaml
                      
                          match style:
                              case "custom":
                                  if not template:
                                      msg = "Custom style requires a template"
                                      raise ValueError(msg)
                                  template_str = template
                              case _ if style in MESSAGE_TEMPLATES:
                                  template_str = MESSAGE_TEMPLATES[style]
                              case _:
                                  msg = f"Invalid style: {style}"
                                  raise ValueError(msg)
                      
                          template_obj = env.from_string(template_str)
                          vars_ = {**asdict(self), "show_metadata": show_metadata, "show_costs": show_costs}
                          if variables:
                              vars_.update(variables)
                      
                          return template_obj.render(**vars_)
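The custom-style path above boils down to rendering the message's fields through a user-supplied Jinja template. A minimal sketch, using illustrative field values rather than a real ChatMessage:

```python
from jinja2 import Environment

# Same Environment settings as format() above
env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(
    "[{{ role }}] {{ name or 'anonymous' }}: {{ content }}"
    "{% if show_costs and cost_info %} (${{ cost_info }}){% endif %}"
)
vars_ = {
    "role": "assistant",
    "name": "writer",
    "content": "Draft complete.",
    "show_costs": True,
    "cost_info": "0.0021",
}
print(template.render(**vars_))
# [assistant] writer: Draft complete. ($0.0021)
```

Because `format()` passes `asdict(self)` plus the flags into the render call, a custom template can reference any ChatMessage field by name, exactly as `role`, `name`, and `cost_info` are referenced here.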
                      

                      forwarded

                      forwarded(previous_message: ChatMessage[Any]) -> Self
                      

                      Create new message showing it was forwarded from another message.

                      Parameters:

                      Name Type Description Default
                      previous_message ChatMessage[Any]

                      The message that led to this one's creation

                      required

                      Returns:

                      Type Description
                      Self

                      New message with updated chain showing the path through previous message

                      Source code in src/llmling_agent/messaging/messages.py
                      def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                          """Create new message showing it was forwarded from another message.
                      
                          Args:
                              previous_message: The message that led to this one's creation
                      
                          Returns:
                              New message with updated chain showing the path through previous message
                          """
                          from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                          return replace(self, forwarded_from=from_)
                      

                      to_text_message

                      to_text_message() -> ChatMessage[str]
                      

                      Convert this message to a text-only version.

                      Source code in src/llmling_agent/messaging/messages.py
                      def to_text_message(self) -> ChatMessage[str]:
                          """Convert this message to a text-only version."""
                          return dataclasses.replace(self, content=str(self.content))  # type: ignore
                      

                      StructuredAgent

                      Bases: MessageNode[TDeps, TResult]

                      Wrapper for Agent that enforces a specific result type.

This wrapper ensures the agent always returns results of the specified type. The type can be provided as:
- A Python type for validation
- A response definition name from the manifest
- A complete response definition instance

                      Source code in src/llmling_agent/agent/structured.py
                      class StructuredAgent[TDeps, TResult](MessageNode[TDeps, TResult]):
                          """Wrapper for Agent that enforces a specific result type.
                      
                          This wrapper ensures the agent always returns results of the specified type.
                          The type can be provided as:
                          - A Python type for validation
                          - A response definition name from the manifest
                          - A complete response definition instance
                          """
                      
                          def __init__(
                              self,
                              agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                              result_type: type[TResult] | str | ResponseDefinition,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ):
                              """Initialize structured agent wrapper.
                      
                              Args:
                                  agent: Base agent to wrap
                                  result_type: Expected result type:
                                      - BaseModel / dataclasses
                                      - Name of response definition in manifest
                                      - Complete response definition instance
                                  tool_name: Optional override for tool name
                                  tool_description: Optional override for tool description
                      
                              Raises:
                                  ValueError: If named response type not found in manifest
                              """
                              from llmling_agent.agent.agent import Agent
                      
                              logger.debug("StructuredAgent.run result_type = %s", result_type)
                              match agent:
                                  case StructuredAgent():
                                      self._agent: Agent[TDeps] = agent._agent
                                  case Callable():
                                      self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                                  case Agent():
                                      self._agent = agent
                                  case _:
                                      msg = "Invalid agent type"
                                      raise ValueError(msg)
                      
                              super().__init__(name=self._agent.name)
                      
                              self._result_type = to_type(result_type)
                              agent.set_result_type(result_type)
                      
                              match result_type:
                                  case type() | str():
                                      # For types and named definitions, use overrides if provided
                                      self._agent.set_result_type(
                                          result_type,
                                          tool_name=tool_name,
                                          tool_description=tool_description,
                                      )
                                  case BaseResponseDefinition():
                                      # For response definitions, use as-is
                                      # (overrides don't apply to complete definitions)
                                      self._agent.set_result_type(result_type)
                      
                          async def __aenter__(self) -> Self:
                              """Enter async context and set up MCP servers.
                      
                              Called when agent enters its async context. Sets up any configured
                              MCP servers and their tools.
                              """
                              await self._agent.__aenter__()
                              return self
                      
                          async def __aexit__(
                              self,
                              exc_type: type[BaseException] | None,
                              exc_val: BaseException | None,
                              exc_tb: TracebackType | None,
                          ):
                              """Exit async context."""
                              await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                      
                          def __and__(
                              self, other: AnyAgent[Any, Any] | Team[Any] | ProcessorCallback[TResult]
                          ) -> Team[TDeps]:
                              return self._agent.__and__(other)
                      
                          def __or__(self, other: Agent | ProcessorCallback | BaseTeam) -> TeamRun:
                              return self._agent.__or__(other)
                      
                          async def _run(
                              self,
                              *prompt: AnyPromptType | TResult,
                              result_type: type[TResult] | None = None,
                              model: ModelType = None,
                              tool_choice: bool | str | list[str] = True,
                              store_history: bool = True,
                              message_id: str | None = None,
                              conversation_id: str | None = None,
                              wait_for_connections: bool | None = None,
                          ) -> ChatMessage[TResult]:
                              """Run with fixed result type.
                      
                              Args:
                                  prompt: Any prompt-compatible object or structured objects of type TResult
                                  result_type: Expected result type:
                                      - BaseModel / dataclasses
                                      - Name of response definition in manifest
                                      - Complete response definition instance
                                  model: Optional model override
                                  tool_choice: Control tool usage:
                                      - True: Allow all tools
                                      - False: No tools
                                      - str: Use specific tool
                                      - list[str]: Allow specific tools
                                  store_history: Whether the message exchange should be added to the
                                                 context window
                                  message_id: Optional message id for the returned message.
                                              Automatically generated if not provided.
                                  conversation_id: Optional conversation id for the returned message.
                                  wait_for_connections: Whether to wait for all connections to complete
                              """
                              typ = result_type or self._result_type
                              return await self._agent._run(
                                  *prompt,
                                  result_type=typ,
                                  model=model,
                                  store_history=store_history,
                                  tool_choice=tool_choice,
                                  message_id=message_id,
                                  conversation_id=conversation_id,
                                  wait_for_connections=wait_for_connections,
                              )
                      
                          async def validate_against(
                              self,
                              prompt: str,
                              criteria: type[TResult],
                              **kwargs: Any,
                          ) -> bool:
                              """Check if agent's response satisfies stricter criteria."""
                              result = await self.run(prompt, **kwargs)
                              try:
                                  criteria.model_validate(result.content.model_dump())  # type: ignore
                              except ValidationError:
                                  return False
                              else:
                                  return True
                      
                          def __repr__(self) -> str:
                              type_name = getattr(self._result_type, "__name__", str(self._result_type))
                              return f"StructuredAgent({self._agent!r}, result_type={type_name})"
                      
                          def __prompt__(self) -> str:
                              type_name = getattr(self._result_type, "__name__", str(self._result_type))
                              base_info = self._agent.__prompt__()
                              return f"{base_info}\nStructured output type: {type_name}"
                      
                          def __getattr__(self, name: str) -> Any:
                              return getattr(self._agent, name)
                      
                          @property
                          def context(self) -> AgentContext[TDeps]:
                              return self._agent.context
                      
                          @context.setter
                          def context(self, value: Any):
                              self._agent.context = value
                      
                          @property
                          def name(self) -> str:
                              return self._agent.name
                      
                          @name.setter
                          def name(self, value: str):
                              self._agent.name = value
                      
                          @property
                          def tools(self) -> ToolManager:
                              return self._agent.tools
                      
                          @overload
                          def to_structured(
                              self,
                              result_type: None,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> Agent[TDeps]: ...
                      
                          @overload
                          def to_structured[TNewResult](
                              self,
                              result_type: type[TNewResult] | str | ResponseDefinition,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> StructuredAgent[TDeps, TNewResult]: ...
                      
                          def to_structured[TNewResult](
                              self,
                              result_type: type[TNewResult] | str | ResponseDefinition | None,
                              *,
                              tool_name: str | None = None,
                              tool_description: str | None = None,
                          ) -> Agent[TDeps] | StructuredAgent[TDeps, TNewResult]:
                              if result_type is None:
                                  return self._agent
                      
                              return StructuredAgent(
                                  self._agent,
                                  result_type=result_type,
                                  tool_name=tool_name,
                                  tool_description=tool_description,
                              )
                      
                          @property
                          def stats(self) -> MessageStats:
                              return self._agent.stats
                      
                          async def run_iter(
                              self,
                              *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                              **kwargs: Any,
                          ) -> AsyncIterator[ChatMessage[Any]]:
                              """Forward run_iter to wrapped agent."""
                              async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                                  yield message
                      
                          async def run_job(
                              self,
                              job: Job[TDeps, TResult],
                              *,
                              store_history: bool = True,
                              include_agent_tools: bool = True,
                          ) -> ChatMessage[TResult]:
                              """Execute a pre-defined job ensuring type compatibility.
                      
                              Args:
                                  job: Job configuration to execute
                                  store_history: Whether to add job execution to conversation history
                                  include_agent_tools: Whether to include agent's tools alongside job tools
                      
                              Returns:
                                  Task execution result
                      
                              Raises:
                                  JobError: If job execution fails or types don't match
                                  ValueError: If job configuration is invalid
                              """
                              from llmling_agent.tasks import JobError
                      
                              # Validate dependency requirement
                              if job.required_dependency is not None:  # noqa: SIM102
                                  if not isinstance(self.context.data, job.required_dependency):
                                      msg = (
                                          f"Agent dependencies ({type(self.context.data)}) "
                                          f"don't match job requirement ({job.required_dependency})"
                                      )
                                      raise JobError(msg)
                      
                              # Validate return type requirement
                              if job.required_return_type != self._result_type:
                                  msg = (
                                      f"Agent result type ({self._result_type}) "
                                      f"doesn't match job requirement ({job.required_return_type})"
                                  )
                                  raise JobError(msg)
                      
                              # Load task knowledge if provided
                              if job.knowledge:
                                  # Add knowledge sources to context
                                  resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                      job.knowledge.resources
                                  )
                                  for source in resources:
                                      await self.conversation.load_context_source(source)
                                  for prompt in job.knowledge.prompts:
                                      await self.conversation.load_context_source(prompt)
                      
                              try:
                                  # Register task tools temporarily
                                  tools = job.get_tools()
                      
                                  # Use temporary tools
                                  with self._agent.tools.temporary_tools(
                                      tools, exclusive=not include_agent_tools
                                  ):
                                      # Execute job using StructuredAgent's run to maintain type safety
                                      return await self.run(await job.get_prompt(), store_history=store_history)
                      
                              except Exception as e:
                                  msg = f"Task execution failed: {e}"
                                  logger.exception(msg)
                                  raise JobError(msg) from e
                      
                          @classmethod
                          def from_callback(
                              cls,
                              callback: ProcessorCallback[TResult],
                              *,
                              name: str | None = None,
                              **kwargs: Any,
                          ) -> StructuredAgent[None, TResult]:
                              """Create a structured agent from a processing callback.
                      
                              Args:
                                  callback: Function to process messages. Can be:
                                      - sync or async
                                      - with or without context
                                      - with explicit return type
                                  name: Optional name for the agent
                                  **kwargs: Additional arguments for agent
                      
                              Example:
                                  ```python
                                  class AnalysisResult(BaseModel):
                                      sentiment: float
                                      topics: list[str]
                      
                                  def analyze(msg: str) -> AnalysisResult:
                                      return AnalysisResult(sentiment=0.8, topics=["tech"])
                      
                                  analyzer = StructuredAgent.from_callback(analyze)
                                  ```
                              """
                              from llmling_agent.agent.agent import Agent
                              from llmling_agent_providers.callback import CallbackProvider
                      
                              name = name or callback.__name__ or "processor"
                              provider = CallbackProvider(callback, name=name)
                              agent = Agent[None](provider=provider, name=name, **kwargs)
                              # Get return type from signature for validation
                              hints = get_type_hints(callback)
                              return_type = hints.get("return")
                      
                              # If async, unwrap from Awaitable
                              if (
                                  return_type
                                  and hasattr(return_type, "__origin__")
                                  and return_type.__origin__ is Awaitable
                              ):
                                  return_type = return_type.__args__[0]
                              return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
                      
                          def is_busy(self) -> bool:
                              """Check if agent is currently processing tasks."""
                              return bool(self._pending_tasks or self._background_task)
                      

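                      The core idea behind the wrapper can be sketched with a minimal, stdlib-only analogue (`TypedWrapper` is a hypothetical name, not part of the library): wrap something callable and enforce its declared result type before handing the result back.

                      ```python
                      # Hypothetical minimal analogue of StructuredAgent's contract:
                      # run the wrapped callable and verify the result type.
                      class TypedWrapper:
                          def __init__(self, fn, result_type):
                              self._fn = fn
                              self._result_type = result_type

                          def run(self, *args, **kwargs):
                              result = self._fn(*args, **kwargs)
                              if not isinstance(result, self._result_type):
                                  msg = (
                                      f"Expected {self._result_type.__name__}, "
                                      f"got {type(result).__name__}"
                                  )
                                  raise TypeError(msg)
                              return result


                      doubled = TypedWrapper(lambda x: x * 2, int)
                      print(doubled.run(21))  # 42
                      ```

                      The real class goes further: it accepts named response definitions from the manifest, delegates execution to the wrapped `Agent`, and performs validation through the provider's result-type machinery rather than a plain `isinstance` check.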
                      __aenter__ async

                      __aenter__() -> Self
                      

                      Enter async context and set up MCP servers.

                      Called when agent enters its async context. Sets up any configured MCP servers and their tools.

                      Source code in src/llmling_agent/agent/structured.py
                      async def __aenter__(self) -> Self:
                          """Enter async context and set up MCP servers.
                      
                          Called when agent enters its async context. Sets up any configured
                          MCP servers and their tools.
                          """
                          await self._agent.__aenter__()
                          return self
                      

                      __aexit__ async

                      __aexit__(
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      )
                      

                      Exit async context.

                      Source code in src/llmling_agent/agent/structured.py
                      async def __aexit__(
                          self,
                          exc_type: type[BaseException] | None,
                          exc_val: BaseException | None,
                          exc_tb: TracebackType | None,
                      ):
                          """Exit async context."""
                          await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                      
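                      Both context-manager hooks simply delegate to the wrapped agent. A stdlib sketch of that delegation pattern (`Inner` and `Wrapper` are hypothetical names): the wrapper forwards `__aenter__`/`__aexit__` to the object it wraps, just as `StructuredAgent` forwards them to its underlying `Agent`.

                      ```python
                      import asyncio


                      class Inner:
                          def __init__(self):
                              self.entered = False

                          async def __aenter__(self):
                              self.entered = True
                              return self

                          async def __aexit__(self, exc_type, exc_val, exc_tb):
                              self.entered = False


                      class Wrapper:
                          def __init__(self, inner):
                              self._inner = inner

                          # Delegate context management to the wrapped object.
                          async def __aenter__(self):
                              await self._inner.__aenter__()
                              return self

                          async def __aexit__(self, exc_type, exc_val, exc_tb):
                              await self._inner.__aexit__(exc_type, exc_val, exc_tb)


                      async def main():
                          inner = Inner()
                          async with Wrapper(inner):
                              assert inner.entered
                          assert not inner.entered


                      asyncio.run(main())
                      ```

                      This keeps setup and teardown (for example, MCP server lifecycles) in one place: entering the wrapper enters the wrapped agent, and exiting tears it down even when an exception propagates.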

                      __init__

                      __init__(
                          agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                          result_type: type[TResult] | str | ResponseDefinition,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      )
                      

                      Initialize structured agent wrapper.

                      Parameters:

                      Name Type Description Default
                      agent Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult]

                      Base agent to wrap

                      required
                      result_type type[TResult] | str | ResponseDefinition

                      Expected result type:

                      - BaseModel / dataclasses
                      - Name of a response definition in the manifest
                      - A complete response definition instance

                      required
                      tool_name str | None

                      Optional override for tool name

                      None
                      tool_description str | None

                      Optional override for tool description

                      None

                      Raises:

                      Type Description
                      ValueError

                      If named response type not found in manifest

                      Source code in src/llmling_agent/agent/structured.py
                      def __init__(
                          self,
                          agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                          result_type: type[TResult] | str | ResponseDefinition,
                          *,
                          tool_name: str | None = None,
                          tool_description: str | None = None,
                      ):
                          """Initialize structured agent wrapper.
                      
                          Args:
                              agent: Base agent to wrap
                              result_type: Expected result type:
                                  - BaseModel / dataclasses
                                  - Name of response definition in manifest
                                  - Complete response definition instance
                              tool_name: Optional override for tool name
                              tool_description: Optional override for tool description
                      
                          Raises:
                              ValueError: If named response type not found in manifest
                          """
                          from llmling_agent.agent.agent import Agent
                      
                          logger.debug("StructuredAgent.run result_type = %s", result_type)
                          match agent:
                              case StructuredAgent():
                                  self._agent: Agent[TDeps] = agent._agent
                              case Callable():
                                  self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                              case Agent():
                                  self._agent = agent
                              case _:
                                  msg = "Invalid agent type"
                                  raise ValueError(msg)
                      
                          super().__init__(name=self._agent.name)
                      
                          self._result_type = to_type(result_type)
                          agent.set_result_type(result_type)
                      
                          match result_type:
                              case type() | str():
                                  # For types and named definitions, use overrides if provided
                                  self._agent.set_result_type(
                                      result_type,
                                      tool_name=tool_name,
                                      tool_description=tool_description,
                                  )
                              case BaseResponseDefinition():
                                  # For response definitions, use as-is
                                  # (overrides don't apply to complete definitions)
                                  self._agent.set_result_type(result_type)
                      

                      _run async

                      _run(
                          *prompt: AnyPromptType | TResult,
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          tool_choice: bool | str | list[str] = True,
                          store_history: bool = True,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> ChatMessage[TResult]
                      

                      Run with fixed result type.

                      Parameters:

                      - prompt (AnyPromptType | TResult, default: ()):
                        Any prompt-compatible object or structured objects of type TResult
                      - result_type (type[TResult] | None, default: None):
                        Expected result type: a BaseModel or dataclass, the name of a response
                        definition in the manifest, or a complete response definition instance
                      - model (ModelType, default: None):
                        Optional model override
                      - tool_choice (bool | str | list[str], default: True):
                        Control tool usage: True allows all tools, False disables tools, a str
                        selects a specific tool, and a list[str] allows specific tools
                      - store_history (bool, default: True):
                        Whether the message exchange should be added to the context window
                      - message_id (str | None, default: None):
                        Optional message id for the returned message; generated automatically
                        if not provided
                      - conversation_id (str | None, default: None):
                        Optional conversation id for the returned message
                      - wait_for_connections (bool | None, default: None):
                        Whether to wait for all connections to complete
                      Source code in src/llmling_agent/agent/structured.py
                      async def _run(
                          self,
                          *prompt: AnyPromptType | TResult,
                          result_type: type[TResult] | None = None,
                          model: ModelType = None,
                          tool_choice: bool | str | list[str] = True,
                          store_history: bool = True,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          wait_for_connections: bool | None = None,
                      ) -> ChatMessage[TResult]:
                          """Run with fixed result type.
                      
                          Args:
                              prompt: Any prompt-compatible object or structured objects of type TResult
                              result_type: Expected result type:
                                  - BaseModel / dataclasses
                                  - Name of response definition in manifest
                                  - Complete response definition instance
                              model: Optional model override
                              tool_choice: Control tool usage:
                                  - True: Allow all tools
                                  - False: No tools
                                  - str: Use specific tool
                                  - list[str]: Allow specific tools
                              store_history: Whether the message exchange should be added to the
                                             context window
                              message_id: Optional message id for the returned message.
                                          Automatically generated if not provided.
                              conversation_id: Optional conversation id for the returned message.
                              wait_for_connections: Whether to wait for all connections to complete
                          """
                          typ = result_type or self._result_type
                          return await self._agent._run(
                              *prompt,
                              result_type=typ,
                              model=model,
                              store_history=store_history,
                              tool_choice=tool_choice,
                              message_id=message_id,
                              conversation_id=conversation_id,
                              wait_for_connections=wait_for_connections,
                          )
                      

                      from_callback classmethod

                      from_callback(
                          callback: ProcessorCallback[TResult], *, name: str | None = None, **kwargs: Any
                      ) -> StructuredAgent[None, TResult]
                      

                      Create a structured agent from a processing callback.

                      Parameters:

                      - callback (ProcessorCallback[TResult], required):
                        Function to process messages; can be sync or async, with or without
                        context, and with an explicit return type
                      - name (str | None, default: None):
                        Optional name for the agent
                      - **kwargs (Any, default: {}):
                        Additional arguments for the agent
                      Example
                      class AnalysisResult(BaseModel):
                          sentiment: float
                          topics: list[str]
                      
                      def analyze(msg: str) -> AnalysisResult:
                          return AnalysisResult(sentiment=0.8, topics=["tech"])
                      
                      analyzer = StructuredAgent.from_callback(analyze)
                      
                      Source code in src/llmling_agent/agent/structured.py
                      @classmethod
                      def from_callback(
                          cls,
                          callback: ProcessorCallback[TResult],
                          *,
                          name: str | None = None,
                          **kwargs: Any,
                      ) -> StructuredAgent[None, TResult]:
                          """Create a structured agent from a processing callback.
                      
                          Args:
                              callback: Function to process messages. Can be:
                                  - sync or async
                                  - with or without context
                                  - with explicit return type
                              name: Optional name for the agent
                              **kwargs: Additional arguments for agent
                      
                          Example:
                              ```python
                              class AnalysisResult(BaseModel):
                                  sentiment: float
                                  topics: list[str]
                      
                              def analyze(msg: str) -> AnalysisResult:
                                  return AnalysisResult(sentiment=0.8, topics=["tech"])
                      
                              analyzer = StructuredAgent.from_callback(analyze)
                              ```
                          """
                          from llmling_agent.agent.agent import Agent
                          from llmling_agent_providers.callback import CallbackProvider
                      
                          name = name or callback.__name__ or "processor"
                          provider = CallbackProvider(callback, name=name)
                          agent = Agent[None](provider=provider, name=name, **kwargs)
                          # Get return type from signature for validation
                          hints = get_type_hints(callback)
                          return_type = hints.get("return")
                      
                          # If async, unwrap from Awaitable
                          if (
                              return_type
                              and hasattr(return_type, "__origin__")
                              and return_type.__origin__ is Awaitable
                          ):
                              return_type = return_type.__args__[0]
                          return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
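                      The return-type inspection at the end of from_callback can be exercised in isolation with only the standard library. The schedule function below is hypothetical; note that for an async def, get_type_hints already reports the declared return type directly, so the unwrap step only fires for callbacks explicitly annotated as returning Awaitable[T]:

```python
from collections.abc import Awaitable
from typing import get_type_hints


def schedule(msg: str) -> Awaitable[int]:
    """Hypothetical callback annotated as returning an awaitable."""
    raise NotImplementedError


hints = get_type_hints(schedule)
return_type = hints.get("return")  # Awaitable[int]

# Unwrap Awaitable[T] to T, mirroring the check in from_callback.
if (
    return_type is not None
    and getattr(return_type, "__origin__", None) is Awaitable
):
    return_type = return_type.__args__[0]

assert return_type is int
```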
                      

                      is_busy

                      is_busy() -> bool
                      

                      Check if agent is currently processing tasks.

                      Source code in src/llmling_agent/agent/structured.py
                      def is_busy(self) -> bool:
                          """Check if agent is currently processing tasks."""
                          return bool(self._pending_tasks or self._background_task)
                      

                      run_iter async

                      run_iter(
                          *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]], **kwargs: Any
                      ) -> AsyncIterator[ChatMessage[Any]]
                      

                      Forward run_iter to wrapped agent.

                      Source code in src/llmling_agent/agent/structured.py
                      async def run_iter(
                          self,
                          *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                          **kwargs: Any,
                      ) -> AsyncIterator[ChatMessage[Any]]:
                          """Forward run_iter to wrapped agent."""
                          async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                              yield message
                      

                      run_job async

                      run_job(
                          job: Job[TDeps, TResult],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> ChatMessage[TResult]
                      

                      Execute a pre-defined job ensuring type compatibility.

                      Parameters:

                      - job (Job[TDeps, TResult], required):
                        Job configuration to execute
                      - store_history (bool, default: True):
                        Whether to add job execution to conversation history
                      - include_agent_tools (bool, default: True):
                        Whether to include the agent's tools alongside job tools

                      Returns:

                      - ChatMessage[TResult]: Task execution result

                      Raises:

                      - JobError: If job execution fails or types don't match
                      - ValueError: If job configuration is invalid

                      Source code in src/llmling_agent/agent/structured.py
                      async def run_job(
                          self,
                          job: Job[TDeps, TResult],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> ChatMessage[TResult]:
                          """Execute a pre-defined job ensuring type compatibility.
                      
                          Args:
                              job: Job configuration to execute
                              store_history: Whether to add job execution to conversation history
                              include_agent_tools: Whether to include agent's tools alongside job tools
                      
                          Returns:
                              Task execution result
                      
                          Raises:
                              JobError: If job execution fails or types don't match
                              ValueError: If job configuration is invalid
                          """
                          from llmling_agent.tasks import JobError
                      
                          # Validate dependency requirement
                          if job.required_dependency is not None:  # noqa: SIM102
                              if not isinstance(self.context.data, job.required_dependency):
                                  msg = (
                                      f"Agent dependencies ({type(self.context.data)}) "
                                      f"don't match job requirement ({job.required_dependency})"
                                  )
                                  raise JobError(msg)
                      
                          # Validate return type requirement
                          if job.required_return_type != self._result_type:
                              msg = (
                                  f"Agent result type ({self._result_type}) "
                                  f"doesn't match job requirement ({job.required_return_type})"
                              )
                              raise JobError(msg)
                      
                          # Load task knowledge if provided
                          if job.knowledge:
                              # Add knowledge sources to context
                              resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                  job.knowledge.resources
                              )
                              for source in resources:
                                  await self.conversation.load_context_source(source)
                              for prompt in job.knowledge.prompts:
                                  await self.conversation.load_context_source(prompt)
                      
                          try:
                              # Register task tools temporarily
                              tools = job.get_tools()
                      
                              # Use temporary tools
                              with self._agent.tools.temporary_tools(
                                  tools, exclusive=not include_agent_tools
                              ):
                                  # Execute job using StructuredAgent's run to maintain type safety
                                  return await self.run(await job.get_prompt(), store_history=store_history)
                      
                          except Exception as e:
                              msg = f"Task execution failed: {e}"
                              logger.exception(msg)
                              raise JobError(msg) from e
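                      run_job relies on the tool manager's temporary_tools context to swap the tool set for the duration of the job. The restore-on-exit pattern can be sketched with a plain dict standing in for the tool registry; this is a simplified stand-in, not the real ToolManager implementation:

```python
from collections.abc import Iterator
from contextlib import contextmanager


@contextmanager
def temporary_tools(
    registry: dict[str, object],
    tools: dict[str, object],
    *,
    exclusive: bool = False,
) -> Iterator[dict[str, object]]:
    """Temporarily add tools, optionally hiding existing ones."""
    saved = dict(registry)
    try:
        if exclusive:
            registry.clear()  # job tools replace the agent's own tools
        registry.update(tools)
        yield registry
    finally:
        registry.clear()
        registry.update(saved)  # always restore the original tool set
```

                      With exclusive=True (i.e. include_agent_tools=False in run_job) only the job's tools are visible inside the context; either way the original registry is restored even if the job raises.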
                      

                      validate_against async

                      validate_against(prompt: str, criteria: type[TResult], **kwargs: Any) -> bool
                      

                      Check if agent's response satisfies stricter criteria.

                      Source code in src/llmling_agent/agent/structured.py
                      async def validate_against(
                          self,
                          prompt: str,
                          criteria: type[TResult],
                          **kwargs: Any,
                      ) -> bool:
                          """Check if agent's response satisfies stricter criteria."""
                          result = await self.run(prompt, **kwargs)
                          try:
                              criteria.model_validate(result.content.model_dump())  # type: ignore
                          except ValidationError:
                              return False
                          else:
                              return True
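                      The core of validate_against, re-validating one model's data against a stricter schema, works like this in isolation. A minimal sketch assuming Pydantic v2; the Summary models and the satisfies helper are hypothetical:

```python
from pydantic import BaseModel, ValidationError


class Summary(BaseModel):
    text: str


class StrictSummary(BaseModel):
    text: str
    word_count: int  # stricter: also requires a word count


def satisfies(result: BaseModel, criteria: type[BaseModel]) -> bool:
    """Return True if result's data also validates against criteria."""
    try:
        criteria.model_validate(result.model_dump())
    except ValidationError:
        return False
    return True
```

                      A plain Summary fails the stricter criteria, while a StrictSummary trivially satisfies the weaker one.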
                      

                      Team

                      Bases: BaseTeam[TDeps, Any]

                      Group of agents that can execute together.

                      Source code in src/llmling_agent/delegation/team.py
                      class Team[TDeps](BaseTeam[TDeps, Any]):
                          """Group of agents that can execute together."""
                      
                          async def execute(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                              **kwargs: Any,
                          ) -> TeamResponse:
                              """Run all agents in parallel with monitoring."""
                              from llmling_agent.talk.talk import Talk
                      
                              self._team_talk.clear()
                      
                              start_time = datetime.now()
                              responses: list[AgentResponse[Any]] = []
                              errors: dict[str, Exception] = {}
                              final_prompt = list(prompts)
                              if self.shared_prompt:
                                  final_prompt.insert(0, self.shared_prompt)
                              combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                              all_nodes = list(await self.pick_agents(combined_prompt))
                              # Create Talk connections for monitoring this execution
                              execution_talks: list[Talk[Any]] = []
                              for node in all_nodes:
                                  talk = Talk[Any](
                                      node,
                                      [],  # No actual forwarding, just for tracking
                                      connection_type="run",
                                      queued=True,
                                      queue_strategy="latest",
                                  )
                                  execution_talks.append(talk)
                                  self._team_talk.append(talk)  # Add to base class's TeamTalk
                      
                              async def _run(node: MessageNode[TDeps, Any]):
                                  try:
                                      start = perf_counter()
                                      message = await node.run(*final_prompt, **kwargs)
                                      timing = perf_counter() - start
                                      r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                      responses.append(r)
                      
                                      # Update talk stats for this agent
                                      talk = next(t for t in execution_talks if t.source == node)
                                      talk._stats.messages.append(message)
                      
                                  except Exception as e:  # noqa: BLE001
                                      errors[node.name] = e
                      
                              # Run all agents in parallel
                              await asyncio.gather(*[_run(node) for node in all_nodes])
                      
                              return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                      
                          def __prompt__(self) -> str:
                              """Format team info for prompts."""
                              members = ", ".join(a.name for a in self.agents)
                              desc = f" - {self.description}" if self.description else ""
                              return f"Parallel Team '{self.name}'{desc}\nMembers: {members}"
                      
                          async def run_iter(
                              self,
                              *prompts: AnyPromptType,
                              **kwargs: Any,
                          ) -> AsyncIterator[ChatMessage[Any]]:
                              """Yield messages as they arrive from parallel execution."""
                              queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                              failures: dict[str, Exception] = {}
                      
                              async def _run(node: MessageNode[TDeps, Any]):
                                  try:
                                      message = await node.run(*prompts, **kwargs)
                                      await queue.put(message)
                                  except Exception as e:
                                      logger.exception("Error executing node %s", node.name)
                                      failures[node.name] = e
                                      # Put None to maintain queue count
                                      await queue.put(None)
                      
                              # Get nodes to run
                              combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                              all_nodes = list(await self.pick_agents(combined_prompt))
                      
                              # Start all agents
                              tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                      
                              try:
                                  # Yield messages as they arrive
                                  for _ in all_nodes:
                                      if msg := await queue.get():
                                          yield msg
                      
                                  # If any failures occurred, raise error with details
                                  if failures:
                                      error_details = "\n".join(
                                          f"- {name}: {error}" for name, error in failures.items()
                                      )
                                      error_msg = f"Some nodes failed to execute:\n{error_details}"
                                      raise RuntimeError(error_msg)
                      
                              finally:
                                  # Clean up any remaining tasks
                                  for task in tasks:
                                      if not task.done():
                                          task.cancel()
                      
                          async def _run(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                              wait_for_connections: bool | None = None,
                              message_id: str | None = None,
                              conversation_id: str | None = None,
                              **kwargs: Any,
                          ) -> ChatMessage[list[Any]]:
                              """Run all agents in parallel and return combined message."""
                              result: TeamResponse = await self.execute(*prompts, **kwargs)
                              message_id = message_id or str(uuid4())
                              return ChatMessage(
                                  content=[r.message.content for r in result if r.message],
                                  role="assistant",
                                  name=self.name,
                                  message_id=message_id,
                                  conversation_id=conversation_id,
                                  metadata={
                                      "agent_names": [r.agent_name for r in result],
                                      "errors": {name: str(error) for name, error in result.errors.items()},
                                      "start_time": result.start_time.isoformat(),
                                  },
                              )
                      
                          async def run_job[TJobResult](
                              self,
                              job: Job[TDeps, TJobResult],
                              *,
                              store_history: bool = True,
                              include_agent_tools: bool = True,
                          ) -> list[AgentResponse[TJobResult]]:
                              """Execute a job across all team members in parallel.
                      
                              Args:
                                  job: Job configuration to execute
                                  store_history: Whether to add job execution to conversation history
                                  include_agent_tools: Whether to include agent's tools alongside job tools
                      
                              Returns:
                                  List of responses from all agents
                      
                              Raises:
                                  JobError: If job execution fails for any agent
                                  ValueError: If job configuration is invalid
                              """
                              from llmling_agent.agent import Agent, StructuredAgent
                              from llmling_agent.tasks import JobError
                      
                              responses: list[AgentResponse[TJobResult]] = []
                              errors: dict[str, Exception] = {}
                              start_time = datetime.now()
                      
                              # Validate dependencies for all agents
                              if job.required_dependency is not None:
                                  invalid_agents = [
                                      agent.name
                                      for agent in self.iter_agents()
                                      if not isinstance(agent.context.data, job.required_dependency)
                                  ]
                                  if invalid_agents:
                                      msg = (
                                          f"Agents {', '.join(invalid_agents)} don't have required "
                                          f"dependency type: {job.required_dependency}"
                                      )
                                      raise JobError(msg)
                      
                              try:
                                  # Load knowledge for all agents if provided
                                  if job.knowledge:
                                      # TODO: resources
                                      tools = [t.name for t in job.get_tools()]
                                      await self.distribute(content="", tools=tools)
                      
                                  prompt = await job.get_prompt()
                      
                                  async def _run(agent: MessageNode[TDeps, TJobResult]):
                                      assert isinstance(agent, Agent | StructuredAgent)
                                      try:
                                          with agent.tools.temporary_tools(
                                              job.get_tools(), exclusive=not include_agent_tools
                                          ):
                                              start = perf_counter()
                                              resp = AgentResponse(
                                                  agent_name=agent.name,
                                                  message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                                  timing=perf_counter() - start,
                                              )
                                              responses.append(resp)
                                      except Exception as e:  # noqa: BLE001
                                          errors[agent.name] = e
                      
                                  # Run job in parallel on all agents
                                  await asyncio.gather(*[_run(node) for node in self.agents])
                      
                                  return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                      
                              except Exception as e:
                                  msg = "Job execution failed"
                                  logger.exception(msg)
                                  raise JobError(msg) from e
                      

                      __prompt__

                      __prompt__() -> str
                      

                      Format team info for prompts.

                      Source code in src/llmling_agent/delegation/team.py
                      def __prompt__(self) -> str:
                          """Format team info for prompts."""
                          members = ", ".join(a.name for a in self.agents)
                          desc = f" - {self.description}" if self.description else ""
                          return f"Parallel Team '{self.name}'{desc}\nMembers: {members}"
                      

                      _run async

                      _run(
                          *prompts: AnyPromptType | Image | PathLike[str] | None,
                          wait_for_connections: bool | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          **kwargs: Any,
                      ) -> ChatMessage[list[Any]]
                      

                      Run all agents in parallel and return combined message.

                      Source code in src/llmling_agent/delegation/team.py
                      async def _run(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                          wait_for_connections: bool | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          **kwargs: Any,
                      ) -> ChatMessage[list[Any]]:
                          """Run all agents in parallel and return combined message."""
                          result: TeamResponse = await self.execute(*prompts, **kwargs)
                          message_id = message_id or str(uuid4())
                          return ChatMessage(
                              content=[r.message.content for r in result if r.message],
                              role="assistant",
                              name=self.name,
                              message_id=message_id,
                              conversation_id=conversation_id,
                              metadata={
                                  "agent_names": [r.agent_name for r in result],
                                  "errors": {name: str(error) for name, error in result.errors.items()},
                                  "start_time": result.start_time.isoformat(),
                              },
                          )
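
The aggregation step above folds every per-agent result into one combined message: contents become a list, agent names and stringified errors go into metadata. A minimal standalone sketch of that fold (plain dicts and tuples stand in for ChatMessage and TeamResponse; all field names here are illustrative, not the real llmling_agent types):

```python
from datetime import datetime
from uuid import uuid4

def combine(results, errors, start_time, team_name):
    """Fold per-agent (name, message) pairs into one combined record,
    mirroring how _run builds a single ChatMessage from a TeamResponse."""
    return {
        "content": [msg for _, msg in results],  # one entry per successful agent
        "role": "assistant",
        "name": team_name,
        "message_id": str(uuid4()),
        "metadata": {
            "agent_names": [name for name, _ in results],
            "errors": {name: str(err) for name, err in errors.items()},
            "start_time": start_time.isoformat(),
        },
    }

msg = combine(
    [("alice", "draft A"), ("bob", "draft B")],
    {"carol": ValueError("timeout")},
    datetime(2025, 1, 1),
    "writers",
)
```

Note that failed agents contribute nothing to `content`; they only surface in `metadata["errors"]`, so the caller can distinguish partial success from total failure.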
                      

                      execute async

                      execute(
                          *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                      ) -> TeamResponse
                      

                      Run all agents in parallel with monitoring.

                      Source code in src/llmling_agent/delegation/team.py
                      async def execute(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                          **kwargs: Any,
                      ) -> TeamResponse:
                          """Run all agents in parallel with monitoring."""
                          from llmling_agent.talk.talk import Talk
                      
                          self._team_talk.clear()
                      
                          start_time = datetime.now()
                          responses: list[AgentResponse[Any]] = []
                          errors: dict[str, Exception] = {}
                          final_prompt = list(prompts)
                          if self.shared_prompt:
                              final_prompt.insert(0, self.shared_prompt)
                          combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                          all_nodes = list(await self.pick_agents(combined_prompt))
                          # Create Talk connections for monitoring this execution
                          execution_talks: list[Talk[Any]] = []
                          for node in all_nodes:
                              talk = Talk[Any](
                                  node,
                                  [],  # No actual forwarding, just for tracking
                                  connection_type="run",
                                  queued=True,
                                  queue_strategy="latest",
                              )
                              execution_talks.append(talk)
                              self._team_talk.append(talk)  # Add to base class's TeamTalk
                      
                          async def _run(node: MessageNode[TDeps, Any]):
                              try:
                                  start = perf_counter()
                                  message = await node.run(*final_prompt, **kwargs)
                                  timing = perf_counter() - start
                                  r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                  responses.append(r)
                      
                                  # Update talk stats for this agent
                                  talk = next(t for t in execution_talks if t.source == node)
                                  talk._stats.messages.append(message)
                      
                              except Exception as e:  # noqa: BLE001
                                  errors[node.name] = e
                      
                          # Run all agents in parallel
                          await asyncio.gather(*[_run(node) for node in all_nodes])
                      
                          return TeamResponse(responses=responses, start_time=start_time, errors=errors)
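
The core concurrency pattern in execute() is one inner coroutine per node, all awaited via asyncio.gather, with per-node exceptions caught and collected into a dict instead of aborting the batch. A self-contained sketch of that pattern with plain callables standing in for agents (names here are illustrative, not the llmling_agent API):

```python
import asyncio
from time import perf_counter

async def run_all(workers, prompt):
    """Run every worker concurrently; collect results and errors separately,
    so one failing worker never cancels the others."""
    results: list[tuple[str, str, float]] = []
    errors: dict[str, Exception] = {}

    async def _run_one(name, worker):
        try:
            start = perf_counter()
            out = await worker(prompt)
            results.append((name, out, perf_counter() - start))
        except Exception as e:  # collect instead of failing the whole batch
            errors[name] = e

    await asyncio.gather(*[_run_one(n, w) for n, w in workers.items()])
    return results, errors

async def ok(p):
    return p.upper()

async def bad(p):
    raise ValueError("boom")

results, errors = asyncio.run(run_all({"a": ok, "b": bad}, "hi"))
```

This is why execute() returns a TeamResponse carrying both responses and errors: partial failure is an expected outcome of a parallel team run, not an exception.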
                      

                      run_iter async

                      run_iter(*prompts: AnyPromptType, **kwargs: Any) -> AsyncIterator[ChatMessage[Any]]
                      

                      Yield messages as they arrive from parallel execution.

                      Source code in src/llmling_agent/delegation/team.py
                      async def run_iter(
                          self,
                          *prompts: AnyPromptType,
                          **kwargs: Any,
                      ) -> AsyncIterator[ChatMessage[Any]]:
                          """Yield messages as they arrive from parallel execution."""
                          queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                          failures: dict[str, Exception] = {}
                      
                          async def _run(node: MessageNode[TDeps, Any]):
                              try:
                                  message = await node.run(*prompts, **kwargs)
                                  await queue.put(message)
                              except Exception as e:
                                  logger.exception("Error executing node %s", node.name)
                                  failures[node.name] = e
                                  # Put None to maintain queue count
                                  await queue.put(None)
                      
                          # Get nodes to run
                          combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                          all_nodes = list(await self.pick_agents(combined_prompt))
                      
                          # Start all agents
                          tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                      
                          try:
                              # Yield messages as they arrive
                              for _ in all_nodes:
                                  if msg := await queue.get():
                                      yield msg
                      
                              # If any failures occurred, raise error with details
                              if failures:
                                  error_details = "\n".join(
                                      f"- {name}: {error}" for name, error in failures.items()
                                  )
                                  error_msg = f"Some nodes failed to execute:\n{error_details}"
                                  raise RuntimeError(error_msg)
                      
                          finally:
                              # Clean up any remaining tasks
                              for task in tasks:
                                  if not task.done():
                                      task.cancel()
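
run_iter() streams results through an asyncio.Queue as a fan-in point: each worker task puts its message (or a None sentinel on failure, so the consumer's get count stays balanced), and the consumer yields messages in completion order. A reduced sketch of that pattern with plain coroutines (the worker names are hypothetical; this is not the llmling_agent API):

```python
import asyncio

async def iter_results(workers, prompt):
    """Yield results as workers finish, using a queue as the fan-in point."""
    queue: asyncio.Queue = asyncio.Queue()
    failures: dict[str, Exception] = {}

    async def _run_one(name, worker):
        try:
            await queue.put(await worker(prompt))
        except Exception as e:
            failures[name] = e
            await queue.put(None)  # sentinel keeps the queue-get count balanced

    tasks = [asyncio.create_task(_run_one(n, w)) for n, w in workers.items()]
    try:
        # One get per worker: results arrive in completion order, not start order.
        for _ in workers:
            if (msg := await queue.get()) is not None:
                yield msg
        if failures:
            raise RuntimeError(f"failed: {sorted(failures)}")
    finally:
        for t in tasks:
            if not t.done():
                t.cancel()

async def main():
    async def fast(p):
        return p + "!"

    async def slow(p):
        await asyncio.sleep(0.05)
        return p + "?"

    return [m async for m in iter_results({"fast": fast, "slow": slow}, "hi")]

out = asyncio.run(main())
```

Because the fast worker finishes first, its result is yielded before the slow one regardless of dictionary order, which is exactly the "messages as they arrive" behavior run_iter documents.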
                      

                      run_job async

                      run_job(
                          job: Job[TDeps, TJobResult],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> list[AgentResponse[TJobResult]]
                      

                      Execute a job across all team members in parallel.

                      Parameters:

                      - job (Job[TDeps, TJobResult], required): Job configuration to execute
                      - store_history (bool, default True): Whether to add job execution to conversation history
                      - include_agent_tools (bool, default True): Whether to include agent's tools alongside job tools

                      Returns:

                      - list[AgentResponse[TJobResult]]: List of responses from all agents

                      Raises:

                      - JobError: If job execution fails for any agent
                      - ValueError: If job configuration is invalid

                      Source code in src/llmling_agent/delegation/team.py
                      async def run_job[TJobResult](
                          self,
                          job: Job[TDeps, TJobResult],
                          *,
                          store_history: bool = True,
                          include_agent_tools: bool = True,
                      ) -> list[AgentResponse[TJobResult]]:
                          """Execute a job across all team members in parallel.
                      
                          Args:
                              job: Job configuration to execute
                              store_history: Whether to add job execution to conversation history
                              include_agent_tools: Whether to include agent's tools alongside job tools
                      
                          Returns:
                              List of responses from all agents
                      
                          Raises:
                              JobError: If job execution fails for any agent
                              ValueError: If job configuration is invalid
                          """
                          from llmling_agent.agent import Agent, StructuredAgent
                          from llmling_agent.tasks import JobError
                      
                          responses: list[AgentResponse[TJobResult]] = []
                          errors: dict[str, Exception] = {}
                          start_time = datetime.now()
                      
                          # Validate dependencies for all agents
                          if job.required_dependency is not None:
                              invalid_agents = [
                                  agent.name
                                  for agent in self.iter_agents()
                                  if not isinstance(agent.context.data, job.required_dependency)
                              ]
                              if invalid_agents:
                                  msg = (
                                      f"Agents {', '.join(invalid_agents)} don't have required "
                                      f"dependency type: {job.required_dependency}"
                                  )
                                  raise JobError(msg)
                      
                          try:
                              # Load knowledge for all agents if provided
                              if job.knowledge:
                                  # TODO: resources
                                  tools = [t.name for t in job.get_tools()]
                                  await self.distribute(content="", tools=tools)
                      
                              prompt = await job.get_prompt()
                      
                              async def _run(agent: MessageNode[TDeps, TJobResult]):
                                  assert isinstance(agent, Agent | StructuredAgent)
                                  try:
                                      with agent.tools.temporary_tools(
                                          job.get_tools(), exclusive=not include_agent_tools
                                      ):
                                          start = perf_counter()
                                          resp = AgentResponse(
                                              agent_name=agent.name,
                                              message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                              timing=perf_counter() - start,
                                          )
                                          responses.append(resp)
                                  except Exception as e:  # noqa: BLE001
                                      errors[agent.name] = e
                      
                              # Run job in parallel on all agents
                              await asyncio.gather(*[_run(node) for node in self.agents])
                      
                              return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                      
                          except Exception as e:
                              msg = "Job execution failed"
                              logger.exception(msg)
                              raise JobError(msg) from e
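
run_job() relies on a temporary_tools context manager to swap the job's tools in for the duration of the run, either replacing the agent's own tools (exclusive=True) or extending them, and to restore the original set on exit even if the run raises. A minimal stand-in sketch of that mechanism (ToolRegistry here is hypothetical; the real context manager lives on llmling_agent's tool manager):

```python
from contextlib import contextmanager

class ToolRegistry:
    """Minimal stand-in for an agent's tool registry."""

    def __init__(self, tools):
        self._tools = dict(tools)

    @contextmanager
    def temporary_tools(self, extra, *, exclusive=False):
        saved = dict(self._tools)
        try:
            if exclusive:
                self._tools = dict(extra)   # job tools replace agent tools
            else:
                self._tools.update(extra)   # job tools extend agent tools
            yield self._tools
        finally:
            self._tools = saved             # always restore, even on error

registry = ToolRegistry({"search": object()})
with registry.temporary_tools({"summarize": object()}, exclusive=True) as tools:
    active = sorted(tools)                  # only the job's tools are visible
restored = sorted(registry._tools)          # original set back after the block
```

The exclusive flag maps directly onto run_job's include_agent_tools parameter: include_agent_tools=True corresponds to exclusive=False (job tools alongside agent tools), and include_agent_tools=False to exclusive=True.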
                      

                      TeamRun

                      Bases: BaseTeam[TDeps, TResult]

                      Handles team operations with monitoring.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      class TeamRun[TDeps, TResult](BaseTeam[TDeps, TResult]):
                          """Handles team operations with monitoring."""
                      
                          def __init__(
                              self,
                              agents: Sequence[MessageNode[TDeps, Any]],
                              *,
                              name: str | None = None,
                              description: str | None = None,
                              shared_prompt: str | None = None,
                              validator: MessageNode[Any, TResult] | None = None,
                              picker: AnyAgent[Any, Any] | None = None,
                              num_picks: int | None = None,
                              pick_prompt: str | None = None,
                              # result_mode: ResultMode = "last",
                          ):
                              super().__init__(
                                  agents,
                                  name=name,
                                  description=description,
                                  shared_prompt=shared_prompt,
                                  picker=picker,
                                  num_picks=num_picks,
                                  pick_prompt=pick_prompt,
                              )
                              self.validator = validator
                              self.result_mode = "last"
                      
                          def __prompt__(self) -> str:
                              """Format team info for prompts."""
                              members = " -> ".join(a.name for a in self.agents)
                              desc = f" - {self.description}" if self.description else ""
                              return f"Sequential Team '{self.name}'{desc}\nPipeline: {members}"
                      
                          async def _run(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                              wait_for_connections: bool | None = None,
                              message_id: str | None = None,
                              conversation_id: str | None = None,
                              **kwargs: Any,
                          ) -> ChatMessage[TResult]:
                              """Run agents sequentially and return combined message.
                      
        This method wraps execute and extracts the final ChatMessage in order to
        fulfill the "message protocol".
                              """
                              message_id = message_id or str(uuid4())
                      
                              result = await self.execute(*prompts, **kwargs)
                              all_messages = [r.message for r in result if r.message]
                              assert all_messages, "Error during execution, returned None for TeamRun"
                              # Determine content based on mode
                              match self.result_mode:
                                  case "last":
                                      content = all_messages[-1].content
                                  # case "concat":
                                  #     content = "\n".join(msg.format() for msg in all_messages)
                                  case _:
                                      msg = f"Invalid result mode: {self.result_mode}"
                                      raise ValueError(msg)
                      
                              return ChatMessage(
                                  content=content,
                                  role="assistant",
                                  name=self.name,
                                  associated_messages=all_messages,
                                  message_id=message_id,
                                  conversation_id=conversation_id,
                                  metadata={
                                      "execution_order": [r.agent_name for r in result],
                                      "start_time": result.start_time.isoformat(),
                                      "errors": {name: str(error) for name, error in result.errors.items()},
                                  },
                              )
                      
                          async def execute(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                              **kwargs: Any,
                          ) -> TeamResponse[TResult]:
                              """Start execution with optional monitoring."""
                              self._team_talk.clear()
                              start_time = datetime.now()
                              final_prompt = list(prompts)
                              if self.shared_prompt:
                                  final_prompt.insert(0, self.shared_prompt)
                      
                              responses = [
                                  i
                                  async for i in self.execute_iter(*final_prompt)
                                  if isinstance(i, AgentResponse)
                              ]
                              return TeamResponse(responses, start_time)
                      
                          async def run_iter(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              **kwargs: Any,
                          ) -> AsyncIterator[ChatMessage[Any]]:
                              """Yield messages from the execution chain."""
                              async for item in self.execute_iter(*prompts, **kwargs):
                                  match item:
                                      case AgentResponse():
                                          if item.message:
                                              yield item.message
                                      case Talk():
                                          pass
                      
                          async def execute_iter(
                              self,
                              *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              **kwargs: Any,
                          ) -> AsyncIterator[Talk[Any] | AgentResponse[Any]]:
                              from toprompt import to_prompt
                      
                              connections: list[Talk[Any]] = []
                              try:
                                  combined_prompt = "\n".join([await to_prompt(p) for p in prompt])
                                  all_nodes = list(await self.pick_agents(combined_prompt))
                                  if self.validator:
                                      all_nodes.append(self.validator)
                                  first = all_nodes[0]
                                  connections = [
                                      source.connect_to(target, queued=True)
                                      for source, target in pairwise(all_nodes)
                                  ]
                                  for conn in connections:
                                      self._team_talk.append(conn)
                      
                                  # First agent
                                  start = perf_counter()
                                  message = await first.run(*prompt, **kwargs)
                                  timing = perf_counter() - start
                                  response = AgentResponse[Any](first.name, message=message, timing=timing)
                                  yield response
                      
                                  # Process through chain
                                  for connection in connections:
                                      target = connection.targets[0]
                                      target_name = target.name
                                      yield connection
                      
                                      # Let errors propagate - they break the chain
                                      start = perf_counter()
                                      messages = await connection.trigger()
                      
                                      # If this is the last node
                                      if target == all_nodes[-1]:
                                          last_talk = Talk[Any](target, [], connection_type="run")
                                          if response.message:
                                              last_talk.stats.messages.append(response.message)
                                          self._team_talk.append(last_talk)
                      
                                      timing = perf_counter() - start
                                      msg = messages[0]
                                      response = AgentResponse[Any](target_name, message=msg, timing=timing)
                                      yield response
                      
                              finally:
                                  # Always clean up connections
                                  for connection in connections:
                                      connection.disconnect()
                      
                          @asynccontextmanager
                          async def chain_stream(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                              require_all: bool = True,
                              **kwargs: Any,
                          ) -> AsyncIterator[StreamingResponseProtocol]:
                              """Stream results through chain of team members."""
                              from llmling_agent.agent import Agent, StructuredAgent
                              from llmling_agent.delegation import TeamRun
                              from llmling_agent_providers.base import StreamingResponseProtocol
                      
                              async with AsyncExitStack() as stack:
                                  streams: list[StreamingResponseProtocol[str]] = []
                                  current_message = prompts
                      
                                  # Set up all streams
                                  for agent in self.agents:
                                      try:
                                          assert isinstance(agent, TeamRun | Agent | StructuredAgent), (
                                              "Cannot stream teams!"
                                          )
                                          stream = await stack.enter_async_context(
                                              agent.run_stream(*current_message, **kwargs)
                                          )
                                          streams.append(stream)  # type: ignore
                                          # Wait for complete response for next agent
                                          async for chunk in stream.stream():
                                              current_message = chunk
                                              if stream.is_complete:
                                                  current_message = (stream.formatted_content,)  # type: ignore
                                                  break
                                      except Exception as e:
                                          if require_all:
                                              msg = f"Chain broken at {agent.name}: {e}"
                                              raise ValueError(msg) from e
                                          logger.warning("Chain handler %s failed: %s", agent.name, e)
                      
                                  # Create a stream-like interface for the chain
                                  class ChainStream(StreamingResponseProtocol[str]):
                                      def __init__(self):
                                          self.streams = streams
                                          self.current_stream_idx = 0
                                          self.is_complete = False
                                          self.model_name = None
                      
                                      def usage(self) -> Usage:
                                          @dataclass
                                          class Usage:
                                              total_tokens: int | None
                                              request_tokens: int | None
                                              response_tokens: int | None
                      
                                          return Usage(0, 0, 0)
                      
                                      async def stream(self) -> AsyncIterator[str]:  # type: ignore
                                          for idx, stream in enumerate(self.streams):
                                              self.current_stream_idx = idx
                                              async for chunk in stream.stream():
                                                  yield chunk
                                                  if idx == len(self.streams) - 1 and stream.is_complete:
                                                      self.is_complete = True
                      
                                  yield ChainStream()
                      
                          @asynccontextmanager
                          async def run_stream(
                              self,
                              *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                              **kwargs: Any,
                          ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                              """Stream responses through the chain.
                      
                              Provides same interface as Agent.run_stream.
                              """
                              async with self.chain_stream(*prompts, **kwargs) as stream:
                                  yield stream
                      

                      __prompt__

                      __prompt__() -> str
                      

                      Format team info for prompts.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      def __prompt__(self) -> str:
                          """Format team info for prompts."""
                          members = " -> ".join(a.name for a in self.agents)
                          desc = f" - {self.description}" if self.description else ""
                          return f"Sequential Team '{self.name}'{desc}\nPipeline: {members}"
                      

                      _run async

                      _run(
                          *prompts: AnyPromptType | Image | PathLike[str] | None,
                          wait_for_connections: bool | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          **kwargs: Any,
                      ) -> ChatMessage[TResult]
                      

                      Run agents sequentially and return combined message.

This method wraps execute and extracts the final ChatMessage in order to fulfill the "message protocol".

                      Source code in src/llmling_agent/delegation/teamrun.py
                      async def _run(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                          wait_for_connections: bool | None = None,
                          message_id: str | None = None,
                          conversation_id: str | None = None,
                          **kwargs: Any,
                      ) -> ChatMessage[TResult]:
                          """Run agents sequentially and return combined message.
                      
    This method wraps execute and extracts the final ChatMessage in order to
    fulfill the "message protocol".
                          """
                          message_id = message_id or str(uuid4())
                      
                          result = await self.execute(*prompts, **kwargs)
                          all_messages = [r.message for r in result if r.message]
                          assert all_messages, "Error during execution, returned None for TeamRun"
                          # Determine content based on mode
                          match self.result_mode:
                              case "last":
                                  content = all_messages[-1].content
                              # case "concat":
                              #     content = "\n".join(msg.format() for msg in all_messages)
                              case _:
                                  msg = f"Invalid result mode: {self.result_mode}"
                                  raise ValueError(msg)
                      
                          return ChatMessage(
                              content=content,
                              role="assistant",
                              name=self.name,
                              associated_messages=all_messages,
                              message_id=message_id,
                              conversation_id=conversation_id,
                              metadata={
                                  "execution_order": [r.agent_name for r in result],
                                  "start_time": result.start_time.isoformat(),
                                  "errors": {name: str(error) for name, error in result.errors.items()},
                              },
                          )
                      

                      chain_stream async

                      chain_stream(
                          *prompts: AnyPromptType | Image | PathLike[str] | None,
                          require_all: bool = True,
                          **kwargs: Any,
                      ) -> AsyncIterator[StreamingResponseProtocol]
                      

                      Stream results through chain of team members.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      @asynccontextmanager
                      async def chain_stream(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                          require_all: bool = True,
                          **kwargs: Any,
                      ) -> AsyncIterator[StreamingResponseProtocol]:
                          """Stream results through chain of team members."""
                          from llmling_agent.agent import Agent, StructuredAgent
                          from llmling_agent.delegation import TeamRun
                          from llmling_agent_providers.base import StreamingResponseProtocol
                      
                          async with AsyncExitStack() as stack:
                              streams: list[StreamingResponseProtocol[str]] = []
                              current_message = prompts
                      
                              # Set up all streams
                              for agent in self.agents:
                                  try:
                                      assert isinstance(agent, TeamRun | Agent | StructuredAgent), (
                                          "Cannot stream teams!"
                                      )
                                      stream = await stack.enter_async_context(
                                          agent.run_stream(*current_message, **kwargs)
                                      )
                                      streams.append(stream)  # type: ignore
                                      # Wait for complete response for next agent
                                      async for chunk in stream.stream():
                                          current_message = chunk
                                          if stream.is_complete:
                                              current_message = (stream.formatted_content,)  # type: ignore
                                              break
                                  except Exception as e:
                                      if require_all:
                                          msg = f"Chain broken at {agent.name}: {e}"
                                          raise ValueError(msg) from e
                                      logger.warning("Chain handler %s failed: %s", agent.name, e)
                      
                              # Create a stream-like interface for the chain
                              class ChainStream(StreamingResponseProtocol[str]):
                                  def __init__(self):
                                      self.streams = streams
                                      self.current_stream_idx = 0
                                      self.is_complete = False
                                      self.model_name = None
                      
                                  def usage(self) -> Usage:
                                      @dataclass
                                      class Usage:
                                          total_tokens: int | None
                                          request_tokens: int | None
                                          response_tokens: int | None
                      
                                      return Usage(0, 0, 0)
                      
                                  async def stream(self) -> AsyncIterator[str]:  # type: ignore
                                      for idx, stream in enumerate(self.streams):
                                          self.current_stream_idx = idx
                                          async for chunk in stream.stream():
                                              yield chunk
                                              if idx == len(self.streams) - 1 and stream.is_complete:
                                                  self.is_complete = True
                      
                              yield ChainStream()
                      
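The pattern here is sequential stream chaining: each member streams to completion, its full output becomes the next member's prompt, and ChainStream then replays every stage's chunks in order. A minimal asyncio sketch of that pattern with hypothetical async-generator stages (not the real agent API):

```python
import asyncio
from collections.abc import AsyncIterator


async def stage(prompt: str, suffix: str) -> AsyncIterator[str]:
    # Hypothetical streaming stage: yields its input, then a transformation.
    yield prompt
    yield suffix


async def chain(prompt: str) -> list[str]:
    chunks: list[str] = []
    current = prompt
    for suffix in ("-a", "-b"):
        collected: list[str] = []
        async for chunk in stage(current, suffix):
            collected.append(chunk)
        current = "".join(collected)  # complete output feeds the next stage
        chunks.extend(collected)      # replayed in order, like ChainStream.stream
    return chunks


print(asyncio.run(chain("x")))  # ['x', '-a', 'x-a', '-b']
```

This also shows why the real method cannot interleave stages: a stage's prompt only exists once the previous stage's stream is complete.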

                      execute async

                      execute(
                          *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                      ) -> TeamResponse[TResult]
                      

                      Start execution with optional monitoring.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      async def execute(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                          **kwargs: Any,
                      ) -> TeamResponse[TResult]:
                          """Start execution with optional monitoring."""
                          self._team_talk.clear()
                          start_time = datetime.now()
                          final_prompt = list(prompts)
                          if self.shared_prompt:
                              final_prompt.insert(0, self.shared_prompt)
                      
                          responses = [
                              i
                              async for i in self.execute_iter(*final_prompt)
                              if isinstance(i, AgentResponse)
                          ]
                          return TeamResponse(responses, start_time)
                      

                      run_iter async

                      run_iter(
                          *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                      ) -> AsyncIterator[ChatMessage[Any]]
                      

                      Yield messages from the execution chain.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      async def run_iter(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                          **kwargs: Any,
                      ) -> AsyncIterator[ChatMessage[Any]]:
                          """Yield messages from the execution chain."""
                          async for item in self.execute_iter(*prompts, **kwargs):
                              match item:
                                  case AgentResponse():
                                      if item.message:
                                          yield item.message
                                  case Talk():
                                      pass
                      

                      run_stream async

                      run_stream(
                          *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                      ) -> AsyncIterator[StreamingResponseProtocol[TResult]]
                      

                      Stream responses through the chain.

                      Provides same interface as Agent.run_stream.

                      Source code in src/llmling_agent/delegation/teamrun.py
                      @asynccontextmanager
                      async def run_stream(
                          self,
                          *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                          **kwargs: Any,
                      ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                          """Stream responses through the chain.
                      
                          Provides same interface as Agent.run_stream.
                          """
                          async with self.chain_stream(*prompts, **kwargs) as stream:
                              yield stream
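
run_stream is a thin delegating context manager: it enters the inner chain_stream manager and re-yields its stream unchanged. That @asynccontextmanager delegation pattern can be sketched with the standard library alone (names here are illustrative):

```python
import asyncio
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager


@asynccontextmanager
async def open_stream(text: str) -> AsyncIterator[AsyncIterator[str]]:
    """Inner context manager, standing in for chain_stream()."""

    async def chunks() -> AsyncIterator[str]:
        for ch in text:
            yield ch

    yield chunks()


@asynccontextmanager
async def run_stream(text: str) -> AsyncIterator[AsyncIterator[str]]:
    """Delegate to the inner manager, re-yielding its stream unchanged."""
    async with open_stream(text) as stream:
        yield stream


async def main() -> str:
    async with run_stream("hi") as stream:
        return "".join([c async for c in stream])


print(asyncio.run(main()))  # hi
```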
                      

                      ToolInfo dataclass

                      Information about a registered tool.

                      Source code in src/llmling_agent/tools/base.py
                      @dataclass
                      class ToolInfo:
                          """Information about a registered tool."""
                      
                          callable: LLMCallableTool
                          """The actual tool implementation"""
                      
                          enabled: bool = True
                          """Whether the tool is currently enabled"""
                      
                          source: ToolSource = "runtime"
                          """Where the tool came from."""
                      
                          priority: int = 100
                          """Priority for tool execution (lower = higher priority)"""
                      
                          requires_confirmation: bool = False
                          """Whether tool execution needs explicit confirmation"""
                      
                          requires_capability: str | None = None
                          """Optional capability required to use this tool"""
                      
                          agent_name: str | None = None
                          """The agent name as an identifier for agent-as-a-tool."""
                      
                          metadata: dict[str, str] = field(default_factory=dict)
                          """Additional tool metadata"""
                      
                          cache_enabled: bool = False
                          """Whether to enable caching for this tool."""
                      
                          @property
                          def schema(self) -> py2openai.OpenAIFunctionTool:
                              """Get the OpenAI function schema for the tool."""
                              return self.callable.get_schema()
                      
                          @property
                          def name(self) -> str:
                              """Get tool name."""
                              return self.callable.name
                      
                          @property
                          def description(self) -> str | None:
                              """Get tool description."""
                              return self.callable.description
                      
                          def matches_filter(self, state: Literal["all", "enabled", "disabled"]) -> bool:
                              """Check if tool matches state filter."""
                              match state:
                                  case "all":
                                      return True
                                  case "enabled":
                                      return self.enabled
                                  case "disabled":
                                      return not self.enabled
                      
                          @property
                          def parameters(self) -> list[ToolParameter]:
                              """Get information about tool parameters."""
                              schema = self.schema["function"]
                              properties: dict[str, Property] = schema.get("properties", {})  # type: ignore
                              required: list[str] = schema.get("required", [])  # type: ignore
                      
                              return [
                                  ToolParameter(
                                      name=name,
                                      required=name in required,
                                      type_info=details.get("type"),
                                      description=details.get("description"),
                                  )
                                  for name, details in properties.items()
                              ]
                      
                          def format_info(self, indent: str = "  ") -> str:
                              """Format complete tool information."""
                              lines = [f"{indent}{self.name}"]
                              if self.description:
                                  lines.append(f"{indent}  {self.description}")
                              if self.parameters:
                                  lines.append(f"{indent}  Parameters:")
                                  lines.extend(f"{indent}    {param}" for param in self.parameters)
                              if self.metadata:
                                  lines.append(f"{indent}  Metadata:")
                                  lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                              return "\n".join(lines)
                      
                          async def execute(self, *args: Any, **kwargs: Any) -> Any:
                              """Execute tool, handling both sync and async cases."""
                              fn = track_tool(self.name)(self.callable.callable)
                              return await execute(fn, *args, **kwargs, use_thread=True)
                      
                          @classmethod
                          def from_code(
                              cls,
                              code: str,
                              name: str | None = None,
                              description: str | None = None,
                          ) -> Self:
                              """Create a tool from a code string."""
                              namespace: dict[str, Any] = {}
                              exec(code, namespace)
                              func = next((v for v in namespace.values() if callable(v)), None)
                              if not func:
                                  msg = "No callable found in provided code"
                                  raise ValueError(msg)
                              return cls.from_callable(
                                  func, name_override=name, description_override=description
                              )
                      
                          @classmethod
                          def from_callable(
                              cls,
                              fn: Callable[..., Any] | str,
                              *,
                              name_override: str | None = None,
                              description_override: str | None = None,
                              schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                              **kwargs: Any,
                          ) -> Self:
                              tool = LLMCallableTool.from_callable(
                                  fn,
                                  name_override=name_override,
                                  description_override=description_override,
                                  schema_override=schema_override,
                              )
                              return cls(tool, **kwargs)
                      
                          @classmethod
                          def from_crewai_tool(
                              cls,
                              tool: Any,
                              *,
                              name_override: str | None = None,
                              description_override: str | None = None,
                              schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                              **kwargs: Any,
                          ) -> Self:
                              """Allows importing crewai tools."""
                              # vaidate_import("crewai_tools", "crewai")
                              try:
                                  from crewai.tools import BaseTool as CrewAiBaseTool
                              except ImportError as e:
                                  msg = "crewai package not found. Please install it with 'pip install crewai'"
                                  raise ImportError(msg) from e
                      
                              if not isinstance(tool, CrewAiBaseTool):
                                  msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                                  raise TypeError(msg)
                      
                              return cls.from_callable(
                                  tool._run,
                                  name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                                  description_override=description_override or tool.description,
                                  schema_override=schema_override,
                                  **kwargs,
                              )
                      
                          @classmethod
                          def from_langchain_tool(
                              cls,
                              tool: Any,
                              *,
                              name_override: str | None = None,
                              description_override: str | None = None,
                              schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                              **kwargs: Any,
                          ) -> Self:
                              """Create a tool from a LangChain tool."""
                              # vaidate_import("langchain_core", "langchain")
                              try:
                                  from langchain_core.tools import BaseTool as LangChainBaseTool
                              except ImportError as e:
                                  msg = "langchain-core package not found."
                                  raise ImportError(msg) from e
                      
                              if not isinstance(tool, LangChainBaseTool):
                                  msg = f"Expected LangChain BaseTool, got {type(tool)}"
                                  raise TypeError(msg)
                      
                              return cls.from_callable(
                                  tool.invoke,
                                  name_override=name_override or tool.name,
                                  description_override=description_override or tool.description,
                                  schema_override=schema_override,
                                  **kwargs,
                              )
                      
                          @classmethod
                          def from_autogen_tool(
                              cls,
                              tool: Any,
                              *,
                              name_override: str | None = None,
                              description_override: str | None = None,
                              schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                              **kwargs: Any,
                          ) -> Self:
                              """Create a tool from a AutoGen tool."""
                              # vaidate_import("autogen_core", "autogen")
                              try:
                                  from autogen_core import CancellationToken
                                  from autogen_core.tools import BaseTool
                              except ImportError as e:
                                  msg = "autogent_core package not found."
                                  raise ImportError(msg) from e
                      
                              if not isinstance(tool, BaseTool):
                                  msg = f"Expected AutoGent BaseTool, got {type(tool)}"
                                  raise TypeError(msg)
                              token = CancellationToken()
                      
                              input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                      
                              name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                              description = (
                                  description_override
                                  or tool.description
                                  or inspect.getdoc(tool.__class__)
                                  or ""
                              )
                      
                              async def wrapper(**kwargs: Any) -> Any:
                                  # Convert kwargs to the expected input model
                                  model = input_model(**kwargs)
                                  return await tool.run(model, cancellation_token=token)
                      
                              return cls.from_callable(
                                  wrapper,  # type: ignore
                                  name_override=name,
                                  description_override=description,
                                  schema_override=schema_override,
                                  **kwargs,
                              )
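
The from_code mechanism above (execute the source string in a fresh namespace, then take the first callable defined in it) can be sketched with the standard library alone; `callable_from_code` is an illustrative name, not llmling_agent API:

```python
from collections.abc import Callable
from typing import Any


def callable_from_code(code: str) -> Callable[..., Any]:
    """Exec the code string and return the first callable defined in it."""
    namespace: dict[str, Any] = {}
    exec(code, namespace)  # trusted input only: exec runs arbitrary code
    func = next((v for v in namespace.values() if callable(v)), None)
    if func is None:
        msg = "No callable found in provided code"
        raise ValueError(msg)
    return func


add = callable_from_code("def add(a, b):\n    return a + b")
print(add(2, 3))  # 5
```

As in the original, this picks the *first* callable in definition order, so a code string defining several functions should place the intended tool first.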
                      

                      agent_name class-attribute instance-attribute

                      agent_name: str | None = None
                      

                      The agent name as an identifier for agent-as-a-tool.

                      cache_enabled class-attribute instance-attribute

                      cache_enabled: bool = False
                      

                      Whether to enable caching for this tool.

                      callable instance-attribute

                      callable: LLMCallableTool
                      

                      The actual tool implementation

                      description property

                      description: str | None
                      

                      Get tool description.

                      enabled class-attribute instance-attribute

                      enabled: bool = True
                      

                      Whether the tool is currently enabled

                      metadata class-attribute instance-attribute

                      metadata: dict[str, str] = field(default_factory=dict)
                      

                      Additional tool metadata

                      name property

                      name: str
                      

                      Get tool name.

                      parameters property

                      parameters: list[ToolParameter]
                      

                      Get information about tool parameters.

                      priority class-attribute instance-attribute

                      priority: int = 100
                      

                      Priority for tool execution (lower = higher priority)

                      requires_capability class-attribute instance-attribute

                      requires_capability: str | None = None
                      

                      Optional capability required to use this tool

                      requires_confirmation class-attribute instance-attribute

                      requires_confirmation: bool = False
                      

                      Whether tool execution needs explicit confirmation

                      schema property

                      schema: OpenAIFunctionTool
                      

                      Get the OpenAI function schema for the tool.

                      source class-attribute instance-attribute

                      source: ToolSource = 'runtime'
                      

                      Where the tool came from.

                      execute async

                      execute(*args: Any, **kwargs: Any) -> Any
                      

                      Execute tool, handling both sync and async cases.

                      Source code in src/llmling_agent/tools/base.py
                      async def execute(self, *args: Any, **kwargs: Any) -> Any:
                          """Execute tool, handling both sync and async cases."""
                          fn = track_tool(self.name)(self.callable.callable)
                          return await execute(fn, *args, **kwargs, use_thread=True)
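
The helper that execute delegates to runs the tool without blocking the event loop. A minimal sketch of that dispatch, using only the standard library and assuming the helper awaits coroutine functions directly while pushing plain functions to a worker thread:

```python
import asyncio
import inspect
from collections.abc import Callable
from typing import Any


async def run_tool(fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    """Await coroutine functions; run plain functions in a worker thread."""
    if inspect.iscoroutinefunction(fn):
        return await fn(*args, **kwargs)
    return await asyncio.to_thread(fn, *args, **kwargs)


def sync_double(x: int) -> int:
    return 2 * x


async def async_double(x: int) -> int:
    return 2 * x


async def main() -> tuple[int, int]:
    return await run_tool(sync_double, 3), await run_tool(async_double, 4)


print(asyncio.run(main()))  # (6, 8)
```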
                      

                      format_info

                      format_info(indent: str = '  ') -> str
                      

                      Format complete tool information.

                      Source code in src/llmling_agent/tools/base.py
                      def format_info(self, indent: str = "  ") -> str:
                          """Format complete tool information."""
                          lines = [f"{indent}{self.name}"]
                          if self.description:
                              lines.append(f"{indent}  {self.description}")
                          if self.parameters:
                              lines.append(f"{indent}  Parameters:")
                              lines.extend(f"{indent}    {param}" for param in self.parameters)
                          if self.metadata:
                              lines.append(f"{indent}  Metadata:")
                              lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                          return "\n".join(lines)
                      

                      from_autogen_tool classmethod

                      from_autogen_tool(
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self
                      

Create a tool from an AutoGen tool.

                      Source code in src/llmling_agent/tools/base.py
                      @classmethod
                      def from_autogen_tool(
                          cls,
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self:
                          """Create a tool from a AutoGen tool."""
                          # vaidate_import("autogen_core", "autogen")
                          try:
                              from autogen_core import CancellationToken
                              from autogen_core.tools import BaseTool
                          except ImportError as e:
                              msg = "autogent_core package not found."
                              raise ImportError(msg) from e
                      
                          if not isinstance(tool, BaseTool):
                              msg = f"Expected AutoGent BaseTool, got {type(tool)}"
                              raise TypeError(msg)
                          token = CancellationToken()
                      
                          input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                      
                          name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                          description = (
                              description_override
                              or tool.description
                              or inspect.getdoc(tool.__class__)
                              or ""
                          )
                      
                          async def wrapper(**kwargs: Any) -> Any:
                              # Convert kwargs to the expected input model
                              model = input_model(**kwargs)
                              return await tool.run(model, cancellation_token=token)
                      
                          return cls.from_callable(
                              wrapper,  # type: ignore
                              name_override=name,
                              description_override=description,
                              schema_override=schema_override,
                              **kwargs,
                          )
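The `__orig_bases__` lookup above recovers the tool's input model from its generic base class. A dependency-free sketch of that mechanism, using a hypothetical `SearchTool` (the `BaseTool` stand-in mimics `autogen_core.tools.BaseTool` being parametrized by its input model):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


class BaseTool(Generic[T]):
    """Stand-in for a generic tool base class parametrized by its input model."""


@dataclass
class SearchInput:
    query: str


class SearchTool(BaseTool[SearchInput]):
    """Hypothetical tool whose input schema is SearchInput."""


# Recover the input model type from the generic base, as from_autogen_tool does
input_model = SearchTool.__orig_bases__[0].__args__[0]
model = input_model(query="llmling")
print(input_model.__name__, model.query)  # SearchInput llmling
```

The recovered class lets the wrapper convert plain keyword arguments into the typed input model the tool's `run` method expects.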
                      

                      from_code classmethod

                      from_code(code: str, name: str | None = None, description: str | None = None) -> Self
                      

                      Create a tool from a code string.

                      Source code in src/llmling_agent/tools/base.py
                      @classmethod
                      def from_code(
                          cls,
                          code: str,
                          name: str | None = None,
                          description: str | None = None,
                      ) -> Self:
                          """Create a tool from a code string."""
                          namespace: dict[str, Any] = {}
                          exec(code, namespace)
                          func = next((v for v in namespace.values() if callable(v)), None)
                          if not func:
                              msg = "No callable found in provided code"
                              raise ValueError(msg)
                          return cls.from_callable(
                              func, name_override=name, description_override=description
                          )
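The extraction step can be seen in isolation: `exec` populates a fresh namespace, and the first callable found there becomes the tool function. A minimal sketch of that mechanism (using a plain function rather than the full `from_callable` pipeline):

```python
from typing import Any

code = '''
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
'''

namespace: dict[str, Any] = {}
exec(code, namespace)  # executes the string; definitions land in `namespace`

# exec injects a `__builtins__` entry, but its value is a dict (not callable),
# so the first callable found is the user-defined function
func = next((v for v in namespace.values() if callable(v)), None)
assert func is not None

print(func.__name__, func(2, 3))  # add 5
```

Note that if the code string defines several callables (or imports any), the first one encountered in definition order wins.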
                      

                      from_crewai_tool classmethod

                      from_crewai_tool(
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self
                      

Create a tool from a CrewAI tool.

                      Source code in src/llmling_agent/tools/base.py
                      @classmethod
                      def from_crewai_tool(
                          cls,
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self:
                          """Allows importing crewai tools."""
                          # vaidate_import("crewai_tools", "crewai")
                          try:
                              from crewai.tools import BaseTool as CrewAiBaseTool
                          except ImportError as e:
                              msg = "crewai package not found. Please install it with 'pip install crewai'"
                              raise ImportError(msg) from e
                      
                          if not isinstance(tool, CrewAiBaseTool):
                              msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                              raise TypeError(msg)
                      
                          return cls.from_callable(
                              tool._run,
                              name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                              description_override=description_override or tool.description,
                              schema_override=schema_override,
                              **kwargs,
                          )
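The adapter relies on just two attributes of the CrewAI tool: its `description` and its bound `_run` method, with the tool name derived from the class name. A dependency-free sketch using a hypothetical `WeatherTool`:

```python
class WeatherTool:
    """Hypothetical CrewAI-style tool (no crewai dependency)."""

    description = "Look up the current weather for a city."

    def _run(self, city: str) -> str:
        return f"Sunny in {city}"


tool = WeatherTool()

# Same derivations from_crewai_tool performs before delegating to from_callable
name = tool.__class__.__name__.removesuffix("Tool")
description = tool.description
result = tool._run("Oslo")
print(name, result)  # Weather Sunny in Oslo
```

Passing the bound `_run` method to `from_callable` means the wrapped function keeps access to the tool instance's state without any extra plumbing.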
                      

                      from_langchain_tool classmethod

                      from_langchain_tool(
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self
                      

                      Create a tool from a LangChain tool.

                      Source code in src/llmling_agent/tools/base.py
                      @classmethod
                      def from_langchain_tool(
                          cls,
                          tool: Any,
                          *,
                          name_override: str | None = None,
                          description_override: str | None = None,
                          schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                          **kwargs: Any,
                      ) -> Self:
                          """Create a tool from a LangChain tool."""
                          # vaidate_import("langchain_core", "langchain")
                          try:
                              from langchain_core.tools import BaseTool as LangChainBaseTool
                          except ImportError as e:
                              msg = "langchain-core package not found."
                              raise ImportError(msg) from e
                      
                          if not isinstance(tool, LangChainBaseTool):
                              msg = f"Expected LangChain BaseTool, got {type(tool)}"
                              raise TypeError(msg)
                      
                          return cls.from_callable(
                              tool.invoke,
                              name_override=name_override or tool.name,
                              description_override=description_override or tool.description,
                              schema_override=schema_override,
                              **kwargs,
                          )
                      

                      matches_filter

                      matches_filter(state: Literal['all', 'enabled', 'disabled']) -> bool
                      

                      Check if tool matches state filter.

                      Source code in src/llmling_agent/tools/base.py
                      def matches_filter(self, state: Literal["all", "enabled", "disabled"]) -> bool:
                          """Check if tool matches state filter."""
                          match state:
                              case "all":
                                  return True
                              case "enabled":
                                  return self.enabled
                              case "disabled":
                                  return not self.enabled