llmling_agent

Class info

Classes

Name (module): description

- Agent (llmling_agent.agent.agent): Agent for AI-powered interaction with LLMling resources and tools.
- AgentConfig (llmling_agent.models.agents, inherits NodeConfig): Configuration for a single agent in the system.
- AgentContext (llmling_agent.agent.context): Runtime context for agent execution.
- AgentPool (llmling_agent.delegation.pool): Pool managing message processing nodes (agents and teams).
- AgentsManifest (llmling_agent.models.manifest): Complete agent configuration manifest defining all available agents.
- AudioBase64Content (llmling_agent.models.content): Audio from base64 data.
- AudioURLContent (llmling_agent.models.content): Audio from URL.
- BaseTeam (llmling_agent.delegation.base_team): Base class for Team and TeamRun.
- ChatMessage (llmling_agent.messaging.messages): Common message format for all UI types.
- ImageBase64Content (llmling_agent.models.content): Image from base64 data.
- ImageURLContent (llmling_agent.models.content): Image from URL.
- MessageNode (llmling_agent.messaging.messagenode): Base class for all message processing nodes.
- PDFBase64Content (llmling_agent.models.content): PDF from base64 data.
- PDFURLContent (llmling_agent.models.content): PDF from URL.
- Team (llmling_agent.delegation.team): Group of agents that can execute together.
- TeamRun (llmling_agent.delegation.teamrun): Handles team operations with monitoring.
- Tool (llmling_agent.tools.base): Information about a registered tool.
- ToolCallInfo (llmling_agent.tools.tool_call_info): Information about an executed tool call.
- VideoURLContent (llmling_agent.models.content): Video from URL.
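The *Base64Content classes above carry media inline as base64-encoded data rather than pointing to a URL. A minimal sketch of that encoding round-trip using only the standard library (the helper names are illustrative, not part of llmling_agent's API):

```python
import base64


def to_base64(data: bytes) -> str:
    # Encode raw media bytes as an ASCII base64 string, the payload
    # form that Base64Content-style models (AudioBase64Content,
    # ImageBase64Content, PDFBase64Content) carry instead of a URL.
    return base64.b64encode(data).decode("ascii")


def from_base64(payload: str) -> bytes:
    # Recover the original bytes from the base64 payload.
    return base64.b64decode(payload)


raw = b"\x00\x01fake-audio-bytes"
payload = to_base64(raw)
assert from_base64(payload) == raw
```

The URL variants (AudioURLContent, ImageURLContent, PDFURLContent, VideoURLContent) carry a reference instead, trading payload size for a fetch at use time.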

DocStrings

                                    LLMling-Agent: main package.

                                    A pydantic-ai based Agent with LLMling backend.

                                    Agent

                                    Bases: MessageNode[TDeps, OutputDataT]

                                    Agent for AI-powered interaction with LLMling resources and tools.

                                    Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

This agent integrates LLMling's resource system with PydanticAI's agent capabilities. It provides:

- Access to resources through RuntimeConfig
- Tool registration for resource operations
- System prompt customization
- Signals
- Message history management
- Database logging
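The two type parameters (dependencies and result type) can be sketched with a plain `typing.Generic` class. `SketchAgent`, its `parse` field, and its `run` signature are hypothetical stand-ins to show where each parameter flows, not the library's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

TDeps = TypeVar("TDeps")
OutputDataT = TypeVar("OutputDataT")


@dataclass
class SketchAgent(Generic[TDeps, OutputDataT]):
    """Stand-in for Agent[TDeps, OutputDataT]: deps are injected at
    construction, and run() returns a statically typed result."""

    deps: TDeps
    parse: Callable[[str], OutputDataT]

    def run(self, prompt: str) -> OutputDataT:
        # A real agent would call the model and its tools here;
        # we only produce a dummy reply and parse it into the
        # declared output type.
        reply = f"echo: {prompt}"
        return self.parse(reply)


# TDeps = dict, OutputDataT = int: type checkers infer n as int.
agent = SketchAgent(deps={"db": None}, parse=len)
n = agent.run("hi")
assert n == len("echo: hi")
```

Parameterizing both the dependency and result types this way lets a type checker verify, at every call site, that the injected dependencies and the parsed output match what the agent was declared with.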

Source code in src/llmling_agent/agent/agent.py, lines 126–868.
                                     869
                                     870
                                     871
                                     872
                                     873
                                     874
                                     875
                                     876
                                     877
                                     878
                                     879
                                     880
                                     881
                                     882
                                     883
                                     884
                                     885
                                     886
                                     887
                                     888
                                     889
                                     890
                                     891
                                     892
                                     893
                                     894
                                     895
                                     896
                                     897
                                     898
                                     899
                                     900
                                     901
                                     902
                                     903
                                     904
                                     905
                                     906
                                     907
                                     908
                                     909
                                     910
                                     911
                                     912
                                     913
                                     914
                                     915
                                     916
                                     917
                                     918
                                     919
                                     920
                                     921
                                     922
                                     923
                                     924
                                     925
                                     926
                                     927
                                     928
                                     929
                                     930
                                     931
                                     932
                                     933
                                     934
                                     935
                                     936
                                     937
                                     938
                                     939
                                     940
                                     941
                                     942
                                     943
                                     944
                                     945
                                     946
                                     947
                                     948
                                     949
                                     950
                                     951
                                     952
                                     953
                                     954
                                     955
                                     956
                                     957
                                     958
                                     959
                                     960
                                     961
                                     962
                                     963
                                     964
                                     965
                                     966
                                     967
                                     968
                                     969
                                     970
                                     971
                                     972
                                     973
                                     974
                                     975
                                     976
                                     977
                                     978
                                     979
                                     980
                                     981
                                     982
                                     983
                                     984
                                     985
                                     986
                                     987
                                     988
                                     989
                                     990
                                     991
                                     992
                                     993
                                     994
                                     995
                                     996
                                     997
                                     998
                                     999
                                    1000
                                    1001
                                    1002
                                    1003
                                    1004
                                    1005
                                    1006
                                    1007
                                    1008
                                    1009
                                    1010
                                    1011
                                    1012
                                    1013
                                    1014
                                    1015
                                    1016
                                    1017
                                    1018
                                    1019
                                    1020
                                    1021
                                    1022
                                    1023
                                    1024
                                    1025
                                    1026
                                    1027
                                    1028
                                    1029
                                    1030
                                    1031
                                    1032
                                    1033
                                    1034
                                    1035
                                    1036
                                    1037
                                    1038
                                    1039
                                    1040
                                    1041
                                    1042
                                    1043
                                    1044
                                    1045
                                    1046
                                    1047
                                    1048
                                    1049
                                    1050
                                    1051
                                    1052
                                    1053
                                    1054
                                    1055
                                    1056
                                    1057
                                    1058
                                    1059
                                    1060
                                    1061
                                    1062
                                    1063
                                    1064
                                    1065
                                    1066
                                    1067
                                    1068
                                    1069
                                    1070
                                    1071
                                    1072
                                    1073
                                    1074
                                    1075
                                    1076
                                    1077
                                    1078
                                    1079
                                    1080
                                    1081
                                    1082
                                    1083
                                    1084
                                    1085
                                    1086
                                    1087
                                    1088
                                    1089
                                    1090
                                    1091
                                    1092
                                    1093
                                    1094
                                    1095
                                    1096
                                    1097
                                    1098
                                    1099
                                    1100
                                    1101
                                    1102
                                    1103
                                    1104
                                    1105
                                    1106
                                    1107
                                    1108
                                    1109
                                    1110
                                    1111
                                    1112
                                    1113
                                    1114
                                    1115
                                    1116
                                    1117
                                    1118
                                    1119
                                    1120
                                    1121
                                    1122
                                    1123
                                    1124
                                    1125
                                    1126
                                    1127
                                    1128
                                    1129
                                    1130
                                    1131
                                    1132
                                    1133
                                    1134
                                    1135
                                    1136
                                    1137
                                    1138
                                    1139
                                    1140
                                    1141
                                    1142
                                    1143
                                    1144
                                    1145
                                    1146
                                    1147
                                    1148
                                    1149
                                    1150
                                    1151
                                    1152
                                    1153
                                    1154
                                    1155
                                    1156
                                    1157
                                    1158
                                    1159
                                    1160
                                    1161
                                    1162
                                    1163
                                    1164
                                    1165
                                    1166
                                    1167
                                    1168
                                    1169
                                    1170
                                    1171
                                    1172
                                    1173
                                    1174
                                    1175
                                    1176
                                    1177
                                    1178
                                    1179
                                    1180
                                    1181
                                    1182
                                    1183
                                    1184
                                    1185
                                    1186
                                    1187
                                    1188
                                    1189
                                    1190
                                    1191
                                    1192
                                    1193
                                    1194
                                    1195
                                    1196
                                    1197
                                    1198
                                    1199
                                    1200
                                    1201
                                    1202
                                    1203
                                    1204
                                    1205
                                    1206
                                    1207
                                    1208
                                    1209
                                    1210
                                    1211
                                    1212
                                    1213
                                    1214
                                    1215
                                    1216
                                    1217
                                    1218
                                    1219
                                    1220
                                    1221
                                    1222
                                    1223
                                    1224
                                    1225
                                    1226
                                    1227
                                    1228
                                    1229
                                    1230
                                    1231
                                    1232
                                    1233
                                    1234
                                    1235
                                    1236
                                    1237
                                    1238
                                    1239
                                    1240
                                    1241
                                    1242
                                    1243
                                    1244
                                    1245
                                    1246
                                    1247
                                    1248
                                    1249
                                    1250
                                    1251
                                    1252
                                    1253
                                    1254
                                    1255
                                    1256
                                    1257
                                    1258
                                    1259
                                    1260
                                    1261
                                    1262
                                    1263
                                    1264
                                    1265
                                    1266
                                    1267
                                    1268
                                    1269
                                    1270
                                    1271
                                    1272
                                    1273
                                    1274
                                    1275
                                    1276
                                    1277
                                    1278
                                    1279
                                    1280
                                    1281
                                    1282
                                    1283
                                    1284
                                    1285
                                    1286
                                    class Agent[TDeps = None, OutputDataT = str](MessageNode[TDeps, OutputDataT]):
                                        """Agent for AI-powered interaction with LLMling resources and tools.
                                    
                                         Generically typed with: Agent[Type of Dependencies, Type of Result]
                                    
                                        This agent integrates LLMling's resource system with PydanticAI's agent capabilities.
                                        It provides:
                                        - Access to resources through RuntimeConfig
                                        - Tool registration for resource operations
                                        - System prompt customization
                                        - Signals
                                        - Message history management
                                        - Database logging
                                        """
                                    
                                        @dataclass(frozen=True)
                                        class AgentReset:
                                            """Emitted when agent is reset."""
                                    
                                            agent_name: AgentName
                                            previous_tools: dict[str, bool]
                                            new_tools: dict[str, bool]
                                            timestamp: datetime = field(default_factory=get_now)
                                    
                                         # explicit annotation works around a mypy inference issue
                                        talk: Interactions
                                        run_failed = Signal(str, Exception)
                                        agent_reset = Signal(AgentReset)
                                    
                                        def __init__(  # noqa: PLR0915
                                             # we don't use AgentKwargs here so the constructor keeps explicit parameters
                                            self,
                                            name: str = "llmling-agent",
                                            provider: AgentType = "pydantic_ai",
                                            *,
                                            model: ModelType = None,
                                            output_type: OutputSpec[OutputDataT] | StructuredResponseConfig | str = str,  # type: ignore[assignment]
                                            runtime: RuntimeConfig | Config | JoinablePathLike | None = None,
                                            context: AgentContext[TDeps] | None = None,
                                            session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                            system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                            description: str | None = None,
                                            tools: Sequence[ToolType | Tool] | None = None,
                                            toolsets: Sequence[ResourceProvider] | None = None,
                                            mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                            resources: Sequence[Resource | PromptType | str] = (),
                                            retries: int = 1,
                                            output_retries: int | None = None,
                                            end_strategy: EndStrategy = "early",
                                            defer_model_check: bool = False,
                                            input_provider: InputProvider | None = None,
                                            parallel_init: bool = True,
                                            debug: bool = False,
                                            event_handlers: Sequence[IndividualEventHandler] | None = None,
                                        ):
                                            """Initialize agent with runtime configuration.
                                    
                                            Args:
                                                name: Name of the agent for logging and identification
                                                 provider: Agent type to use (pydantic_ai: PydanticAIProvider, human: HumanProvider)
                                                model: The default model to use (defaults to GPT-5)
                                                output_type: The default output type to use (defaults to str)
                                                runtime: Runtime configuration providing access to resources/tools
                                                context: Agent context with configuration
                                                session: Memory configuration.
                                                    - None: Default memory config
                                                    - False: Disable message history (max_messages=0)
                                                    - int: Max tokens for memory
                                                    - str/UUID: Session identifier
                                                    - MemoryConfig: Full memory configuration
                                                    - MemoryProvider: Custom memory provider
                                                    - SessionQuery: Session query
                                    
                                                system_prompt: System prompts for the agent
                                                description: Description of the Agent ("what it can do")
                                                tools: List of tools to register with the agent
                                                toolsets: List of toolset resource providers for the agent
                                                mcp_servers: MCP servers to connect to
                                                resources: Additional resources to load
                                                retries: Default number of retries for failed operations
                                                output_retries: Max retries for result validation (defaults to retries)
                                                end_strategy: Strategy for handling tool calls that are requested alongside
                                                              a final result
                                                defer_model_check: Whether to defer model evaluation until first run
                                                input_provider: Provider for human input (tool confirmation / HumanProviders)
                                                parallel_init: Whether to initialize resources in parallel
                                                debug: Whether to enable debug mode
                                                event_handlers: Sequence of event handlers to register with the agent
                                            """
                                            from llmling_agent.agent import AgentContext
                                            from llmling_agent.agent.conversation import ConversationManager
                                            from llmling_agent.agent.interactions import Interactions
                                            from llmling_agent.agent.sys_prompts import SystemPrompts
                                            from llmling_agent_providers.base import AgentProvider
                                    
                                            self.task_manager = TaskManager()
                                            self._infinite = False
                                             # save some state for async init
                                            self._owns_runtime = False
                                            # match output_type:
                                            #     case type() | str():
                                            #         # For types and named definitions, use overrides if provided
                                            #         self.set_output_type(
                                            #             output_type,
                                            #             tool_name=tool_name,
                                            #             tool_description=tool_description,
                                            #         )
                                            #     case StructuredResponseConfig():
                                            #         # For response definitions, use as-is
                                            #         # (overrides don't apply to complete definitions)
                                            #         self.set_output_type(output_type)
                                            # prepare context
                                            ctx = context or AgentContext[TDeps].create_default(
                                                name,
                                                input_provider=input_provider,
                                            )
                                            self._context = ctx
                                            self._output_type = to_type(output_type, ctx.definition.responses)
                                            memory_cfg = (
                                                session
                                                if isinstance(session, MemoryConfig)
                                                else MemoryConfig.from_value(session)
                                            )
                                            super().__init__(
                                                name=name,
                                                context=ctx,
                                                description=description,
                                                enable_logging=memory_cfg.enable,
                                                mcp_servers=mcp_servers,
                                                progress_handler=self._create_progress_handler(),
                                            )
                                            # Initialize runtime
                                            match runtime:
                                                case None:
                                                    ctx.runtime = RuntimeConfig.from_config(Config())
                                                case Config() | str() | PathLike() | UPath():
                                                    ctx.runtime = RuntimeConfig.from_config(runtime)
                                                case RuntimeConfig():
                                                    ctx.runtime = runtime
                                                case _:
                                                    msg = f"Invalid runtime type: {type(runtime)}"
                                                    raise TypeError(msg)
                                    
                                            runtime_provider = RuntimePromptProvider(ctx.runtime)
                                            ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                                            # Initialize tool manager
                                            self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                                            all_tools = list(tools or [])
                                            self.tools = ToolManager(all_tools)
                                            self.tools.add_provider(self.mcp)
                                            if builtin_tools := ctx.config.get_tool_provider():
                                                self.tools.add_provider(builtin_tools)
                                    
                                            # Add toolset providers
                                            if toolsets:
                                                for toolset_provider in toolsets:
                                                    self.tools.add_provider(toolset_provider)
                                    
                                            # Initialize conversation manager
                                            resources = list(resources)
                                            if ctx.config.knowledge:
                                                resources.extend(ctx.config.knowledge.get_resources())
                                            self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                                            # Initialize provider
                                            match provider:
                                                case "pydantic_ai":
                                                    from llmling_agent_providers.pydanticai import PydanticAIProvider
                                    
                                                    if model and not isinstance(model, str):
                                                        from pydantic_ai import models
                                    
                                                        assert isinstance(model, models.Model)
                                                    self._provider: AgentProvider = PydanticAIProvider(
                                                        model=model,
                                                        retries=retries,
                                                        end_strategy=end_strategy,
                                                        output_retries=output_retries,
                                                        defer_model_check=defer_model_check,
                                                        debug=debug,
                                                        context=ctx,
                                                    )
                                                case "human":
                                                    from llmling_agent_providers.human import HumanProvider
                                    
                                                    self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                                case Callable():
                                                    from llmling_agent_providers.callback import CallbackProvider
                                    
                                                    self._provider = CallbackProvider(
                                                        provider, name=name, debug=debug, context=ctx
                                                    )
                                                case AgentProvider():
                                                    self._provider = provider
                                                    self._provider.context = ctx
                                                case _:
                                                    msg = f"Invalid provider type: {type(provider)}"
                                                    raise ValueError(msg)
                                    
                                            # Initialize skills registry
                                            from llmling_agent.tools.skills import SkillsRegistry
                                    
                                            self.skills_registry = SkillsRegistry()
                                    
                                            if ctx and ctx.definition:
                                                from llmling_agent.observability import registry
                                    
                                                registry.configure_observability(ctx.definition.observability)
                                    
                                            # init variables
                                            self._debug = debug
                                            self.parallel_init = parallel_init
                                            self.name = name
                                            self._background_task: asyncio.Task[Any] | None = None
                                            self._progress_queue: asyncio.Queue[ToolCallProgressEvent] = asyncio.Queue()
                                    
                                            # Forward provider signals
                                            self._provider.tool_used.connect(self.tool_used)
                                            self.talk = Interactions(self)
                                    
                                            # Set up system prompts
                                            config_prompts = ctx.config.system_prompts if ctx else []
                                            all_prompts: list[AnyPromptType] = list(config_prompts)
                                            if isinstance(system_prompt, list):
                                                all_prompts.extend(system_prompt)
                                            else:
                                                all_prompts.append(system_prompt)
                                            self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
                                    
                                        def __repr__(self) -> str:
                                            desc = f", {self.description!r}" if self.description else ""
                                            return f"Agent({self.name!r}, provider={self._provider.NAME!r}{desc})"
                                    
                                        def __prompt__(self) -> str:
                                            typ = self._provider.__class__.__name__
                                            model = self.model_name or "default"
                                            parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                                            if self.description:
                                                parts.append(f"Description: {self.description}")
                                            parts.extend([self.tools.__prompt__(), self.conversation.__prompt__()])
                                    
                                            return "\n".join(parts)
                                    
                                        async def __aenter__(self) -> Self:
                                            """Enter async context and set up MCP servers."""
                                            try:
                                                # Collect all coroutines that need to be run
                                                coros: list[Coroutine[Any, Any, Any]] = []
                                    
                                                # Runtime initialization if needed
                                                runtime_ref = self.context.runtime
                                                if runtime_ref and not runtime_ref._initialized:
                                                    self._owns_runtime = True
                                                    coros.append(runtime_ref.__aenter__())
                                    
                                                # Events initialization
                                                coros.append(super().__aenter__())
                                    
                                                # Get conversation init tasks directly
                                                coros.extend(self.conversation.get_initialization_tasks())
                                    
                                                # Execute coroutines either in parallel or sequentially
                                                if self.parallel_init and coros:
                                                    await asyncio.gather(*coros)
                                                else:
                                                    for coro in coros:
                                                        await coro
                                                if runtime_ref:
                                                    self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                                for provider in self.context.config.get_toolsets():
                                                    self.tools.add_provider(provider)
                                            except Exception as e:
                                                # Clean up in reverse order
                                                if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                                    await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                                msg = "Failed to initialize agent"
                                                raise RuntimeError(msg) from e
                                            else:
                                                return self
                                    
                                        async def __aexit__(
                                            self,
                                            exc_type: type[BaseException] | None,
                                            exc_val: BaseException | None,
                                            exc_tb: TracebackType | None,
                                        ):
                                            """Exit async context."""
                                            await super().__aexit__(exc_type, exc_val, exc_tb)
                                            try:
                                                await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                                            finally:
                                                if self._owns_runtime and self.context.runtime:
                                                    self.tools.remove_provider("runtime")
                                                    await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                                # for provider in await self.context.config.get_toolsets():
                                                #     self.tools.remove_provider(provider.name)
                                    
                                        @overload
                                        def __and__(  # if other doesn't define deps, we take the agent's one
                                            self, other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]
                                        ) -> Team[TDeps]: ...
                                    
                                        @overload
                                        def __and__(  # otherwise we don't know, and deps is Any
                                            self, other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]
                                        ) -> Team[Any]: ...
                                    
                                        def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                                            """Create sequential team using & operator.
                                    
                                            Example:
                                                group = analyzer & planner & executor  # Create group of 3
                                                group = analyzer & existing_group  # Add to existing group
                                            """
                                            from llmling_agent.delegation.team import Team
                                    
                                            match other:
                                                case Team():
                                                    return Team([self, *other.agents])
                                                case Callable():
                                                    agent_2 = Agent.from_callback(other)
                                                    agent_2.context.pool = self.context.pool
                                                    return Team([self, agent_2])
                                                case MessageNode():
                                                    return Team([self, other])
                                                case _:
                                                    msg = f"Invalid agent type: {type(other)}"
                                                    raise ValueError(msg)
                                    
                                        @overload
                                        def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
                                    
                                        @overload
                                        def __or__[TOtherDeps](
                                            self,
                                            other: MessageNode[TOtherDeps, Any],
                                        ) -> TeamRun[Any, Any]: ...
                                    
                                        @overload
                                        def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
                                    
                                        def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun:
                                            # Create new execution with sequential mode (for piping)
                                            from llmling_agent import TeamRun
                                    
                                            if callable(other):
                                                other = Agent.from_callback(other)
                                                other.context.pool = self.context.pool
                                    
                                            return TeamRun([self, other])
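
                                        # Illustrative usage of the `|` pipe (hypothetical agents, not part of
                                        # this class): `analyzer | planner` builds a TeamRun that feeds each
                                        # node's output into the next:
                                        #
                                        #     pipeline = analyzer | planner
                                        #     result = await pipeline.run("some input")
                                        #
                                        # A plain callable on the right-hand side is wrapped into an agent via
                                        # Agent.from_callback before the TeamRun is built.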
                                    
                                        @classmethod
                                        def from_callback[TResult](
                                            cls,
                                            callback: ProcessorCallback[TResult],
                                            *,
                                            name: str | None = None,
                                            **kwargs: Any,
                                        ) -> Agent[None, TResult]:
                                            """Create an agent from a processing callback.
                                    
                                            Args:
                                                callback: Function to process messages. Can be:
                                                    - sync or async
                                                    - with or without context
                                                    - must return str for pipeline compatibility
                                                name: Optional name for the agent
                                                kwargs: Additional arguments for agent
                                            """
                                            from llmling_agent.agent.agent import Agent
                                            from llmling_agent_providers.callback import CallbackProvider
                                    
                                            name = name or callback.__name__ or "processor"
                                            provider = CallbackProvider(callback, name=name)
                                            # Get return type from signature for validation
                                            hints = get_type_hints(callback)
                                            return_type = hints.get("return")
                                    
                                            # If async, unwrap from Awaitable
                                            if (
                                                return_type
                                                and hasattr(return_type, "__origin__")
                                                and return_type.__origin__ is Awaitable
                                            ):
                                                return_type = return_type.__args__[0]
                                            return Agent(
                                                provider=provider,
                                                name=name,
                                                output_type=return_type or str,
                                                **kwargs,
                                            )  # type: ignore
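
                                        # Illustrative sketch (hypothetical callback, not part of this class):
                                        #
                                        #     async def summarize(text: str) -> str:
                                        #         return text[:100]
                                        #
                                        #     agent = Agent.from_callback(summarize)
                                        #
                                        # The return annotation (`str` here) is read via get_type_hints and
                                        # used as the agent's output_type; `Awaitable[...]` is unwrapped first.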
                                    
                                        @property
                                        def name(self) -> str:
                                            """Get agent name."""
                                            return self._name or "llmling-agent"
                                    
                                        @name.setter
                                        def name(self, value: str):
                                            self._provider.name = value
                                            self._name = value
                                    
                                        @property
                                        def context(self) -> AgentContext[TDeps]:
                                            """Get agent context."""
                                            return self._context
                                    
                                        @context.setter
                                        def context(self, value: AgentContext[TDeps]):
                                            """Set agent context and propagate to provider."""
                                            self._provider.context = value
                                            self.mcp.context = value
                                            self._context = value
                                    
                                        def set_output_type(
                                            self,
                                            output_type: type | str | StructuredResponseConfig | None,
                                            *,
                                            tool_name: str | None = None,
                                            tool_description: str | None = None,
                                        ):
                                            """Set or update the result type for this agent.
                                    
                                            Args:
                                                output_type: New result type, can be:
                                                    - A Python type for validation
                                                    - Name of a response definition
                                                    - Response definition instance
                                                    - None to reset to unstructured mode
                                                tool_name: Optional override for tool name
                                                tool_description: Optional override for tool description
                                            """
                                            logger.debug("Setting result type", output_type=output_type, agent_name=self.name)
                                            self._output_type = to_type(output_type)
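
                                        # Illustrative (hypothetical response model, not part of this class):
                                        #
                                        #     agent.set_output_type(MyResponseModel)  # validate future runs
                                        #     agent.set_output_type(None)             # reset to unstructured mode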
                                    
                                        @property
                                        def provider(self) -> AgentProvider:
                                            """Get the underlying provider."""
                                            return self._provider
                                    
                                        @provider.setter
                                        def provider(self, value: AgentType, model: ModelType = None):
                                            """Set the underlying provider."""
                                            from llmling_agent_providers.base import AgentProvider
                                    
                                            name = self.name
                                            debug = self._debug
                                            self._provider.tool_used.disconnect(self.tool_used)
                                            match value:
                                                case AgentProvider():
                                                    self._provider = value
                                                case "pydantic_ai":
                                                    from llmling_agent_providers.pydanticai import PydanticAIProvider
                                    
                                                    self._provider = PydanticAIProvider(model=model, name=name, debug=debug)
                                                case "human":
                                                    from llmling_agent_providers.human import HumanProvider
                                    
                                                    self._provider = HumanProvider(name=name, debug=debug)
                                                case Callable():
                                                    from llmling_agent_providers.callback import CallbackProvider
                                    
                                                    self._provider = CallbackProvider(value, name=name, debug=debug)
                                                case _:
                                                    msg = f"Invalid provider type: {type(value)}"
                                                    raise ValueError(msg)
                                            self._provider.tool_used.connect(self.tool_used)
                                            self._provider.context = self._context  # pyright: ignore[reportAttributeAccessIssue]
                                    
                                        def to_structured[NewOutputDataT](
                                            self,
                                            output_type: type[NewOutputDataT] | str | StructuredResponseConfig,
                                            *,
                                            tool_name: str | None = None,
                                            tool_description: str | None = None,
                                        ) -> Agent[TDeps, NewOutputDataT] | Self:
                                            """Convert this agent to a structured agent.
                                    
                                            Args:
                                                output_type: Type for structured responses. Can be:
                                                    - A Python type (Pydantic model)
                                                    - Name of response definition from context
                                                    - Complete response definition
                                                tool_name: Optional override for result tool name
                                                tool_description: Optional override for result tool description
                                    
                                            Returns:
                                                Typed Agent
                                            """
                                            self.set_output_type(output_type)  # type: ignore
                                            return self
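
                                        # Illustrative sketch (hypothetical Pydantic model, not part of this
                                        # class):
                                        #
                                        #     class Analysis(BaseModel):
                                        #         summary: str
                                        #         score: float
                                        #
                                        #     structured = agent.to_structured(Analysis)
                                        #     msg = await structured.run("Analyze this")  # msg.content: Analysis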
                                    
                                        def is_busy(self) -> bool:
                                            """Check if agent is currently processing tasks."""
                                            return bool(self.task_manager._pending_tasks or self._background_task)
                                    
                                        @property
                                        def model_name(self) -> str | None:
                                            """Get the model name in a consistent format."""
                                            return self._provider.model_name
                                    
                                        def to_tool(
                                            self,
                                            *,
                                            name: str | None = None,
                                            reset_history_on_run: bool = True,
                                            pass_message_history: bool = False,
                                            parent: Agent[Any, Any] | None = None,
                                        ) -> Tool:
                                            """Create a tool from this agent.
                                    
                                            Args:
                                                name: Optional tool name override
                                                reset_history_on_run: Clear agent's history before each run
                                                pass_message_history: Pass parent's message history to agent
                                                parent: Optional parent agent for history/context sharing
                                            """
                                            tool_name = name or f"ask_{self.name}"
                                    
                                            # TODO: should probably make output type configurable
                                            async def wrapped_tool(prompt: str) -> Any:
                                                if pass_message_history and not parent:
                                                    msg = "Parent agent required for message history sharing"
                                                    raise ToolError(msg)
                                    
                                                if reset_history_on_run:
                                                    self.conversation.clear()
                                    
                                                history = None
                                                if pass_message_history and parent:
                                                    history = parent.conversation.get_history()
                                                    old = self.conversation.get_history()
                                                    self.conversation.set_history(history)
                                                result = await self.run(prompt)
                                                if history:
                                                    self.conversation.set_history(old)
                                                return result.data
                                    
                                            normalized_name = self.name.replace("_", " ").title()
                                            docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                            if self.description:
                                                docstring = f"{docstring}\n\n{self.description}"
                                    
                                            wrapped_tool.__doc__ = docstring
                                            wrapped_tool.__name__ = tool_name
                                    
                                            return Tool.from_callable(
                                                wrapped_tool,
                                                name_override=tool_name,
                                                description_override=docstring,
                                            )
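
                                        # Illustrative sketch (hypothetical agents, not part of this class):
                                        #
                                        #     expert_tool = specialist.to_tool(
                                        #         parent=coordinator,
                                        #         pass_message_history=True,
                                        #     )
                                        #
                                        # The returned Tool wraps an `ask_<agent_name>` coroutine that runs the
                                        # agent on a prompt string and returns the result data.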
                                    
                                        @logfire.instrument("Calling Agent.run: {prompts}:")
                                        async def _run(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                                            output_type: type[OutputDataT] | None = None,
                                            model: ModelType = None,
                                            store_history: bool = True,
                                            tool_choice: str | list[str] | None = None,
                                            usage_limits: UsageLimits | None = None,
                                            message_id: str | None = None,
                                            conversation_id: str | None = None,
                                            messages: list[ChatMessage[Any]] | None = None,
                                            wait_for_connections: bool | None = None,
                                        ) -> ChatMessage[OutputDataT]:
                                            """Run agent with prompt and get response.
                                    
                                            Args:
                                                prompts: User query or instruction
                                                output_type: Optional type for structured responses
                                                model: Optional model override
                                                store_history: Whether the message exchange should be added to the
                                                                context window
                                                tool_choice: Filter tool choice by name
                                                usage_limits: Optional usage limits for the model
                                                message_id: Optional message id for the returned message.
                                                            Automatically generated if not provided.
                                                conversation_id: Optional conversation id for the returned message.
                                                messages: Optional list of messages to replace the conversation history
                                                wait_for_connections: Whether to wait for connected agents to complete
                                    
                                            Returns:
                                                Result containing response and run information
                                    
                                            Raises:
                                                UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                            """
                                            message_id = message_id or str(uuid4())
                                            tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                            final_type = to_type(output_type) if output_type else self._output_type
                                            start_time = time.perf_counter()
                                            sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                    
                                            message_history = (
                                                messages if messages is not None else self.conversation.get_history()
                                            )
                                            try:
                                                result = await self._provider.generate_response(
                                                    *await convert_prompts(prompts),
                                                    message_id=message_id,
                                                    message_history=message_history,
                                                    tools=tools,
                                                    output_type=final_type,
                                                    usage_limits=usage_limits,
                                                    model=model,
                                                    system_prompt=sys_prompt,
                                                    event_stream_handler=self.event_handler,
                                                )
                                            except Exception as e:
                                                logger.exception("Agent run failed", agent_name=self.name)
                                                self.run_failed.emit("Agent run failed", e)
                                                raise
                                            else:
                                                response_msg = ChatMessage[OutputDataT](
                                                    content=result.content,
                                                    role="assistant",
                                                    name=self.name,
                                                    model_name=result.response.model_name,
                                                    finish_reason=result.response.finish_reason,
                                                    parts=result.response.parts,
                                                    provider_response_id=result.response.provider_response_id,
                                                    usage=result.response.usage,
                                                    provider_name=result.response.provider_name,
                                                    message_id=message_id,
                                                    conversation_id=conversation_id,
                                                    tool_calls=result.tool_calls,
                                                    cost_info=result.cost_and_usage,
                                                    response_time=time.perf_counter() - start_time,
                                                    provider_details=result.provider_details or {},
                                                )
                                                if self._debug:
                                                    import devtools
                                    
                                                    devtools.debug(response_msg)
                                                return response_msg
                                    
                                        @method_spawner
                                        async def run_stream(
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            output_type: type[OutputDataT] | None = None,
                                            model: ModelType = None,
                                            tool_choice: str | list[str] | None = None,
                                            store_history: bool = True,
                                            usage_limits: UsageLimits | None = None,
                                            message_id: str | None = None,
                                            conversation_id: str | None = None,
                                            messages: list[ChatMessage[Any]] | None = None,
                                            wait_for_connections: bool | None = None,
                                        ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                                            """Run agent with prompt and get a streaming response.
                                    
                                            Args:
                                                prompt: User query or instruction
                                                output_type: Optional type for structured responses
                                                model: Optional model override
                                                tool_choice: Filter tool choice by name
                                                store_history: Whether the message exchange should be added to the
                                                               context window
                                                usage_limits: Optional usage limits for the model
                                                message_id: Optional message id for the returned message.
                                                            Automatically generated if not provided.
                                                conversation_id: Optional conversation id for the returned message.
                                                messages: Optional list of messages to replace the conversation history
                                                wait_for_connections: Whether to wait for connected agents to complete

        Returns:
                                                An async iterator yielding streaming events with final message embedded.
                                    
                                            Raises:
                                                UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                            """
                                            message_id = message_id or str(uuid4())
                                            user_msg, prompts = await self.pre_run(*prompt)
                                            final_type = to_type(output_type) if output_type else self._output_type
                                            start_time = time.perf_counter()
                                            sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                            tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                            message_history = (
                                                messages if messages is not None else self.conversation.get_history()
                                            )
                                            try:
                                                # Collect chunks for final message construction
                                                chunks = []
                                                usage = None
                                                model_name = None
                                                output = None
                                                finish_reason = None
                                                parts: Sequence[Any] = []
                                                provider_name = None
                                                provider_response_id = None
                                    
                                                provider_stream = self._provider.stream_events(
                                                    *prompts,
                                                    message_id=message_id,
                                                    message_history=message_history,
                                                    output_type=final_type,
                                                    model=model,
                                                    tools=tools,
                                                    usage_limits=usage_limits,
                                                    system_prompt=sys_prompt,
                                                )
                                    
                                                async with merge_queue_into_iterator(
                                                    provider_stream, self._progress_queue
                                                ) as events:
                                                    async for event in events:
                                                        # Pass through PydanticAI events and collect chunks
                                                        match event:
                                                            case PartDeltaEvent(delta=TextPartDelta(content_delta=delta)):
                                                                chunks.append(delta)
                                                                yield event  # Pass through original event
                                                            case AgentRunResultEvent(result=result):
                                                                usage = result.usage()
                                                                model_name = result.response.model_name
                                                                finish_reason = result.response.finish_reason
                                                                provider_name = result.response.provider_name
                                                                provider_response_id = result.response.provider_response_id
                                                                parts = result.response.parts
                                    
                                                                output = result.output
                                                                # Don't yield AgentRunResultEvent,
                                                                # we'll send our own final event
                                                            case _:
                                                                yield event  # Pass through other events
                                    
                                                # Build final chat message
                                                cost_info = None
                                                if model_name and usage and model_name != "test":
                                                    cost_info = await TokenCost.from_usage(usage, model_name)
                                    
                                                response_msg = ChatMessage[OutputDataT](
                                                    content=output,  # type: ignore
                                                    role="assistant",
                                                    name=self.name,
                                                    model_name=model_name,
                                                    message_id=message_id,
                                                    conversation_id=user_msg.conversation_id,
                                                    cost_info=cost_info,
                                                    response_time=time.perf_counter() - start_time,
                                                    provider_response_id=provider_response_id,
                                                    parts=parts,
                                                    provider_name=provider_name,
                                                    finish_reason=finish_reason,
                                                )
                                    
                                                # Yield final event with embedded message
                                                yield StreamCompleteEvent(message=response_msg)
                                                self.message_sent.emit(response_msg)
                                                await self.log_message(response_msg)
                                                if store_history:
                                                    self.conversation.add_chat_messages([user_msg, response_msg])
                                                await self.connections.route_message(
                                                    response_msg,
                                                    wait=wait_for_connections,
                                                )
                                    
                                            except Exception as e:
                                                logger.exception("Agent stream failed", agent_name=self.name)
                                                self.run_failed.emit("Agent stream failed", e)
                                                raise
                                    
                                        def _create_progress_handler(self):
                                            """Create progress handler that converts to ToolCallProgressEvent."""
                                    
                                            async def progress_handler(
                                                progress: float,
                                                total: float | None,
                                                message: str | None,
                                                tool_name: str | None = None,
                                                tool_call_id: str | None = None,
                                                tool_input: dict[str, Any] | None = None,
                                            ) -> None:
                                                event = ToolCallProgressEvent(
                                                    progress=int(progress) if progress is not None else 0,
                                                    total=int(total) if total is not None else 100,
                                                    message=message or "",
                                                    tool_name=tool_name or "",
                                                    tool_call_id=tool_call_id or "",
                                                    tool_input=tool_input,
                                                )
                                                await self._progress_queue.put(event)
                                    
                                            return progress_handler
                                    
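The progress handler above feeds a queue that `run_stream` merges into the provider stream via `merge_queue_into_iterator`. A simplified sketch of that merge, under the assumed semantics that queued progress events are interleaved with main events (the real helper presumably races both sources concurrently rather than draining between events):

```python
import asyncio
from collections.abc import AsyncIterator


async def merged(main: AsyncIterator, queue: asyncio.Queue) -> AsyncIterator:
    async for event in main:
        # Drain any progress events that arrived since the last main event.
        while not queue.empty():
            yield queue.get_nowait()
        yield event


async def demo() -> list[str]:
    async def main_stream():
        yield "delta-1"
        yield "delta-2"

    queue: asyncio.Queue = asyncio.Queue()
    queue.put_nowait("progress-1")  # a progress event waiting before streaming starts
    return [event async for event in merged(main_stream(), queue)]


print(asyncio.run(demo()))  # → ['progress-1', 'delta-1', 'delta-2']
```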
                                        async def run_iter(
                                            self,
                                            *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                            output_type: type[OutputDataT] | None = None,
                                            model: ModelType = None,
                                            store_history: bool = True,
                                            wait_for_connections: bool | None = None,
                                        ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                                            """Run agent sequentially on multiple prompt groups.
                                    
                                            Args:
                                                prompt_groups: Groups of prompts to process sequentially
                                                output_type: Optional type for structured responses
                                                model: Optional model override
                                                store_history: Whether to store in conversation history
                                                wait_for_connections: Whether to wait for connected agents
                                    
                                            Yields:
                                                Response messages in sequence
                                    
                                            Example:
                                                questions = [
                                                    ["What is your name?"],
                                                    ["How old are you?", image1],
                                                    ["Describe this image", image2],
                                                ]
                                                async for response in agent.run_iter(*questions):
                                                    print(response.content)
                                            """
                                            for prompts in prompt_groups:
                                                response = await self.run(
                                                    *prompts,
                                                    output_type=output_type,
                                                    model=model,
                                                    store_history=store_history,
                                                    wait_for_connections=wait_for_connections,
                                                )
                                                yield response  # pyright: ignore
                                    
                                        @method_spawner
                                        async def run_job(
                                            self,
                                            job: Job[TDeps, str | None],
                                            *,
                                            store_history: bool = True,
                                            include_agent_tools: bool = True,
                                        ) -> ChatMessage[OutputDataT]:
                                            """Execute a pre-defined task.
                                    
                                            Args:
                                                job: Job configuration to execute
                                                store_history: Whether the message exchange should be added to the
                                                               context window
                                                include_agent_tools: Whether to include agent tools

        Returns:
                                                Job execution result
                                    
                                            Raises:
                                                JobError: If task execution fails
                                                ValueError: If task configuration is invalid
                                            """
                                            from llmling_agent.tasks import JobError
                                    
                                            if job.required_dependency is not None:  # noqa: SIM102
                                                if not isinstance(self.context.data, job.required_dependency):
                                                    msg = (
                                                        f"Agent dependencies ({type(self.context.data)}) "
                                                        f"don't match job requirement ({job.required_dependency})"
                                                    )
                                                    raise JobError(msg)
                                    
                                            # Load task knowledge
                                            if job.knowledge:
                                                # Add knowledge sources to context
                                                resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                    job.knowledge.resources
                                                )
                                                for source in resources:
                                                    await self.conversation.load_context_source(source)
                                                for prompt in job.knowledge.prompts:
                                                    await self.conversation.load_context_source(prompt)
                                            try:
                                                # Register task tools temporarily
                                                tools = job.get_tools()
                                                with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                                    # Execute job with job-specific tools
                                                    return await self.run(await job.get_prompt(), store_history=store_history)
                                    
                                            except Exception as e:
                                                logger.exception("Task execution failed", agent_name=self.name, error=str(e))
                                                msg = f"Task execution failed: {e}"
                                                raise JobError(msg) from e
                                    
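A minimal sketch of the `temporary_tools` behavior relied on in `run_job`, assuming the semantics suggested by its usage: extra tools are registered for the duration of the block, `exclusive=True` hides the agent's own tools meanwhile, and the original set is restored on exit. The dict-based registry here is a stand-in for the real tool manager:

```python
from contextlib import contextmanager


@contextmanager
def temporary_tools(registry: dict, extra: dict, *, exclusive: bool = False):
    saved = dict(registry)
    if exclusive:
        registry.clear()  # hide existing tools for the duration
    registry.update(extra)
    try:
        yield registry
    finally:
        # Restore the original tool set regardless of how the block exits.
        registry.clear()
        registry.update(saved)


tools = {"search": "s"}
with temporary_tools(tools, {"job_tool": "j"}, exclusive=True) as active:
    print(sorted(active))  # → ['job_tool']
print(sorted(tools))  # → ['search']
```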
                                        async def run_in_background(
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            max_count: int | None = None,
                                            interval: float = 1.0,
                                            block: bool = False,
                                            **kwargs: Any,
                                        ) -> ChatMessage[OutputDataT] | None:
                                            """Run agent continuously in background with prompt or dynamic prompt function.
                                    
                                            Args:
                                                prompt: Static prompt or function that generates prompts
                                                max_count: Maximum number of runs (None = infinite)
                                                interval: Seconds between runs
                                                block: Whether to block until completion
            **kwargs: Arguments passed to run()

        Returns:
            The final response message when ``block=True``, otherwise ``None``.
        """
                                            self._infinite = max_count is None
                                            log = logger.bind(agent_name=self.name, interval=interval)
                                    
                                            async def _continuous():
                                                count = 0
                                                log.debug("Starting continuous run", max_count=max_count)
                                                latest = None
                                                while max_count is None or count < max_count:
                                                    try:
                                                        current_prompts = [
                                                            call_with_context(p, self.context, **kwargs) if callable(p) else p
                                                            for p in prompt
                                                        ]
                                                        log.debug("Generated prompt", iteration=count)
                    latest = await self.run(*current_prompts, **kwargs)
                                                        logger.debug("Run continuous result", iteration=count)
                                    
                                                        count += 1
                                                        await asyncio.sleep(interval)
                                                    except asyncio.CancelledError:
                                                        logger.debug("Continuous run cancelled", agent_name=self.name)
                                                        break
                                                    except Exception:
                                                        logger.exception("Background run failed", agent_name=self.name)
                                                        await asyncio.sleep(interval)
                                                logger.debug("Continuous run completed", iterations=count)
                                                return latest
                                    
                                            # Cancel any existing background task
                                            await self.stop()
                                            task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                                            if block:
                                                try:
                                                    return await task  # type: ignore
                                                finally:
                                                    if not task.done():
                                                        task.cancel()
                                            else:
                                                log.debug("Started background task", task_name=task.get_name())
                                                self._background_task = task
                                                return None
                                    
                                        async def stop(self):
                                            """Stop continuous execution if running."""
        if self._background_task and not self._background_task.done():
            self._background_task.cancel()
            try:
                await self._background_task
            except asyncio.CancelledError:
                pass
            self._background_task = None
                                    
                                        async def wait(self) -> ChatMessage[OutputDataT]:
                                            """Wait for background execution to complete."""
                                            if not self._background_task:
                                                msg = "No background task running"
                                                raise RuntimeError(msg)
                                            if self._infinite:
                                                msg = "Cannot wait on infinite execution"
                                                raise RuntimeError(msg)
                                            try:
                                                return await self._background_task
                                            finally:
                                                self._background_task = None
                                    
                                        async def share(
                                            self,
                                            target: Agent[TDeps, Any],
                                            *,
                                            tools: list[str] | None = None,
                                            resources: list[str] | None = None,
                                            history: bool | int | None = None,  # bool or number of messages
                                            token_limit: int | None = None,
                                        ):
                                            """Share capabilities and knowledge with another agent.
                                    
                                            Args:
                                                target: Agent to share with
                                                tools: List of tool names to share
                                                resources: List of resource names to share
                                                history: Share conversation history:
                                                        - True: Share full history
                                                        - int: Number of most recent messages to share
                                                        - None: Don't share history
                                                token_limit: Optional max tokens for history
                                    
                                            Raises:
                                                ValueError: If requested items don't exist
                                                RuntimeError: If runtime not available for resources
                                            """
                                            # Share tools if requested
                                            for name in tools or []:
                                                if tool := await self.tools.get_tool(name):
                                                    meta = {"shared_from": self.name}
                                                    target.tools.register_tool(tool.callable, metadata=meta)
                                                else:
                                                    msg = f"Tool not found: {name}"
                                                    raise ValueError(msg)
                                    
                                            # Share resources if requested
                                            if resources:
                                                if not self.runtime:
                                                    msg = "No runtime available for sharing resources"
                                                    raise RuntimeError(msg)
                                                for name in resources:
                                                    if resource := self.runtime.get_resource(name):
                                                        await target.conversation.load_context_source(resource)  # type: ignore
                                                    else:
                                                        msg = f"Resource not found: {name}"
                                                        raise ValueError(msg)
                                    
                                            # Share history if requested
                                            if history:
                                                history_text = await self.conversation.format_history(
                                                    max_tokens=token_limit,
                # bool is a subclass of int; exclude it so history=True means "full".
                num_messages=None if isinstance(history, bool) else history,
                                                )
                                                target.conversation.add_context_message(
                                                    history_text, source=self.name, metadata={"type": "shared_history"}
                                                )
                                    
                                        def register_worker(
                                            self,
                                            worker: MessageNode[Any, Any],
                                            *,
                                            name: str | None = None,
                                            reset_history_on_run: bool = True,
                                            pass_message_history: bool = False,
                                        ) -> Tool:
                                            """Register another agent as a worker tool."""
                                            return self.tools.register_worker(
                                                worker,
                                                name=name,
                                                reset_history_on_run=reset_history_on_run,
                                                pass_message_history=pass_message_history,
                                                parent=self if pass_message_history else None,
                                            )
                                    
                                        def set_model(self, model: ModelType):
                                            """Set the model for this agent.
                                    
                                            Args:
                                                model: New model to use (name or instance)
                                    
                                            """
                                            self._provider.set_model(model)
                                    
                                        async def reset(self):
                                            """Reset agent state (conversation history and tool states)."""
                                            old_tools = await self.tools.list_tools()
                                            self.conversation.clear()
                                            self.tools.reset_states()
                                            new_tools = await self.tools.list_tools()
                                    
                                            event = self.AgentReset(
                                                agent_name=self.name,
                                                previous_tools=old_tools,
                                                new_tools=new_tools,
                                            )
                                            self.agent_reset.emit(event)
                                    
                                        @property
                                        def runtime(self) -> RuntimeConfig:
                                            """Get runtime configuration from context."""
                                            assert self.context.runtime
                                            return self.context.runtime
                                    
                                        @runtime.setter
                                        def runtime(self, value: RuntimeConfig):
                                            """Set runtime configuration and update context."""
                                            self.context.runtime = value
                                    
                                        async def get_stats(self) -> MessageStats:
                                            """Get message statistics (async version)."""
                                            messages = await self.get_message_history()
                                            return MessageStats(messages=messages)
                                    
                                        @asynccontextmanager
                                        async def temporary_state[T](
                                            self,
                                            *,
                                            system_prompts: list[AnyPromptType] | None = None,
                                            output_type: type[T] | None = None,
                                            replace_prompts: bool = False,
                                            tools: list[ToolType] | None = None,
                                            replace_tools: bool = False,
                                            history: list[AnyPromptType] | SessionQuery | None = None,
                                            replace_history: bool = False,
                                            pause_routing: bool = False,
                                            model: ModelType | None = None,
                                            provider: AgentProvider | None = None,
                                        ) -> AsyncIterator[Self | Agent[T]]:
                                            """Temporarily modify agent state.
                                    
                                            Args:
                                                system_prompts: Temporary system prompts to use
                                                output_type: Temporary output type to use
                                                replace_prompts: Whether to replace existing prompts
                                                tools: Temporary tools to make available
                                                replace_tools: Whether to replace existing tools
                                                history: Conversation history (prompts or query)
                                                replace_history: Whether to replace existing history
                                                pause_routing: Whether to pause message routing
                                                model: Temporary model override
                                                provider: Temporary provider override
                                            """
                                            old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                                            old_provider = self._provider
                                            if output_type:
                                                old_type = self._output_type
                                                self.set_output_type(output_type)  # type: ignore
                                            async with AsyncExitStack() as stack:
                                                # System prompts (async)
                                                if system_prompts is not None:
                                                    await stack.enter_async_context(
                                                        self.sys_prompts.temporary_prompt(
                                                            system_prompts, exclusive=replace_prompts
                                                        )
                                                    )
                                    
                                                # Tools (sync)
                                                if tools is not None:
                                                    stack.enter_context(
                                                        self.tools.temporary_tools(tools, exclusive=replace_tools)
                                                    )
                                    
                                                # History (async)
                                                if history is not None:
                                                    await stack.enter_async_context(
                                                        self.conversation.temporary_state(
                                                            history, replace_history=replace_history
                                                        )
                                                    )
                                    
                                                # Routing (async)
                                                if pause_routing:
                                                    await stack.enter_async_context(self.connections.paused_routing())
                                    
                                                # Model/Provider
                                                if provider is not None:
                                                    self._provider = provider
                                                elif model is not None:
                                                    self._provider.set_model(model)
                                    
                                                try:
                                                    yield self
                                                finally:
                                                    # Restore model/provider
                                                    if provider is not None:
                                                        self._provider = old_provider
                                                    elif model is not None and old_model:
                                                        self._provider.set_model(old_model)
                                                    if output_type:
                                                        self.set_output_type(old_type)
                                    
                                        async def validate_against(
                                            self,
                                            prompt: str,
                                            criteria: type[OutputDataT],
                                            **kwargs: Any,
                                        ) -> bool:
                                            """Check if agent's response satisfies stricter criteria."""
                                            result = await self.run(prompt, **kwargs)
                                            try:
                                                criteria.model_validate(result.content.model_dump())  # type: ignore
                                            except ValidationError:
                                                return False
                                            else:
                                                return True
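The `validate_against` pattern above — run once, attempt a stricter validation, and map a validation failure to `False` — can be sketched without pydantic. Here a plain `strict_criteria` callable raising `ValueError` stands in for `criteria.model_validate`/`ValidationError`; all names are illustrative, not the real API:

```python
import asyncio
from typing import Any, Callable


async def run_agent(prompt: str) -> dict[str, Any]:
    # Stand-in for Agent.run(): returns structured content.
    return {"answer": "42", "confidence": 0.4}


def strict_criteria(data: dict[str, Any]) -> None:
    # Stand-in for criteria.model_validate(): raises on failure.
    if data.get("confidence", 0.0) < 0.9:
        raise ValueError("confidence too low")


async def validate_against(
    prompt: str, criteria: Callable[[dict[str, Any]], None]
) -> bool:
    result = await run_agent(prompt)
    try:
        criteria(result)
    except ValueError:
        return False
    else:
        return True


ok = asyncio.run(validate_against("meaning of life?", strict_criteria))
```

The `try/except/else` shape mirrors the real method: only the validation call is guarded, so unrelated errors still propagate.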
                                    

                                    context property writable

                                    context: AgentContext[TDeps]
                                    

                                    Get agent context.

                                    model_name property

                                    model_name: str | None
                                    

                                    Get the model name in a consistent format.

                                    name property writable

                                    name: str
                                    

                                    Get agent name.

                                    provider property writable

                                    provider: AgentProvider
                                    

                                    Get the underlying provider.

                                    runtime property writable

                                    runtime: RuntimeConfig
                                    

                                    Get runtime configuration from context.

                                    AgentReset dataclass

                                    Emitted when agent is reset.

                                    Source code in src/llmling_agent/agent/agent.py
                                    @dataclass(frozen=True)
                                    class AgentReset:
                                        """Emitted when agent is reset."""
                                    
                                        agent_name: AgentName
                                        previous_tools: dict[str, bool]
                                        new_tools: dict[str, bool]
                                        timestamp: datetime = field(default_factory=get_now)
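The `reset()` flow that emits this event — snapshot tool states, clear, re-snapshot, emit a frozen event — can be sketched with stdlib-only stand-ins. `AgentResetEvent` and `MiniAgent` are hypothetical simplifications, and appending to a list stands in for signal emission:

```python
import asyncio
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgentResetEvent:
    """Simplified stand-in for Agent.AgentReset."""

    agent_name: str
    previous_tools: dict[str, bool]
    new_tools: dict[str, bool]
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class MiniAgent:
    def __init__(self, name: str):
        self.name = name
        self.history: list[str] = []
        # tool name -> enabled flag; disabled tools revert to enabled on reset
        self.tools = {"search": False, "calc": True}
        self.events: list[AgentResetEvent] = []

    async def reset(self) -> None:
        old_tools = dict(self.tools)   # snapshot before
        self.history.clear()           # conversation.clear()
        for name in self.tools:        # tools.reset_states()
            self.tools[name] = True
        event = AgentResetEvent(self.name, old_tools, dict(self.tools))
        self.events.append(event)      # stands in for agent_reset.emit(event)


agent = MiniAgent("demo")
agent.history.append("hi")
asyncio.run(agent.reset())
```

Capturing both the before and after tool snapshots lets subscribers diff what the reset actually changed.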
                                    

                                    __aenter__ async

                                    __aenter__() -> Self
                                    

                                    Enter async context and set up MCP servers.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def __aenter__(self) -> Self:
                                        """Enter async context and set up MCP servers."""
                                        try:
                                            # Collect all coroutines that need to be run
                                            coros: list[Coroutine[Any, Any, Any]] = []
                                    
                                            # Runtime initialization if needed
                                            runtime_ref = self.context.runtime
                                            if runtime_ref and not runtime_ref._initialized:
                                                self._owns_runtime = True
                                                coros.append(runtime_ref.__aenter__())
                                    
                                            # Events initialization
                                            coros.append(super().__aenter__())
                                    
                                            # Get conversation init tasks directly
                                            coros.extend(self.conversation.get_initialization_tasks())
                                    
                                            # Execute coroutines either in parallel or sequentially
                                            if self.parallel_init and coros:
                                                await asyncio.gather(*coros)
                                            else:
                                                for coro in coros:
                                                    await coro
                                            if runtime_ref:
                                                self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                            for provider in self.context.config.get_toolsets():
                                                self.tools.add_provider(provider)
                                        except Exception as e:
                                            # Clean up in reverse order
                                            if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                                await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                            msg = "Failed to initialize agent"
                                            raise RuntimeError(msg) from e
                                        else:
                                            return self
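The `parallel_init` branch above simply switches between `asyncio.gather` and a sequential for-loop over the collected coroutines. A minimal stdlib demonstration of the same collect-then-run shape (step names are illustrative):

```python
import asyncio


async def init_step(name: str, log: list[str]) -> None:
    await asyncio.sleep(0)
    log.append(name)


async def initialize(parallel: bool) -> list[str]:
    log: list[str] = []
    # Collect coroutines first, exactly like __aenter__ does.
    coros = [init_step(n, log) for n in ("runtime", "events", "conversation")]
    if parallel and coros:
        await asyncio.gather(*coros)
    else:
        for coro in coros:
            await coro
    return log


seq = asyncio.run(initialize(parallel=False))
par = asyncio.run(initialize(parallel=True))
```

Sequential execution preserves ordering; with `gather`, the steps may interleave, so only use it when the initialization tasks are independent.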
                                    

                                    __aexit__ async

                                    __aexit__(
                                        exc_type: type[BaseException] | None,
                                        exc_val: BaseException | None,
                                        exc_tb: TracebackType | None,
                                    )
                                    

                                    Exit async context.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def __aexit__(
                                        self,
                                        exc_type: type[BaseException] | None,
                                        exc_val: BaseException | None,
                                        exc_tb: TracebackType | None,
                                    ):
                                        """Exit async context."""
                                        await super().__aexit__(exc_type, exc_val, exc_tb)
                                        try:
                                            await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                                        finally:
                                            if self._owns_runtime and self.context.runtime:
                                                self.tools.remove_provider("runtime")
                                                await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                    

                                    __and__

                                    __and__(other: ProcessorCallback[Any] | Team[TDeps] | Agent[TDeps, Any]) -> Team[TDeps]
                                    
                                    __and__(other: ProcessorCallback[Any] | Team[Any] | Agent[Any, Any]) -> Team[Any]
                                    
                                    __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                                    

                                    Create sequential team using & operator.

                                    Example

group = analyzer & planner & executor  # Create group of 3
group = analyzer & existing_group      # Add to existing group

                                    Source code in src/llmling_agent/agent/agent.py
                                    def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                                        """Create sequential team using & operator.
                                    
                                        Example:
                                            group = analyzer & planner & executor  # Create group of 3
                                            group = analyzer & existing_group  # Add to existing group
                                        """
                                        from llmling_agent.delegation.team import Team
                                    
                                        match other:
                                            case Team():
                                                return Team([self, *other.agents])
                                            case Callable():
                                                agent_2 = Agent.from_callback(other)
                                                agent_2.context.pool = self.context.pool
                                                return Team([self, agent_2])
                                            case MessageNode():
                                                return Team([self, other])
                                            case _:
                                                msg = f"Invalid agent type: {type(other)}"
                                                raise ValueError(msg)
                                    

                                    __init__

                                    __init__(
                                        name: str = "llmling-agent",
                                        provider: AgentType = "pydantic_ai",
                                        *,
                                        model: ModelType = None,
                                        output_type: OutputSpec[OutputDataT] | StructuredResponseConfig | str = str,
                                        runtime: RuntimeConfig | Config | JoinablePathLike | None = None,
                                        context: AgentContext[TDeps] | None = None,
                                        session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                        system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                        description: str | None = None,
                                        tools: Sequence[ToolType | Tool] | None = None,
                                        toolsets: Sequence[ResourceProvider] | None = None,
                                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                        resources: Sequence[Resource | PromptType | str] = (),
                                        retries: int = 1,
                                        output_retries: int | None = None,
                                        end_strategy: EndStrategy = "early",
                                        defer_model_check: bool = False,
                                        input_provider: InputProvider | None = None,
                                        parallel_init: bool = True,
                                        debug: bool = False,
                                        event_handlers: Sequence[IndividualEventHandler] | None = None,
                                    )
                                    

                                    Initialize agent with runtime configuration.

                                    Parameters:

                                    Name Type Description Default
                                    name str

                                    Name of the agent for logging and identification

                                    'llmling-agent'
                                    provider AgentType

                                    Agent type to use (ai: PydanticAIProvider, human: HumanProvider)

                                    'pydantic_ai'
                                    model ModelType

                                    The default model to use (defaults to GPT-5)

                                    None
                                    output_type OutputSpec[OutputDataT] | StructuredResponseConfig | str

                                    The default output type to use (defaults to str)

                                    str
                                    runtime RuntimeConfig | Config | JoinablePathLike | None

                                    Runtime configuration providing access to resources/tools

                                    None
                                    context AgentContext[TDeps] | None

                                    Agent context with configuration

                                    None
                                    session SessionIdType | SessionQuery | MemoryConfig | bool | int

Memory configuration.
- None: Default memory config
- False: Disable message history (max_messages=0)
- int: Max tokens for memory
- str/UUID: Session identifier
- MemoryConfig: Full memory configuration
- MemoryProvider: Custom memory provider
- SessionQuery: Session query

                                    None
                                    system_prompt AnyPromptType | Sequence[AnyPromptType]

                                    System prompts for the agent

                                    ()
                                    description str | None

                                    Description of the Agent ("what it can do")

                                    None
                                    tools Sequence[ToolType | Tool] | None

                                    List of tools to register with the agent

                                    None
                                    toolsets Sequence[ResourceProvider] | None

                                    List of toolset resource providers for the agent

                                    None
                                    mcp_servers Sequence[str | MCPServerConfig] | None

                                    MCP servers to connect to

                                    None
                                    resources Sequence[Resource | PromptType | str]

                                    Additional resources to load

                                    ()
                                    retries int

                                    Default number of retries for failed operations

                                    1
                                    output_retries int | None

                                    Max retries for result validation (defaults to retries)

                                    None
                                    end_strategy EndStrategy

                                    Strategy for handling tool calls that are requested alongside a final result

                                    'early'
                                    defer_model_check bool

                                    Whether to defer model evaluation until first run

                                    False
                                    input_provider InputProvider | None

                                    Provider for human input (tool confirmation / HumanProviders)

                                    None
                                    parallel_init bool

                                    Whether to initialize resources in parallel

                                    True
                                    debug bool

                                    Whether to enable debug mode

                                    False
                                    event_handlers Sequence[IndividualEventHandler] | None

                                    Sequence of event handlers to register with the agent

                                    None
                                    Source code in src/llmling_agent/agent/agent.py
                                    270
                                    271
                                    272
                                    273
                                    274
                                    275
                                    276
                                    277
                                    278
                                    279
                                    280
                                    281
                                    282
                                    283
                                    284
                                    285
                                    286
                                    287
                                    288
                                    289
                                    290
                                    291
                                    292
                                    293
                                    294
                                    295
                                    296
                                    297
                                    298
                                    299
                                    300
                                    301
                                    302
                                    303
                                    304
                                    305
                                    306
                                    307
                                    308
                                    309
                                    310
                                    311
                                    312
                                    313
                                    314
                                    315
                                    316
                                    317
                                    318
                                    319
                                    320
                                    321
                                    322
                                    323
                                    324
                                    325
                                    326
                                    327
                                    328
                                    329
                                    330
                                    331
                                    332
                                    333
                                    334
                                    335
                                    336
                                    337
                                    338
                                    339
                                    340
                                    341
                                    342
                                    343
                                    344
                                    345
                                    346
                                    347
                                    348
                                    349
                                    350
                                    351
                                    352
                                    def __init__(  # noqa: PLR0915
    # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                                        self,
                                        name: str = "llmling-agent",
                                        provider: AgentType = "pydantic_ai",
                                        *,
                                        model: ModelType = None,
                                        output_type: OutputSpec[OutputDataT] | StructuredResponseConfig | str = str,  # type: ignore[assignment]
                                        runtime: RuntimeConfig | Config | JoinablePathLike | None = None,
                                        context: AgentContext[TDeps] | None = None,
                                        session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                        system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                        description: str | None = None,
                                        tools: Sequence[ToolType | Tool] | None = None,
                                        toolsets: Sequence[ResourceProvider] | None = None,
                                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                        resources: Sequence[Resource | PromptType | str] = (),
                                        retries: int = 1,
                                        output_retries: int | None = None,
                                        end_strategy: EndStrategy = "early",
                                        defer_model_check: bool = False,
                                        input_provider: InputProvider | None = None,
                                        parallel_init: bool = True,
                                        debug: bool = False,
                                        event_handlers: Sequence[IndividualEventHandler] | None = None,
                                    ):
                                        """Initialize agent with runtime configuration.
                                    
                                        Args:
                                            name: Name of the agent for logging and identification
                                            provider: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                            model: The default model to use (defaults to GPT-5)
                                            output_type: The default output type to use (defaults to str)
                                            runtime: Runtime configuration providing access to resources/tools
                                            context: Agent context with configuration
                                            session: Memory configuration.
                                                - None: Default memory config
                                                - False: Disable message history (max_messages=0)
                                                - int: Max tokens for memory
                                                - str/UUID: Session identifier
                                                - MemoryConfig: Full memory configuration
                                                - MemoryProvider: Custom memory provider
                                                - SessionQuery: Session query
                                    
                                            system_prompt: System prompts for the agent
                                            description: Description of the Agent ("what it can do")
                                            tools: List of tools to register with the agent
                                            toolsets: List of toolset resource providers for the agent
                                            mcp_servers: MCP servers to connect to
                                            resources: Additional resources to load
                                            retries: Default number of retries for failed operations
                                            output_retries: Max retries for result validation (defaults to retries)
                                            end_strategy: Strategy for handling tool calls that are requested alongside
                                                          a final result
                                            defer_model_check: Whether to defer model evaluation until first run
                                            input_provider: Provider for human input (tool confirmation / HumanProviders)
                                            parallel_init: Whether to initialize resources in parallel
                                            debug: Whether to enable debug mode
                                            event_handlers: Sequence of event handlers to register with the agent
                                        """
                                        from llmling_agent.agent import AgentContext
                                        from llmling_agent.agent.conversation import ConversationManager
                                        from llmling_agent.agent.interactions import Interactions
                                        from llmling_agent.agent.sys_prompts import SystemPrompts
                                        from llmling_agent_providers.base import AgentProvider
                                    
                                        self.task_manager = TaskManager()
                                        self._infinite = False
    # save some state for async init
                                        self._owns_runtime = False
                                        # match output_type:
                                        #     case type() | str():
                                        #         # For types and named definitions, use overrides if provided
                                        #         self.set_output_type(
                                        #             output_type,
                                        #             tool_name=tool_name,
                                        #             tool_description=tool_description,
                                        #         )
                                        #     case StructuredResponseConfig():
                                        #         # For response definitions, use as-is
                                        #         # (overrides don't apply to complete definitions)
                                        #         self.set_output_type(output_type)
                                        # prepare context
                                        ctx = context or AgentContext[TDeps].create_default(
                                            name,
                                            input_provider=input_provider,
                                        )
                                        self._context = ctx
                                        self._output_type = to_type(output_type, ctx.definition.responses)
                                        memory_cfg = (
                                            session
                                            if isinstance(session, MemoryConfig)
                                            else MemoryConfig.from_value(session)
                                        )
                                        super().__init__(
                                            name=name,
                                            context=ctx,
                                            description=description,
                                            enable_logging=memory_cfg.enable,
                                            mcp_servers=mcp_servers,
                                            progress_handler=self._create_progress_handler(),
                                        )
                                        # Initialize runtime
                                        match runtime:
                                            case None:
                                                ctx.runtime = RuntimeConfig.from_config(Config())
                                            case Config() | str() | PathLike() | UPath():
                                                ctx.runtime = RuntimeConfig.from_config(runtime)
                                            case RuntimeConfig():
                                                ctx.runtime = runtime
                                            case _:
                                                msg = f"Invalid runtime type: {type(runtime)}"
                                                raise TypeError(msg)
                                    
                                        runtime_provider = RuntimePromptProvider(ctx.runtime)
                                        ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                                        # Initialize tool manager
                                        self.event_handler = MultiEventHandler[IndividualEventHandler](event_handlers)
                                        all_tools = list(tools or [])
                                        self.tools = ToolManager(all_tools)
                                        self.tools.add_provider(self.mcp)
                                        if builtin_tools := ctx.config.get_tool_provider():
                                            self.tools.add_provider(builtin_tools)
                                    
                                        # Add toolset providers
                                        if toolsets:
                                            for toolset_provider in toolsets:
                                                self.tools.add_provider(toolset_provider)
                                    
                                        # Initialize conversation manager
                                        resources = list(resources)
                                        if ctx.config.knowledge:
                                            resources.extend(ctx.config.knowledge.get_resources())
                                        self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                                        # Initialize provider
                                        match provider:
                                            case "pydantic_ai":
                                                from llmling_agent_providers.pydanticai import PydanticAIProvider
                                    
                                                if model and not isinstance(model, str):
                                                    from pydantic_ai import models
                                    
                                                    assert isinstance(model, models.Model)
                                                self._provider: AgentProvider = PydanticAIProvider(
                                                    model=model,
                                                    retries=retries,
                                                    end_strategy=end_strategy,
                                                    output_retries=output_retries,
                                                    defer_model_check=defer_model_check,
                                                    debug=debug,
                                                    context=ctx,
                                                )
                                            case "human":
                                                from llmling_agent_providers.human import HumanProvider
                                    
                                                self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                            case Callable():
                                                from llmling_agent_providers.callback import CallbackProvider
                                    
                                                self._provider = CallbackProvider(
                                                    provider, name=name, debug=debug, context=ctx
                                                )
                                            case AgentProvider():
                                                self._provider = provider
                                                self._provider.context = ctx
                                            case _:
            msg = f"Invalid agent type: {type(provider)}"
                                                raise ValueError(msg)
                                    
                                        # Initialize skills registry
                                        from llmling_agent.tools.skills import SkillsRegistry
                                    
                                        self.skills_registry = SkillsRegistry()
                                    
                                        if ctx and ctx.definition:
                                            from llmling_agent.observability import registry
                                    
                                            registry.configure_observability(ctx.definition.observability)
                                    
                                        # init variables
                                        self._debug = debug
                                        self.parallel_init = parallel_init
                                        self.name = name
                                        self._background_task: asyncio.Task[Any] | None = None
                                        self._progress_queue: asyncio.Queue[ToolCallProgressEvent] = asyncio.Queue()
                                    
                                        # Forward provider signals
                                        self._provider.tool_used.connect(self.tool_used)
                                        self.talk = Interactions(self)
                                    
                                        # Set up system prompts
                                        config_prompts = ctx.config.system_prompts if ctx else []
                                        all_prompts: list[AnyPromptType] = list(config_prompts)
    if isinstance(system_prompt, (list, tuple)):
        all_prompts.extend(system_prompt)
    else:
        all_prompts.append(system_prompt)
                                        self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
                                    

                                    from_callback classmethod

                                    from_callback(
                                        callback: ProcessorCallback[TResult], *, name: str | None = None, **kwargs: Any
                                    ) -> Agent[None, TResult]
                                    

                                    Create an agent from a processing callback.

                                    Parameters:

- callback (ProcessorCallback[TResult], required): Function to process messages. Can be sync or async, with or without context; must return str for pipeline compatibility.
- name (str | None, default None): Optional name for the agent.
- kwargs (Any, default {}): Additional arguments for agent.
                                    Source code in src/llmling_agent/agent/agent.py
                                    @classmethod
                                    def from_callback[TResult](
                                        cls,
                                        callback: ProcessorCallback[TResult],
                                        *,
                                        name: str | None = None,
                                        **kwargs: Any,
                                    ) -> Agent[None, TResult]:
                                        """Create an agent from a processing callback.
                                    
                                        Args:
                                            callback: Function to process messages. Can be:
                                                - sync or async
                                                - with or without context
                                                - must return str for pipeline compatibility
                                            name: Optional name for the agent
                                            kwargs: Additional arguments for agent
                                        """
                                        from llmling_agent.agent.agent import Agent
                                        from llmling_agent_providers.callback import CallbackProvider
                                    
                                        name = name or callback.__name__ or "processor"
                                        provider = CallbackProvider(callback, name=name)
                                        # Get return type from signature for validation
                                        hints = get_type_hints(callback)
                                        return_type = hints.get("return")
                                    
                                        # If async, unwrap from Awaitable
                                        if (
                                            return_type
                                            and hasattr(return_type, "__origin__")
                                            and return_type.__origin__ is Awaitable
                                        ):
                                            return_type = return_type.__args__[0]
                                        return Agent(
                                            provider=provider,
                                            name=name,
                                            output_type=return_type or str,
                                            **kwargs,
                                        )  # type: ignore
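The `Awaitable` unwrapping above inspects the runtime origin of the return annotation. A self-contained sketch of the same logic (the helper name is hypothetical):

```python
from collections.abc import Awaitable
from typing import Any, Callable, get_type_hints


def resolve_output_type(callback: Callable[..., Any]) -> type:
    """Extract the callback's return annotation, unwrapping Awaitable[T] to T."""
    return_type = get_type_hints(callback).get("return")
    if (
        return_type is not None
        and getattr(return_type, "__origin__", None) is Awaitable
    ):
        return_type = return_type.__args__[0]  # Awaitable[T] -> T
    return return_type or str  # fall back to str when unannotated


def sync_cb(text: str) -> int: ...
def wrapped_cb(text: str) -> Awaitable[int]: ...
```

Note that `get_type_hints` already reports `int` (not `Awaitable[int]`) for an `async def` annotated `-> int`; the unwrap only matters for callbacks whose annotation is an explicit `Awaitable[...]`.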
                                    

                                    get_stats async

                                    get_stats() -> MessageStats
                                    

                                    Get message statistics (async version).

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def get_stats(self) -> MessageStats:
                                        """Get message statistics (async version)."""
                                        messages = await self.get_message_history()
                                        return MessageStats(messages=messages)
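`MessageStats` here is essentially a view over the history list. A rough stdlib analogue of that container (fields and name are hypothetical; the real class lives in llmling_agent):

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SimpleMessageStats:
    """Immutable snapshot of a message history."""

    messages: list[str] = field(default_factory=list)

    @property
    def message_count(self) -> int:
        # Derived statistics are computed lazily from the snapshot.
        return len(self.messages)
```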
                                    

                                    is_busy

                                    is_busy() -> bool
                                    

                                    Check if agent is currently processing tasks.

                                    Source code in src/llmling_agent/agent/agent.py
                                    def is_busy(self) -> bool:
                                        """Check if agent is currently processing tasks."""
                                        return bool(self.task_manager._pending_tasks or self._background_task)
                                    

                                    register_worker

                                    register_worker(
                                        worker: MessageNode[Any, Any],
                                        *,
                                        name: str | None = None,
                                        reset_history_on_run: bool = True,
                                        pass_message_history: bool = False,
                                    ) -> Tool
                                    

                                    Register another agent as a worker tool.

                                    Source code in src/llmling_agent/agent/agent.py
                                    def register_worker(
                                        self,
                                        worker: MessageNode[Any, Any],
                                        *,
                                        name: str | None = None,
                                        reset_history_on_run: bool = True,
                                        pass_message_history: bool = False,
                                    ) -> Tool:
                                        """Register another agent as a worker tool."""
                                        return self.tools.register_worker(
                                            worker,
                                            name=name,
                                            reset_history_on_run=reset_history_on_run,
                                            pass_message_history=pass_message_history,
                                            parent=self if pass_message_history else None,
                                        )
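`register_worker` delegates to the `ToolManager`; conceptually it wraps another node's async `run` as a plain callable tool. A toy sketch of that wrapping (all names are hypothetical; the real implementation lives in the tool manager):

```python
from typing import Any


def as_worker_tool(worker: Any, *, reset_history_on_run: bool = True):
    """Wrap a worker's async run() as a tool function the parent can call."""

    async def tool(prompt: str) -> str:
        if reset_history_on_run:
            worker.history.clear()  # fresh context on every invocation
        return await worker.run(prompt)

    tool.__name__ = f"ask_{worker.name}"
    return tool
```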
                                    

                                    reset async

                                    reset()
                                    

                                    Reset agent state (conversation history and tool states).

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def reset(self):
                                        """Reset agent state (conversation history and tool states)."""
                                        old_tools = await self.tools.list_tools()
                                        self.conversation.clear()
                                        self.tools.reset_states()
                                        new_tools = await self.tools.list_tools()
                                    
                                        event = self.AgentReset(
                                            agent_name=self.name,
                                            previous_tools=old_tools,
                                            new_tools=new_tools,
                                        )
                                        self.agent_reset.emit(event)
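`reset()` diffs the tool list before and after clearing state and publishes the result via a signal. The emit pattern can be sketched with a plain callback list (`Signal` and `ResetEvent` here are simplified stand-ins for the library's signal machinery):

```python
from dataclasses import dataclass


@dataclass
class ResetEvent:
    agent_name: str
    previous_tools: list[str]
    new_tools: list[str]


class Signal:
    """Minimal synchronous signal: registered handlers run on emit."""

    def __init__(self) -> None:
        self._handlers: list = []

    def connect(self, handler) -> None:
        self._handlers.append(handler)

    def emit(self, event) -> None:
        for handler in self._handlers:
            handler(event)
```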
                                    

                                    run_in_background async

                                    run_in_background(
                                        *prompt: AnyPromptType | Image | PathLike[str],
                                        max_count: int | None = None,
                                        interval: float = 1.0,
                                        block: bool = False,
                                        **kwargs: Any,
                                    ) -> ChatMessage[OutputDataT] | None
                                    

                                    Run agent continuously in background with prompt or dynamic prompt function.

                                    Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `AnyPromptType \| Image \| PathLike[str]` | Static prompt or function that generates prompts | `()` |
| `max_count` | `int \| None` | Maximum number of runs (`None` = infinite) | `None` |
| `interval` | `float` | Seconds between runs | `1.0` |
| `block` | `bool` | Whether to block until completion | `False` |
| `**kwargs` | `Any` | Arguments passed to `run()` | `{}` |
                                    Source code in src/llmling_agent/agent/agent.py
                                    async def run_in_background(
                                        self,
                                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        max_count: int | None = None,
                                        interval: float = 1.0,
                                        block: bool = False,
                                        **kwargs: Any,
                                    ) -> ChatMessage[OutputDataT] | None:
                                        """Run agent continuously in background with prompt or dynamic prompt function.
                                    
                                        Args:
                                            prompt: Static prompt or function that generates prompts
                                            max_count: Maximum number of runs (None = infinite)
                                            interval: Seconds between runs
                                            block: Whether to block until completion
                                            **kwargs: Arguments passed to run()
                                        """
                                        self._infinite = max_count is None
                                        log = logger.bind(agent_name=self.name, interval=interval)
                                    
                                        async def _continuous():
                                            count = 0
                                            log.debug("Starting continuous run", max_count=max_count)
                                            latest = None
                                            while max_count is None or count < max_count:
                                                try:
                                                    current_prompts = [
                                                        call_with_context(p, self.context, **kwargs) if callable(p) else p
                                                        for p in prompt
                                                    ]
                                                    log.debug("Generated prompt", iteration=count)
                                                    latest = await self.run(current_prompts, **kwargs)
                                                    logger.debug("Run continuous result", iteration=count)
                                    
                                                    count += 1
                                                    await asyncio.sleep(interval)
                                                except asyncio.CancelledError:
                                                    logger.debug("Continuous run cancelled", agent_name=self.name)
                                                    break
                                                except Exception:
                                                    logger.exception("Background run failed", agent_name=self.name)
                                                    await asyncio.sleep(interval)
                                            logger.debug("Continuous run completed", iterations=count)
                                            return latest
                                    
                                        # Cancel any existing background task
                                        await self.stop()
                                        task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                                        if block:
                                            try:
                                                return await task  # type: ignore
                                            finally:
                                                if not task.done():
                                                    task.cancel()
                                        else:
                                            log.debug("Started background task", task_name=task.get_name())
                                            self._background_task = task
                                            return None
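Stripped of agent details, the continuous loop above reduces to the following pattern: resolve callable prompts each iteration, run, sleep for `interval`, and stop at `max_count`. `fake_run` is a hypothetical stand-in for the agent's `run()`:

```python
import asyncio

async def fake_run(prompts):
    # Stand-in for Agent.run(); just echoes the prompt.
    return f"answer to {prompts[0]}"

async def run_continuously(prompt, *, max_count=None, interval=0.01):
    count, latest = 0, None
    while max_count is None or count < max_count:
        # Callable prompts are re-evaluated on every iteration,
        # so each run can see fresh input.
        current = prompt() if callable(prompt) else prompt
        latest = await fake_run([current])
        count += 1
        await asyncio.sleep(interval)
    return latest, count

latest, count = asyncio.run(run_continuously(lambda: "ping", max_count=3))
```

With `block=True` the real method awaits this loop directly and returns the last message; otherwise it stores the task and returns `None` immediately.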
                                    

                                    run_iter async

                                    run_iter(
                                        *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]],
                                        output_type: type[OutputDataT] | None = None,
                                        model: ModelType = None,
                                        store_history: bool = True,
                                        wait_for_connections: bool | None = None,
                                    ) -> AsyncIterator[ChatMessage[OutputDataT]]
                                    

                                    Run agent sequentially on multiple prompt groups.

                                    Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt_groups` | `Sequence[AnyPromptType \| Image \| PathLike[str]]` | Groups of prompts to process sequentially | `()` |
| `output_type` | `type[OutputDataT] \| None` | Optional type for structured responses | `None` |
| `model` | `ModelType` | Optional model override | `None` |
| `store_history` | `bool` | Whether to store in conversation history | `True` |
| `wait_for_connections` | `bool \| None` | Whether to wait for connected agents | `None` |

                                    Yields:

| Type | Description |
|------|-------------|
| `AsyncIterator[ChatMessage[OutputDataT]]` | Response messages in sequence |

                                    Example

    questions = [
        ["What is your name?"],
        ["How old are you?", image1],
        ["Describe this image", image2],
    ]
    async for response in agent.run_iter(*questions):
        print(response.content)

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def run_iter(
                                        self,
                                        *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                        output_type: type[OutputDataT] | None = None,
                                        model: ModelType = None,
                                        store_history: bool = True,
                                        wait_for_connections: bool | None = None,
                                    ) -> AsyncIterator[ChatMessage[OutputDataT]]:
                                        """Run agent sequentially on multiple prompt groups.
                                    
                                        Args:
                                            prompt_groups: Groups of prompts to process sequentially
                                            output_type: Optional type for structured responses
                                            model: Optional model override
                                            store_history: Whether to store in conversation history
                                            wait_for_connections: Whether to wait for connected agents
                                    
                                        Yields:
                                            Response messages in sequence
                                    
                                        Example:
                                            questions = [
                                                ["What is your name?"],
                                                ["How old are you?", image1],
                                                ["Describe this image", image2],
                                            ]
                                            async for response in agent.run_iter(*questions):
                                                print(response.content)
                                        """
                                        for prompts in prompt_groups:
                                            response = await self.run(
                                                *prompts,
                                                output_type=output_type,
                                                model=model,
                                                store_history=store_history,
                                                wait_for_connections=wait_for_connections,
                                            )
                                            yield response  # pyright: ignore
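The control flow is simply one `run()` call per prompt group, with responses yielded in order. A minimal sketch of that loop, where `fake_run` is a hypothetical stand-in for `Agent.run()`:

```python
import asyncio

async def fake_run(*prompts):
    # Stand-in for Agent.run(); reports how many prompts it received.
    return f"response to {len(prompts)} prompt(s)"

async def run_iter(*prompt_groups):
    # Each group (text plus optional images/paths) becomes one run() call.
    for prompts in prompt_groups:
        yield await fake_run(*prompts)

async def main():
    questions = [["What is your name?"], ["Describe these", "img1", "img2"]]
    return [resp async for resp in run_iter(*questions)]

results = asyncio.run(main())
```

Because each group is awaited before the next starts, later groups see any conversation history stored by earlier ones (when `store_history=True`).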
                                    

                                    run_job async

                                    run_job(
                                        job: Job[TDeps, str | None],
                                        *,
                                        store_history: bool = True,
                                        include_agent_tools: bool = True,
                                    ) -> ChatMessage[OutputDataT]
                                    

                                    Execute a pre-defined task.

                                    Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `job` | `Job[TDeps, str \| None]` | Job configuration to execute | *required* |
| `store_history` | `bool` | Whether the message exchange should be added to the context window | `True` |
| `include_agent_tools` | `bool` | Whether to include agent tools | `True` |

                                    Returns: Job execution result

                                    Raises:

| Type | Description |
|------|-------------|
| `JobError` | If task execution fails |
| `ValueError` | If task configuration is invalid |

                                    Source code in src/llmling_agent/agent/agent.py
                                    @method_spawner
                                    async def run_job(
                                        self,
                                        job: Job[TDeps, str | None],
                                        *,
                                        store_history: bool = True,
                                        include_agent_tools: bool = True,
                                    ) -> ChatMessage[OutputDataT]:
                                        """Execute a pre-defined task.
                                    
                                        Args:
                                            job: Job configuration to execute
                                            store_history: Whether the message exchange should be added to the
                                                           context window
                                            include_agent_tools: Whether to include agent tools
                                        Returns:
                                            Job execution result
                                    
                                        Raises:
                                            JobError: If task execution fails
                                            ValueError: If task configuration is invalid
                                        """
                                        from llmling_agent.tasks import JobError
                                    
                                        if job.required_dependency is not None:  # noqa: SIM102
                                            if not isinstance(self.context.data, job.required_dependency):
                                                msg = (
                                                    f"Agent dependencies ({type(self.context.data)}) "
                                                    f"don't match job requirement ({job.required_dependency})"
                                                )
                                                raise JobError(msg)
                                    
                                        # Load task knowledge
                                        if job.knowledge:
                                            # Add knowledge sources to context
                                            resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                job.knowledge.resources
                                            )
                                            for source in resources:
                                                await self.conversation.load_context_source(source)
                                            for prompt in job.knowledge.prompts:
                                                await self.conversation.load_context_source(prompt)
                                        try:
                                            # Register task tools temporarily
                                            tools = job.get_tools()
                                            with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                                # Execute job with job-specific tools
                                                return await self.run(await job.get_prompt(), store_history=store_history)
                                    
                                        except Exception as e:
                                            logger.exception("Task execution failed", agent_name=self.name, error=str(e))
                                            msg = f"Task execution failed: {e}"
                                            raise JobError(msg) from e
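The key mechanism here is the temporary-tool scope: the job's tools are registered for the duration of the run (replacing the agent's own tools when `include_agent_tools=False`) and the original set is restored afterwards, even on error. A hypothetical sketch of that scoping, with a plain dict standing in for the tool registry:

```python
import asyncio
from contextlib import contextmanager

@contextmanager
def temporary_tools(registry, extra, exclusive=False):
    """Register extra tools for the duration of the block, then restore."""
    saved = dict(registry)
    if exclusive:          # exclusive=True hides the agent's own tools
        registry.clear()
    registry.update(extra)
    try:
        yield
    finally:
        registry.clear()
        registry.update(saved)

async def run_job(registry, job_tools, include_agent_tools=True):
    with temporary_tools(registry, job_tools, exclusive=not include_agent_tools):
        return sorted(registry)  # tools visible while the job runs

registry = {"search": 1}
during = asyncio.run(run_job(registry, {"summarize": 1}, include_agent_tools=False))
```

The `finally` block is what guarantees the agent's tool set survives a failing job, mirroring the `try`/`except` around the real `run()` call.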
                                    

                                    run_stream async

                                    run_stream(
                                        *prompt: AnyPromptType | Image | PathLike[str],
                                        output_type: type[OutputDataT] | None = None,
                                        model: ModelType = None,
                                        tool_choice: str | list[str] | None = None,
                                        store_history: bool = True,
                                        usage_limits: UsageLimits | None = None,
                                        message_id: str | None = None,
                                        conversation_id: str | None = None,
                                        messages: list[ChatMessage[Any]] | None = None,
                                        wait_for_connections: bool | None = None,
                                    ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]
                                    

                                    Run agent with prompt and get a streaming response.

                                    Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `prompt` | `AnyPromptType \| Image \| PathLike[str]` | User query or instruction | `()` |
| `output_type` | `type[OutputDataT] \| None` | Optional type for structured responses | `None` |
| `model` | `ModelType` | Optional model override | `None` |
| `tool_choice` | `str \| list[str] \| None` | Filter tool choice by name | `None` |
| `store_history` | `bool` | Whether the message exchange should be added to the context window | `True` |
| `usage_limits` | `UsageLimits \| None` | Optional usage limits for the model | `None` |
| `message_id` | `str \| None` | Optional message id for the returned message. Automatically generated if not provided. | `None` |
| `conversation_id` | `str \| None` | Optional conversation id for the returned message. | `None` |
| `messages` | `list[ChatMessage[Any]] \| None` | Optional list of messages to replace the conversation history | `None` |
| `wait_for_connections` | `bool \| None` | Whether to wait for connected agents to complete | `None` |

                                    Returns: An async iterator yielding streaming events with final message embedded.

                                    Raises:

| Type | Description |
|------|-------------|
| `UnexpectedModelBehavior` | If the model fails or behaves unexpectedly |
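Consuming the returned iterator follows the usual streaming pattern: iterate events, accumulate text deltas, and pick up the final message when the stream ends. A hypothetical self-contained sketch (`fake_stream` and its event dicts stand in for the real `RichAgentStreamEvent` objects):

```python
import asyncio

async def fake_stream(prompt):
    # Stand-in for Agent.run_stream(); yields delta events then a final one.
    for chunk in ["Hel", "lo, ", "world"]:
        await asyncio.sleep(0)  # simulate asynchronous arrival
        yield {"type": "text_delta", "text": chunk}
    yield {"type": "final", "text": "Hello, world"}

async def main():
    parts, final = [], None
    async for event in fake_stream("greet"):
        if event["type"] == "text_delta":
            parts.append(event["text"])   # incremental UI update goes here
        else:
            final = event["text"]         # complete message, embedded last
    return "".join(parts), final

streamed, final = asyncio.run(main())
```

The accumulated deltas and the embedded final message agree, so a UI can render incrementally and still store the authoritative final result.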

                                    Source code in src/llmling_agent/agent/agent.py
                                    @method_spawner
                                    async def run_stream(
                                        self,
                                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        output_type: type[OutputDataT] | None = None,
                                        model: ModelType = None,
                                        tool_choice: str | list[str] | None = None,
                                        store_history: bool = True,
                                        usage_limits: UsageLimits | None = None,
                                        message_id: str | None = None,
                                        conversation_id: str | None = None,
                                        messages: list[ChatMessage[Any]] | None = None,
                                        wait_for_connections: bool | None = None,
                                    ) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
                                        """Run agent with prompt and get a streaming response.
                                    
                                        Args:
                                            prompt: User query or instruction
                                            output_type: Optional type for structured responses
                                            model: Optional model override
                                            tool_choice: Filter tool choice by name
                                            store_history: Whether the message exchange should be added to the
                                                           context window
                                            usage_limits: Optional usage limits for the model
                                            message_id: Optional message id for the returned message.
                                                        Automatically generated if not provided.
                                            conversation_id: Optional conversation id for the returned message.
                                            messages: Optional list of messages to replace the conversation history
                                            wait_for_connections: Whether to wait for connected agents to complete
                                        Returns:
                                            An async iterator yielding streaming events with final message embedded.
                                    
                                        Raises:
                                            UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                        """
                                        message_id = message_id or str(uuid4())
                                        user_msg, prompts = await self.pre_run(*prompt)
                                        final_type = to_type(output_type) if output_type else self._output_type
                                        start_time = time.perf_counter()
                                        sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                        tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                        message_history = (
                                            messages if messages is not None else self.conversation.get_history()
                                        )
                                        try:
                                            # Collect chunks for final message construction
                                            chunks = []
                                            usage = None
                                            model_name = None
                                            output = None
                                            finish_reason = None
                                            parts: Sequence[Any] = []
                                            provider_name = None
                                            provider_response_id = None
                                    
                                            provider_stream = self._provider.stream_events(
                                                *prompts,
                                                message_id=message_id,
                                                message_history=message_history,
                                                output_type=final_type,
                                                model=model,
                                                tools=tools,
                                                usage_limits=usage_limits,
                                                system_prompt=sys_prompt,
                                            )
                                    
                                            async with merge_queue_into_iterator(
                                                provider_stream, self._progress_queue
                                            ) as events:
                                                async for event in events:
                                                    # Pass through PydanticAI events and collect chunks
                                                    match event:
                                                        case PartDeltaEvent(delta=TextPartDelta(content_delta=delta)):
                                                            chunks.append(delta)
                                                            yield event  # Pass through original event
                                                        case AgentRunResultEvent(result=result):
                                                            usage = result.usage()
                                                            model_name = result.response.model_name
                                                            finish_reason = result.response.finish_reason
                                                            provider_name = result.response.provider_name
                                                            provider_response_id = result.response.provider_response_id
                                                            parts = result.response.parts
                                    
                                                            output = result.output
                                                            # Don't yield AgentRunResultEvent,
                                                            # we'll send our own final event
                                                        case _:
                                                            yield event  # Pass through other events
                                    
                                            # Build final chat message
                                            cost_info = None
                                            if model_name and usage and model_name != "test":
                                                cost_info = await TokenCost.from_usage(usage, model_name)
                                    
                                            response_msg = ChatMessage[OutputDataT](
                                                content=output,  # type: ignore
                                                role="assistant",
                                                name=self.name,
                                                model_name=model_name,
                                                message_id=message_id,
                                                conversation_id=user_msg.conversation_id,
                                                cost_info=cost_info,
                                                response_time=time.perf_counter() - start_time,
                                                provider_response_id=provider_response_id,
                                                parts=parts,
                                                provider_name=provider_name,
                                                finish_reason=finish_reason,
                                            )
                                    
                                            # Yield final event with embedded message
                                            yield StreamCompleteEvent(message=response_msg)
                                            self.message_sent.emit(response_msg)
                                            await self.log_message(response_msg)
                                            if store_history:
                                                self.conversation.add_chat_messages([user_msg, response_msg])
                                            await self.connections.route_message(
                                                response_msg,
                                                wait=wait_for_connections,
                                            )
                                    
                                        except Exception as e:
                                            logger.exception("Agent stream failed", agent_name=self.name)
                                            self.run_failed.emit("Agent stream failed", e)
                                            raise
                                    

                                    set_model

                                    set_model(model: ModelType)
                                    

                                    Set the model for this agent.

                                    Parameters:

                                         model (ModelType): New model to use (name or instance). Required.
                                    Source code in src/llmling_agent/agent/agent.py
                                    def set_model(self, model: ModelType):
                                        """Set the model for this agent.
                                    
                                        Args:
                                            model: New model to use (name or instance)
                                    
                                        """
                                        self._provider.set_model(model)
                                    

                                    set_output_type

                                    set_output_type(
                                        output_type: type | str | StructuredResponseConfig | None,
                                        *,
                                        tool_name: str | None = None,
                                        tool_description: str | None = None,
                                    )
                                    

                                    Set or update the result type for this agent.

                                    Parameters:

                                         output_type (type | str | StructuredResponseConfig | None): New result
                                             type. Can be a Python type for validation, the name of a response
                                             definition, a response definition instance, or None to reset to
                                             unstructured mode. Required.
                                         tool_name (str | None): Optional override for tool name. Default: None.
                                         tool_description (str | None): Optional override for tool description.
                                             Default: None.
                                    Source code in src/llmling_agent/agent/agent.py
                                    def set_output_type(
                                        self,
                                        output_type: type | str | StructuredResponseConfig | None,
                                        *,
                                        tool_name: str | None = None,
                                        tool_description: str | None = None,
                                    ):
                                        """Set or update the result type for this agent.
                                    
                                        Args:
                                            output_type: New result type, can be:
                                                - A Python type for validation
                                                - Name of a response definition
                                                - Response definition instance
                                                - None to reset to unstructured mode
                                            tool_name: Optional override for tool name
                                            tool_description: Optional override for tool description
                                        """
                                        logger.debug("Setting result type", output_type=output_type, agent_name=self.name)
                                        self._output_type = to_type(output_type)
                                    

                                    share async

                                    share(
                                        target: Agent[TDeps, Any],
                                        *,
                                        tools: list[str] | None = None,
                                        resources: list[str] | None = None,
                                        history: bool | int | None = None,
                                        token_limit: int | None = None,
                                    )
                                    

                                    Share capabilities and knowledge with another agent.

                                    Parameters:

                                         target (Agent[TDeps, Any]): Agent to share with. Required.
                                         tools (list[str] | None): List of tool names to share. Default: None.
                                         resources (list[str] | None): List of resource names to share.
                                             Default: None.
                                         history (bool | int | None): Share conversation history. True shares
                                             the full history; an int shares that many most recent messages;
                                             None shares nothing. Default: None.
                                         token_limit (int | None): Optional max tokens for history. Default: None.

                                    Raises:

                                         ValueError: If requested items don't exist.
                                         RuntimeError: If runtime not available for resources.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def share(
                                        self,
                                        target: Agent[TDeps, Any],
                                        *,
                                        tools: list[str] | None = None,
                                        resources: list[str] | None = None,
                                        history: bool | int | None = None,  # bool or number of messages
                                        token_limit: int | None = None,
                                    ):
                                        """Share capabilities and knowledge with another agent.
                                    
                                        Args:
                                            target: Agent to share with
                                            tools: List of tool names to share
                                            resources: List of resource names to share
                                            history: Share conversation history:
                                                    - True: Share full history
                                                    - int: Number of most recent messages to share
                                                    - None: Don't share history
                                            token_limit: Optional max tokens for history
                                    
                                        Raises:
                                            ValueError: If requested items don't exist
                                            RuntimeError: If runtime not available for resources
                                        """
                                        # Share tools if requested
                                        for name in tools or []:
                                            if tool := await self.tools.get_tool(name):
                                                meta = {"shared_from": self.name}
                                                target.tools.register_tool(tool.callable, metadata=meta)
                                            else:
                                                msg = f"Tool not found: {name}"
                                                raise ValueError(msg)
                                    
                                        # Share resources if requested
                                        if resources:
                                            if not self.runtime:
                                                msg = "No runtime available for sharing resources"
                                                raise RuntimeError(msg)
                                            for name in resources:
                                                if resource := self.runtime.get_resource(name):
                                                    await target.conversation.load_context_source(resource)  # type: ignore
                                                else:
                                                    msg = f"Resource not found: {name}"
                                                    raise ValueError(msg)
                                    
                                        # Share history if requested
                                        if history:
                                            history_text = await self.conversation.format_history(
                                                max_tokens=token_limit,
                                                num_messages=history if isinstance(history, int) else None,
                                            )
                                            target.conversation.add_context_message(
                                                history_text, source=self.name, metadata={"type": "shared_history"}
                                            )
                                    

                                    stop async

                                    stop()
                                    

                                    Stop continuous execution if running.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def stop(self):
                                        """Stop continuous execution if running."""
                                        if self._background_task and not self._background_task.done():
                                            self._background_task.cancel()
                                            await self._background_task
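The cancel-and-await pattern used above can be demonstrated stand-alone. The method awaits the cancelled task directly; this sketch catches `CancelledError` so the outcome can be observed:

```python
import asyncio

# Sketch of stop()'s pattern: cancel a background task, then await it so
# cancellation has fully completed before clearing the reference.
async def demo() -> bool:
    async def forever():
        while True:
            await asyncio.sleep(0.01)

    task = asyncio.create_task(forever())
    await asyncio.sleep(0)  # let the task start running
    task.cancel()
    try:
        await task          # wait until cancellation has completed
    except asyncio.CancelledError:
        pass
    return task.cancelled()

cancelled = asyncio.run(demo())
```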
                                            self._background_task = None
                                    

                                    temporary_state async

                                    temporary_state(
                                        *,
                                        system_prompts: list[AnyPromptType] | None = None,
                                        output_type: type[T] | None = None,
                                        replace_prompts: bool = False,
                                        tools: list[ToolType] | None = None,
                                        replace_tools: bool = False,
                                        history: list[AnyPromptType] | SessionQuery | None = None,
                                        replace_history: bool = False,
                                        pause_routing: bool = False,
                                        model: ModelType | None = None,
                                        provider: AgentProvider | None = None,
                                    ) -> AsyncIterator[Self | Agent[T]]
                                    

                                    Temporarily modify agent state.

                                    Parameters:

                                         system_prompts (list[AnyPromptType] | None): Temporary system prompts
                                             to use. Default: None.
                                         output_type (type[T] | None): Temporary output type to use.
                                             Default: None.
                                         replace_prompts (bool): Whether to replace existing prompts.
                                             Default: False.
                                         tools (list[ToolType] | None): Temporary tools to make available.
                                             Default: None.
                                         replace_tools (bool): Whether to replace existing tools. Default: False.
                                         history (list[AnyPromptType] | SessionQuery | None): Conversation
                                             history (prompts or query). Default: None.
                                         replace_history (bool): Whether to replace existing history.
                                             Default: False.
                                         pause_routing (bool): Whether to pause message routing. Default: False.
                                         model (ModelType | None): Temporary model override. Default: None.
                                         provider (AgentProvider | None): Temporary provider override.
                                             Default: None.
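The stacked-cleanup behavior this method relies on can be sketched independently of the agent: each temporary modification registers a context manager on an `AsyncExitStack`, and everything is unwound in reverse order when the block exits.

```python
import asyncio
import contextlib

events: list[str] = []

# Stand-in for one temporary modification (prompts, tools, history, ...).
@contextlib.asynccontextmanager
async def temporary(name: str):
    events.append(f"enter {name}")
    try:
        yield
    finally:
        events.append(f"exit {name}")  # always restored, even on error

async def demo():
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(temporary("prompts"))
        await stack.enter_async_context(temporary("history"))
        events.append("body")  # agent runs with the temporary state here

asyncio.run(demo())
```

Exits happen in reverse registration order, which is why the method can layer prompts, tools, history, and routing changes and still restore them cleanly.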
                                    Source code in src/llmling_agent/agent/agent.py
                                    @asynccontextmanager
                                    async def temporary_state[T](
                                        self,
                                        *,
                                        system_prompts: list[AnyPromptType] | None = None,
                                        output_type: type[T] | None = None,
                                        replace_prompts: bool = False,
                                        tools: list[ToolType] | None = None,
                                        replace_tools: bool = False,
                                        history: list[AnyPromptType] | SessionQuery | None = None,
                                        replace_history: bool = False,
                                        pause_routing: bool = False,
                                        model: ModelType | None = None,
                                        provider: AgentProvider | None = None,
                                    ) -> AsyncIterator[Self | Agent[T]]:
                                        """Temporarily modify agent state.
                                    
                                        Args:
                                            system_prompts: Temporary system prompts to use
                                            output_type: Temporary output type to use
                                            replace_prompts: Whether to replace existing prompts
                                            tools: Temporary tools to make available
                                            replace_tools: Whether to replace existing tools
                                            history: Conversation history (prompts or query)
                                            replace_history: Whether to replace existing history
                                            pause_routing: Whether to pause message routing
                                            model: Temporary model override
                                            provider: Temporary provider override
                                        """
                                        old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                                        old_provider = self._provider
                                        if output_type:
                                            old_type = self._output_type
                                            self.set_output_type(output_type)  # type: ignore
                                        async with AsyncExitStack() as stack:
                                            # System prompts (async)
                                            if system_prompts is not None:
                                                await stack.enter_async_context(
                                                    self.sys_prompts.temporary_prompt(
                                                        system_prompts, exclusive=replace_prompts
                                                    )
                                                )
                                    
                                            # Tools (sync)
                                            if tools is not None:
                                                stack.enter_context(
                                                    self.tools.temporary_tools(tools, exclusive=replace_tools)
                                                )
                                    
                                            # History (async)
                                            if history is not None:
                                                await stack.enter_async_context(
                                                    self.conversation.temporary_state(
                                                        history, replace_history=replace_history
                                                    )
                                                )
                                    
                                            # Routing (async)
                                            if pause_routing:
                                                await stack.enter_async_context(self.connections.paused_routing())
                                    
                                            # Model/Provider
                                            if provider is not None:
                                                self._provider = provider
                                            elif model is not None:
                                                self._provider.set_model(model)
                                    
                                            try:
                                                yield self
                                            finally:
                                                # Restore model/provider
                                                if provider is not None:
                                                    self._provider = old_provider
                                                elif model is not None and old_model:
                                                    self._provider.set_model(old_model)
                                                if output_type:
                                                    self.set_output_type(old_type)
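The restore logic above hinges on entering every override as a context manager on one shared `AsyncExitStack`, so unwinding the stack undoes them in reverse order. A minimal self-contained sketch of that pattern (the `Settings` class and its fields are hypothetical, not part of llmling_agent):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager


class Settings:
    """Hypothetical mutable state standing in for an agent's config."""

    def __init__(self) -> None:
        self.prompts: list[str] = ["base prompt"]
        self.model = "default-model"


@asynccontextmanager
async def temporary_prompts(settings: Settings, prompts: list[str]):
    old = settings.prompts
    settings.prompts = prompts
    try:
        yield
    finally:
        settings.prompts = old  # restored on exit, even on error


@asynccontextmanager
async def temporary_model(settings: Settings, model: str):
    old = settings.model
    settings.model = model
    try:
        yield
    finally:
        settings.model = old


async def main() -> list[tuple[list[str], str]]:
    settings = Settings()
    seen = []
    async with AsyncExitStack() as stack:
        # Each override is entered on the same stack, like temporary_state does
        await stack.enter_async_context(temporary_prompts(settings, ["override"]))
        await stack.enter_async_context(temporary_model(settings, "temp-model"))
        seen.append((settings.prompts, settings.model))  # overridden state
    seen.append((settings.prompts, settings.model))  # restored state
    return seen


if __name__ == "__main__":
    inside, after = asyncio.run(main())
    print(inside, after)
```

Because exits run in reverse entry order, overrides that depend on each other (e.g. provider before model) unwind cleanly.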
                                    

                                    to_structured

                                    to_structured(
                                        output_type: type[NewOutputDataT] | str | StructuredResponseConfig,
                                        *,
                                        tool_name: str | None = None,
                                        tool_description: str | None = None,
                                    ) -> Agent[TDeps, NewOutputDataT] | Self
                                    

                                    Convert this agent to a structured agent.

Parameters:

    output_type (type[NewOutputDataT] | str | StructuredResponseConfig, required)
        Type for structured responses. Can be:
        - A Python type (Pydantic model)
        - The name of a response definition from context
        - A complete response definition

    tool_name (str | None, default None)
        Optional override for the result tool name

    tool_description (str | None, default None)
        Optional override for the result tool description

Returns:

    Agent[TDeps, NewOutputDataT] | Self
        Typed agent

                                    Source code in src/llmling_agent/agent/agent.py
                                    def to_structured[NewOutputDataT](
                                        self,
                                        output_type: type[NewOutputDataT] | str | StructuredResponseConfig,
                                        *,
                                        tool_name: str | None = None,
                                        tool_description: str | None = None,
                                    ) -> Agent[TDeps, NewOutputDataT] | Self:
                                        """Convert this agent to a structured agent.
                                    
                                        Args:
                                            output_type: Type for structured responses. Can be:
                                                - A Python type (Pydantic model)
                                                - Name of response definition from context
                                                - Complete response definition
                                            tool_name: Optional override for result tool name
                                            tool_description: Optional override for result tool description
                                    
                                        Returns:
                                            Typed Agent
                                        """
                                        self.set_output_type(output_type)  # type: ignore
                                        return self
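A sketch of the kind of Pydantic model you could pass as `output_type` (the `Answer` type and its fields are assumptions for illustration, not part of llmling_agent; this assumes pydantic v2 is installed):

```python
from pydantic import BaseModel


class Answer(BaseModel):
    """Hypothetical structured output type."""

    text: str
    confidence: float


# A structured agent validates responses against the model's schema;
# the same schema is what a result tool would advertise:
schema = Answer.model_json_schema()
print(sorted(schema["properties"]))  # ['confidence', 'text']

answer = Answer.model_validate({"text": "42", "confidence": 0.9})
print(answer.text)
```

With `agent.to_structured(Answer)`, subsequent runs would be typed as `Agent[TDeps, Answer]`, so `result.content` carries the validated model instead of plain text.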
                                    

                                    to_tool

                                    to_tool(
                                        *,
                                        name: str | None = None,
                                        reset_history_on_run: bool = True,
                                        pass_message_history: bool = False,
                                        parent: Agent[Any, Any] | None = None,
                                    ) -> Tool
                                    

                                    Create a tool from this agent.

Parameters:

    name (str | None, default None)
        Optional tool name override

    reset_history_on_run (bool, default True)
        Clear the agent's history before each run

    pass_message_history (bool, default False)
        Pass the parent's message history to the agent

    parent (Agent[Any, Any] | None, default None)
        Optional parent agent for history/context sharing
                                    Source code in src/llmling_agent/agent/agent.py
                                    def to_tool(
                                        self,
                                        *,
                                        name: str | None = None,
                                        reset_history_on_run: bool = True,
                                        pass_message_history: bool = False,
                                        parent: Agent[Any, Any] | None = None,
                                    ) -> Tool:
                                        """Create a tool from this agent.
                                    
                                        Args:
                                            name: Optional tool name override
                                            reset_history_on_run: Clear agent's history before each run
                                            pass_message_history: Pass parent's message history to agent
                                            parent: Optional parent agent for history/context sharing
                                        """
                                        tool_name = name or f"ask_{self.name}"
                                    
                                        # TODO: should probably make output type configurable
                                        async def wrapped_tool(prompt: str) -> Any:
                                            if pass_message_history and not parent:
                                                msg = "Parent agent required for message history sharing"
                                                raise ToolError(msg)
                                    
                                            if reset_history_on_run:
                                                self.conversation.clear()
                                    
    history = None
    if pass_message_history and parent:
        history = parent.conversation.get_history()
        old = self.conversation.get_history()
        self.conversation.set_history(history)
    result = await self.run(prompt)
    if history is not None:
        self.conversation.set_history(old)
                                            return result.data
                                    
                                        normalized_name = self.name.replace("_", " ").title()
                                        docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                        if self.description:
                                            docstring = f"{docstring}\n\n{self.description}"
                                    
                                        wrapped_tool.__doc__ = docstring
                                        wrapped_tool.__name__ = tool_name
                                    
                                        return Tool.from_callable(
                                            wrapped_tool,
                                            name_override=tool_name,
                                            description_override=docstring,
                                        )
                                    

                                    validate_against async

                                    validate_against(prompt: str, criteria: type[OutputDataT], **kwargs: Any) -> bool
                                    

                                    Check if agent's response satisfies stricter criteria.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def validate_against(
                                        self,
                                        prompt: str,
                                        criteria: type[OutputDataT],
                                        **kwargs: Any,
                                    ) -> bool:
                                        """Check if agent's response satisfies stricter criteria."""
                                        result = await self.run(prompt, **kwargs)
                                        try:
                                            criteria.model_validate(result.content.model_dump())  # type: ignore
                                        except ValidationError:
                                            return False
                                        else:
                                            return True
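The check above is plain Pydantic: dump the result and re-validate it against the stricter model. A self-contained sketch of the same pattern (the `Response`/`StrictResponse` models are hypothetical; assumes pydantic v2):

```python
from pydantic import BaseModel, Field, ValidationError


class Response(BaseModel):
    """Hypothetical loose output type."""

    score: float


class StrictResponse(BaseModel):
    """Stricter criteria: score must be a probability."""

    score: float = Field(ge=0.0, le=1.0)


def satisfies(result: BaseModel, criteria: type[BaseModel]) -> bool:
    # Same shape as validate_against: round-trip through the stricter model
    try:
        criteria.model_validate(result.model_dump())
    except ValidationError:
        return False
    else:
        return True


print(satisfies(Response(score=0.5), StrictResponse))  # True
print(satisfies(Response(score=2.0), StrictResponse))  # False
```

This only checks field-level constraints; it cannot tell whether the stricter type's semantics beyond validation hold.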
                                    

                                    wait async

                                    wait() -> ChatMessage[OutputDataT]
                                    

                                    Wait for background execution to complete.

                                    Source code in src/llmling_agent/agent/agent.py
                                    async def wait(self) -> ChatMessage[OutputDataT]:
                                        """Wait for background execution to complete."""
                                        if not self._background_task:
                                            msg = "No background task running"
                                            raise RuntimeError(msg)
                                        if self._infinite:
                                            msg = "Cannot wait on infinite execution"
                                            raise RuntimeError(msg)
                                        try:
                                            return await self._background_task
                                        finally:
                                            self._background_task = None
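The single-slot task pattern `wait()` implements can be sketched without the agent machinery (the `BackgroundRunner` class is hypothetical, not part of llmling_agent):

```python
import asyncio


class BackgroundRunner:
    """Hypothetical sketch of the one-background-task pattern."""

    def __init__(self) -> None:
        self._background_task: asyncio.Task | None = None

    def start(self, coro) -> None:
        # Must be called while an event loop is running
        self._background_task = asyncio.ensure_future(coro)

    async def wait(self) -> str:
        if not self._background_task:
            raise RuntimeError("No background task running")
        try:
            return await self._background_task
        finally:
            # Clear the slot so wait() can only be consumed once per run
            self._background_task = None


async def job() -> str:
    await asyncio.sleep(0)
    return "done"


async def main() -> str:
    runner = BackgroundRunner()
    runner.start(job())
    return await runner.wait()


if __name__ == "__main__":
    print(asyncio.run(main()))  # done
```

Clearing `_background_task` in `finally` is what makes a second `wait()` raise instead of re-awaiting a finished task.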
                                    

                                    AgentConfig

                                    Bases: NodeConfig

                                    Configuration for a single agent in the system.

Defines an agent's complete configuration including its model, environment, and behavior settings. Each agent can have its own:

- Language model configuration
- Environment setup (tools and resources)
- Response type definitions
- System prompts and default user prompts

                                    The configuration can be loaded from YAML or created programmatically.

                                    Source code in src/llmling_agent/models/agents.py
                                    class AgentConfig(NodeConfig):
                                        """Configuration for a single agent in the system.
                                    
                                        Defines an agent's complete configuration including its model, environment,
                                        and behavior settings. Each agent can have its own:
                                        - Language model configuration
                                        - Environment setup (tools and resources)
                                        - Response type definitions
                                        - System prompts and default user prompts
                                    
                                        The configuration can be loaded from YAML or created programmatically.
                                        """
                                    
                                        provider: ProviderConfig | ProviderName = "pydantic_ai"
                                        """Provider configuration or shorthand type"""
                                    
                                        inherits: str | None = None
                                        """Name of agent config to inherit from"""
                                    
                                        model: str | AnyModelConfig | None = None
                                        """The model to use for this agent. Can be either a simple model name
                                        string (e.g. 'openai:gpt-5') or a structured model definition."""
                                    
                                        tools: list[ToolConfig | str] = Field(default_factory=list)
                                        """A list of tools to register with this agent."""
                                    
                                        toolsets: list[ToolsetConfig] = Field(default_factory=list)
                                        """Toolset configurations for extensible tool collections."""
                                    
                                        environment: str | AgentEnvironment | None = None
                                        """Environments configuration (path or object)"""
                                    
                                        session: str | SessionQuery | MemoryConfig | None = None
                                        """Session configuration for conversation recovery."""
                                    
                                        output_type: str | StructuredResponseConfig | None = None
                                        """Name of the response definition to use"""
                                    
                                        retries: int = 1
                                        """Number of retries for failed operations (maps to pydantic-ai's retries)"""
                                    
                                        result_tool_name: str = "final_result"
                                        """Name of the tool used for structured responses"""
                                    
                                        result_tool_description: str | None = None
                                        """Custom description for the result tool"""
                                    
                                        output_retries: int | None = None
                                        """Max retries for result validation"""
                                    
                                        end_strategy: EndStrategy = "early"
                                        """The strategy for handling multiple tool calls when a final result is found"""
                                    
                                        avatar: str | None = None
                                        """URL or path to agent's avatar image"""
                                    
                                        system_prompts: Sequence[str | PromptConfig] = Field(default_factory=list)
                                        """System prompts for the agent. Can be strings or structured prompt configs."""
                                    
                                        user_prompts: list[str] = Field(default_factory=list)
                                        """Default user prompts for the agent"""
                                    
                                        # context_sources: list[ContextSource] = Field(default_factory=list)
                                        # """Initial context sources to load"""
                                    
                                        config_file_path: str | None = None
                                        """Config file path for resolving environment."""
                                    
                                        knowledge: Knowledge | None = None
                                        """Knowledge sources for this agent."""
                                    
                                        workers: list[WorkerConfig] = Field(default_factory=list)
                                        """Worker agents which will be available as tools."""
                                    
                                        requires_tool_confirmation: ToolConfirmationMode = "per_tool"
                                        """How to handle tool confirmation:
                                        - "always": Always require confirmation for all tools
                                        - "never": Never require confirmation (ignore tool settings)
                                        - "per_tool": Use individual tool settings
                                        """
                                    
                                        debug: bool = False
                                        """Enable debug output for this agent."""
                                    
                                        def is_structured(self) -> bool:
                                            """Check if this config defines a structured agent."""
                                            return self.output_type is not None
                                    
                                        @model_validator(mode="before")
                                        @classmethod
                                        def validate_output_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                            """Convert result type and apply its settings."""
                                            output_type = data.get("output_type")
                                            if isinstance(output_type, dict):
                                                # Extract response-specific settings
                                                tool_name = output_type.pop("result_tool_name", None)
                                                tool_description = output_type.pop("result_tool_description", None)
                                                retries = output_type.pop("output_retries", None)
                                    
                                                # Convert the nested schema dict to an InlineSchemaDef
                                                schema = output_type.setdefault("response_schema", {})
                                                schema.setdefault("type", "inline")
                                                output_type["response_schema"] = InlineSchemaDef(**schema)
                                    
                                                # Apply extracted settings to agent config
                                                if tool_name:
                                                    data["result_tool_name"] = tool_name
                                                if tool_description:
                                                    data["result_tool_description"] = tool_description
                                                if retries is not None:
                                                    data["output_retries"] = retries
                                    
                                            return data
                                    
                                        @model_validator(mode="before")
                                        @classmethod
                                        def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                            """Convert model inputs to appropriate format."""
                                            if isinstance((model := data.get("model")), str):
                                                data["model"] = {"type": "string", "identifier": model}
                                            return data
                                    
                                        def get_toolsets(self) -> list[ResourceProvider]:
                                            """Get all resource providers for this agent."""
                                            providers: list[ResourceProvider] = []
                                    
                                            # Add providers from toolsets
                                            for toolset_config in self.toolsets:
                                                try:
                                                    provider = toolset_config.get_provider()
                                                    providers.append(provider)
                                                except Exception as e:
                                                    msg = "Failed to create provider for toolset"
                                                    logger.exception(msg, toolset_config)
                                                    raise ValueError(msg) from e
                                    
                                            return providers
                                    
                                        def get_tool_provider(self) -> ResourceProvider | None:
                                            """Get tool provider for this agent."""
                                            from llmling_agent.tools.base import Tool
                                    
                                            # Create provider for static tools
                                            if not self.tools:
                                                return None
                                            static_tools: list[Tool] = []
                                            for tool_config in self.tools:
                                                try:
                                                    match tool_config:
                                                        case str():
                                                            if tool_config.startswith("crewai_tools"):
                                                                obj = import_class(tool_config)()
                                                                static_tools.append(Tool.from_crewai_tool(obj))
                                                            elif tool_config.startswith("langchain"):
                                                                obj = import_class(tool_config)()
                                                                static_tools.append(Tool.from_langchain_tool(obj))
                                                            else:
                                                                tool = Tool.from_callable(tool_config)
                                                                static_tools.append(tool)
                                                        case BaseToolConfig():
                                                            static_tools.append(tool_config.get_tool())
                                                except Exception:
                                                    logger.exception("Failed to load tool %r", tool_config)
                                                    continue
                                    
                                            return StaticResourceProvider(name="builtin", tools=static_tools)
                                    
                                        def get_session_config(self) -> MemoryConfig:
                                            """Get resolved memory configuration."""
                                            match self.session:
                                                case str() | UUID():
                                                    return MemoryConfig(session=SessionQuery(name=str(self.session)))
                                                case SessionQuery():
                                                    return MemoryConfig(session=self.session)
                                                case MemoryConfig():
                                                    return self.session
                                                case None:
                                                    return MemoryConfig()
                                                case _:
                                                    msg = f"Invalid session configuration: {self.session}"
                                                    raise ValueError(msg)
                                    
                                        def get_system_prompts(self) -> list[BasePrompt]:
                                            """Get all system prompts as BasePrompts."""
                                            from llmling_agent_config.system_prompts import (
                                                FilePromptConfig,
                                                FunctionPromptConfig,
                                                LibraryPromptConfig,
                                                StaticPromptConfig,
                                            )
                                    
                                            prompts: list[BasePrompt] = []
                                            for prompt in self.system_prompts:
                                                match prompt:
                                                    case str():
                                                        # Convert string to StaticPrompt
                                                        static_prompt = StaticPrompt(
                                                            name="system",
                                                            description="System prompt",
                                                            messages=[PromptMessage(role="system", content=prompt)],
                                                        )
                                                        prompts.append(static_prompt)
                                                    case StaticPromptConfig(content=content):
                                                        # Convert StaticPromptConfig to StaticPrompt
                                                        static_prompt = StaticPrompt(
                                                            name="system",
                                                            description="System prompt",
                                                            messages=[PromptMessage(role="system", content=content)],
                                                        )
                                                        prompts.append(static_prompt)
                                                    case FilePromptConfig(path=path):
                                                        # Load and convert file-based prompt
                                    
                                                        template_path = Path(path)
                                                        if not template_path.is_absolute() and self.config_file_path:
                                                            base_path = Path(self.config_file_path).parent
                                                            template_path = base_path / path
                                    
                                                        template_content = template_path.read_text("utf-8")
                                                        # Create a template-based prompt
                                                        # (for now as StaticPrompt with placeholder)
                                                        static_prompt = StaticPrompt(
                                                            name="system",
                                                            description=f"File prompt: {path}",
                                                            messages=[PromptMessage(role="system", content=template_content)],
                                                        )
                                                        prompts.append(static_prompt)
                                                    case LibraryPromptConfig(reference=reference):
                                                        # Create placeholder for library prompts (resolved by manifest)
                                                        msg = PromptMessage(role="system", content=f"[LIBRARY:{reference}]")
                                                        static_prompt = StaticPrompt(
                                                            name="system",
                                                            description=f"Library: {reference}",
                                                            messages=[msg],
                                                        )
                                                        prompts.append(static_prompt)
                                                    case FunctionPromptConfig(arguments=arguments, function=function):
                                                        # Import and call the function to get prompt content
                                                        content = function(**arguments)
                                                        static_prompt = StaticPrompt(
                                                            name="system",
                                                            description=f"Function prompt: {function}",
                                                            messages=[PromptMessage(role="system", content=content)],
                                                        )
                                                        prompts.append(static_prompt)
                                                    case BasePrompt():
                                                        prompts.append(prompt)
                                            return prompts
                                    
                                        def get_provider(self) -> AgentProvider:
                                            """Get resolved provider instance.
                                    
                                            Creates provider instance based on configuration:
                                            - Full provider config: Use as-is
                                            - Shorthand type: Create default provider config
                                            """
                                            # If string shorthand is used, convert to default provider config
                                            from llmling_agent_config.providers import (
                                                CallbackProviderConfig,
                                                HumanProviderConfig,
                                                PydanticAIProviderConfig,
                                            )
                                    
                                            provider_config = self.provider
                                            if isinstance(provider_config, str):
                                                match provider_config:
                                                    case "pydantic_ai":
                                                        provider_config = PydanticAIProviderConfig(model=self.model)
                                                    case "human":
                                                        provider_config = HumanProviderConfig()
                                                    case _:
                                                        try:
                                                            fn = import_callable(provider_config)
                                                            provider_config = CallbackProviderConfig(callback=fn)
                                                        except Exception as e:
                                                            msg = f"Invalid provider type: {provider_config}"
                                                            raise ValueError(msg) from e
                                    
                                            # Create provider instance from config
                                            return provider_config.get_provider()
                                    
                                        def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                            """Render system prompts with context."""
                                            from llmling_agent_config.system_prompts import (
                                                FilePromptConfig,
                                                FunctionPromptConfig,
                                                LibraryPromptConfig,
                                                StaticPromptConfig,
                                            )
                                    
                                            if not context:
                                                # Default context
                                                context = {"name": self.name, "id": 1, "model": self.model}
                                    
                                            rendered_prompts: list[str] = []
                                            for prompt in self.system_prompts:
                                                match prompt:
                                                    case (str() as content) | StaticPromptConfig(content=content):
                                                        rendered_prompts.append(render_prompt(content, {"agent": context}))
                                                    case FilePromptConfig(path=path, variables=variables):
                                                        # Load and render Jinja template from file
                                    
                                                        template_path = Path(path)
                                                        if not template_path.is_absolute() and self.config_file_path:
                                                            base_path = Path(self.config_file_path).parent
                                                            template_path = base_path / path
                                    
                                                        template_content = template_path.read_text("utf-8")
                                                        template_ctx = {"agent": context, **variables}
                                                        rendered_prompts.append(render_prompt(template_content, template_ctx))
                                                    case LibraryPromptConfig(reference=reference):
                                                        # This will be handled by the manifest's get_agent method
                                                        # For now, just add a placeholder
                                                        rendered_prompts.append(f"[LIBRARY:{reference}]")
                                                    case FunctionPromptConfig(function=function, arguments=arguments):
                                                        # Import and call the function to get prompt content
                                                        content = function(**arguments)
                                                        rendered_prompts.append(render_prompt(content, {"agent": context}))
                                    
                                            return rendered_prompts
                                    
                                        def get_config(self) -> Config:
                                            """Get configuration for this agent."""
                                            match self.environment:
                                                case None:
                                                    # Create minimal config
                                                    caps = LLMCapabilitiesConfig()
                                                    global_settings = GlobalSettings(llm_capabilities=caps)
                                                    return Config(global_settings=global_settings)
                                                case str() as path:
                                                    # Backward compatibility: treat as file path
                                                    resolved = self._resolve_environment_path(path, self.config_file_path)
                                                    return Config.from_file(resolved)
                                            case FileEnvironment() as env:
                                                    # Handle FileEnvironment instance
                                                    resolved = env.get_file_path()
                                                    return Config.from_file(resolved)
                                                case {"type": "file", "uri": uri}:
                                                    # Handle raw dict matching file environment structure
                                                    return Config.from_file(uri)
                                                case {"type": "inline", "config": config}:
                                                    return config
                                                case InlineEnvironment() as config:
                                                    return config
                                                case _:
                                                    msg = f"Invalid environment configuration: {self.environment}"
                                                    raise ValueError(msg)
                                    
                                        def get_environment_path(self) -> str | None:
                                            """Get environment file path if available."""
                                            match self.environment:
                                                case str() as path:
                                                    return self._resolve_environment_path(path, self.config_file_path)
                                                case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                                    return uri
                                                case _:
                                                    return None
                                    
                                        @staticmethod
                                        def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                                            """Resolve environment path from config store or relative path."""
                                            try:
                                                config_store = ConfigStore()
                                                return config_store.get_config(env)
                                            except KeyError:
                                                if config_file_path:
                                                    base_dir = UPath(config_file_path).parent
                                                    return str(base_dir / env)
                                                return env
                                    

                                    avatar class-attribute instance-attribute

                                    avatar: str | None = None
                                    

                                    URL or path to agent's avatar image

                                    config_file_path class-attribute instance-attribute

                                    config_file_path: str | None = None
                                    

                                    Config file path for resolving environment.

                                    debug class-attribute instance-attribute

                                    debug: bool = False
                                    

                                    Enable debug output for this agent.

                                    end_strategy class-attribute instance-attribute

                                    end_strategy: EndStrategy = 'early'
                                    

                                    The strategy for handling multiple tool calls when a final result is found

                                    environment class-attribute instance-attribute

                                    environment: str | AgentEnvironment | None = None
                                    

                                    Environment configuration (path or object)

                                    inherits class-attribute instance-attribute

                                    inherits: str | None = None
                                    

                                    Name of agent config to inherit from

                                    knowledge class-attribute instance-attribute

                                    knowledge: Knowledge | None = None
                                    

                                    Knowledge sources for this agent.

                                    model class-attribute instance-attribute

                                    model: str | AnyModelConfig | None = None
                                    

                                    The model to use for this agent. Can be either a simple model name string (e.g. 'openai:gpt-5') or a structured model definition.

                                    output_retries class-attribute instance-attribute

                                    output_retries: int | None = None
                                    

                                    Max retries for result validation

                                    output_type class-attribute instance-attribute

                                    output_type: str | StructuredResponseConfig | None = None
                                    

                                    Name of a response definition to use, or an inline structured response config

                                    provider class-attribute instance-attribute

                                    provider: ProviderConfig | ProviderName = 'pydantic_ai'
                                    

                                    Provider configuration or shorthand type

                                    requires_tool_confirmation class-attribute instance-attribute

                                    requires_tool_confirmation: ToolConfirmationMode = 'per_tool'
                                    

                                    How to handle tool confirmation:
                                    - "always": Always require confirmation for all tools
                                    - "never": Never require confirmation (ignore tool settings)
                                    - "per_tool": Use individual tool settings

                                    result_tool_description class-attribute instance-attribute

                                    result_tool_description: str | None = None
                                    

                                    Custom description for the result tool

                                    result_tool_name class-attribute instance-attribute

                                    result_tool_name: str = 'final_result'
                                    

                                    Name of the tool used for structured responses

                                    retries class-attribute instance-attribute

                                    retries: int = 1
                                    

                                    Number of retries for failed operations (maps to pydantic-ai's retries)

                                    session class-attribute instance-attribute

                                    session: str | SessionQuery | MemoryConfig | None = None
                                    

                                    Session configuration for conversation recovery.

                                    system_prompts class-attribute instance-attribute

                                    system_prompts: Sequence[str | PromptConfig] = Field(default_factory=list)
                                    

                                    System prompts for the agent. Can be strings or structured prompt configs.

                                    tools class-attribute instance-attribute

                                    tools: list[ToolConfig | str] = Field(default_factory=list)
                                    

                                    A list of tools to register with this agent.

                                    toolsets class-attribute instance-attribute

                                    toolsets: list[ToolsetConfig] = Field(default_factory=list)
                                    

                                    Toolset configurations for extensible tool collections.

                                    user_prompts class-attribute instance-attribute

                                    user_prompts: list[str] = Field(default_factory=list)
                                    

                                    Default user prompts for the agent

                                    workers class-attribute instance-attribute

                                    workers: list[WorkerConfig] = Field(default_factory=list)
                                    

                                    Worker agents which will be available as tools.

                                    get_config

                                    get_config() -> Config
                                    

                                    Get configuration for this agent.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_config(self) -> Config:
                                        """Get configuration for this agent."""
                                        match self.environment:
                                            case None:
                                                # Create minimal config
                                                caps = LLMCapabilitiesConfig()
                                                global_settings = GlobalSettings(llm_capabilities=caps)
                                                return Config(global_settings=global_settings)
                                            case str() as path:
                                                # Backward compatibility: treat as file path
                                                resolved = self._resolve_environment_path(path, self.config_file_path)
                                                return Config.from_file(resolved)
                                            case FileEnvironment() as env:
                                                # Handle FileEnvironment instance
                                                resolved = env.get_file_path()
                                                return Config.from_file(resolved)
                                            case {"type": "file", "uri": uri}:
                                                # Handle raw dict matching file environment structure
                                                return Config.from_file(uri)
                                            case {"type": "inline", "config": config}:
                                                return config
                                            case InlineEnvironment() as config:
                                                return config
                                            case _:
                                                msg = f"Invalid environment configuration: {self.environment}"
                                                raise ValueError(msg)
                                    

                                    get_environment_path

                                    get_environment_path() -> str | None
                                    

                                    Get environment file path if available.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_environment_path(self) -> str | None:
                                        """Get environment file path if available."""
                                        match self.environment:
                                            case str() as path:
                                                return self._resolve_environment_path(path, self.config_file_path)
                                            case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                                return uri
                                            case _:
                                                return None
                                    

                                    get_provider

                                    get_provider() -> AgentProvider
                                    

                                    Get resolved provider instance.

                                    Creates provider instance based on configuration:
                                    - Full provider config: Use as-is
                                    - Shorthand type: Create default provider config

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_provider(self) -> AgentProvider:
                                        """Get resolved provider instance.
                                    
                                        Creates provider instance based on configuration:
                                        - Full provider config: Use as-is
                                        - Shorthand type: Create default provider config
                                        """
                                        # If string shorthand is used, convert to default provider config
                                        from llmling_agent_config.providers import (
                                            CallbackProviderConfig,
                                            HumanProviderConfig,
                                            PydanticAIProviderConfig,
                                        )
                                    
                                        provider_config = self.provider
                                        if isinstance(provider_config, str):
                                            match provider_config:
                                                case "pydantic_ai":
                                                    provider_config = PydanticAIProviderConfig(model=self.model)
                                                case "human":
                                                    provider_config = HumanProviderConfig()
                                                case _:
                                                    try:
                                                        fn = import_callable(provider_config)
                                                        provider_config = CallbackProviderConfig(callback=fn)
                                                    except Exception:  # noqa: BLE001
                                                        msg = f"Invalid provider type: {provider_config}"
                                                        raise ValueError(msg)  # noqa: B904
                                    
                                        # Create provider instance from config
                                        return provider_config.get_provider()
                                    

                                    get_session_config

                                    get_session_config() -> MemoryConfig
                                    

                                    Get resolved memory configuration.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_session_config(self) -> MemoryConfig:
                                        """Get resolved memory configuration."""
                                        match self.session:
                                            case str() | UUID():
                                                return MemoryConfig(session=SessionQuery(name=str(self.session)))
                                            case SessionQuery():
                                                return MemoryConfig(session=self.session)
                                            case MemoryConfig():
                                                return self.session
                                            case None:
                                                return MemoryConfig()
                                            case _:
                                                msg = f"Invalid session configuration: {self.session}"
                                                raise ValueError(msg)
                                    

                                    get_system_prompts

                                    get_system_prompts() -> list[BasePrompt]
                                    

                                    Get all system prompts as BasePrompts.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_system_prompts(self) -> list[BasePrompt]:
                                        """Get all system prompts as BasePrompts."""
                                        from llmling_agent_config.system_prompts import (
                                            FilePromptConfig,
                                            FunctionPromptConfig,
                                            LibraryPromptConfig,
                                            StaticPromptConfig,
                                        )
                                    
                                        prompts: list[BasePrompt] = []
                                        for prompt in self.system_prompts:
                                            match prompt:
                                                case str():
                                                    # Convert string to StaticPrompt
                                                    static_prompt = StaticPrompt(
                                                        name="system",
                                                        description="System prompt",
                                                        messages=[PromptMessage(role="system", content=prompt)],
                                                    )
                                                    prompts.append(static_prompt)
                                                case StaticPromptConfig(content=content):
                                                    # Convert StaticPromptConfig to StaticPrompt
                                                    static_prompt = StaticPrompt(
                                                        name="system",
                                                        description="System prompt",
                                                        messages=[PromptMessage(role="system", content=content)],
                                                    )
                                                    prompts.append(static_prompt)
                                                case FilePromptConfig(path=path):
                                                    # Load and convert file-based prompt
                                    
                                                    template_path = Path(path)
                                                    if not template_path.is_absolute() and self.config_file_path:
                                                        base_path = Path(self.config_file_path).parent
                                                        template_path = base_path / path
                                    
                                                    template_content = template_path.read_text("utf-8")
                                                    # Create a template-based prompt
                                                    # (for now as StaticPrompt with placeholder)
                                                    static_prompt = StaticPrompt(
                                                        name="system",
                                                        description=f"File prompt: {path}",
                                                        messages=[PromptMessage(role="system", content=template_content)],
                                                    )
                                                    prompts.append(static_prompt)
                                                case LibraryPromptConfig(reference=reference):
                                                    # Create placeholder for library prompts (resolved by manifest)
                                                    msg = PromptMessage(role="system", content=f"[LIBRARY:{reference}]")
                                                    static_prompt = StaticPrompt(
                                                        name="system",
                                                        description=f"Library: {reference}",
                                                        messages=[msg],
                                                    )
                                                    prompts.append(static_prompt)
                                                case FunctionPromptConfig(arguments=arguments, function=function):
                                                    # Import and call the function to get prompt content
                                                    content = function(**arguments)
                                                    static_prompt = StaticPrompt(
                                                        name="system",
                                                        description=f"Function prompt: {function}",
                                                        messages=[PromptMessage(role="system", content=content)],
                                                    )
                                                    prompts.append(static_prompt)
                                                case BasePrompt():
                                                    prompts.append(prompt)
                                        return prompts
                                    

                                    get_tool_provider

                                    get_tool_provider() -> ResourceProvider | None
                                    

                                    Get tool provider for this agent.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_tool_provider(self) -> ResourceProvider | None:
                                        """Get tool provider for this agent."""
                                        from llmling_agent.tools.base import Tool
                                    
                                        # Create provider for static tools
                                        if not self.tools:
                                            return None
                                        static_tools: list[Tool] = []
                                        for tool_config in self.tools:
                                            try:
                                                match tool_config:
                                                    case str():
                                                        if tool_config.startswith("crewai_tools"):
                                                            obj = import_class(tool_config)()
                                                            static_tools.append(Tool.from_crewai_tool(obj))
                                                        elif tool_config.startswith("langchain"):
                                                            obj = import_class(tool_config)()
                                                            static_tools.append(Tool.from_langchain_tool(obj))
                                                        else:
                                                            tool = Tool.from_callable(tool_config)
                                                            static_tools.append(tool)
                                                    case BaseToolConfig():
                                                        static_tools.append(tool_config.get_tool())
                                            except Exception:
                                                logger.exception("Failed to load tool %r", tool_config)
                                                continue
                                    
                                        return StaticResourceProvider(name="builtin", tools=static_tools)
                                    
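Two aspects of this method are worth isolating: the prefix-based dispatch on string tool specs, and the best-effort loading (a bad tool is logged and skipped rather than aborting startup). A minimal sketch of both, with a generic `loader` standing in for the import machinery:

```python
import logging

logger = logging.getLogger(__name__)


def classify_tool(spec: str) -> str:
    """Prefix dispatch mirroring the string branch of get_tool_provider."""
    if spec.startswith("crewai_tools"):
        return "crewai"
    if spec.startswith("langchain"):
        return "langchain"
    return "callable"  # anything else is imported as a plain callable


def load_tools(specs, loader):
    """Best-effort loading: a failing spec is logged and skipped, not fatal."""
    tools = []
    for spec in specs:
        try:
            tools.append(loader(spec))
        except Exception:
            logger.exception("Failed to load tool %r", spec)
    return tools
```

The skip-on-failure design choice means one misconfigured tool degrades the agent's toolbox instead of breaking agent construction; contrast get_toolsets, which raises on a bad toolset.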

                                    get_toolsets

                                    get_toolsets() -> list[ResourceProvider]
                                    

                                    Get all resource providers for this agent.

                                    Source code in src/llmling_agent/models/agents.py
                                    def get_toolsets(self) -> list[ResourceProvider]:
                                        """Get all resource providers for this agent."""
                                        providers: list[ResourceProvider] = []
                                    
                                        # Add providers from toolsets
                                        for toolset_config in self.toolsets:
                                            try:
                                                provider = toolset_config.get_provider()
                                                providers.append(provider)
                                            except Exception as e:
                                                msg = f"Failed to create provider for toolset {toolset_config!r}"
                                                logger.exception(msg)
                                                raise ValueError(msg) from e
                                    
                                        return providers
                                    

                                    handle_model_types classmethod

                                    handle_model_types(data: dict[str, Any]) -> dict[str, Any]
                                    

                                    Convert model inputs to appropriate format.

                                    Source code in src/llmling_agent/models/agents.py
                                    @model_validator(mode="before")
                                    @classmethod
                                    def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                        """Convert model inputs to appropriate format."""
                                        if isinstance((model := data.get("model")), str):
                                            data["model"] = {"type": "string", "identifier": model}
                                        return data
                                    
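Since this runs as a `mode="before"` validator, the normalization happens on raw input data, before field validation. The transformation itself is trivially reproducible as a plain function:

```python
from typing import Any


def normalize_model(data: dict[str, Any]) -> dict[str, Any]:
    """Same shape as the handle_model_types validator: a bare model
    string is wrapped into a structured model definition."""
    if isinstance((model := data.get("model")), str):
        data["model"] = {"type": "string", "identifier": model}
    return data
```

This is what lets a config write `model: "openai:gpt-5"` as shorthand while the schema only ever validates the structured dict form; non-string model values pass through untouched.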

                                    is_structured

                                    is_structured() -> bool
                                    

                                    Check if this config defines a structured agent.

                                    Source code in src/llmling_agent/models/agents.py
                                    def is_structured(self) -> bool:
                                        """Check if this config defines a structured agent."""
                                        return self.output_type is not None
                                    

                                    render_system_prompts

                                    render_system_prompts(context: dict[str, Any] | None = None) -> list[str]
                                    

                                    Render system prompts with context.

                                    Source code in src/llmling_agent/models/agents.py
                                    def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                        """Render system prompts with context."""
                                        from llmling_agent_config.system_prompts import (
                                            FilePromptConfig,
                                            FunctionPromptConfig,
                                            LibraryPromptConfig,
                                            StaticPromptConfig,
                                        )
                                    
                                        if not context:
                                            # Default context
                                            context = {"name": self.name, "id": 1, "model": self.model}
                                    
                                        rendered_prompts: list[str] = []
                                        for prompt in self.system_prompts:
                                            match prompt:
                                                case (str() as content) | StaticPromptConfig(content=content):
                                                    rendered_prompts.append(render_prompt(content, {"agent": context}))
                                                case FilePromptConfig(path=path, variables=variables):
                                                    # Load and render Jinja template from file
                                    
                                                    template_path = Path(path)
                                                    if not template_path.is_absolute() and self.config_file_path:
                                                        base_path = Path(self.config_file_path).parent
                                                        template_path = base_path / path
                                    
                                                    template_content = template_path.read_text("utf-8")
                                                    template_ctx = {"agent": context, **variables}
                                                    rendered_prompts.append(render_prompt(template_content, template_ctx))
                                                case LibraryPromptConfig(reference=reference):
                                                    # This will be handled by the manifest's get_agent method
                                                    # For now, just add a placeholder
                                                    rendered_prompts.append(f"[LIBRARY:{reference}]")
                                                case FunctionPromptConfig(function=function, arguments=arguments):
                                                    # Import and call the function to get prompt content
                                                    content = function(**arguments)
                                                    rendered_prompts.append(render_prompt(content, {"agent": context}))
                                    
                                        return rendered_prompts
                                    
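The static-prompt branch above can be sketched standalone, using `str.format` in place of the Jinja-based `render_prompt` helper (a simplified assumption; the real method also handles file, library, and function prompts):

```python
from typing import Any


def render_static_prompts(prompts: list[str], context: dict[str, Any]) -> list[str]:
    # Each prompt template sees the agent context under the "agent" key,
    # mirroring render_prompt(content, {"agent": context}) in the source.
    return [prompt.format(agent=context) for prompt in prompts]
```

For example, `render_static_prompts(["You are {agent[name]}."], {"name": "helper"})` renders the agent's name into the prompt text.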

                                    validate_output_type classmethod

                                    validate_output_type(data: dict[str, Any]) -> dict[str, Any]
                                    

                                    Convert result type and apply its settings.

                                    Source code in src/llmling_agent/models/agents.py
                                    @model_validator(mode="before")
                                    @classmethod
                                    def validate_output_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                        """Convert result type and apply its settings."""
                                        output_type = data.get("output_type")
                                        if isinstance(output_type, dict):
                                            # Extract response-specific settings
                                            tool_name = output_type.pop("result_tool_name", None)
                                            tool_description = output_type.pop("result_tool_description", None)
                                            retries = output_type.pop("output_retries", None)
                                    
                                            # Convert remaining dict to ResponseDefinition
                                            if "type" not in output_type["response_schema"]:
                                                output_type["response_schema"]["type"] = "inline"
                                            data["output_type"]["response_schema"] = InlineSchemaDef(**output_type)
                                    
                                            # Apply extracted settings to agent config
                                            if tool_name:
                                                data["result_tool_name"] = tool_name
                                            if tool_description:
                                                data["result_tool_description"] = tool_description
                                            if retries is not None:
                                                data["output_retries"] = retries
                                    
                                        return data
                                    
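The settings extraction at the start of this validator can be isolated as a small helper (standalone sketch; the key names match the source above, but the original only hoists tool name/description when truthy):

```python
from typing import Any


def hoist_response_settings(data: dict[str, Any]) -> dict[str, Any]:
    # Pull response-tool settings out of the output_type dict and
    # promote them to top-level agent-config keys, as the validator does.
    output_type = data.get("output_type")
    if isinstance(output_type, dict):
        for key in ("result_tool_name", "result_tool_description", "output_retries"):
            value = output_type.pop(key, None)
            if value is not None:
                data[key] = value
    return data
```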

                                    AgentContext dataclass

                                    Bases: NodeContext[TDeps]

                                    Runtime context for agent execution.

                                    Generically typed with AgentContext[Type of Dependencies]

                                    Source code in src/llmling_agent/agent/context.py
                                    @dataclass(kw_only=True)
                                    class AgentContext[TDeps = Any](NodeContext[TDeps]):
                                        """Runtime context for agent execution.
                                    
                                        Generically typed with AgentContext[Type of Dependencies]
                                        """
                                    
                                        config: AgentConfig
                                        """Current agent's specific configuration."""
                                    
                                        model_settings: dict[str, Any] = field(default_factory=dict)
                                        """Model-specific settings."""
                                    
                                        data: TDeps | None = None
                                        """Custom context data."""
                                    
                                        runtime: RuntimeConfig | None = None
                                        """Reference to the runtime configuration."""
                                    
                                        @classmethod
                                        def create_default(
                                            cls,
                                            name: str,
                                            deps: TDeps | None = None,
                                            pool: AgentPool | None = None,
                                            input_provider: InputProvider | None = None,
                                        ) -> AgentContext[TDeps]:
                                            """Create a default agent context with minimal privileges.
                                    
                                            Args:
                                                name: Name of the agent
                                                deps: Optional dependencies for the agent
                                                pool: Optional pool the agent is part of
                                                input_provider: Optional input provider for the agent
                                            """
                                            from llmling_agent.models import AgentConfig, AgentsManifest
                                    
                                            defn = AgentsManifest()
                                            cfg = AgentConfig(name=name)
                                            return cls(
                                                input_provider=input_provider,
                                                node_name=name,
                                                definition=defn,
                                                config=cfg,
                                                data=deps,
                                                pool=pool,
                                            )
                                    
                                        @cached_property
                                        def converter(self) -> ConversionManager:
                                            """Get conversion manager from global config."""
                                            return ConversionManager(self.definition.conversion)
                                    
                                        # TODO: perhaps add agent directly to context?
                                        @property
                                        def agent(self) -> Agent[TDeps, Any]:
                                            """Get the agent instance from the pool."""
                                            assert self.pool, "No agent pool available"
                                            assert self.node_name, "No agent name available"
                                            return self.pool.agents[self.node_name]
                                    
                                        @property
                                        def process_manager(self):
                                            """Get process manager from pool."""
                                            assert self.pool, "No agent pool available"
                                            return self.pool.process_manager
                                    
                                        async def handle_confirmation(
                                            self,
                                            tool: Tool,
                                            args: dict[str, Any],
                                        ) -> ConfirmationResult:
                                            """Handle tool execution confirmation.
                                    
                                            Returns "allow" if:
                                            - The confirmation mode is "never"
                                            - The mode is "per_tool" and the tool does not require confirmation
                                            Otherwise the input provider is asked to confirm the call.
                                            """
                                            provider = self.get_input_provider()
                                            mode = self.config.requires_tool_confirmation
                                            if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                                return "allow"
                                            history = self.agent.conversation.get_history() if self.pool else []
                                            return await provider.get_tool_confirmation(self, tool, args, history)
                                    
                                        async def handle_elicitation(
                                            self,
                                            params: types.ElicitRequestParams,
                                        ) -> types.ElicitResult | types.ErrorData:
                                            """Handle elicitation request for additional information."""
                                            provider = self.get_input_provider()
                                            history = self.agent.conversation.get_history() if self.pool else []
                                            return await provider.get_elicitation(self, params, history)
                                    

                                    agent property

                                    agent: Agent[TDeps, Any]
                                    

                                    Get the agent instance from the pool.

                                    config instance-attribute

                                    config: AgentConfig
                                    

                                    Current agent's specific configuration.

                                    converter cached property

                                    converter: ConversionManager
                                    

                                    Get conversion manager from global config.

                                    data class-attribute instance-attribute

                                    data: TDeps | None = None
                                    

                                    Custom context data.

                                    model_settings class-attribute instance-attribute

                                    model_settings: dict[str, Any] = field(default_factory=dict)
                                    

                                    Model-specific settings.

                                    process_manager property

                                    process_manager
                                    

                                    Get process manager from pool.

                                    runtime class-attribute instance-attribute

                                    runtime: RuntimeConfig | None = None
                                    

                                    Reference to the runtime configuration.

                                    create_default classmethod

                                    create_default(
                                        name: str,
                                        deps: TDeps | None = None,
                                        pool: AgentPool | None = None,
                                        input_provider: InputProvider | None = None,
                                    ) -> AgentContext[TDeps]
                                    

                                    Create a default agent context with minimal privileges.

                                    Parameters:

                                    Name Type Description Default
                                    name str

                                    Name of the agent

                                    required
                                    deps TDeps | None

                                    Optional dependencies for the agent

                                    None
                                    pool AgentPool | None

                                    Optional pool the agent is part of

                                    None
                                    input_provider InputProvider | None

                                    Optional input provider for the agent

                                    None
                                    Source code in src/llmling_agent/agent/context.py
                                    @classmethod
                                    def create_default(
                                        cls,
                                        name: str,
                                        deps: TDeps | None = None,
                                        pool: AgentPool | None = None,
                                        input_provider: InputProvider | None = None,
                                    ) -> AgentContext[TDeps]:
                                        """Create a default agent context with minimal privileges.
                                    
                                        Args:
                                            name: Name of the agent
                                            deps: Optional dependencies for the agent
                                            pool: Optional pool the agent is part of
                                            input_provider: Optional input provider for the agent
                                        """
                                        from llmling_agent.models import AgentConfig, AgentsManifest
                                    
                                        defn = AgentsManifest()
                                        cfg = AgentConfig(name=name)
                                        return cls(
                                            input_provider=input_provider,
                                            node_name=name,
                                            definition=defn,
                                            config=cfg,
                                            data=deps,
                                            pool=pool,
                                        )
                                    

                                    handle_confirmation async

                                    handle_confirmation(tool: Tool, args: dict[str, Any]) -> ConfirmationResult
                                    

                                    Handle tool execution confirmation.

                                    Returns "allow" when the confirmation mode is "never", or when the mode is "per_tool" and the tool does not require confirmation; otherwise the input provider is asked to confirm the call.

                                    Source code in src/llmling_agent/agent/context.py
                                    async def handle_confirmation(
                                        self,
                                        tool: Tool,
                                        args: dict[str, Any],
                                    ) -> ConfirmationResult:
                                        """Handle tool execution confirmation.
                                    
                                         Returns "allow" if:
                                         - The confirmation mode is "never"
                                         - The mode is "per_tool" and the tool does not require confirmation
                                         Otherwise the input provider is asked to confirm the call.
                                        """
                                        provider = self.get_input_provider()
                                        mode = self.config.requires_tool_confirmation
                                        if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                            return "allow"
                                        history = self.agent.conversation.get_history() if self.pool else []
                                        return await provider.get_tool_confirmation(self, tool, args, history)
                                    
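The gating logic can be isolated as a small predicate (standalone sketch; "per_tool" and "never" come from the source above, while any other mode value is assumed to mean "always ask"):

```python
def needs_confirmation(mode: str, tool_requires_confirmation: bool) -> bool:
    # "never" suppresses all prompts; "per_tool" prompts only for tools
    # that opted in via requires_confirmation; other modes always prompt.
    if mode == "never":
        return False
    if mode == "per_tool" and not tool_requires_confirmation:
        return False
    return True
```

When the predicate is false, `handle_confirmation` short-circuits to `"allow"` without consulting the input provider.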

                                    handle_elicitation async

                                    handle_elicitation(params: ElicitRequestParams) -> ElicitResult | ErrorData
                                    

                                    Handle elicitation request for additional information.

                                    Source code in src/llmling_agent/agent/context.py
                                    async def handle_elicitation(
                                        self,
                                        params: types.ElicitRequestParams,
                                    ) -> types.ElicitResult | types.ErrorData:
                                        """Handle elicitation request for additional information."""
                                        provider = self.get_input_provider()
                                        history = self.agent.conversation.get_history() if self.pool else []
                                        return await provider.get_elicitation(self, params, history)
                                    

                                    AgentPool

                                    Bases: BaseRegistry[NodeName, MessageEmitter[Any, Any]]

                                    Pool managing message processing nodes (agents and teams).

                                    Acts as a unified registry for all nodes, providing:

                                    - Centralized node management and lookup
                                    - Shared dependency injection
                                    - Connection management
                                    - Resource coordination

                                    Nodes can be accessed through:

                                    - nodes: All registered nodes (agents and teams)
                                    - agents: Only Agent instances
                                    - teams: Only Team instances

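The registry idea, one shared mapping with type-filtered views, can be sketched standalone (`MiniPool` and its `kind` tags are illustrative, not the library's API):

```python
from typing import Any


class MiniPool:
    """Hypothetical stand-in for AgentPool's registry behavior."""

    def __init__(self) -> None:
        # Single source of truth: every node lives in this one mapping.
        self.nodes: dict[str, dict[str, Any]] = {}

    def register(self, name: str, node: dict[str, Any]) -> None:
        self.nodes[name] = node

    @property
    def agents(self) -> dict[str, dict[str, Any]]:
        # Filtered view: only nodes tagged as agents.
        return {k: v for k, v in self.nodes.items() if v["kind"] == "agent"}

    @property
    def teams(self) -> dict[str, dict[str, Any]]:
        # Filtered view: only nodes tagged as teams.
        return {k: v for k, v in self.nodes.items() if v["kind"] == "team"}
```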
                                    Source code in src/llmling_agent/delegation/pool.py
                                    177
                                    178
                                    179
                                    180
                                    181
                                    182
                                    183
                                    184
                                    185
                                    186
                                    187
                                    188
                                    189
                                    190
                                    191
                                    192
                                    193
                                    194
                                    195
                                    196
                                    197
                                    198
                                    199
                                    200
                                    201
                                    202
                                    203
                                    204
                                    205
                                    206
                                    207
                                    208
                                    209
                                    210
                                    211
                                    212
                                    213
                                    214
                                    215
                                    216
                                    217
                                    218
                                    219
                                    220
                                    221
                                    222
                                    223
                                    224
                                    225
                                    226
                                    227
                                    228
                                    229
                                    230
                                    231
                                    232
                                    233
                                    234
                                    235
                                    236
                                    237
                                    238
                                    239
                                    240
                                    241
                                    242
                                    243
                                    244
                                    245
                                    246
                                    247
                                    248
                                    249
                                    250
                                    251
                                    252
                                    253
                                    254
                                    255
                                    256
                                    257
                                    258
                                    259
                                    260
                                    261
                                    262
                                    263
                                    264
                                    265
                                    266
                                    267
                                    268
                                    269
                                    270
                                    271
                                    272
                                    273
                                    274
                                    275
                                    276
                                    277
                                    278
                                    279
                                    280
                                    281
                                    282
                                    283
                                    284
                                    285
                                    286
                                    287
                                    288
                                    289
                                    290
                                    291
                                    292
                                    293
                                    294
                                    295
                                    296
                                    297
                                    298
                                    299
                                    300
                                    301
                                    302
                                    303
                                    304
                                    305
                                    306
                                    307
                                    308
                                    309
                                    310
                                    311
                                    312
                                    313
                                    314
                                    315
                                    316
                                    317
                                    318
                                    319
                                    320
                                    321
                                    322
                                    323
                                    324
                                    325
                                    326
                                    327
                                    328
                                    329
                                    330
                                    331
                                    332
                                    333
                                    334
                                    335
                                    336
                                    337
                                    338
                                    339
                                    340
                                    341
                                    342
                                    343
                                    344
                                    345
                                    346
                                    347
                                    348
                                    349
                                    350
                                    351
                                    352
                                    353
                                    354
                                    355
                                    356
                                    357
                                    358
                                    359
                                    360
                                    361
                                    362
                                    363
                                    364
                                    365
                                    366
                                    367
                                    368
                                    369
                                    370
                                    371
                                    372
                                    373
                                    374
                                    375
                                    376
                                    377
                                    378
                                    379
                                    380
                                    381
                                    382
                                    383
                                    384
                                    385
                                    386
                                    387
                                    388
                                    389
                                    390
                                    391
                                    392
                                    393
                                    394
                                    395
                                    396
                                    397
                                    398
                                    399
                                    400
                                    401
                                    402
                                    403
                                    404
                                    405
                                    406
                                    407
                                    408
                                    409
                                    410
                                    411
                                    412
                                    413
                                    414
                                    415
                                    416
                                    417
                                    418
                                    419
                                    420
                                    421
                                    422
                                    423
                                    424
                                    425
                                    426
                                    427
                                    428
                                    429
                                    430
                                    431
                                    432
                                    433
                                    434
                                    435
                                    436
                                    437
                                    438
                                    439
                                    440
                                    441
                                    442
                                    443
                                    444
                                    445
                                    446
                                    447
                                    448
                                    449
                                    450
                                    451
                                    452
                                    453
                                    454
                                    455
                                    456
                                    457
                                    458
                                    459
                                    460
                                    461
                                    462
                                    463
                                    464
                                    465
                                    466
                                    467
                                    468
                                    469
                                    470
                                    471
                                    472
                                    473
                                    474
                                    475
                                    476
                                    477
                                    478
                                    479
                                    480
                                    481
                                    482
                                    483
                                    484
                                    485
                                    486
                                    487
                                    488
                                    489
                                    490
                                    491
                                    492
                                    493
                                    494
                                    495
                                    496
                                    497
                                    498
                                    499
                                    500
                                    501
                                    502
                                    503
                                    504
                                    505
                                    506
                                    507
                                    508
                                    509
                                    510
                                    511
                                    512
                                    513
                                    514
                                    515
                                    516
                                    517
                                    518
                                    519
                                    520
                                    521
                                    522
                                    523
                                    524
                                    525
                                    526
                                    527
                                    528
                                    529
                                    530
                                    531
                                    532
                                    533
                                    534
                                    535
                                    536
                                    537
                                    538
                                    539
                                    540
                                    541
                                    542
                                    543
                                    544
                                    545
                                    546
                                    547
                                    548
                                    549
                                    550
                                    551
                                    552
                                    553
                                    554
                                    555
                                    556
                                    557
                                    558
                                    559
                                    560
                                    561
                                    562
                                    563
                                    564
                                    565
                                    566
                                    567
                                    568
                                    569
                                    570
                                    571
                                    572
                                    573
                                    574
                                    575
                                    576
                                    577
                                    578
                                    579
                                    580
                                    581
                                    582
                                    583
                                    584
                                    585
                                    586
                                    587
                                    588
                                    589
                                    590
                                    591
                                    592
                                    593
                                    594
                                    595
                                    596
                                    597
                                    598
                                    599
                                    600
                                    601
                                    602
                                    603
                                    604
                                    605
                                    606
                                    607
                                    608
                                    609
                                    610
                                    611
                                    612
                                    613
                                    614
                                    615
                                    616
                                    617
                                    618
                                    619
                                    620
                                    621
                                    622
                                    623
                                    624
                                    625
                                    626
                                    627
                                    628
                                    629
                                    630
                                    631
                                    632
                                    633
                                    634
                                    635
                                    636
                                    637
                                    638
                                    639
                                    640
                                    641
                                    642
                                    643
                                    644
                                    645
                                    646
                                    647
                                    648
                                    649
                                    650
                                    651
                                    652
                                    653
                                    654
                                    655
                                    656
                                    657
                                    658
                                    659
                                    660
                                    661
                                    662
                                    663
                                    664
                                    665
                                    666
                                    667
                                    668
                                    669
                                    670
                                    671
                                    672
                                    673
                                    674
                                    675
                                    676
                                    677
                                    678
                                    679
                                    680
                                    681
                                    682
                                    683
                                    684
                                    685
                                    686
                                    687
                                    688
                                    689
                                    690
                                    691
                                    692
                                    693
                                    694
                                    695
                                    696
                                    697
                                    698
                                    699
                                    700
                                    701
                                    702
                                    703
                                    704
                                    705
                                    706
                                    707
                                    708
                                    709
                                    710
                                    711
                                    712
                                    713
                                    714
                                    715
                                    716
                                    717
                                    718
                                    719
                                    720
                                    721
                                    722
                                    723
                                    724
                                    725
                                    726
                                    727
                                    728
                                    729
                                    730
                                    731
                                    732
                                    733
                                    734
                                    735
                                    736
                                    737
                                    738
                                    739
                                    740
                                    741
                                    742
                                    743
                                    744
                                    745
                                    746
                                    747
                                    748
                                    749
                                    750
                                    751
                                    752
                                    753
                                    754
                                    755
                                    756
                                    757
                                    758
                                    759
                                    760
                                    761
                                    762
                                    763
                                    764
                                    765
                                    766
                                    767
                                    768
                                    769
                                    770
                                    771
                                    772
                                    773
                                    774
                                    775
                                    776
                                    777
                                    778
                                    779
                                    780
                                    781
                                    782
                                    783
                                    784
                                    785
                                    786
                                    787
                                    788
                                    789
                                    790
                                    791
                                    792
                                    793
                                    794
                                    795
                                    796
                                    797
                                    798
                                    799
                                    800
                                    801
                                    802
                                    803
                                    804
                                    805
                                    806
                                    807
                                    808
                                    809
                                    810
                                    811
                                    812
                                    813
                                    814
                                    815
                                    816
                                    817
                                    818
                                    819
                                    820
                                    821
                                    822
                                    823
                                    824
                                    825
                                    826
                                    827
                                    828
                                    829
                                    830
                                    831
                                    832
                                    833
                                    834
                                    835
                                    836
                                    837
                                    838
                                    839
                                    840
                                    841
                                    842
                                    843
                                    844
                                    845
                                    846
                                    847
                                    848
                                    849
                                    850
                                    851
                                    852
                                    853
                                    854
                                    855
                                    856
                                    857
                                    858
                                    859
                                    860
                                    861
                                    862
                                    863
                                    864
                                    class AgentPool[TPoolDeps = None](BaseRegistry[NodeName, MessageEmitter[Any, Any]]):
                                        """Pool managing message processing nodes (agents and teams).
                                    
                                        Acts as a unified registry for all nodes, providing:
                                        - Centralized node management and lookup
                                        - Shared dependency injection
                                        - Connection management
                                        - Resource coordination
                                    
                                        Nodes can be accessed through:
                                        - nodes: All registered nodes (agents and teams)
                                        - agents: Only Agent instances
                                        - teams: Only Team instances
                                        """
                                    
                                        def __init__(
                                            self,
                                            manifest: JoinablePathLike | AgentsManifest | None = None,
                                            *,
                                            shared_deps: TPoolDeps | None = None,
                                            connect_nodes: bool = True,
                                            input_provider: InputProvider | None = None,
                                            parallel_load: bool = True,
                                            progress_handlers: list[ProgressCallback] | None = None,
                                        ):
                                            """Initialize agent pool with immediate agent creation.
                                    
                                            Args:
                                                manifest: Agent configuration manifest
                                                shared_deps: Dependencies to share across all nodes
                                                connect_nodes: Whether to set up forwarding connections
                                                input_provider: Input provider for tool / step confirmations / HumanAgents
                                                parallel_load: Whether to load nodes in parallel (async)
                                                progress_handlers: List of progress handlers to notify about progress
                                    
                                            Raises:
                                                ValueError: If manifest contains invalid node configurations
                                                RuntimeError: If node initialization fails
                                            """
                                            super().__init__()
                                            from llmling_agent.mcp_server.manager import MCPManager
                                            from llmling_agent.models.manifest import AgentsManifest
                                            from llmling_agent.storage import StorageManager
                                    
                                            match manifest:
                                                case None:
                                                    self.manifest = AgentsManifest()
                                                case str() | os.PathLike() | UPath():
                                                    self.manifest = AgentsManifest.from_file(manifest)
                                                case AgentsManifest():
                                                    self.manifest = manifest
                                                case _:
                                                     msg = f"Invalid manifest type: {type(manifest)}"
                                                    raise ValueError(msg)
                                            self.shared_deps = shared_deps
                                            self._input_provider = input_provider
                                            self.exit_stack = AsyncExitStack()
                                            self.parallel_load = parallel_load
                                            self.storage = StorageManager(self.manifest.storage)
                                            self.progress_handlers = MultiEventHandler[ProgressCallback](progress_handlers)
                                            self.connection_registry = ConnectionRegistry()
                                            servers = self.manifest.get_mcp_servers()
                                            self.mcp = MCPManager(name="pool_mcp", servers=servers, owner="pool")
                                            self._tasks = TaskRegistry()
                                            # Register tasks from manifest
                                            for name, task in self.manifest.jobs.items():
                                                self._tasks.register(name, task)
                                    
                                            # Initialize process manager for background processes
                                            from anyenv import ProcessManager
                                    
                                            self.process_manager = ProcessManager()
                                            self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                                            if self.manifest.pool_server and self.manifest.pool_server.enabled:
                                                from llmling_agent.resource_providers.pool import PoolResourceProvider
                                                from llmling_agent_mcp.server import LLMLingServer
                                    
                                                provider = PoolResourceProvider(
                                                    self, zed_mode=self.manifest.pool_server.zed_mode
                                                )
                                                self.server: LLMLingServer | None = LLMLingServer(
                                                    provider=provider,
                                                    config=self.manifest.pool_server,
                                                )
                                                self.progress_handlers.add_handler(self.server.report_progress)
                                            else:
                                                self.server = None
                                            # Create requested agents immediately
                                            for name in self.manifest.agents:
                                                agent = self.manifest.get_agent(name, deps=shared_deps)
                                                self.register(name, agent)
                                    
                                            # Then set up worker relationships
                                            for agent in self.agents.values():
                                                self.setup_agent_workers(agent)
                                            self._create_teams()
                                            # Set up forwarding connections
                                            if connect_nodes:
                                                self._connect_nodes()
                                    
                                        async def __aenter__(self) -> Self:
                                            """Enter async context and initialize all agents."""
                                            try:
                                                # Add MCP tool provider to all agents
                                                agents = list(self.agents.values())
                                                teams = list(self.teams.values())
                                                for agent in agents:
                                                    agent.tools.add_provider(self.mcp)
                                    
                                                # Collect all components to initialize
                                                components: list[AbstractAsyncContextManager[Any]] = [
                                                    self.mcp,
                                                    *agents,
                                                    *teams,
                                                ]
                                    
                                                # Add MCP server if configured
                                                if self.server:
                                                    components.append(self.server)
                                                components.append(self.storage)
                                                # Initialize all components
                                                if self.parallel_load:
                                                    await asyncio.gather(
                                                        *(self.exit_stack.enter_async_context(c) for c in components)
                                                    )
                                                else:
                                                    for component in components:
                                                        await self.exit_stack.enter_async_context(component)
                                    
                                            except Exception as e:
                                                await self.cleanup()
                                                msg = "Failed to initialize agent pool"
                                                logger.exception(msg, exc_info=e)
                                                raise RuntimeError(msg) from e
                                            return self
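`__aenter__` switches between concurrent and sequential startup based on `parallel_load`, while `AsyncExitStack` guarantees teardown either way. A minimal stdlib-only sketch of that pattern (component names and delays are invented for illustration):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

entered: list[str] = []


@asynccontextmanager
async def component(name: str, delay: float):
    # Simulate a component with slow async startup.
    await asyncio.sleep(delay)
    entered.append(name)
    yield name


async def start(parallel: bool) -> list[str]:
    entered.clear()
    async with AsyncExitStack() as stack:
        components = [component("mcp", 0.02), component("agent", 0.01)]
        if parallel:
            # Enter all contexts concurrently; the stack still unwinds
            # every successfully entered context on exit.
            await asyncio.gather(*(stack.enter_async_context(c) for c in components))
        else:
            for c in components:
                await stack.enter_async_context(c)
        return list(entered)
```

Sequential startup preserves declaration order; parallel startup completes in readiness order, so the faster "agent" component finishes first here.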
                                    
                                        async def __aexit__(
                                            self,
                                            exc_type: type[BaseException] | None,
                                            exc_val: BaseException | None,
                                            exc_tb: TracebackType | None,
                                        ):
                                            """Exit async context."""
                                            # Remove MCP tool provider from all agents
                                            for agent in self.agents.values():
                                                if self.mcp in agent.tools.providers:
                                                    agent.tools.remove_provider(self.mcp)
                                            await self.cleanup()
                                    
                                        async def cleanup(self):
                                            """Clean up all agents."""
                                            # Clean up background processes first
                                            await self.process_manager.cleanup()
                                            await self.exit_stack.aclose()
                                            self.clear()
                                    
                                        @overload
                                        def create_team_run[TResult](
                                            self,
                                            agents: Sequence[str],
                                            validator: MessageNode[Any, TResult] | None = None,
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> TeamRun[TPoolDeps, TResult]: ...
                                    
                                        @overload
                                        def create_team_run[TDeps, TResult](
                                            self,
                                            agents: Sequence[MessageNode[TDeps, Any]],
                                            validator: MessageNode[Any, TResult] | None = None,
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> TeamRun[TDeps, TResult]: ...
                                    
                                        @overload
                                        def create_team_run[TResult](
                                            self,
                                            agents: Sequence[AgentName | MessageNode[Any, Any]],
                                            validator: MessageNode[Any, TResult] | None = None,
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> TeamRun[Any, TResult]: ...
                                    
                                        def create_team_run[TResult](
                                            self,
                                            agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                            validator: MessageNode[Any, TResult] | None = None,
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> TeamRun[Any, TResult]:
                                             """Create a sequential TeamRun from a list of agents.
                                    
                                            Args:
                                                agents: List of agent names or team/agent instances (all if None)
                                                validator: Node to validate the results of the TeamRun
                                                name: Optional name for the team
                                                description: Optional description for the team
                                                shared_prompt: Optional prompt for all agents
                                                picker: Agent to use for picking agents
                                                num_picks: Number of agents to pick
                                                pick_prompt: Prompt to use for picking agents
                                            """
                                            from llmling_agent.delegation.teamrun import TeamRun
                                    
                                            if agents is None:
                                                agents = list(self.agents.keys())
                                    
                                            # First resolve/configure agents
                                            resolved_agents: list[MessageNode[Any, Any]] = []
                                            for agent in agents:
                                                if isinstance(agent, str):
                                                    agent = self.get_agent(agent)
                                                resolved_agents.append(agent)
                                            team = TeamRun(
                                                resolved_agents,
                                                name=name,
                                                description=description,
                                                validator=validator,
                                                shared_prompt=shared_prompt,
                                                picker=picker,
                                                num_picks=num_picks,
                                                pick_prompt=pick_prompt,
                                            )
                                            if name:
                                                self[name] = team
                                            return team
                                    
                                        @overload
                                        def create_team(self, agents: Sequence[str]) -> Team[TPoolDeps]: ...
                                    
                                        @overload
                                        def create_team[TDeps](
                                            self,
                                            agents: Sequence[MessageNode[TDeps, Any]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> Team[TDeps]: ...
                                    
                                        @overload
                                        def create_team(
                                            self,
                                            agents: Sequence[AgentName | MessageNode[Any, Any]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> Team[Any]: ...
                                    
                                        def create_team(
                                            self,
                                            agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ) -> Team[Any]:
                                            """Create a group from agent names or instances.
                                    
                                            Args:
                                                agents: List of agent names or instances (all if None)
                                                name: Optional name for the team
                                                description: Optional description for the team
                                                shared_prompt: Optional prompt for all agents
                                                picker: Agent to use for picking agents
                                                num_picks: Number of agents to pick
                                                pick_prompt: Prompt to use for picking agents
                                            """
                                            from llmling_agent.delegation.team import Team
                                    
                                            if agents is None:
                                                agents = list(self.agents.keys())
                                    
                                            resolved_agents = [self.get_agent(i) if isinstance(i, str) else i for i in agents]
                                            team = Team(
                                                name=name,
                                                description=description,
                                                agents=resolved_agents,
                                                shared_prompt=shared_prompt,
                                                picker=picker,
                                                num_picks=num_picks,
                                                pick_prompt=pick_prompt,
                                            )
                                            if name:
                                                self[name] = team
                                            return team
                                    
                                        @asynccontextmanager
                                        async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                                            """Track message flow during a context."""
                                            tracker = MessageFlowTracker()
                                            self.connection_registry.message_flow.connect(tracker.track)
                                            try:
                                                yield tracker
                                            finally:
                                                self.connection_registry.message_flow.disconnect(tracker.track)
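`track_message_flow` is the classic connect-on-entry / disconnect-in-`finally` pattern: the handler is removed even if the body raises. A synchronous stdlib-only sketch (the `Signal` and `FlowTracker` classes below are illustrative stand-ins, not the real `ConnectionRegistry` or `MessageFlowTracker`):

```python
from contextlib import contextmanager


class Signal:
    # Minimal stand-in for the registry's message_flow signal.
    def __init__(self):
        self._handlers = []

    def connect(self, handler):
        self._handlers.append(handler)

    def disconnect(self, handler):
        self._handlers.remove(handler)

    def emit(self, event):
        for handler in list(self._handlers):
            handler(event)


class FlowTracker:
    def __init__(self):
        self.events = []

    def track(self, event):
        self.events.append(event)


message_flow = Signal()


@contextmanager
def track_flow():
    # Connect on entry; always disconnect on exit, even on error.
    tracker = FlowTracker()
    message_flow.connect(tracker.track)
    try:
        yield tracker
    finally:
        message_flow.disconnect(tracker.track)
```

Events emitted inside the `with` block are recorded; events emitted afterwards are not, because the handler has been disconnected.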
                                    
                                        async def run_event_loop(self):
                                            """Run pool in event-watching mode until interrupted."""
                                            print("Starting event watch mode...")
                                             print("Active nodes:", ", ".join(self.list_nodes()))
                                            print("Press Ctrl+C to stop")
                                    
                                            with suppress(KeyboardInterrupt):
                                                while True:
                                                    await asyncio.sleep(1)
                                    
                                        @property
                                        def agents(self) -> dict[str, Agent[Any, Any]]:
                                            """Get agents dict (backward compatibility)."""
                                            return {i.name: i for i in self._items.values() if isinstance(i, Agent)}
                                    
                                        @property
                                        def teams(self) -> dict[str, BaseTeam[Any, Any]]:
                                             """Get teams dict (backward compatibility)."""
                                            from llmling_agent.delegation.base_team import BaseTeam
                                    
                                            return {i.name: i for i in self._items.values() if isinstance(i, BaseTeam)}
                                    
                                        @property
                                        def nodes(self) -> dict[str, MessageNode[Any, Any]]:
                                             """Get nodes dict (backward compatibility)."""
                                            from llmling_agent import MessageNode
                                    
                                            return {i.name: i for i in self._items.values() if isinstance(i, MessageNode)}
                                    
                                        @property
                                        def event_nodes(self) -> dict[str, EventNode[Any]]:
                                             """Get event nodes dict (backward compatibility)."""
                                            from llmling_agent.messaging.eventnode import EventNode
                                    
                                            return {i.name: i for i in self._items.values() if isinstance(i, EventNode)}
                                    
                                        @property
                                        def node_events(self) -> DictEvents:
                                            """Get node events."""
                                            return self._items.events
                                    
                                        @property
                                        def _error_class(self) -> type[LLMLingError]:
                                            """Error class for agent operations."""
                                            return LLMLingError
                                    
                                        def _validate_item(
                                            self, item: MessageEmitter[Any, Any] | Any
                                        ) -> MessageEmitter[Any, Any]:
                                            """Validate and convert items before registration.
                                    
                                            Args:
                                                item: Item to validate
                                    
                                            Returns:
                                                Validated Node
                                    
                                            Raises:
                                                 LLMLingError: If item is not a valid node
                                            """
                                            if not isinstance(item, MessageEmitter):
                                                msg = f"Item must be Agent or Team, got {type(item)}"
                                                raise self._error_class(msg)
                                            item.context.pool = self
                                            return item
                                    
                                        def _create_teams(self):
                                            """Create all teams in two phases to allow nesting."""
                                            # Phase 1: Create empty teams
                                    
                                            empty_teams: dict[str, BaseTeam[Any, Any]] = {}
                                            for name, config in self.manifest.teams.items():
                                                if config.mode == "parallel":
                                                    empty_teams[name] = Team(
                                                        [], name=name, shared_prompt=config.shared_prompt
                                                    )
                                                else:
                                                    empty_teams[name] = TeamRun(
                                                        [], name=name, shared_prompt=config.shared_prompt
                                                    )
                                    
                                            # Phase 2: Resolve members
                                            for name, config in self.manifest.teams.items():
                                                team = empty_teams[name]
                                                members: list[MessageNode[Any, Any]] = []
                                                for member in config.members:
                                                    if member in self.agents:
                                                        members.append(self.agents[member])
                                                    elif member in empty_teams:
                                                        members.append(empty_teams[member])
                                                    else:
                                                        msg = f"Unknown team member: {member}"
                                                        raise ValueError(msg)
                                                team.agents.extend(members)
                                                self[name] = team
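The two-phase construction above lets teams reference other teams regardless of declaration order: phase 1 creates every team empty, phase 2 fills in members. The idea in a stdlib-only sketch (`Group` and the config shape are simplified stand-ins):

```python
from dataclasses import dataclass, field


@dataclass
class Group:
    name: str
    members: list = field(default_factory=list)


def build_groups(configs: dict[str, list[str]], agents: dict[str, str]) -> dict[str, Group]:
    # Phase 1: create every group empty so forward references resolve.
    groups = {name: Group(name) for name in configs}
    # Phase 2: resolve members, checking agents first, then other groups.
    for name, member_names in configs.items():
        for member in member_names:
            if member in agents:
                groups[name].members.append(agents[member])
            elif member in groups:
                groups[name].members.append(groups[member])
            else:
                raise ValueError(f"Unknown team member: {member}")
    return groups
```

Because every group object exists before any member list is filled, "outer" can include "inner" even though "inner" is declared later; the shared object is populated in place.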
                                    
                                        def _connect_nodes(self):
                                            """Set up connections defined in manifest."""
                                            # Merge agent and team configs into one dict of nodes with connections
                                            for name, config in self.manifest.nodes.items():
                                                source = self[name]
                                                for target in config.connections or []:
                                                    match target:
                                                        case NodeConnectionConfig(name=name_):
                                                            if name_ not in self:
                                                                msg = f"Forward target {name_} not found for {name}"
                                                                raise ValueError(msg)
                                                            target_node = self[name_]
                                                        case FileConnectionConfig() | CallableConnectionConfig():
                                                            target_node = Agent(provider=target.get_provider())
                                                        case _:
                                                            msg = f"Invalid connection config: {target}"
                                                            raise ValueError(msg)
                                    
                                                    source.connect_to(
                                                        target_node,  # type: ignore  # recognized as "Any | BaseTeam[Any, Any]" by mypy?
                                                        connection_type=target.connection_type,
                                                        name=name,
                                                        priority=target.priority,
                                                        delay=target.delay,
                                                        queued=target.queued,
                                                        queue_strategy=target.queue_strategy,
                                                        transform=target.transform,
                                                        filter_condition=target.filter_condition.check
                                                        if target.filter_condition
                                                        else None,
                                                        stop_condition=target.stop_condition.check
                                                        if target.stop_condition
                                                        else None,
                                                        exit_condition=target.exit_condition.check
                                                        if target.exit_condition
                                                        else None,
                                                    )
                                                    source.connections.set_wait_state(
                                                        target_node,
                                                        wait=target.wait_for_completion,
                                                    )
                                    
                                        # async def clone_agent[TDeps, TAgentResult](
                                        #     self,
                                        #     agent: AgentName | Agent[TDeps, TAgentResult],
                                        #     new_name: AgentName | None = None,
                                        #     *,
                                        #     system_prompts: list[str] | None = None,
                                        #     template_context: dict[str, Any] | None = None,
                                        # ) -> Agent[TDeps, TAgentResult]:
                                        #     """Create a copy of an agent.
                                    
                                        #     Args:
                                        #         agent: Agent instance or name to clone
                                        #         new_name: Optional name for the clone
                                        #         system_prompts: Optional different prompts
                                        #         template_context: Variables for template rendering
                                    
                                        #     Returns:
                                        #         The new agent instance
                                        #     """
                                        #     from llmling_agent.agent import Agent
                                    
                                        #     # Get original config
                                        #     if isinstance(agent, str):
                                        #         if agent not in self.manifest.agents:
                                        #             msg = f"Agent {agent} not found"
                                        #             raise KeyError(msg)
                                        #         config = self.manifest.agents[agent]
                                        #         original_agent: Agent[Any, Any] = self.get_agent(agent)
                                        #     else:
                                        #         config = agent.context.config  # type: ignore
                                        #         original_agent = agent
                                    
                                        #     # Create new config
                                        #     new_cfg = config.model_copy(deep=True)
                                    
                                        #     # Apply overrides
                                        #     if system_prompts:
                                        #         new_cfg.system_prompts = system_prompts
                                    
                                        #     # Handle template rendering
                                        #     if template_context:
                                        #         new_cfg.system_prompts = new_cfg.render_system_prompts(template_context)
                                    
                                        #     # Create new agent with same runtime
                                        #     new_agent = Agent[TDeps](
                                        #         runtime=original_agent.runtime,
                                        #         context=original_agent.context,
                                        #         output_type=original_agent._output_type,
                                        #         # output_type=original_agent.actual_type,
                                        #         provider=new_cfg.get_provider(),
                                        #         system_prompt=new_cfg.system_prompts,
                                        #         name=new_name or f"{config.name}_copy_{len(self.agents)}",
                                        #     )
                                        #     # Register in pool
                                        #     agent_name = new_agent.name
                                        #     self.manifest.agents[agent_name] = new_cfg
                                        #     self.register(agent_name, new_agent)
                                        #     return await self.exit_stack.enter_async_context(new_agent)
                                    
                                        @overload
                                        async def create_agent(
                                            self,
                                            name: AgentName,
                                            *,
                                            session: SessionIdType | SessionQuery = None,
                                            name_override: str | None = None,
                                        ) -> Agent[TPoolDeps]: ...
                                    
                                        @overload
                                        async def create_agent[TCustomDeps](
                                            self,
                                            name: AgentName,
                                            *,
                                            deps: TCustomDeps,
                                            session: SessionIdType | SessionQuery = None,
                                            name_override: str | None = None,
                                        ) -> Agent[TCustomDeps]: ...
                                    
                                        @overload
                                        async def create_agent[TResult](
                                            self,
                                            name: AgentName,
                                            *,
                                            return_type: type[TResult],
                                            session: SessionIdType | SessionQuery = None,
                                            name_override: str | None = None,
                                        ) -> Agent[TPoolDeps, TResult]: ...
                                    
                                        @overload
                                        async def create_agent[TCustomDeps, TResult](
                                            self,
                                            name: AgentName,
                                            *,
                                            deps: TCustomDeps,
                                            return_type: type[TResult],
                                            session: SessionIdType | SessionQuery = None,
                                            name_override: str | None = None,
                                        ) -> Agent[TCustomDeps, TResult]: ...
                                    
                                        async def create_agent(
                                            self,
                                            name: AgentName,
                                            *,
                                            deps: Any | None = None,
                                            return_type: Any | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                            name_override: str | None = None,
                                        ) -> Agent[Any, Any]:
                                            """Create a new agent instance from configuration.
                                    
                                            Args:
                                                name: Name of the agent configuration to use
                                                deps: Optional custom dependencies (overrides pool deps)
                                                return_type: Optional type for structured responses
                                                session: Optional session ID or query to recover conversation
                                                name_override: Optional different name for this instance
                                    
                                            Returns:
                                                New agent instance with the specified configuration
                                    
                                            Raises:
                                                KeyError: If agent configuration not found
                                                ValueError: If configuration is invalid
                                            """
                                            if name not in self.manifest.agents:
                                                msg = f"Agent configuration {name!r} not found"
                                                raise KeyError(msg)
                                    
                                            # Use Manifest.get_agent for proper initialization
                                            final_deps = deps if deps is not None else self.shared_deps
                                            agent = self.manifest.get_agent(name, deps=final_deps)
                                            # Override name if requested
                                            if name_override:
                                                agent.name = name_override
                                    
                                            # Set pool reference
                                            agent.context.pool = self
                                    
                                            # Handle session if provided
                                            if session:
                                                agent.conversation.load_history_from_database(session=session)
                                    
                                            # Initialize agent through exit stack
                                            agent = await self.exit_stack.enter_async_context(agent)
                                    
                                            # Override structured configuration if provided
                                            if return_type is not None:
                                                agent.set_output_type(return_type)
                                    
                                            return agent
                                    
                                        def setup_agent_workers(self, agent: Agent[Any, Any]):
                                            """Set up workers for an agent from configuration."""
                                            for worker_config in agent.context.config.workers:
                                                try:
                                                    worker = self.nodes[worker_config.name]
                                                    match worker_config:
                                                        case TeamWorkerConfig():
                                                            agent.register_worker(worker)
                                                        case AgentWorkerConfig():
                                                            agent.register_worker(
                                                                worker,
                                                                reset_history_on_run=worker_config.reset_history_on_run,
                                                                pass_message_history=worker_config.pass_message_history,
                                                            )
                                                except KeyError as e:
                                                    msg = f"Worker agent {worker_config.name!r} not found"
                                                    raise ValueError(msg) from e
                                    
                                        @overload
                                        def get_agent(
                                            self,
                                            agent: AgentName | Agent[Any, str],
                                            *,
                                            model_override: str | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                        ) -> Agent[TPoolDeps]: ...
                                    
                                        @overload
                                        def get_agent[TResult](
                                            self,
                                            agent: AgentName | Agent[Any, str],
                                            *,
                                            return_type: type[TResult],
                                            model_override: str | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                        ) -> Agent[TPoolDeps, TResult]: ...
                                    
                                        @overload
                                        def get_agent[TCustomDeps](
                                            self,
                                            agent: AgentName | Agent[Any, str],
                                            *,
                                            deps: TCustomDeps,
                                            model_override: str | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                        ) -> Agent[TCustomDeps]: ...
                                    
                                        @overload
                                        def get_agent[TCustomDeps, TResult](
                                            self,
                                            agent: AgentName | Agent[Any, str],
                                            *,
                                            deps: TCustomDeps,
                                            return_type: type[TResult],
                                            model_override: str | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                        ) -> Agent[TCustomDeps, TResult]: ...
                                    
                                        def get_agent(
                                            self,
                                            agent: AgentName | Agent[Any, str],
                                            *,
                                            deps: Any | None = None,
                                            return_type: Any | None = None,
                                            model_override: str | None = None,
                                            session: SessionIdType | SessionQuery = None,
                                        ) -> Agent[Any, Any]:
                                            """Get or configure an agent from the pool.
                                    
                                            This method provides flexible agent configuration with dependency injection:
                                            - Without deps: Agent uses pool's shared dependencies
                                            - With deps: Agent uses provided custom dependencies
                                    
                                            Args:
                                                agent: Either agent name or instance
                                                deps: Optional custom dependencies (overrides shared deps)
                                                return_type: Optional type for structured responses
                                                model_override: Optional model override
                                                session: Optional session ID or query to recover conversation
                                    
                                            Returns:
                                                Either:
                                                - Agent[TPoolDeps] when using pool's shared deps
                                                - Agent[TCustomDeps] when custom deps provided
                                    
                                            Raises:
                                                KeyError: If agent name not found
                                                ValueError: If configuration is invalid
                                            """
                                            from llmling_agent.agent import Agent
                                            from llmling_agent.agent.context import AgentContext
                                    
                                            # Get base agent
                                            base = agent if isinstance(agent, Agent) else self.agents[agent]
                                    
                                            # Setup context and dependencies
                                            if base.context is None:
                                                base.context = AgentContext[Any].create_default(base.name)
                                    
                                            # Use custom deps if provided, otherwise use shared deps
                                            base.context.data = deps if deps is not None else self.shared_deps
                                            base.context.pool = self
                                    
                                            # Apply overrides
                                            if model_override:
                                                base.set_model(model_override)
                                    
                                            if session:
                                                base.conversation.load_history_from_database(session=session)
                                    
                                            # Convert to structured if needed
                                            if return_type is not None:
                                                base.set_output_type(return_type)
                                    
                                            return base
                                    
                                        def list_nodes(self) -> list[str]:
                                            """List available agent names."""
                                            return list(self.list_items())
                                    
                                        def get_job(self, name: str) -> Job[Any, Any]:
                                            return self._tasks[name]
                                    
                                        def register_task(self, name: str, task: Job[Any, Any]):
                                            self._tasks.register(name, task)
                                    
                                        async def add_agent[TResult = str](
                                            self,
                                            name: AgentName,
                                            *,
                                            output_type: OutputSpec[TResult] | str | StructuredResponseConfig = str,  # type: ignore[assignment]
                                            **kwargs: Unpack[AgentKwargs],
                                        ) -> Agent[Any, TResult]:
                                            """Add a new permanent agent to the pool.
                                    
                                            Args:
                                                name: Name for the new agent
                                                output_type: Optional type for structured responses:
                                                    - None: Regular unstructured agent
                                                    - type: Python type for validation
                                                    - str: Name of response definition
                                                    - StructuredResponseConfig: Complete response definition
                                                **kwargs: Additional agent configuration
                                    
                                            Returns:
                                                An agent instance
                                            """
                                            from llmling_agent.agent import Agent
                                    
                                            agent: Agent[Any, TResult] = Agent(name=name, **kwargs, output_type=output_type)
                                            agent.tools.add_provider(self.mcp)
                                            agent = await self.exit_stack.enter_async_context(agent)
                                            self.register(name, agent)
                                            return agent
                                    
                                        def get_mermaid_diagram(self, include_details: bool = True) -> str:
                                            """Generate mermaid flowchart of all agents and their connections.
                                    
                                            Args:
                                                include_details: Whether to show connection details (types, queues, etc)
                                            """
                                            lines = ["flowchart LR"]
                                    
                                            # Add all agents as nodes
                                            for name in self.agents:
                                                lines.append(f"    {name}[{name}]")  # noqa: PERF401
                                    
                                            # Add all connections as edges
                                            for agent in self.agents.values():
                                                connections = agent.connections.get_connections()
                                                for talk in connections:
                                                    talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                                    source = talk.source.name
                                                    for target in talk.targets:
                                                        if include_details:
                                                            details: list[str] = []
                                                            details.append(talk.connection_type)
                                                            if talk.queued:
                                                                details.append(f"queued({talk.queue_strategy})")
                                                            if fn := talk.filter_condition:  # type: ignore
                                                                details.append(f"filter:{fn.__name__}")
                                                            if fn := talk.stop_condition:  # type: ignore
                                                                details.append(f"stop:{fn.__name__}")
                                                            if fn := talk.exit_condition:  # type: ignore
                                                                details.append(f"exit:{fn.__name__}")
                                    
                                                            label = f"|{' '.join(details)}|" if details else ""
                                                            lines.append(f"    {source}--{label}-->{target.name}")
                                                        else:
                                                            lines.append(f"    {source}-->{target.name}")
                                    
                                            return "\n".join(lines)
                                    

                                    agents property

                                    agents: dict[str, Agent[Any, Any]]
                                    

                                    Get agents dict (backward compatibility).

                                    event_nodes property

                                    event_nodes: dict[str, EventNode[Any]]
                                    

Get event nodes dict.

                                    node_events property

                                    node_events: DictEvents
                                    

                                    Get node events.

                                    nodes property

                                    nodes: dict[str, MessageNode[Any, Any]]
                                    

Get dict of all message nodes (agents and teams).

                                    teams property

                                    teams: dict[str, BaseTeam[Any, Any]]
                                    

Get teams dict.

                                    __aenter__ async

                                    __aenter__() -> Self
                                    

                                    Enter async context and initialize all agents.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    async def __aenter__(self) -> Self:
                                        """Enter async context and initialize all agents."""
                                        try:
                                            # Add MCP tool provider to all agents
                                            agents = list(self.agents.values())
                                            teams = list(self.teams.values())
                                            for agent in agents:
                                                agent.tools.add_provider(self.mcp)
                                    
                                            # Collect all components to initialize
                                            components: list[AbstractAsyncContextManager[Any]] = [
                                                self.mcp,
                                                *agents,
                                                *teams,
                                            ]
                                    
                                            # Add MCP server if configured
                                            if self.server:
                                                components.append(self.server)
                                            components.append(self.storage)
                                            # Initialize all components
                                            if self.parallel_load:
                                                await asyncio.gather(
                                                    *(self.exit_stack.enter_async_context(c) for c in components)
                                                )
                                            else:
                                                for component in components:
                                                    await self.exit_stack.enter_async_context(component)
                                    
                                        except Exception as e:
                                            await self.cleanup()
                                            msg = "Failed to initialize agent pool"
                                            logger.exception(msg, exc_info=e)
                                            raise RuntimeError(msg) from e
                                        return self
                                    
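The `parallel_load` branch above can be sketched with dummy async context managers entered through one `AsyncExitStack`, either concurrently via `asyncio.gather` or one by one. Component names here are made up:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

entered: list[str] = []


@asynccontextmanager
async def component(name: str):
    # Stand-in for an agent/team/server being initialized
    entered.append(name)
    yield name


async def init_all(names: list[str], parallel: bool) -> list[str]:
    """Enter all components through one exit stack, mirroring __aenter__:
    concurrently when parallel, sequentially otherwise."""
    async with AsyncExitStack() as stack:
        cms = [component(n) for n in names]
        if parallel:
            await asyncio.gather(*(stack.enter_async_context(c) for c in cms))
        else:
            for c in cms:
                await stack.enter_async_context(c)
        return list(entered)


result = asyncio.run(init_all(["mcp", "agent_a", "storage"], parallel=True))
```

Either way, everything registered on the stack is unwound together on exit, which is what lets the real `__aenter__` fall back to a single `cleanup()` on failure.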

                                    __aexit__ async

                                    __aexit__(
                                        exc_type: type[BaseException] | None,
                                        exc_val: BaseException | None,
                                        exc_tb: TracebackType | None,
                                    )
                                    

                                    Exit async context.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    async def __aexit__(
                                        self,
                                        exc_type: type[BaseException] | None,
                                        exc_val: BaseException | None,
                                        exc_tb: TracebackType | None,
                                    ):
                                        """Exit async context."""
                                        # Remove MCP tool provider from all agents
                                        for agent in self.agents.values():
                                            if self.mcp in agent.tools.providers:
                                                agent.tools.remove_provider(self.mcp)
                                        await self.cleanup()
                                    

                                    __init__

                                    __init__(
                                        manifest: JoinablePathLike | AgentsManifest | None = None,
                                        *,
                                        shared_deps: TPoolDeps | None = None,
                                        connect_nodes: bool = True,
                                        input_provider: InputProvider | None = None,
                                        parallel_load: bool = True,
                                        progress_handlers: list[ProgressCallback] | None = None,
                                    )
                                    

                                    Initialize agent pool with immediate agent creation.

Parameters:

- manifest (JoinablePathLike | AgentsManifest | None, default None): Agent configuration manifest
- shared_deps (TPoolDeps | None, default None): Dependencies to share across all nodes
- connect_nodes (bool, default True): Whether to set up forwarding connections
- input_provider (InputProvider | None, default None): Input provider for tool / step confirmations / HumanAgents
- parallel_load (bool, default True): Whether to load nodes in parallel (async)
- progress_handlers (list[ProgressCallback] | None, default None): List of progress handlers to notify about progress

Raises:

- ValueError: If manifest contains invalid node configurations
- RuntimeError: If node initialization fails

Source code in src/llmling_agent/delegation/pool.py, lines 72-155
                                    def __init__(
                                        self,
                                        manifest: JoinablePathLike | AgentsManifest | None = None,
                                        *,
                                        shared_deps: TPoolDeps | None = None,
                                        connect_nodes: bool = True,
                                        input_provider: InputProvider | None = None,
                                        parallel_load: bool = True,
                                        progress_handlers: list[ProgressCallback] | None = None,
                                    ):
                                        """Initialize agent pool with immediate agent creation.
                                    
                                        Args:
                                            manifest: Agent configuration manifest
                                            shared_deps: Dependencies to share across all nodes
                                            connect_nodes: Whether to set up forwarding connections
                                            input_provider: Input provider for tool / step confirmations / HumanAgents
                                            parallel_load: Whether to load nodes in parallel (async)
                                            progress_handlers: List of progress handlers to notify about progress
                                    
                                        Raises:
                                            ValueError: If manifest contains invalid node configurations
                                            RuntimeError: If node initialization fails
                                        """
                                        super().__init__()
                                        from llmling_agent.mcp_server.manager import MCPManager
                                        from llmling_agent.models.manifest import AgentsManifest
                                        from llmling_agent.storage import StorageManager
                                    
                                        match manifest:
                                            case None:
                                                self.manifest = AgentsManifest()
                                            case str() | os.PathLike() | UPath():
                                                self.manifest = AgentsManifest.from_file(manifest)
                                            case AgentsManifest():
                                                self.manifest = manifest
                                            case _:
                                                msg = f"Invalid config path: {manifest}"
                                                raise ValueError(msg)
                                        self.shared_deps = shared_deps
                                        self._input_provider = input_provider
                                        self.exit_stack = AsyncExitStack()
                                        self.parallel_load = parallel_load
                                        self.storage = StorageManager(self.manifest.storage)
                                        self.progress_handlers = MultiEventHandler[ProgressCallback](progress_handlers)
                                        self.connection_registry = ConnectionRegistry()
                                        servers = self.manifest.get_mcp_servers()
                                        self.mcp = MCPManager(name="pool_mcp", servers=servers, owner="pool")
                                        self._tasks = TaskRegistry()
                                        # Register tasks from manifest
                                        for name, task in self.manifest.jobs.items():
                                            self._tasks.register(name, task)
                                    
                                        # Initialize process manager for background processes
                                        from anyenv import ProcessManager
                                    
                                        self.process_manager = ProcessManager()
                                        self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                                        if self.manifest.pool_server and self.manifest.pool_server.enabled:
                                            from llmling_agent.resource_providers.pool import PoolResourceProvider
                                            from llmling_agent_mcp.server import LLMLingServer
                                    
                                            provider = PoolResourceProvider(
                                                self, zed_mode=self.manifest.pool_server.zed_mode
                                            )
                                            self.server: LLMLingServer | None = LLMLingServer(
                                                provider=provider,
                                                config=self.manifest.pool_server,
                                            )
                                            self.progress_handlers.add_handler(self.server.report_progress)
                                        else:
                                            self.server = None
                                        # Create requested agents immediately
                                        for name in self.manifest.agents:
                                            agent = self.manifest.get_agent(name, deps=shared_deps)
                                            self.register(name, agent)
                                    
                                        # Then set up worker relationships
                                        for agent in self.agents.values():
                                            self.setup_agent_workers(agent)
                                        self._create_teams()
                                        # Set up forwarding connections
                                        if connect_nodes:
                                            self._connect_nodes()
                                    

                                    add_agent async

                                    add_agent(
                                        name: AgentName,
                                        *,
                                        output_type: OutputSpec[TResult] | str | StructuredResponseConfig = str,
                                        **kwargs: Unpack[AgentKwargs],
                                    ) -> Agent[Any, TResult]
                                    

                                    Add a new permanent agent to the pool.

Parameters:

- name (AgentName, required): Name for the new agent
- output_type (OutputSpec[TResult] | str | StructuredResponseConfig, default str): Optional type for structured responses:
    - None: Regular unstructured agent
    - type: Python type for validation
    - str: Name of response definition
    - StructuredResponseConfig: Complete response definition
- **kwargs (Unpack[AgentKwargs]): Additional agent configuration

Returns:

- Agent[Any, TResult]: An agent instance

Source code in src/llmling_agent/delegation/pool.py, lines 799-826
                                    async def add_agent[TResult = str](
                                        self,
                                        name: AgentName,
                                        *,
                                        output_type: OutputSpec[TResult] | str | StructuredResponseConfig = str,  # type: ignore[assignment]
                                        **kwargs: Unpack[AgentKwargs],
                                    ) -> Agent[Any, TResult]:
                                        """Add a new permanent agent to the pool.
                                    
                                        Args:
                                            name: Name for the new agent
                                            output_type: Optional type for structured responses:
                                                - None: Regular unstructured agent
                                                - type: Python type for validation
                                                - str: Name of response definition
                                                - StructuredResponseConfig: Complete response definition
                                            **kwargs: Additional agent configuration
                                    
                                        Returns:
                                            An agent instance
                                        """
                                        from llmling_agent.agent import Agent
                                    
                                        agent: Agent[Any, TResult] = Agent(name=name, **kwargs, output_type=output_type)
                                        agent.tools.add_provider(self.mcp)
                                        agent = await self.exit_stack.enter_async_context(agent)
                                        self.register(name, agent)
                                        return agent
                                    

                                    cleanup async

                                    cleanup()
                                    

                                    Clean up all agents.

Source code in src/llmling_agent/delegation/pool.py, lines 206-211
                                    async def cleanup(self):
                                        """Clean up all agents."""
                                        # Clean up background processes first
                                        await self.process_manager.cleanup()
                                        await self.exit_stack.aclose()
                                        self.clear()
                                    

                                    create_agent async

                                    create_agent(
                                        name: AgentName,
                                        *,
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[TPoolDeps]
                                    
                                    create_agent(
                                        name: AgentName,
                                        *,
                                        deps: TCustomDeps,
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[TCustomDeps]
                                    
                                    create_agent(
                                        name: AgentName,
                                        *,
                                        return_type: type[TResult],
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[TPoolDeps, TResult]
                                    
                                    create_agent(
                                        name: AgentName,
                                        *,
                                        deps: TCustomDeps,
                                        return_type: type[TResult],
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[TCustomDeps, TResult]
                                    
                                    create_agent(
                                        name: AgentName,
                                        *,
                                        deps: Any | None = None,
                                        return_type: Any | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[Any, Any]
                                    

                                    Create a new agent instance from configuration.

Parameters:

- name (AgentName, required): Name of the agent configuration to use
- deps (Any | None, default None): Optional custom dependencies (overrides pool deps)
- return_type (Any | None, default None): Optional type for structured responses
- session (SessionIdType | SessionQuery, default None): Optional session ID or query to recover conversation
- name_override (str | None, default None): Optional different name for this instance

Returns:

- Agent[Any, Any]: New agent instance with the specified configuration

Raises:

- KeyError: If agent configuration not found
- ValueError: If configuration is invalid

Source code in src/llmling_agent/delegation/pool.py, lines 621-671
                                    async def create_agent(
                                        self,
                                        name: AgentName,
                                        *,
                                        deps: Any | None = None,
                                        return_type: Any | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                        name_override: str | None = None,
                                    ) -> Agent[Any, Any]:
                                        """Create a new agent instance from configuration.
                                    
                                        Args:
                                            name: Name of the agent configuration to use
                                            deps: Optional custom dependencies (overrides pool deps)
                                            return_type: Optional type for structured responses
                                            session: Optional session ID or query to recover conversation
                                            name_override: Optional different name for this instance
                                    
                                        Returns:
                                            New agent instance with the specified configuration
                                    
                                        Raises:
                                            KeyError: If agent configuration not found
                                            ValueError: If configuration is invalid
                                        """
                                        if name not in self.manifest.agents:
                                            msg = f"Agent configuration {name!r} not found"
                                            raise KeyError(msg)
                                    
                                        # Use Manifest.get_agent for proper initialization
                                        final_deps = deps if deps is not None else self.shared_deps
                                        agent = self.manifest.get_agent(name, deps=final_deps)
                                        # Override name if requested
                                        if name_override:
                                            agent.name = name_override
                                    
                                        # Set pool reference
                                        agent.context.pool = self
                                    
                                        # Handle session if provided
                                        if session:
                                            agent.conversation.load_history_from_database(session=session)
                                    
                                        # Initialize agent through exit stack
                                        agent = await self.exit_stack.enter_async_context(agent)
                                    
                                        # Override structured configuration if provided
                                        if return_type is not None:
                                            agent.set_output_type(return_type)
                                    
                                        return agent
                                    

                                    create_team

                                    create_team(agents: Sequence[str]) -> Team[TPoolDeps]
                                    
                                    create_team(
                                        agents: Sequence[MessageNode[TDeps, Any]],
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> Team[TDeps]
                                    
                                    create_team(
                                        agents: Sequence[AgentName | MessageNode[Any, Any]],
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> Team[Any]
                                    
                                    create_team(
                                        agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> Team[Any]
                                    

                                    Create a group from agent names or instances.

Parameters:

- agents (Sequence[AgentName | MessageNode[Any, Any]] | None, default None): List of agent names or instances (all if None)
- name (str | None, default None): Optional name for the team
- description (str | None, default None): Optional description for the team
- shared_prompt (str | None, default None): Optional prompt for all agents
- picker (Agent[Any, Any] | None, default None): Agent to use for picking agents
- num_picks (int | None, default None): Number of agents to pick
- pick_prompt (str | None, default None): Prompt to use for picking agents
Source code in src/llmling_agent/delegation/pool.py, lines 333-372
                                    def create_team(
                                        self,
                                        agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> Team[Any]:
                                        """Create a group from agent names or instances.
                                    
                                        Args:
                                            agents: List of agent names or instances (all if None)
                                            name: Optional name for the team
                                            description: Optional description for the team
                                            shared_prompt: Optional prompt for all agents
                                            picker: Agent to use for picking agents
                                            num_picks: Number of agents to pick
                                            pick_prompt: Prompt to use for picking agents
                                        """
                                        from llmling_agent.delegation.team import Team
                                    
                                        if agents is None:
                                            agents = list(self.agents.keys())
                                    
                                        resolved_agents = [self.get_agent(i) if isinstance(i, str) else i for i in agents]
                                        team = Team(
                                            name=name,
                                            description=description,
                                            agents=resolved_agents,
                                            shared_prompt=shared_prompt,
                                            picker=picker,
                                            num_picks=num_picks,
                                            pick_prompt=pick_prompt,
                                        )
                                        if name:
                                            self[name] = team
                                        return team
                                    

                                    create_team_run

                                    create_team_run(
                                        agents: Sequence[str],
                                        validator: MessageNode[Any, TResult] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> TeamRun[TPoolDeps, TResult]
                                    
                                    create_team_run(
                                        agents: Sequence[MessageNode[TDeps, Any]],
                                        validator: MessageNode[Any, TResult] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> TeamRun[TDeps, TResult]
                                    
                                    create_team_run(
                                        agents: Sequence[AgentName | MessageNode[Any, Any]],
                                        validator: MessageNode[Any, TResult] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> TeamRun[Any, TResult]
                                    
                                    create_team_run(
                                        agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                        validator: MessageNode[Any, TResult] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> TeamRun[Any, TResult]
                                    

Create a sequential TeamRun from a list of Agents.

                                    Parameters:

                                    Name Type Description Default
                                    agents Sequence[AgentName | MessageNode[Any, Any]] | None

                                    List of agent names or team/agent instances (all if None)

                                    None
                                    validator MessageNode[Any, TResult] | None

                                    Node to validate the results of the TeamRun

                                    None
                                    name str | None

                                    Optional name for the team

                                    None
                                    description str | None

                                    Optional description for the team

                                    None
                                    shared_prompt str | None

                                    Optional prompt for all agents

                                    None
                                    picker Agent[Any, Any] | None

                                    Agent to use for picking agents

                                    None
                                    num_picks int | None

                                    Number of agents to pick

                                    None
                                    pick_prompt str | None

                                    Prompt to use for picking agents

                                    None
                                    Source code in src/llmling_agent/delegation/pool.py
                                    def create_team_run[TResult](
                                        self,
                                        agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                        validator: MessageNode[Any, TResult] | None = None,
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ) -> TeamRun[Any, TResult]:
                                        """Create a a sequential TeamRun from a list of Agents.
                                    
                                        Args:
                                            agents: List of agent names or team/agent instances (all if None)
                                            validator: Node to validate the results of the TeamRun
                                            name: Optional name for the team
                                            description: Optional description for the team
                                            shared_prompt: Optional prompt for all agents
                                            picker: Agent to use for picking agents
                                            num_picks: Number of agents to pick
                                            pick_prompt: Prompt to use for picking agents
                                        """
                                        from llmling_agent.delegation.teamrun import TeamRun
                                    
                                        if agents is None:
                                            agents = list(self.agents.keys())
                                    
                                        # First resolve/configure agents
                                        resolved_agents: list[MessageNode[Any, Any]] = []
                                        for agent in agents:
                                            if isinstance(agent, str):
                                                agent = self.get_agent(agent)
                                            resolved_agents.append(agent)
                                        team = TeamRun(
                                            resolved_agents,
                                            name=name,
                                            description=description,
                                            validator=validator,
                                            shared_prompt=shared_prompt,
                                            picker=picker,
                                            num_picks=num_picks,
                                            pick_prompt=pick_prompt,
                                        )
                                        if name:
                                            self[name] = team
                                        return team
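
A TeamRun executes its members sequentially, with an optional validator node post-processing the final result. A minimal model of that pipeline, with agents reduced to plain callables (`run_sequential` is a hypothetical sketch, not the actual TeamRun execution code, which lives in `delegation/teamrun.py`):

```python
def run_sequential(agents, prompt, validator=None):
    """Simplified model of a TeamRun pipeline: each agent (modeled as
    a callable) receives the previous agent's output; an optional
    validator post-processes the final result.
    """
    result = prompt
    for agent in agents:
        result = agent(result)
    return validator(result) if validator is not None else result
```

The real `validator` parameter is a `MessageNode[Any, TResult]`, which is why `create_team_run` is generic over `TResult`: the validator's output type determines the run's result type.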
                                    

                                    get_agent

                                    get_agent(
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[TPoolDeps]
                                    
                                    get_agent(
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        return_type: type[TResult],
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[TPoolDeps, TResult]
                                    
                                    get_agent(
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        deps: TCustomDeps,
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[TCustomDeps]
                                    
                                    get_agent(
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        deps: TCustomDeps,
                                        return_type: type[TResult],
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[TCustomDeps, TResult]
                                    
                                    get_agent(
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        deps: Any | None = None,
                                        return_type: Any | None = None,
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[Any, Any]
                                    

                                    Get or configure an agent from the pool.

This method provides flexible agent configuration with dependency injection:

• Without deps: Agent uses pool's shared dependencies
• With deps: Agent uses provided custom dependencies

                                    Parameters:

                                    Name Type Description Default
                                    agent AgentName | Agent[Any, str]

                                    Either agent name or instance

                                    required
                                    deps Any | None

                                    Optional custom dependencies (overrides shared deps)

                                    None
                                    return_type Any | None

                                    Optional type for structured responses

                                    None
                                    model_override str | None

                                    Optional model override

                                    None
                                    session SessionIdType | SessionQuery

                                    Optional session ID or query to recover conversation

                                    None

                                    Returns:

Name Type Description
Either Agent[Any, Any]
• Agent[TPoolDeps] when using pool's shared deps
• Agent[TCustomDeps] when custom deps provided

                                    Raises:

                                    Type Description
                                    KeyError

                                    If agent name not found

                                    ValueError

                                    If configuration is invalid

                                    Source code in src/llmling_agent/delegation/pool.py
                                    def get_agent(
                                        self,
                                        agent: AgentName | Agent[Any, str],
                                        *,
                                        deps: Any | None = None,
                                        return_type: Any | None = None,
                                        model_override: str | None = None,
                                        session: SessionIdType | SessionQuery = None,
                                    ) -> Agent[Any, Any]:
                                        """Get or configure an agent from the pool.
                                    
                                        This method provides flexible agent configuration with dependency injection:
                                        - Without deps: Agent uses pool's shared dependencies
                                        - With deps: Agent uses provided custom dependencies
                                    
                                        Args:
                                            agent: Either agent name or instance
                                            deps: Optional custom dependencies (overrides shared deps)
                                            return_type: Optional type for structured responses
                                            model_override: Optional model override
                                            session: Optional session ID or query to recover conversation
                                    
                                        Returns:
                                            Either:
                                            - Agent[TPoolDeps] when using pool's shared deps
                                            - Agent[TCustomDeps] when custom deps provided
                                    
                                        Raises:
                                            KeyError: If agent name not found
                                            ValueError: If configuration is invalid
                                        """
                                        from llmling_agent.agent import Agent
                                        from llmling_agent.agent.context import AgentContext
                                    
                                        # Get base agent
                                        base = agent if isinstance(agent, Agent) else self.agents[agent]
                                    
                                        # Setup context and dependencies
                                        if base.context is None:
                                            base.context = AgentContext[Any].create_default(base.name)
                                    
                                        # Use custom deps if provided, otherwise use shared deps
                                        base.context.data = deps if deps is not None else self.shared_deps
                                        base.context.pool = self
                                    
                                        # Apply overrides
                                        if model_override:
                                            base.set_model(model_override)
                                    
                                        if session:
                                            base.conversation.load_history_from_database(session=session)
                                    
                                        # Convert to structured if needed
                                        if return_type is not None:
                                            base.set_output_type(return_type)
                                    
                                        return base
                                    

                                    get_mermaid_diagram

                                    get_mermaid_diagram(include_details: bool = True) -> str
                                    

                                    Generate mermaid flowchart of all agents and their connections.

                                    Parameters:

                                    Name Type Description Default
                                    include_details bool

Whether to show connection details (types, queues, etc.)

                                    True
                                    Source code in src/llmling_agent/delegation/pool.py
                                    def get_mermaid_diagram(self, include_details: bool = True) -> str:
                                        """Generate mermaid flowchart of all agents and their connections.
                                    
                                        Args:
        include_details: Whether to show connection details (types, queues, etc.)
                                        """
                                        lines = ["flowchart LR"]
                                    
                                        # Add all agents as nodes
                                        for name in self.agents:
                                            lines.append(f"    {name}[{name}]")  # noqa: PERF401
                                    
                                        # Add all connections as edges
                                        for agent in self.agents.values():
                                            connections = agent.connections.get_connections()
                                            for talk in connections:
                                                talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                                source = talk.source.name
                                                for target in talk.targets:
                                                    if include_details:
                                                        details: list[str] = []
                                                        details.append(talk.connection_type)
                                                        if talk.queued:
                                                            details.append(f"queued({talk.queue_strategy})")
                                                        if fn := talk.filter_condition:  # type: ignore
                                                            details.append(f"filter:{fn.__name__}")
                                                        if fn := talk.stop_condition:  # type: ignore
                                                            details.append(f"stop:{fn.__name__}")
                                                        if fn := talk.exit_condition:  # type: ignore
                                                            details.append(f"exit:{fn.__name__}")
                                    
                                                        label = f"|{' '.join(details)}|" if details else ""
                                                        lines.append(f"    {source}--{label}-->{target.name}")
                                                    else:
                                                        lines.append(f"    {source}-->{target.name}")
                                    
                                        return "\n".join(lines)
                                    

                                    list_nodes

                                    list_nodes() -> list[str]
                                    

                                    List available agent names.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    def list_nodes(self) -> list[str]:
                                        """List available agent names."""
                                        return list(self.list_items())
                                    

                                    run_event_loop async

                                    run_event_loop()
                                    

                                    Run pool in event-watching mode until interrupted.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    async def run_event_loop(self):
                                        """Run pool in event-watching mode until interrupted."""
                                        print("Starting event watch mode...")
                                        print("Active nodes: ", ", ".join(self.list_nodes()))
                                        print("Press Ctrl+C to stop")
                                    
                                        with suppress(KeyboardInterrupt):
                                            while True:
                                                await asyncio.sleep(1)
                                    

                                    setup_agent_workers

                                    setup_agent_workers(agent: Agent[Any, Any])
                                    

                                    Set up workers for an agent from configuration.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    def setup_agent_workers(self, agent: Agent[Any, Any]):
                                        """Set up workers for an agent from configuration."""
                                        for worker_config in agent.context.config.workers:
                                            try:
                                                worker = self.nodes[worker_config.name]
                                                match worker_config:
                                                    case TeamWorkerConfig():
                                                        agent.register_worker(worker)
                                                    case AgentWorkerConfig():
                                                        agent.register_worker(
                                                            worker,
                                                            reset_history_on_run=worker_config.reset_history_on_run,
                                                            pass_message_history=worker_config.pass_message_history,
                                                        )
                                            except KeyError as e:
                                                msg = f"Worker agent {worker_config.name!r} not found"
                                                raise ValueError(msg) from e
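
                                    The worker entries consumed above come from each agent's `workers` list in the manifest. A hypothetical fragment (agent and team names are invented; the `reset_history_on_run` and `pass_message_history` fields match the options forwarded to `register_worker` for agent workers, and are not used for team workers):

```yaml
agents:
  planner:
    workers:
      - name: researcher            # resolved as an AgentWorkerConfig
        type: agent
        pass_message_history: true
        reset_history_on_run: false
      - name: review_team           # resolved as a TeamWorkerConfig
        type: team
```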
                                    

                                    track_message_flow async

                                    track_message_flow() -> AsyncIterator[MessageFlowTracker]
                                    

                                    Track message flow during a context.

                                    Source code in src/llmling_agent/delegation/pool.py
                                    @asynccontextmanager
                                    async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                                        """Track message flow during a context."""
                                        tracker = MessageFlowTracker()
                                        self.connection_registry.message_flow.connect(tracker.track)
                                        try:
                                            yield tracker
                                        finally:
                                            self.connection_registry.message_flow.disconnect(tracker.track)
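
                                    The connect / yield / disconnect shape above guarantees the tracker is detached even if the body of the `async with` block raises. A self-contained sketch of the same pattern, using stand-in `Signal` and tracker classes (hypothetical; in the real pool the signal lives on `connection_registry.message_flow`):

```python
import asyncio
from contextlib import asynccontextmanager


class Signal:
    """Minimal stand-in for a connect/disconnect signal."""

    def __init__(self):
        self._handlers = []

    def connect(self, handler):
        self._handlers.append(handler)

    def disconnect(self, handler):
        self._handlers.remove(handler)

    def emit(self, message):
        for handler in self._handlers:
            handler(message)


class FlowTracker:
    """Records every message it sees (stand-in for MessageFlowTracker)."""

    def __init__(self):
        self.events = []

    def track(self, message):
        self.events.append(message)


message_flow = Signal()


@asynccontextmanager
async def track_flow():
    # Same shape as AgentPool.track_message_flow: connect on entry,
    # always disconnect on exit, even if the body raises.
    tracker = FlowTracker()
    message_flow.connect(tracker.track)
    try:
        yield tracker
    finally:
        message_flow.disconnect(tracker.track)


async def main():
    async with track_flow() as tracker:
        message_flow.emit("hello")
    message_flow.emit("ignored")  # after exit, no longer recorded
    return tracker.events


events = asyncio.run(main())
print(events)  # ['hello']
```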
                                    

                                    AgentsManifest

                                    Bases: Schema

                                    Complete agent configuration manifest defining all available agents.

                                    This is the root configuration that:

                                    - Defines available response types (both inline and imported)
                                    - Configures all agent instances and their settings
                                    - Sets up custom role definitions and capabilities
                                    - Manages environment configurations

                                    A single manifest can define multiple agents that can work independently or collaborate through the orchestrator.
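
                                    A minimal sketch of such a manifest (names are hypothetical; the string shorthand for `workers` and the URI shorthand for `resources` follow the field docstrings in the source below):

```yaml
agents:
  writer:
    workers:
      - researcher        # string shorthand, normalized to a worker config
  researcher: {}
resources:
  docs: "file://./docs"   # URI shorthand for a resource config
```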

                                    Source code in src/llmling_agent/models/manifest.py
                                    class AgentsManifest(Schema):
                                        """Complete agent configuration manifest defining all available agents.
                                    
                                        This is the root configuration that:
                                        - Defines available response types (both inline and imported)
                                        - Configures all agent instances and their settings
                                        - Sets up custom role definitions and capabilities
                                        - Manages environment configurations
                                    
                                        A single manifest can define multiple agents that can work independently
                                        or collaborate through the orchestrator.
                                        """
                                    
                                        INHERIT: str | list[str] | None = None
                                        """Inheritance references."""
                                    
                                        resources: dict[str, ResourceConfig | str] = Field(default_factory=dict)
                                        """Resource configurations defining available filesystems.
                                    
                                        Supports both full config and URI shorthand:
                                            resources:
                                              docs: "file://./docs"  # shorthand
                                              data:  # full config
                                                type: "source"
                                                uri: "s3://bucket/data"
                                                cached: true
                                        """
                                    
                                        agents: dict[str, AgentConfig] = Field(default_factory=dict)
                                        """Mapping of agent IDs to their configurations"""
                                    
                                        teams: dict[str, TeamConfig] = Field(default_factory=dict)
                                        """Mapping of team IDs to their configurations"""
                                    
                                        storage: StorageConfig = Field(default_factory=StorageConfig)
                                        """Storage provider configuration."""
                                    
                                        observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                                        """Observability provider configuration."""
                                    
                                        conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                                        """Document conversion configuration."""
                                    
                                        responses: dict[str, StructuredResponseConfig] = Field(default_factory=dict)
                                        """Mapping of response names to their definitions"""
                                    
                                        jobs: dict[str, Job] = Field(default_factory=dict)
                                        """Pre-defined jobs, ready to be used by nodes."""
                                    
                                        mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                        """List of MCP server configurations:
                                    
                                        These MCP servers are used to provide tools and other resources to the nodes.
                                        """
                                        pool_server: MCPPoolServerConfig = Field(default_factory=MCPPoolServerConfig)
                                        """Pool server configuration.
                                    
                                        This MCP server configuration is used for the pool MCP server,
                                        which exposes pool functionality to other applications / clients."""
                                    
                                        prompts: PromptLibraryConfig = Field(default_factory=PromptLibraryConfig)
                                    
                                        model_config = ConfigDict(use_attribute_docstrings=True, extra="forbid")
                                    
                                        @model_validator(mode="before")
                                        @classmethod
                                        def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                            """Convert string workers to appropriate WorkerConfig for all agents."""
                                            teams = data.get("teams", {})
                                            agents = data.get("agents", {})
                                    
                                            # Process workers for all agents that have them
                                            for agent_name, agent_config in agents.items():
                                                if isinstance(agent_config, dict):
                                                    workers = agent_config.get("workers", [])
                                                else:
                                                    workers = agent_config.workers
                                    
                                                if workers:
                                                    normalized: list[BaseWorkerConfig] = []
                                    
                                                    for worker in workers:
                                                        match worker:
                                                            case str() as name if name in teams:
                                                                # Determine type based on presence in teams/agents
                                                                normalized.append(TeamWorkerConfig(name=name))
                                                            case str() as name if name in agents:
                                                                normalized.append(AgentWorkerConfig(name=name))
                                                            case str():  # Default to agent if type can't be determined
                                                                normalized.append(AgentWorkerConfig(name=name))
                                    
                                                            case dict() as config:
                                                                # If type is explicitly specified, use it
                                                                if worker_type := config.get("type"):
                                                                    match worker_type:
                                                                        case "team":
                                                                            normalized.append(TeamWorkerConfig(**config))
                                                                        case "agent":
                                                                            normalized.append(AgentWorkerConfig(**config))
                                                                        case _:
                                                                            msg = f"Invalid worker type: {worker_type}"
                                                                            raise ValueError(msg)
                                                                else:
                                                                    # Determine type based on worker name
                                                                    worker_name = config.get("name")
                                                                    if not worker_name:
                                                                        msg = "Worker config missing name"
                                                                        raise ValueError(msg)
                                    
                                                                    if worker_name in teams:
                                                                        normalized.append(TeamWorkerConfig(**config))
                                                                    else:
                                                                        normalized.append(AgentWorkerConfig(**config))
                                    
                                                            case BaseWorkerConfig():  # Already normalized
                                                                normalized.append(worker)
                                    
                                                            case _:
                                                                msg = f"Invalid worker configuration: {worker}"
                                                                raise ValueError(msg)
                                    
                                                    if isinstance(agent_config, dict):
                                                        agent_config["workers"] = normalized
                                                    else:
                                                        # Need to create a new dict with updated workers
                                                        agent_dict = agent_config.model_dump()
                                                        agent_dict["workers"] = normalized
                                                        agents[agent_name] = agent_dict
                                    
                                            return data
                                    
                                        @cached_property
                                        def resource_registry(self) -> ResourceRegistry:
                                            """Get registry with all configured resources."""
                                            registry = ResourceRegistry()
                                            for name, config in self.resources.items():
                                                if isinstance(config, str):
                                                    # Convert URI shorthand to SourceResourceConfig
                                                    config = SourceResourceConfig(uri=config)
                                                registry.register_from_config(name, config)
                                            return registry
                                    
                                        def clone_agent_config(
                                            self,
                                            name: str,
                                            new_name: str | None = None,
                                            *,
                                            template_context: dict[str, Any] | None = None,
                                            **overrides: Any,
                                        ) -> str:
                                            """Create a copy of an agent configuration.
                                    
                                            Args:
                                                name: Name of agent to clone
                                                new_name: Optional new name (auto-generated if None)
                                                template_context: Variables for template rendering
                                                **overrides: Configuration overrides for the clone
                                    
                                            Returns:
                                                Name of the new agent
                                    
                                            Raises:
                                                KeyError: If original agent not found
                                                ValueError: If new name already exists or if overrides invalid
                                            """
                                            if name not in self.agents:
                                                msg = f"Agent {name} not found"
                                                raise KeyError(msg)
                                    
                                            actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                            if actual_name in self.agents:
                                                msg = f"Agent {actual_name} already exists"
                                                raise ValueError(msg)
                                    
                                            # Deep copy the configuration
                                            config = self.agents[name].model_copy(deep=True)
                                    
                                            # Apply overrides
                                            for key, value in overrides.items():
                                                if not hasattr(config, key):
                                                    msg = f"Invalid override: {key}"
                                                    raise ValueError(msg)
                                                setattr(config, key, value)
                                    
                                            # Handle template rendering if context provided
                                            if template_context and "name" in template_context and "name" not in overrides:
                                                config.name = template_context["name"]
                                    
                                            # Note: system_prompts will be rendered during agent creation, not here
                                            # config.system_prompts remains as PromptConfig objects
                                    
                                            self.agents[actual_name] = config
                                            return actual_name
                                    
                                        @model_validator(mode="before")
                                        @classmethod
                                        def resolve_inheritance(cls, data: dict) -> dict:
                                            """Resolve agent inheritance chains."""
                                            nodes = data.get("agents", {})
                                            resolved: dict[str, dict] = {}
                                            seen: set[str] = set()
                                    
                                            def resolve_node(name: str) -> dict:
                                                if name in resolved:
                                                    return resolved[name]
                                    
                                                if name in seen:
                                                    msg = f"Circular inheritance detected: {name}"
                                                    raise ValueError(msg)
                                    
                                                seen.add(name)
                                                config = (
                                                    nodes[name].model_copy()
                                                    if hasattr(nodes[name], "model_copy")
                                                    else nodes[name].copy()
                                                )
                                                inherit = (
                                                    config.get("inherits") if isinstance(config, dict) else config.inherits
                                                )
                                                if inherit:
                                                    if inherit not in nodes:
                                                        msg = f"Parent agent {inherit} not found"
                                                        raise ValueError(msg)
                                    
                                                    # Get resolved parent config
                                                    parent = resolve_node(inherit)
                                                    # Merge parent with child (child overrides parent)
                                                    merged = parent.copy()
                                                    merged.update(config)
                                                    config = merged
                                    
                                                seen.remove(name)
                                                resolved[name] = config
                                                return config
                                    
                                            # Resolve all nodes
                                            for name in nodes:
                                                resolved[name] = resolve_node(name)
                                    
                                            # Update nodes with resolved configs
                                            data["agents"] = resolved
                                            return data
                                    
                                        @property
                                        def node_names(self) -> list[str]:
                                            """Get list of all agent and team names."""
                                            return list(self.agents.keys()) + list(self.teams.keys())
                                    
                                        @property
                                        def nodes(self) -> dict[str, Any]:
                                            """Get all agent and team configurations."""
                                            return {**self.agents, **self.teams}
                                    
                                        def get_mcp_servers(self) -> list[MCPServerConfig]:
                                            """Get processed MCP server configurations.
                                    
                                            Converts string entries to appropriate MCP server configs based on heuristics:
                                            - URLs ending with "/sse" -> SSE server
                                            - URLs starting with http(s):// -> HTTP server
                                            - Everything else -> stdio command
                                    
                                            Returns:
                                                List of MCPServerConfig instances
                                    
                                            Raises:
                                                ValueError: If string entry is empty
                                            """
                                            return [
                                                BaseMCPServerConfig.from_string(cfg) if isinstance(cfg, str) else cfg
                                                for cfg in self.mcp_servers
                                            ]
                                    
                                        @cached_property
                                        def prompt_manager(self) -> PromptManager:
                                            """Get prompt manager for this manifest."""
                                            from llmling_agent.prompts.manager import PromptManager
                                    
                                            return PromptManager(self.prompts)
                                    
                                        # @model_validator(mode="after")
                                        # def validate_response_types(self) -> AgentsManifest:
                                        #     """Ensure all agent output_types exist in responses or are inline."""
                                        #     for agent_id, agent in self.agents.items():
                                        #         if (
                                        #             isinstance(agent.output_type, str)
                                        #             and agent.output_type not in self.responses
                                        #         ):
                                        #             msg = f"'{agent.output_type=}' for '{agent_id=}' not found in responses"
                                        #             raise ValueError(msg)
                                        #     return self
                                    
                                        def get_agent[TAgentDeps](
                                            self, name: str, deps: TAgentDeps | None = None
                                        ) -> Agent[TAgentDeps, Any]:
                                            # TODO: Make this method async to support async function prompts
                                            from llmling import RuntimeConfig
                                    
                                            from llmling_agent import Agent, AgentContext
                                    
                                            config = self.agents[name]
                                            cfg = config.get_config()
                                            runtime = RuntimeConfig.from_config(cfg)  # Create runtime without async context
                                            # Create context with config path and capabilities
                                            context = AgentContext[TAgentDeps](
                                                node_name=name,
                                                data=deps,
                                                definition=self,
                                                config=config,
                                                runtime=runtime,
                                                # pool=self,
                                                # confirmation_callback=confirmation_callback,
                                            )
                                    
                                            # Resolve system prompts with new PromptConfig types
                                            from llmling_agent_config.system_prompts import (
                                                FilePromptConfig,
                                                FunctionPromptConfig,
                                                LibraryPromptConfig,
                                                StaticPromptConfig,
                                            )
                                    
                                            sys_prompts: list[str] = []
                                            for prompt in config.system_prompts:
                                                match prompt:
                                                    case (str() as sys_prompt) | StaticPromptConfig(content=sys_prompt):
                                                        sys_prompts.append(sys_prompt)
                                                    case FilePromptConfig(path=path, variables=variables):
                                                        template_path = Path(path)  # Load template from file
                                                        if not template_path.is_absolute() and config.config_file_path:
                                                            template_path = Path(config.config_file_path).parent / path
                                    
                                                        template_content = template_path.read_text("utf-8")
                                                        if variables:  # Apply variables if any
                                                            from jinja2 import Template
                                    
                                                            template = Template(template_content)
                                                            content = template.render(**variables)
                                                        else:
                                                            content = template_content
                                                        sys_prompts.append(content)
                                                    case LibraryPromptConfig(reference=reference):
                                                        try:  # Load from library
                                                            content = self.prompt_manager.get.sync(reference)
                                                            sys_prompts.append(content)
                                                        except Exception as e:
                                                            msg = (
                                                                f"Failed to load library prompt {reference!r} "
                                                                f"for agent {name}"
                                                            )
                                                            logger.exception(msg)
                                                            raise ValueError(msg) from e
                                                    case FunctionPromptConfig(function=function, arguments=arguments):
                                                        content = function(**arguments)  # Call function to get prompt content
                                                        sys_prompts.append(content)
                                            # Create agent with runtime and context
                                            return Agent(
                                                runtime=runtime,
                                                context=context,
                                                provider=config.get_provider(),
                                                system_prompt=sys_prompts,
                                                name=name,
                                                description=config.description,
                                                retries=config.retries,
                                                session=config.get_session_config(),
                                                output_retries=config.output_retries,
                                                end_strategy=config.end_strategy,
                                                debug=config.debug,
                                                output_type=self.get_output_type(name) or str,
                                                # name=config.name or name,
                                            )
                                    
                                        def get_used_providers(self) -> set[str]:
                                            """Get all providers configured in this manifest."""
                                            providers = set[str]()
                                    
                                            for agent_config in self.agents.values():
                                                match agent_config.provider:
                                                    case ("pydantic_ai" as typ) | BaseProviderConfig(type=typ):
                                                        providers.add(typ)
                                            return providers
                                    
                                        @classmethod
                                        def from_file(cls, path: JoinablePathLike) -> Self:
                                            """Load agent configuration from YAML file.
                                    
                                            Args:
                                                path: Path to the configuration file
                                    
                                            Returns:
                                                Loaded agent definition
                                    
                                            Raises:
                                                ValueError: If loading fails
                                            """
                                            import yamling
                                    
                                            try:
                                                data = yamling.load_yaml_file(path, resolve_inherit=True)
                                                agent_def = cls.model_validate(data)
                                                # Update all agents with the config file path and ensure names
                                                agents = {
                                                    name: config.model_copy(update={"config_file_path": str(path)})
                                                    for name, config in agent_def.agents.items()
                                                }
                                                return agent_def.model_copy(update={"agents": agents})
                                            except Exception as exc:
                                                msg = f"Failed to load agent config from {path}"
                                                raise ValueError(msg) from exc
                                    
                                        @cached_property
                                        def pool(self) -> AgentPool:
                                            """Create an agent pool from this manifest.
                                    
                                            Returns:
                                                Configured agent pool
                                            """
                                            from llmling_agent import AgentPool
                                    
                                            return AgentPool(manifest=self)
                                    
                                        def get_output_type(self, agent_name: str) -> type[Any] | None:
                                            """Get the resolved result type for an agent.
                                    
                                            Returns None if no result type is configured.
                                            """
                                            agent_config = self.agents[agent_name]
                                            if not agent_config.output_type:
                                                return None
                                            logger.debug("Building response model for %r", agent_config.output_type)
                                            if isinstance(agent_config.output_type, str):
                                                response_def = self.responses[agent_config.output_type]
                                                return response_def.response_schema.get_schema()  # type: ignore
                                            return agent_config.output_type.response_schema.get_schema()  # type: ignore
                                    

                                    INHERIT class-attribute instance-attribute

                                    INHERIT: str | list[str] | None = None
                                    

                                    Inheritance references.
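The manifest file itself can inherit from other YAML files: `from_file` loads with `resolve_inherit=True`, so a top-level `INHERIT` key is merged before validation. A hedged sketch (the filenames are illustrative):

```yaml
# child manifest; paths are hypothetical
INHERIT: shared/base.yml  # a single parent, or a list of parents
agents:
  writer:
    model: gpt-4o  # overrides whatever base.yml defines for this key
```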

                                    agents class-attribute instance-attribute

                                    agents: dict[str, AgentConfig] = Field(default_factory=dict)
                                    

                                    Mapping of agent IDs to their configurations
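Individual agent configs can also chain via an `inherits` key, resolved by the `resolve_inheritance` validator shown in the source above: the parent config is merged first, child keys win, and circular chains raise `ValueError`. A minimal sketch (field names are illustrative):

```yaml
agents:
  base:
    model: gpt-4o-mini
    retries: 2
  writer:
    inherits: base  # merged with base; keys below override it
    model: gpt-4o
```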

                                    conversion class-attribute instance-attribute

                                    conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                                    

                                    Document conversion configuration.

                                    jobs class-attribute instance-attribute

                                    jobs: dict[str, Job] = Field(default_factory=dict)
                                    

                                    Pre-defined jobs, ready to be used by nodes.

                                    mcp_servers class-attribute instance-attribute

                                    mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                    

                                    List of MCP server configurations:

                                    These MCP servers are used to provide tools and other resources to the nodes.
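Plain strings in this list are converted by `get_mcp_servers` via `BaseMCPServerConfig.from_string`, following the heuristics documented on that method: URLs ending in `/sse` become SSE servers, other `http(s)://` URLs become HTTP servers, and anything else is treated as a stdio command. A standalone sketch of that heuristic, not the library's actual implementation:

```python
def classify_mcp_entry(entry: str) -> str:
    """Illustrative mirror of the documented string heuristics."""
    if not entry.strip():
        raise ValueError("Empty MCP server entry")
    if entry.startswith(("http://", "https://")):
        # URLs ending with "/sse" -> SSE server, otherwise HTTP server
        return "sse" if entry.endswith("/sse") else "http"
    # everything else -> stdio command line
    return "stdio"

print(classify_mcp_entry("https://example.com/mcp/sse"))  # sse
print(classify_mcp_entry("https://example.com/mcp"))      # http
print(classify_mcp_entry("uvx some-mcp-server"))          # stdio
```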

                                    node_names property

                                    node_names: list[str]
                                    

                                    Get list of all agent and team names.

                                    nodes property

                                    nodes: dict[str, Any]
                                    

                                    Get all agent and team configurations.

                                    observability class-attribute instance-attribute

                                    observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                                    

                                    Observability provider configuration.

                                    pool cached property

                                    pool: AgentPool
                                    

                                    Create an agent pool from this manifest.

Returns:

| Type | Description |
|------|-------------|
| `AgentPool` | Configured agent pool |

                                    pool_server class-attribute instance-attribute

                                    pool_server: MCPPoolServerConfig = Field(default_factory=MCPPoolServerConfig)
                                    

                                    Pool server configuration.

                                    This MCP server configuration is used for the pool MCP server, which exposes pool functionality to other applications / clients.

                                    prompt_manager cached property

                                    prompt_manager: PromptManager
                                    

                                    Get prompt manager for this manifest.

                                    resource_registry cached property

                                    resource_registry: ResourceRegistry
                                    

                                    Get registry with all configured resources.

                                    resources class-attribute instance-attribute

                                    resources: dict[str, ResourceConfig | str] = Field(default_factory=dict)
                                    

                                    Resource configurations defining available filesystems.

                                    Supports both full config and URI shorthand

resources:
  docs: "file://./docs"  # shorthand
  data:  # full config
    type: "source"
    uri: "s3://bucket/data"
    cached: true

                                    responses class-attribute instance-attribute

                                    responses: dict[str, StructuredResponseConfig] = Field(default_factory=dict)
                                    

                                    Mapping of response names to their definitions

                                    storage class-attribute instance-attribute

                                    storage: StorageConfig = Field(default_factory=StorageConfig)
                                    

                                    Storage provider configuration.

                                    teams class-attribute instance-attribute

                                    teams: dict[str, TeamConfig] = Field(default_factory=dict)
                                    

                                    Mapping of team IDs to their configurations

                                    clone_agent_config

                                    clone_agent_config(
                                        name: str,
                                        new_name: str | None = None,
                                        *,
                                        template_context: dict[str, Any] | None = None,
                                        **overrides: Any,
                                    ) -> str
                                    

                                    Create a copy of an agent configuration.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str` | Name of agent to clone | *required* |
| `new_name` | `str \| None` | Optional new name (auto-generated if None) | `None` |
| `template_context` | `dict[str, Any] \| None` | Variables for template rendering | `None` |
| `**overrides` | `Any` | Configuration overrides for the clone | `{}` |

Returns:

| Type | Description |
|------|-------------|
| `str` | Name of the new agent |

Raises:

| Type | Description |
|------|-------------|
| `KeyError` | If original agent not found |
| `ValueError` | If new name already exists or if overrides invalid |

                                    Source code in src/llmling_agent/models/manifest.py
                                    def clone_agent_config(
                                        self,
                                        name: str,
                                        new_name: str | None = None,
                                        *,
                                        template_context: dict[str, Any] | None = None,
                                        **overrides: Any,
                                    ) -> str:
                                        """Create a copy of an agent configuration.
                                    
                                        Args:
                                            name: Name of agent to clone
                                            new_name: Optional new name (auto-generated if None)
                                            template_context: Variables for template rendering
                                            **overrides: Configuration overrides for the clone
                                    
                                        Returns:
                                            Name of the new agent
                                    
                                        Raises:
                                            KeyError: If original agent not found
                                            ValueError: If new name already exists or if overrides invalid
                                        """
                                        if name not in self.agents:
                                            msg = f"Agent {name} not found"
                                            raise KeyError(msg)
                                    
                                        actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                        if actual_name in self.agents:
                                            msg = f"Agent {actual_name} already exists"
                                            raise ValueError(msg)
                                    
                                        # Deep copy the configuration
                                        config = self.agents[name].model_copy(deep=True)
                                    
                                        # Apply overrides
                                        for key, value in overrides.items():
                                            if not hasattr(config, key):
                                                msg = f"Invalid override: {key}"
                                                raise ValueError(msg)
                                            setattr(config, key, value)
                                    
                                        # Handle template rendering if context provided
                                        if template_context and "name" in template_context and "name" not in overrides:
                                            config.name = template_context["name"]
                                    
                                        # Note: system_prompts will be rendered during agent creation, not here
                                        # config.system_prompts remains as PromptConfig objects
                                    
                                        self.agents[actual_name] = config
                                        return actual_name
                                    

                                    from_file classmethod

                                    from_file(path: JoinablePathLike) -> Self
                                    

                                    Load agent configuration from YAML file.

                                    Parameters:

                                    Name Type Description Default
                                    path JoinablePathLike

                                    Path to the configuration file

                                    required

                                    Returns:

                                    Type Description
                                    Self

                                    Loaded agent definition

                                    Raises:

                                    Type Description
                                    ValueError

                                    If loading fails

                                    Source code in src/llmling_agent/models/manifest.py
                                    @classmethod
                                    def from_file(cls, path: JoinablePathLike) -> Self:
                                        """Load agent configuration from YAML file.
                                    
                                        Args:
                                            path: Path to the configuration file
                                    
                                        Returns:
                                            Loaded agent definition
                                    
                                        Raises:
                                            ValueError: If loading fails
                                        """
                                        import yamling
                                    
                                        try:
                                            data = yamling.load_yaml_file(path, resolve_inherit=True)
                                            agent_def = cls.model_validate(data)
        # Update all agents with the config file path
                                            agents = {
                                                name: config.model_copy(update={"config_file_path": str(path)})
                                                for name, config in agent_def.agents.items()
                                            }
                                            return agent_def.model_copy(update={"agents": agents})
                                        except Exception as exc:
                                            msg = f"Failed to load agent config from {path}"
                                            raise ValueError(msg) from exc
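
A minimal manifest file one might pass to `from_file` could look like this. The `inherits` and `workers` keys correspond to the `resolve_inheritance` and `normalize_workers` validators documented on this page; the remaining field names are illustrative:

```yaml
# agents.yml -- hypothetical manifest; only inherits/workers are confirmed
# by the validators shown on this page
agents:
  base:
    model: openai:gpt-4o-mini
  summarizer:
    inherits: base        # merged with base by resolve_inheritance
    workers:
      - base              # coerced to a WorkerConfig by normalize_workers
```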
                                    

                                    get_mcp_servers

                                    get_mcp_servers() -> list[MCPServerConfig]
                                    

                                    Get processed MCP server configurations.

                                    Converts string entries to appropriate MCP server configs based on heuristics:

                                    - URLs ending with "/sse" -> SSE server
                                    - URLs starting with http(s):// -> HTTP server
                                    - Everything else -> stdio command

                                    Returns:

                                    Type Description
                                    list[MCPServerConfig]

                                    List of MCPServerConfig instances

                                    Raises:

                                    Type Description
                                    ValueError

                                    If string entry is empty

                                    Source code in src/llmling_agent/models/manifest.py
                                    def get_mcp_servers(self) -> list[MCPServerConfig]:
                                        """Get processed MCP server configurations.
                                    
                                        Converts string entries to appropriate MCP server configs based on heuristics:
                                        - URLs ending with "/sse" -> SSE server
                                        - URLs starting with http(s):// -> HTTP server
                                        - Everything else -> stdio command
                                    
                                        Returns:
                                            List of MCPServerConfig instances
                                    
                                        Raises:
                                            ValueError: If string entry is empty
                                        """
                                        return [
                                            BaseMCPServerConfig.from_string(cfg) if isinstance(cfg, str) else cfg
                                            for cfg in self.mcp_servers
                                        ]
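
The string heuristics described above can be sketched as a standalone classifier (this mirrors the documented rules, not the library's `BaseMCPServerConfig.from_string` itself):

```python
def classify_mcp_entry(entry: str) -> str:
    """Classify a string MCP server entry per the documented heuristics."""
    if not entry:
        raise ValueError("Empty MCP server entry")
    if entry.endswith("/sse"):          # SSE endpoints end with /sse
        return "sse"
    if entry.startswith(("http://", "https://")):  # other URLs are HTTP servers
        return "http"
    return "stdio"                      # anything else is a stdio command
```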
                                    

                                    get_output_type

                                    get_output_type(agent_name: str) -> type[Any] | None
                                    

                                    Get the resolved result type for an agent.

                                    Returns None if no result type is configured.

                                    Source code in src/llmling_agent/models/manifest.py
                                    def get_output_type(self, agent_name: str) -> type[Any] | None:
                                        """Get the resolved result type for an agent.
                                    
                                        Returns None if no result type is configured.
                                        """
                                        agent_config = self.agents[agent_name]
                                        if not agent_config.output_type:
                                            return None
                                        logger.debug("Building response model for %r", agent_config.output_type)
                                        if isinstance(agent_config.output_type, str):
                                            response_def = self.responses[agent_config.output_type]
                                            return response_def.response_schema.get_schema()  # type: ignore
                                        return agent_config.output_type.response_schema.get_schema()  # type: ignore
                                    

                                    get_used_providers

                                    get_used_providers() -> set[str]
                                    

                                    Get all providers configured in this manifest.

                                    Source code in src/llmling_agent/models/manifest.py
                                    def get_used_providers(self) -> set[str]:
                                        """Get all providers configured in this manifest."""
                                        providers = set[str]()
                                    
                                        for agent_config in self.agents.values():
                                            match agent_config.provider:
                                                case ("pydantic_ai" as typ) | BaseProviderConfig(type=typ):
                                                    providers.add(typ)
                                        return providers
                                    

                                    normalize_workers classmethod

                                    normalize_workers(data: dict[str, Any]) -> dict[str, Any]
                                    

                                    Convert string workers to appropriate WorkerConfig for all agents.

                                    Source code in src/llmling_agent/models/manifest.py
                                    @model_validator(mode="before")
                                    @classmethod
                                    def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                        """Convert string workers to appropriate WorkerConfig for all agents."""
                                        teams = data.get("teams", {})
                                        agents = data.get("agents", {})
                                    
                                        # Process workers for all agents that have them
                                        for agent_name, agent_config in agents.items():
                                            if isinstance(agent_config, dict):
                                                workers = agent_config.get("workers", [])
                                            else:
                                                workers = agent_config.workers
                                    
                                            if workers:
                                                normalized: list[BaseWorkerConfig] = []
                                    
                                                for worker in workers:
                                                    match worker:
                                                        case str() as name if name in teams:
                                                            # Determine type based on presence in teams/agents
                                                            normalized.append(TeamWorkerConfig(name=name))
                                                        case str() as name if name in agents:
                                                            normalized.append(AgentWorkerConfig(name=name))
                                                    case str() as name:  # Default to agent if type can't be determined
                                                        normalized.append(AgentWorkerConfig(name=name))
                                    
                                                        case dict() as config:
                                                            # If type is explicitly specified, use it
                                                            if worker_type := config.get("type"):
                                                                match worker_type:
                                                                    case "team":
                                                                        normalized.append(TeamWorkerConfig(**config))
                                                                    case "agent":
                                                                        normalized.append(AgentWorkerConfig(**config))
                                                                    case _:
                                                                        msg = f"Invalid worker type: {worker_type}"
                                                                        raise ValueError(msg)
                                                            else:
                                                                # Determine type based on worker name
                                                                worker_name = config.get("name")
                                                                if not worker_name:
                                                                    msg = "Worker config missing name"
                                                                    raise ValueError(msg)
                                    
                                                                if worker_name in teams:
                                                                    normalized.append(TeamWorkerConfig(**config))
                                                                else:
                                                                    normalized.append(AgentWorkerConfig(**config))
                                    
                                                        case BaseWorkerConfig():  # Already normalized
                                                            normalized.append(worker)
                                    
                                                        case _:
                                                            msg = f"Invalid worker configuration: {worker}"
                                                            raise ValueError(msg)
                                    
                                                if isinstance(agent_config, dict):
                                                    agent_config["workers"] = normalized
                                                else:
                                                    # Need to create a new dict with updated workers
                                                    agent_dict = agent_config.model_dump()
                                                    agent_dict["workers"] = normalized
                                                    agents[agent_name] = agent_dict
                                    
                                        return data
                                    

                                    resolve_inheritance classmethod

                                    resolve_inheritance(data: dict) -> dict
                                    

                                    Resolve agent inheritance chains.

                                    Source code in src/llmling_agent/models/manifest.py
                                    @model_validator(mode="before")
                                    @classmethod
                                    def resolve_inheritance(cls, data: dict) -> dict:
                                        """Resolve agent inheritance chains."""
                                        nodes = data.get("agents", {})
                                        resolved: dict[str, dict] = {}
                                        seen: set[str] = set()
                                    
                                        def resolve_node(name: str) -> dict:
                                            if name in resolved:
                                                return resolved[name]
                                    
                                            if name in seen:
                                                msg = f"Circular inheritance detected: {name}"
                                                raise ValueError(msg)
                                    
                                            seen.add(name)
                                            config = (
                                                nodes[name].model_copy()
                                                if hasattr(nodes[name], "model_copy")
                                                else nodes[name].copy()
                                            )
                                            inherit = (
                                                config.get("inherits") if isinstance(config, dict) else config.inherits
                                            )
                                            if inherit:
                                                if inherit not in nodes:
                                                    msg = f"Parent agent {inherit} not found"
                                                    raise ValueError(msg)
                                    
                                                # Get resolved parent config
                                                parent = resolve_node(inherit)
                                                # Merge parent with child (child overrides parent)
                                                merged = parent.copy()
                                                merged.update(config)
                                                config = merged
                                    
                                            seen.remove(name)
                                            resolved[name] = config
                                            return config
                                    
                                        # Resolve all nodes
                                        for name in nodes:
                                            resolved[name] = resolve_node(name)
                                    
                                        # Update nodes with resolved configs
                                        data["agents"] = resolved
                                        return data
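
The resolution algorithm above — recursive parent lookup with cycle detection, child keys overriding parent keys — can be sketched for plain dicts (the real validator also handles pydantic models via `model_copy`):

```python
def resolve(nodes: dict[str, dict]) -> dict[str, dict]:
    """Standalone sketch of the inheritance resolution above."""
    resolved: dict[str, dict] = {}
    seen: set[str] = set()  # names currently on the resolution stack

    def resolve_node(name: str) -> dict:
        if name in resolved:
            return resolved[name]
        if name in seen:  # revisiting a name mid-resolution means a cycle
            raise ValueError(f"Circular inheritance detected: {name}")
        seen.add(name)
        config = dict(nodes[name])
        if parent := config.get("inherits"):
            if parent not in nodes:
                raise ValueError(f"Parent agent {parent} not found")
            merged = dict(resolve_node(parent))
            merged.update(config)  # child keys override parent keys
            config = merged
        seen.remove(name)
        resolved[name] = config
        return config

    for name in nodes:
        resolve_node(name)
    return resolved
```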
                                    

                                    AudioBase64Content

                                    Bases: AudioContent

                                    Audio from base64 data.

                                    Source code in src/llmling_agent/models/content.py
                                    class AudioBase64Content(AudioContent):
                                        """Audio from base64 data."""
                                    
                                        type: Literal["audio_base64"] = Field("audio_base64", init=False)
                                        """Base64-encoded audio."""
                                    
                                        data: str
                                        """Audio data in base64 format."""
                                    
                                        format: str | None = None  # mp3, wav, etc
                                        """Audio format."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for audio models."""
                                            data_url = f"data:audio/{self.format or 'mp3'};base64,{self.data}"
                                            content = {"url": data_url, "format": self.format or "auto"}
                                            return {"type": "audio", "audio": content}
                                    
                                        @classmethod
                                        def from_bytes(cls, data: bytes, audio_format: str = "mp3") -> Self:
                                            """Create from raw bytes."""
                                            return cls(data=base64.b64encode(data).decode(), format=audio_format)
                                    
                                        @classmethod
                                        def from_path(cls, path: JoinablePathLike) -> Self:
                                            """Create from file path with auto format detection."""
                                            import mimetypes
                                    
                                            from upathtools import to_upath
                                    
                                            path_obj = to_upath(path)
                                            mime_type, _ = mimetypes.guess_type(str(path_obj))
                                            fmt = (
                                                mime_type.removeprefix("audio/")
                                                if mime_type and mime_type.startswith("audio/")
                                                else "mp3"
                                            )
                                    
                                            return cls(data=base64.b64encode(path_obj.read_bytes()).decode(), format=fmt)
                                    
                                        @property
                                        def mime_type(self) -> str:
                                            """Return the MIME type of the audio."""
                                            return f"audio/{self.format or 'mp3'}"
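
The data-URL construction used in `to_openai_format` can be sketched standalone — base64-encode the raw bytes and prefix the `data:audio/<format>;base64,` scheme (a simplified helper, not part of the library's API):

```python
import base64


def audio_data_url(raw: bytes, fmt: str = "mp3") -> str:
    """Build a base64 audio data URL, as to_openai_format does."""
    encoded = base64.b64encode(raw).decode()  # bytes -> base64 text
    return f"data:audio/{fmt};base64,{encoded}"
```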
                                    

                                    data instance-attribute

                                    data: str
                                    

                                    Audio data in base64 format.

                                    format class-attribute instance-attribute

                                    format: str | None = None
                                    

                                    Audio format.

                                    mime_type property

                                    mime_type: str
                                    

                                    Return the MIME type of the audio.

                                    type class-attribute instance-attribute

                                    type: Literal['audio_base64'] = Field('audio_base64', init=False)
                                    

                                    Base64-encoded audio.

                                    from_bytes classmethod

                                    from_bytes(data: bytes, audio_format: str = 'mp3') -> Self
                                    

                                    Create from raw bytes.

                                    Source code in src/llmling_agent/models/content.py
                                    @classmethod
                                    def from_bytes(cls, data: bytes, audio_format: str = "mp3") -> Self:
                                        """Create from raw bytes."""
                                        return cls(data=base64.b64encode(data).decode(), format=audio_format)
                                    

                                    from_path classmethod

                                    from_path(path: JoinablePathLike) -> Self
                                    

                                    Create from file path with auto format detection.

                                    Source code in src/llmling_agent/models/content.py
                                    @classmethod
                                    def from_path(cls, path: JoinablePathLike) -> Self:
                                        """Create from file path with auto format detection."""
                                        import mimetypes
                                    
                                        from upathtools import to_upath
                                    
                                        path_obj = to_upath(path)
                                        mime_type, _ = mimetypes.guess_type(str(path_obj))
                                        fmt = (
                                            mime_type.removeprefix("audio/")
                                            if mime_type and mime_type.startswith("audio/")
                                            else "mp3"
                                        )
                                    
                                        return cls(data=base64.b64encode(path_obj.read_bytes()).decode(), format=fmt)
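
The format-detection branch in from_path can be mirrored with the standard library alone. This is an illustrative sketch, not library API: the helper name guess_audio_format is ours, and it only reproduces the mimetypes logic shown above.

```python
import mimetypes


def guess_audio_format(path: str, default: str = "mp3") -> str:
    """Mirror of the format-detection logic in from_path (stdlib only)."""
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type and mime_type.startswith("audio/"):
        # "audio/mpeg" -> "mpeg", "audio/ogg" -> "ogg", etc.
        return mime_type.removeprefix("audio/")
    # Unknown or non-audio MIME type: fall back to the default format.
    return default
```

Note that mimetypes reports MIME subtypes, not file extensions, so an .mp3 file yields "mpeg" rather than "mp3".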
                                    

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for audio models.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for audio models."""
                                        data_url = f"data:audio/{self.format or 'mp3'};base64,{self.data}"
                                        content = {"url": data_url, "format": self.format or "auto"}
                                        return {"type": "audio", "audio": content}
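
The payload shape produced by to_openai_format can be sketched with the stdlib only, without importing llmling_agent. The byte string below is a placeholder, not real audio data.

```python
import base64

# Placeholder bytes standing in for real audio content.
raw = b"\x00\x01fake-audio-bytes"
data = base64.b64encode(raw).decode()
fmt = "mp3"

# Same structure that to_openai_format() returns: a data URL plus format hint.
payload = {
    "type": "audio",
    "audio": {
        "url": f"data:audio/{fmt};base64,{data}",
        "format": fmt,
    },
}
```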
                                    

                                    AudioURLContent

                                    Bases: AudioContent

                                    Audio from URL.

                                    Source code in src/llmling_agent/models/content.py
                                    class AudioURLContent(AudioContent):
                                        """Audio from URL."""
                                    
                                        type: Literal["audio_url"] = Field("audio_url", init=False)
                                        """URL-based audio."""
                                    
                                        url: str
                                        """URL to the audio."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for audio models."""
                                            content = {"url": self.url, "format": self.format or "auto"}
                                            return {"type": "audio", "audio": content}
                                    

                                    type class-attribute instance-attribute

                                    type: Literal['audio_url'] = Field('audio_url', init=False)
                                    

                                    URL-based audio.

                                    url instance-attribute

                                    url: str
                                    

                                    URL to the audio.

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for audio models.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for audio models."""
                                        content = {"url": self.url, "format": self.format or "auto"}
                                        return {"type": "audio", "audio": content}
                                    

                                    BaseTeam

                                    Bases: MessageNode[TDeps, TResult]

                                    Base class for Team and TeamRun.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    class BaseTeam[TDeps, TResult](MessageNode[TDeps, TResult]):
                                        """Base class for Team and TeamRun."""
                                    
                                        def __init__(
                                            self,
                                            agents: Sequence[MessageNode[TDeps, TResult]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            mcp_servers: list[str | MCPServerConfig] | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ):
                                            """Common variables only for typing."""
                                            from llmling_agent.delegation.teamrun import ExtendedTeamTalk
                                    
                                            self._name = name or " & ".join([i.name for i in agents])
                                            self.agents = EventedList[MessageNode]()
                                            self.agents.events.inserted.connect(self._on_node_added)
                                            self.agents.events.removed.connect(self._on_node_removed)
                                            self.agents.events.changed.connect(self._on_node_changed)
                                            super().__init__(
                                                name=self._name,
                                                context=self.context,
                                                mcp_servers=mcp_servers,
                                                description=description,
                                            )
                                            self.agents.extend(list(agents))
                                            self._team_talk = ExtendedTeamTalk()
                                            self.shared_prompt = shared_prompt
                                            self._main_task: asyncio.Task[Any] | None = None
                                            self._infinite = False
                                            self.picker = picker
                                            self.num_picks = num_picks
                                            self.pick_prompt = pick_prompt
                                    
                                        def to_tool(self, *, name: str | None = None, description: str | None = None) -> Tool:
        """Create a tool from this node.
                                    
                                            Args:
                                                name: Optional tool name override
                                                description: Optional tool description override
                                            """
                                            tool_name = name or f"ask_{self.name}"
                                    
                                            async def wrapped_tool(prompt: str) -> TResult:
                                                result = await self.run(prompt)
                                                return result.data
                                    
                                            docstring = description or f"Get expert answer from node {self.name}"
                                            if self.description:
                                                docstring = f"{docstring}\n\n{self.description}"
                                    
                                            wrapped_tool.__doc__ = docstring
                                            wrapped_tool.__name__ = tool_name
                                    
                                            return Tool.from_callable(
                                                wrapped_tool,
                                                name_override=tool_name,
                                                description_override=docstring,
                                            )
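
The wrapping pattern used by to_tool can be demonstrated standalone. This is a minimal sketch with a stand-in node class (FakeNode, make_tool, and the strings are ours); the real method additionally wraps the callable in a Tool via Tool.from_callable.

```python
import asyncio


class FakeNode:
    """Stand-in for a MessageNode with an async run() method."""

    name = "researcher"
    description = "Answers research questions."

    async def run(self, prompt: str) -> str:
        return f"answer to: {prompt}"


def make_tool(node, name=None, description=None):
    """Sketch of the to_tool() pattern: wrap node.run() in a named coroutine."""
    tool_name = name or f"ask_{node.name}"

    async def wrapped_tool(prompt: str):
        return await node.run(prompt)

    # Combine the override (or default) description with the node's own.
    docstring = description or f"Get expert answer from node {node.name}"
    if node.description:
        docstring = f"{docstring}\n\n{node.description}"
    wrapped_tool.__doc__ = docstring
    wrapped_tool.__name__ = tool_name
    return wrapped_tool


tool = make_tool(FakeNode())
result = asyncio.run(tool("what is X?"))
```

Setting __name__ and __doc__ matters because tool frameworks typically derive the tool's registered name and its model-facing description from those attributes.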
                                    
                                        async def pick_agents(self, task: str) -> Sequence[MessageNode[Any, Any]]:
                                            """Pick agents to run."""
                                            if self.picker:
                                                if self.num_picks == 1:
                                                    result = await self.picker.talk.pick(self, task, self.pick_prompt)
                                                    return [result.selection]
                                                multi_result = await self.picker.talk.pick_multiple(
                                                    self,
                                                    task,
                                                    min_picks=self.num_picks or 1,
                                                    max_picks=self.num_picks,
                                                    prompt=self.pick_prompt,
                                                )
                                                return multi_result.selections
                                            return list(self.agents)
                                    
                                        def _on_node_changed(self, index: int, old: MessageNode, new: MessageNode):
                                            """Handle node replacement in the agents list."""
                                            self._on_node_removed(index, old)
                                            self._on_node_added(index, new)
                                    
                                        def _on_node_added(self, index: int, node: MessageNode[Any, Any]):
                                            """Handler for adding nodes to the team."""
                                            from llmling_agent.agent import Agent
                                    
                                            if isinstance(node, Agent):
                                                node.tools.add_provider(self.mcp)
                                            # TODO: Right now connecting here is not desired since emission means db logging
                                            # ideally db logging would not rely on the "public" agent signal.
                                    
                                            # node.tool_used.connect(self.tool_used)
                                    
                                        def _on_node_removed(self, index: int, node: MessageNode[Any, Any]):
                                            """Handler for removing nodes from the team."""
                                            from llmling_agent.agent import Agent
                                    
                                            if isinstance(node, Agent):
                                                node.tools.remove_provider(self.mcp)
                                            # node.tool_used.disconnect(self.tool_used)
                                    
                                        def __repr__(self) -> str:
                                            """Create readable representation."""
                                            members = ", ".join(agent.name for agent in self.agents)
                                            name = f" ({self.name})" if self.name else ""
                                            return f"{self.__class__.__name__}[{len(self.agents)}]{name}: {members}"
                                    
                                        def __len__(self) -> int:
                                            """Get number of team members."""
                                            return len(self.agents)
                                    
                                        def __iter__(self) -> Iterator[MessageNode[TDeps, TResult]]:
                                            """Iterate over team members."""
                                            return iter(self.agents)
                                    
                                        def __getitem__(self, index_or_name: int | str) -> MessageNode[TDeps, TResult]:
                                            """Get team member by index or name."""
                                            if isinstance(index_or_name, str):
                                                return next(agent for agent in self.agents if agent.name == index_or_name)
                                            return self.agents[index_or_name]
                                    
                                        def __or__(
                                            self,
                                            other: Agent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                        ) -> TeamRun[Any, Any]:
                                            """Create a sequential pipeline."""
                                            from llmling_agent.agent import Agent
                                            from llmling_agent.delegation.teamrun import TeamRun
                                    
                                            # Handle conversion of callables first
                                            if callable(other):
                                                other = Agent.from_callback(other)
                                                other.context.pool = self.context.pool
                                    
                                            # If we're already a TeamRun, extend it
                                            if isinstance(self, TeamRun):
                                                if self.validator:
                                                    # If we have a validator, create new TeamRun to preserve validation
                                                    return TeamRun([self, other])
                                                self.agents.append(other)
                                                return self
                                            # Otherwise create new TeamRun
                                            return TeamRun([self, other])
                                    
                                        @overload
                                        def __and__(self, other: Team[None]) -> Team[None]: ...
                                    
                                        @overload
                                        def __and__(self, other: Team[TDeps]) -> Team[TDeps]: ...
                                    
                                        @overload
                                        def __and__(self, other: Team[Any]) -> Team[Any]: ...
                                    
                                        @overload
                                        def __and__(self, other: Agent[TDeps, Any]) -> Team[TDeps]: ...
                                    
                                        @overload
                                        def __and__(self, other: Agent[Any, Any]) -> Team[Any]: ...
                                    
                                        def __and__(
                                            self, other: Team[Any] | Agent[Any, Any] | ProcessorCallback[Any]
                                        ) -> Team[Any]:
                                            """Combine teams, preserving type safety for same types."""
                                            from llmling_agent.agent import Agent
                                            from llmling_agent.delegation.team import Team
                                    
                                            if callable(other):
                                                other = Agent.from_callback(other)
                                                other.context.pool = self.context.pool
                                    
                                            match other:
                                                case Team():
                                                    # Flatten when combining Teams
                                                    return Team([*self.agents, *other.agents])
                                                case _:
                                                    # Everything else just becomes a member
                                                    return Team([*self.agents, other])
                                    
                                        async def get_stats(self) -> AggregatedMessageStats:
                                            """Get aggregated stats from all team members."""
                                            stats = [await agent.get_stats() for agent in self.agents]
                                            return AggregatedMessageStats(stats=stats)
                                    
                                        @property
                                        def is_running(self) -> bool:
                                            """Whether execution is currently running."""
                                            return bool(self._main_task and not self._main_task.done())
                                    
                                        def is_busy(self) -> bool:
                                            """Check if team is processing any tasks."""
                                            return bool(self.task_manager._pending_tasks or self._main_task)
                                    
                                        async def stop(self):
                                            """Stop background execution if running."""
                                            if self._main_task and not self._main_task.done():
                                                self._main_task.cancel()
                                                await self._main_task
                                            self._main_task = None
                                            await self.task_manager.cleanup_tasks()
                                    
                                        async def wait(self) -> ChatMessage[Any] | None:
                                            """Wait for background execution to complete and return last message."""
                                            if not self._main_task:
                                                msg = "No execution running"
                                                raise RuntimeError(msg)
                                            if self._infinite:
                                                msg = "Cannot wait on infinite execution"
                                                raise RuntimeError(msg)
                                            try:
                                                return await self._main_task
                                            finally:
                                                await self.task_manager.cleanup_tasks()
                                                self._main_task = None
                                    
                                        async def run_in_background(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            max_count: int | None = 1,  # 1 = single execution, None = indefinite
                                            interval: float = 1.0,
                                            **kwargs: Any,
                                        ) -> ExtendedTeamTalk:
                                            """Start execution in background.
                                    
                                            Args:
                                                prompts: Prompts to execute
                                                max_count: Maximum number of executions (None = run indefinitely)
                                                interval: Seconds between executions
                                                **kwargs: Additional args for execute()
                                            """
                                            if self._main_task:
                                                msg = "Execution already running"
                                                raise RuntimeError(msg)
                                            self._infinite = max_count is None
                                    
                                            async def _continuous() -> ChatMessage[Any] | None:
                                                count = 0
                                                last_message = None
                                                while max_count is None or count < max_count:
                                                    try:
                                                        result = await self.execute(*prompts, **kwargs)
                                                        last_message = result[-1].message if result else None
                                                        count += 1
                                                        if max_count is None or count < max_count:
                                                            await asyncio.sleep(interval)
                                                    except asyncio.CancelledError:
                                                        logger.debug("Background execution cancelled")
                                                        break
                                                return last_message
                                    
                                            self._main_task = self.task_manager.create_task(
                                                _continuous(), name="main_execution"
                                            )
                                            return self._team_talk
                                    
                                        @property
                                        def execution_stats(self) -> AggregatedTalkStats:
                                            """Get current execution statistics."""
                                            return self._team_talk.stats
                                    
                                        @property
                                        def talk(self) -> ExtendedTeamTalk:
                                            """Get current connection."""
                                            return self._team_talk
                                    
                                        @property
                                        def events(self) -> ListEvents:
                                            """Get events for the team."""
                                            return self.agents.events
                                    
                                        async def cancel(self):
                                            """Cancel execution and cleanup."""
                                            if self._main_task:
                                                self._main_task.cancel()
                                            await self.task_manager.cleanup_tasks()
                                    
                                        def get_structure_diagram(self) -> str:
                                            """Generate mermaid flowchart of node hierarchy."""
                                            lines = ["flowchart TD"]
                                    
                                            def add_node(node: MessageNode[Any, Any], parent: str | None = None):
                                                """Recursively add node and its members to diagram."""
                                                node_id = f"node_{id(node)}"
                                                lines.append(f"    {node_id}[{node.name}]")
                                                if parent:
                                                    lines.append(f"    {parent} --> {node_id}")
                                    
                                                # If it's a team, recursively add its members
                                                from llmling_agent.delegation.base_team import BaseTeam
                                    
                                                if isinstance(node, BaseTeam):
                                                    for member in node.agents:
                                                        add_node(member, node_id)
                                    
                                            # Start with root nodes (team members)
                                            for node in self.agents:
                                                add_node(node)
                                    
                                            return "\n".join(lines)
                                    
                                        def iter_agents(self) -> Iterator[Agent[Any, Any]]:
                                            """Recursively iterate over all child agents."""
                                            from llmling_agent.agent import Agent
                                    
                                            for node in self.agents:
                                                match node:
                                                    case BaseTeam():
                                                        yield from node.iter_agents()
                                                    case Agent():
                                                        yield node
                                                    case _:
                                                        msg = f"Invalid node type: {type(node)}"
                                                        raise ValueError(msg)
                                    
                                        @property
                                        def context(self) -> TeamContext:
                                            """Get shared pool from team members.
                                    
                                            Raises:
                                                ValueError: If team members belong to different pools
                                            """
                                            from llmling_agent.delegation.team import Team
                                    
                                            pool_ids: set[int] = set()
                                            shared_pool: AgentPool | None = None
                                            team_config: TeamConfig | None = None
                                    
                                            for agent in self.iter_agents():
                                                if agent.context and agent.context.pool:
                                                    pool_id = id(agent.context.pool)
                                                    if pool_id not in pool_ids:
                                                        pool_ids.add(pool_id)
                                                        shared_pool = agent.context.pool
                                                        if shared_pool.manifest.teams:
                                                            team_config = shared_pool.manifest.teams.get(self.name)
                                            if not team_config:
                                                mode = "parallel" if isinstance(self, Team) else "sequential"
                                                team_config = TeamConfig(name=self.name, mode=mode, members=[])
                                            if not pool_ids:
                                                logger.debug("No pool found for team.", team=self.name)
                                                return TeamContext(
                                                    node_name=self.name,
                                                    pool=shared_pool,
                                                    config=team_config,
                                                    definition=shared_pool.manifest if shared_pool else AgentsManifest(),
                                                )
                                    
                                            if len(pool_ids) > 1:
                                                msg = f"Team members in {self.name} belong to different pools"
                                                raise ValueError(msg)
                                            return TeamContext(
                                                node_name=self.name,
                                                pool=shared_pool,
                                                config=team_config,
                                                definition=shared_pool.manifest if shared_pool else AgentsManifest(),
                                            )
                                    
                                        @context.setter
                                        def context(self, value: NodeContext):
                                            msg = "Cannot set context on BaseTeam"
                                            raise RuntimeError(msg)
                                    
                                        async def distribute(
                                            self,
                                            content: str,
                                            *,
                                            tools: list[str] | None = None,
                                            resources: list[str] | None = None,
                                            metadata: dict[str, Any] | None = None,
                                        ):
                                            """Distribute content and capabilities to all team members."""
                                            for agent in self.iter_agents():
                                                # Add context message
                                                agent.conversation.add_context_message(
                                                    content, source="distribution", metadata=metadata
                                                )
                                    
                                                # Register tools if provided
                                                if tools:
                                                    for tool in tools:
                                                        agent.tools.register_tool(tool)
                                    
                                                # Load resources if provided
                                                if resources:
                                                    for resource in resources:
                                                        await agent.conversation.load_context_source(resource)
                                    
                                        @asynccontextmanager
                                        async def temporary_state(
                                            self,
                                            *,
                                            system_prompts: list[AnyPromptType] | None = None,
                                            replace_prompts: bool = False,
                                            tools: list[ToolType] | None = None,
                                            replace_tools: bool = False,
                                            history: list[AnyPromptType] | SessionQuery | None = None,
                                            replace_history: bool = False,
                                            pause_routing: bool = False,
                                            model: ModelType | None = None,
                                            provider: AgentProvider | None = None,
                                        ) -> AsyncIterator[Self]:
                                            """Temporarily modify state of all agents in the team.
                                    
                                            All agents in the team will enter their temporary state simultaneously.
                                    
                                            Args:
                                                system_prompts: Temporary system prompts to use
                                                replace_prompts: Whether to replace existing prompts
                                                tools: Temporary tools to make available
                                                replace_tools: Whether to replace existing tools
                                                history: Conversation history (prompts or query)
                                                replace_history: Whether to replace existing history
                                                pause_routing: Whether to pause message routing
                                                model: Temporary model override
                                                provider: Temporary provider override
                                            """
                                            # Get all agents (flattened) before entering context
                                            agents = list(self.iter_agents())
                                    
                                            async with AsyncExitStack() as stack:
                                                if pause_routing:
                                                    await stack.enter_async_context(self.connections.paused_routing())
                                                # Enter temporary state for all agents
                                                for agent in agents:
                                                    await stack.enter_async_context(
                                                        agent.temporary_state(
                                                            system_prompts=system_prompts,
                                                            replace_prompts=replace_prompts,
                                                            tools=tools,
                                                            replace_tools=replace_tools,
                                                            history=history,
                                                            replace_history=replace_history,
                                                            pause_routing=pause_routing,
                                                            model=model,
                                                            provider=provider,
                                                        )
                                                    )
                                                try:
                                                    yield self
                                                finally:
                                                    # AsyncExitStack will handle cleanup of all states
                                                    pass
                                    
                                        @abstractmethod
                                        async def execute(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            **kwargs: Any,
                                        ) -> TeamResponse: ...
                                    
                                        def run_sync(
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            store_history: bool = True,
                                        ) -> ChatMessage[TResult]:
                                            """Run agent synchronously (convenience wrapper).
                                    
                                            Args:
                                                prompt: User query or instruction
                                                store_history: Whether the message exchange should be added to the
                                                               context window
                                            Returns:
                                                Result containing response and run information
                                            """
                                            coro = self.run(*prompt, store_history=store_history)
                                            return self.task_manager.run_task_sync(coro)
                                    

                                    context property writable

                                    context: TeamContext
                                    

                                    Get shared pool from team members.

                                    Raises:

                                    Type        Description
                                    ValueError  If team members belong to different pools

                                    events property

                                    events: ListEvents
                                    

                                    Get events for the team.

                                    execution_stats property

                                    execution_stats: AggregatedTalkStats
                                    

                                    Get current execution statistics.

                                    is_running property

                                    is_running: bool
                                    

                                    Whether execution is currently running.

                                    talk property

                                    talk: ExtendedTeamTalk
                                    

                                    Get current connection.

                                    __and__

                                    __and__(other: Team[None]) -> Team[None]
                                    
                                    __and__(other: Team[TDeps]) -> Team[TDeps]
                                    
                                    __and__(other: Team[Any]) -> Team[Any]
                                    
                                    __and__(other: Agent[TDeps, Any]) -> Team[TDeps]
                                    
                                    __and__(other: Agent[Any, Any]) -> Team[Any]
                                    
                                    __and__(other: Team[Any] | Agent[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                                    

                                    Combine teams, preserving type safety for same types.
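The `&` semantics can be illustrated with a toy model (a hypothetical `ToyTeam`, not the real `Team` class): combining two teams flattens their member lists, while anything else is appended as a single member.

```python
from dataclasses import dataclass, field


@dataclass
class ToyTeam:
    """Toy model of Team's `&` operator: team & team flattens, team & other appends."""

    agents: list[object] = field(default_factory=list)

    def __and__(self, other: object) -> "ToyTeam":
        if isinstance(other, ToyTeam):
            # Flatten when combining teams, as in the Team() match case
            return ToyTeam([*self.agents, *other.agents])
        # Everything else just becomes a member
        return ToyTeam([*self.agents, other])


combined = ToyTeam(["analyzer", "planner"]) & ToyTeam(["writer"])
extended = combined & "reviewer"
```

`combined.agents` is the flat three-member list, and `extended.agents` gains `"reviewer"` as a fourth, opaque member.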

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __and__(
                                        self, other: Team[Any] | Agent[Any, Any] | ProcessorCallback[Any]
                                    ) -> Team[Any]:
                                        """Combine teams, preserving type safety for same types."""
                                        from llmling_agent.agent import Agent
                                        from llmling_agent.delegation.team import Team
                                    
                                        if callable(other):
                                            other = Agent.from_callback(other)
                                            other.context.pool = self.context.pool
                                    
                                        match other:
                                            case Team():
                                                # Flatten when combining Teams
                                                return Team([*self.agents, *other.agents])
                                            case _:
                                                # Everything else just becomes a member
                                                return Team([*self.agents, other])
                                    
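The flattening rule above (two teams merge their members; anything else becomes a single new member) can be sketched with plain stand-in classes. `FakeTeam` and `FakeAgent` are illustrative only, not the real llmling_agent types:

```python
# Minimal sketch of the `&` flattening rule with stand-in classes
# (FakeTeam/FakeAgent are hypothetical, not real llmling_agent types).
class FakeAgent:
    def __init__(self, name: str):
        self.name = name


class FakeTeam:
    def __init__(self, agents):
        self.agents = list(agents)

    def __and__(self, other):
        if isinstance(other, FakeTeam):
            # Combining two teams flattens their members into one team
            return FakeTeam([*self.agents, *other.agents])
        # Everything else just becomes an additional member
        return FakeTeam([*self.agents, other])


a, b, c = FakeAgent("a"), FakeAgent("b"), FakeAgent("c")
combined = FakeTeam([a, b]) & FakeTeam([c])
print([m.name for m in combined.agents])  # flattened: ['a', 'b', 'c']
```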

                                    __getitem__

                                    __getitem__(index_or_name: int | str) -> MessageNode[TDeps, TResult]
                                    

                                    Get team member by index or name.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __getitem__(self, index_or_name: int | str) -> MessageNode[TDeps, TResult]:
                                        """Get team member by index or name."""
                                        if isinstance(index_or_name, str):
                                            return next(agent for agent in self.agents if agent.name == index_or_name)
                                        return self.agents[index_or_name]
                                    
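The lookup accepts either a position or a member name; by name, the first matching member wins (and a missing name raises `StopIteration`). A self-contained sketch of the same dispatch, using `SimpleNamespace` stand-ins rather than real agents:

```python
# Sketch of the index-or-name dispatch used by __getitem__.
from types import SimpleNamespace

agents = [SimpleNamespace(name="planner"), SimpleNamespace(name="writer")]


def get_member(index_or_name):
    if isinstance(index_or_name, str):
        # Name lookup: first member whose name matches
        return next(a for a in agents if a.name == index_or_name)
    return agents[index_or_name]  # plain positional indexing


print(get_member(0).name)         # planner
print(get_member("writer").name)  # writer
```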

                                    __init__

                                    __init__(
                                        agents: Sequence[MessageNode[TDeps, TResult]],
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        mcp_servers: list[str | MCPServerConfig] | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    )
                                    

                                    Common variables only for typing.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __init__(
                                        self,
                                        agents: Sequence[MessageNode[TDeps, TResult]],
                                        *,
                                        name: str | None = None,
                                        description: str | None = None,
                                        shared_prompt: str | None = None,
                                        mcp_servers: list[str | MCPServerConfig] | None = None,
                                        picker: Agent[Any, Any] | None = None,
                                        num_picks: int | None = None,
                                        pick_prompt: str | None = None,
                                    ):
                                        """Common variables only for typing."""
                                        from llmling_agent.delegation.teamrun import ExtendedTeamTalk
                                    
                                        self._name = name or " & ".join([i.name for i in agents])
                                        self.agents = EventedList[MessageNode]()
                                        self.agents.events.inserted.connect(self._on_node_added)
                                        self.agents.events.removed.connect(self._on_node_removed)
                                        self.agents.events.changed.connect(self._on_node_changed)
                                        super().__init__(
                                            name=self._name,
                                            context=self.context,
                                            mcp_servers=mcp_servers,
                                            description=description,
                                        )
                                        self.agents.extend(list(agents))
                                        self._team_talk = ExtendedTeamTalk()
                                        self.shared_prompt = shared_prompt
                                        self._main_task: asyncio.Task[Any] | None = None
                                        self._infinite = False
                                        self.picker = picker
                                        self.num_picks = num_picks
                                        self.pick_prompt = pick_prompt
                                    

                                    __iter__

                                    __iter__() -> Iterator[MessageNode[TDeps, TResult]]
                                    

                                    Iterate over team members.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __iter__(self) -> Iterator[MessageNode[TDeps, TResult]]:
                                        """Iterate over team members."""
                                        return iter(self.agents)
                                    

                                    __len__

                                    __len__() -> int
                                    

                                    Get number of team members.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __len__(self) -> int:
                                        """Get number of team members."""
                                        return len(self.agents)
                                    

                                    __or__

                                    __or__(
                                        other: Agent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                    ) -> TeamRun[Any, Any]
                                    

                                    Create a sequential pipeline.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __or__(
                                        self,
                                        other: Agent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                    ) -> TeamRun[Any, Any]:
                                        """Create a sequential pipeline."""
                                        from llmling_agent.agent import Agent
                                        from llmling_agent.delegation.teamrun import TeamRun
                                    
                                        # Handle conversion of callables first
                                        if callable(other):
                                            other = Agent.from_callback(other)
                                            other.context.pool = self.context.pool
                                    
                                        # If we're already a TeamRun, extend it
                                        if isinstance(self, TeamRun):
                                            if self.validator:
                                                # If we have a validator, create new TeamRun to preserve validation
                                                return TeamRun([self, other])
                                            self.agents.append(other)
                                            return self
                                        # Otherwise create new TeamRun
                                        return TeamRun([self, other])
                                    
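The key asymmetry with `&` is that `|` builds an ordered pipeline, and an existing pipeline extends itself in place instead of nesting. A minimal sketch of that chaining behavior with hypothetical `Node`/`Pipeline` stand-ins (the real code also special-cases validators, which this sketch omits):

```python
# Sketch of the `|` chaining rule: chaining two nodes starts a pipeline,
# chaining onto a pipeline appends to it (stand-in classes, not real types).
class Node:
    def __init__(self, name: str):
        self.name = name

    def __or__(self, other):
        return Pipeline([self, other])


class Pipeline(Node):
    def __init__(self, members):
        super().__init__(" | ".join(m.name for m in members))
        self.members = list(members)

    def __or__(self, other):
        # An existing pipeline appends instead of nesting
        self.members.append(other)
        return self


chain = Node("analyze") | Node("summarize") | Node("format")
print([m.name for m in chain.members])  # ['analyze', 'summarize', 'format']
```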

                                    __repr__

                                    __repr__() -> str
                                    

                                    Create readable representation.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def __repr__(self) -> str:
                                        """Create readable representation."""
                                        members = ", ".join(agent.name for agent in self.agents)
                                        name = f" ({self.name})" if self.name else ""
                                        return f"{self.__class__.__name__}[{len(self.agents)}]{name}: {members}"
                                    

                                    cancel async

                                    cancel()
                                    

                                    Cancel execution and cleanup.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def cancel(self):
                                        """Cancel execution and cleanup."""
                                        if self._main_task:
                                            self._main_task.cancel()
                                        await self.task_manager.cleanup_tasks()
                                    

                                    distribute async

                                    distribute(
                                        content: str,
                                        *,
                                        tools: list[str] | None = None,
                                        resources: list[str] | None = None,
                                        metadata: dict[str, Any] | None = None,
                                    )
                                    

                                    Distribute content and capabilities to all team members.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def distribute(
                                        self,
                                        content: str,
                                        *,
                                        tools: list[str] | None = None,
                                        resources: list[str] | None = None,
                                        metadata: dict[str, Any] | None = None,
                                    ):
                                        """Distribute content and capabilities to all team members."""
                                        for agent in self.iter_agents():
                                            # Add context message
                                            agent.conversation.add_context_message(
                                                content, source="distribution", metadata=metadata
                                            )
                                    
                                            # Register tools if provided
                                            if tools:
                                                for tool in tools:
                                                    agent.tools.register_tool(tool)
                                    
                                            # Load resources if provided
                                            if resources:
                                                for resource in resources:
                                                    await agent.conversation.load_context_source(resource)
                                    

                                    get_stats async

                                    get_stats() -> AggregatedMessageStats
                                    

                                    Get aggregated stats from all team members.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def get_stats(self) -> AggregatedMessageStats:
                                        """Get aggregated stats from all team members."""
                                        stats = [await agent.get_stats() for agent in self.agents]
                                        return AggregatedMessageStats(stats=stats)
                                    

                                    get_structure_diagram

                                    get_structure_diagram() -> str
                                    

                                    Generate mermaid flowchart of node hierarchy.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def get_structure_diagram(self) -> str:
                                        """Generate mermaid flowchart of node hierarchy."""
                                        lines = ["flowchart TD"]
                                    
                                        def add_node(node: MessageNode[Any, Any], parent: str | None = None):
                                            """Recursively add node and its members to diagram."""
                                            node_id = f"node_{id(node)}"
                                            lines.append(f"    {node_id}[{node.name}]")
                                            if parent:
                                                lines.append(f"    {parent} --> {node_id}")
                                    
                                            # If it's a team, recursively add its members
                                            from llmling_agent.delegation.base_team import BaseTeam
                                    
                                            if isinstance(node, BaseTeam):
                                                for member in node.agents:
                                                    add_node(member, node_id)
                                    
                                        # Start with root nodes (team members)
                                        for node in self.agents:
                                            add_node(node)
                                    
                                        return "\n".join(lines)
                                    

                                    is_busy

                                    is_busy() -> bool
                                    

                                    Check if team is processing any tasks.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def is_busy(self) -> bool:
                                        """Check if team is processing any tasks."""
                                        return bool(self.task_manager._pending_tasks or self._main_task)
                                    

                                    iter_agents

                                    iter_agents() -> Iterator[Agent[Any, Any]]
                                    

                                    Recursively iterate over all child agents.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def iter_agents(self) -> Iterator[Agent[Any, Any]]:
                                        """Recursively iterate over all child agents."""
                                        from llmling_agent.agent import Agent
                                    
                                        for node in self.agents:
                                            match node:
                                                case BaseTeam():
                                                    yield from node.iter_agents()
                                                case Agent():
                                                    yield node
                                                case _:
                                                    msg = f"Invalid node type: {type(node)}"
                                                    raise ValueError(msg)
                                    

                                    pick_agents async

                                    pick_agents(task: str) -> Sequence[MessageNode[Any, Any]]
                                    

                                    Pick agents to run.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def pick_agents(self, task: str) -> Sequence[MessageNode[Any, Any]]:
                                        """Pick agents to run."""
                                        if self.picker:
                                            if self.num_picks == 1:
                                                result = await self.picker.talk.pick(self, task, self.pick_prompt)
                                                return [result.selection]
                                            multi_result = await self.picker.talk.pick_multiple(
                                                self,
                                                task,
                                                min_picks=self.num_picks or 1,
                                                max_picks=self.num_picks,
                                                prompt=self.pick_prompt,
                                            )
                                            return multi_result.selections
                                        return list(self.agents)
                                    

                                    run_in_background async

                                    run_in_background(
                                        *prompts: AnyPromptType | Image | PathLike[str] | None,
                                        max_count: int | None = 1,
                                        interval: float = 1.0,
                                        **kwargs: Any,
                                    ) -> ExtendedTeamTalk
                                    

                                    Start execution in background.

                                    Parameters:

                                    Name Type Description Default
                                    prompts AnyPromptType | Image | PathLike[str] | None

                                    Prompts to execute

                                    ()
                                    max_count int | None

                                    Maximum number of executions (None = run indefinitely)

                                    1
                                    interval float

                                    Seconds between executions

                                    1.0
                                    **kwargs Any

                                    Additional args for execute()

                                    {}
                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def run_in_background(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                        max_count: int | None = 1,  # 1 = single execution, None = indefinite
                                        interval: float = 1.0,
                                        **kwargs: Any,
                                    ) -> ExtendedTeamTalk:
                                        """Start execution in background.
                                    
                                        Args:
                                            prompts: Prompts to execute
                                            max_count: Maximum number of executions (None = run indefinitely)
                                            interval: Seconds between executions
                                            **kwargs: Additional args for execute()
                                        """
                                        if self._main_task:
                                            msg = "Execution already running"
                                            raise RuntimeError(msg)
                                        self._infinite = max_count is None
                                    
                                        async def _continuous() -> ChatMessage[Any] | None:
                                            count = 0
                                            last_message = None
                                            while max_count is None or count < max_count:
                                                try:
                                                    result = await self.execute(*prompts, **kwargs)
                                                    last_message = result[-1].message if result else None
                                                    count += 1
                                                    if max_count is None or count < max_count:
                                                        await asyncio.sleep(interval)
                                                except asyncio.CancelledError:
                                                    logger.debug("Background execution cancelled")
                                                    break
                                            return last_message
                                    
                                        self._main_task = self.task_manager.create_task(
                                            _continuous(), name="main_execution"
                                        )
                                        return self._team_talk
                                    

                                    run_sync

                                    run_sync(
                                        *prompt: AnyPromptType | Image | PathLike[str], store_history: bool = True
                                    ) -> ChatMessage[TResult]
                                    

                                    Run agent synchronously (convenience wrapper).

                                    Parameters:

                                    Name Type Description Default
                                    prompt AnyPromptType | Image | PathLike[str]

                                    User query or instruction

                                    ()
                                    store_history bool

                                    Whether the message exchange should be added to the context window

                                    True

                                    Returns: Result containing response and run information

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def run_sync(
                                        self,
                                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        store_history: bool = True,
                                    ) -> ChatMessage[TResult]:
                                        """Run agent synchronously (convenience wrapper).
                                    
                                        Args:
                                            prompt: User query or instruction
                                            store_history: Whether the message exchange should be added to the
                                                           context window
                                        Returns:
                                            Result containing response and run information
                                        """
                                        coro = self.run(*prompt, store_history=store_history)
                                        return self.task_manager.run_task_sync(coro)
                                    
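The wrapper above simply hands the coroutine to the team's task manager. A minimal self-contained sketch of the same sync-over-async pattern (the MiniAgent class here is a stub for illustration, not the real llmling_agent API):

```python
import asyncio

class MiniAgent:
    """Stub standing in for an agent/team with an async run()."""

    async def run(self, prompt: str) -> str:
        await asyncio.sleep(0)  # simulate awaiting an LLM call
        return f"echo: {prompt}"

    def run_sync(self, prompt: str) -> str:
        # Convenience wrapper: drive the coroutine to completion on a
        # fresh event loop, analogous to task_manager.run_task_sync().
        return asyncio.run(self.run(prompt))

agent = MiniAgent()
print(agent.run_sync("hello"))  # echo: hello
```

Note that `asyncio.run` creates its own event loop, so a wrapper like this cannot be called from inside an already running loop.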

                                    stop async

                                    stop()
                                    

                                    Stop background execution if running.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def stop(self):
                                        """Stop background execution if running."""
                                        if self._main_task and not self._main_task.done():
                                            self._main_task.cancel()
                                            await self._main_task
                                        self._main_task = None
                                        await self.task_manager.cleanup_tasks()
                                    
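stop() cancels the background task and then awaits it, so cancellation fully unwinds before cleanup runs. The same cancel-then-await shape in plain asyncio (names here are illustrative; in the library's version the loop catches CancelledError itself and returns normally):

```python
import asyncio
import contextlib

async def worker() -> None:
    while True:  # runs until cancelled
        await asyncio.sleep(0.01)

async def main() -> str:
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.03)
    if not task.done():
        task.cancel()
        # Await the task so cancellation finishes cleanly;
        # suppress the CancelledError it re-raises.
        with contextlib.suppress(asyncio.CancelledError):
            await task
    return "stopped"

print(asyncio.run(main()))  # stopped
```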

                                    temporary_state async

                                    temporary_state(
                                        *,
                                        system_prompts: list[AnyPromptType] | None = None,
                                        replace_prompts: bool = False,
                                        tools: list[ToolType] | None = None,
                                        replace_tools: bool = False,
                                        history: list[AnyPromptType] | SessionQuery | None = None,
                                        replace_history: bool = False,
                                        pause_routing: bool = False,
                                        model: ModelType | None = None,
                                        provider: AgentProvider | None = None,
                                    ) -> AsyncIterator[Self]
                                    

                                    Temporarily modify state of all agents in the team.

                                    All agents in the team will enter their temporary state simultaneously.

                                    Parameters:

                                    | Name | Type | Description | Default |
                                    |---|---|---|---|
                                    | system_prompts | list[AnyPromptType] \| None | Temporary system prompts to use | None |
                                    | replace_prompts | bool | Whether to replace existing prompts | False |
                                    | tools | list[ToolType] \| None | Temporary tools to make available | None |
                                    | replace_tools | bool | Whether to replace existing tools | False |
                                    | history | list[AnyPromptType] \| SessionQuery \| None | Conversation history (prompts or query) | None |
                                    | replace_history | bool | Whether to replace existing history | False |
                                    | pause_routing | bool | Whether to pause message routing | False |
                                    | model | ModelType \| None | Temporary model override | None |
                                    | provider | AgentProvider \| None | Temporary provider override | None |
                                    Source code in src/llmling_agent/delegation/base_team.py
                                    @asynccontextmanager
                                    async def temporary_state(
                                        self,
                                        *,
                                        system_prompts: list[AnyPromptType] | None = None,
                                        replace_prompts: bool = False,
                                        tools: list[ToolType] | None = None,
                                        replace_tools: bool = False,
                                        history: list[AnyPromptType] | SessionQuery | None = None,
                                        replace_history: bool = False,
                                        pause_routing: bool = False,
                                        model: ModelType | None = None,
                                        provider: AgentProvider | None = None,
                                    ) -> AsyncIterator[Self]:
                                        """Temporarily modify state of all agents in the team.
                                    
                                        All agents in the team will enter their temporary state simultaneously.
                                    
                                        Args:
                                            system_prompts: Temporary system prompts to use
                                            replace_prompts: Whether to replace existing prompts
                                            tools: Temporary tools to make available
                                            replace_tools: Whether to replace existing tools
                                            history: Conversation history (prompts or query)
                                            replace_history: Whether to replace existing history
                                            pause_routing: Whether to pause message routing
                                            model: Temporary model override
                                            provider: Temporary provider override
                                        """
                                        # Get all agents (flattened) before entering context
                                        agents = list(self.iter_agents())
                                    
                                        async with AsyncExitStack() as stack:
                                            if pause_routing:
                                                await stack.enter_async_context(self.connections.paused_routing())
                                            # Enter temporary state for all agents
                                            for agent in agents:
                                                await stack.enter_async_context(
                                                    agent.temporary_state(
                                                        system_prompts=system_prompts,
                                                        replace_prompts=replace_prompts,
                                                        tools=tools,
                                                        replace_tools=replace_tools,
                                                        history=history,
                                                        replace_history=replace_history,
                                                        pause_routing=pause_routing,
                                                        model=model,
                                                        provider=provider,
                                                    )
                                                )
                                            try:
                                                yield self
                                            finally:
                                                # AsyncExitStack will handle cleanup of all states
                                                pass
                                    
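The key mechanism is AsyncExitStack: each agent's own temporary_state() context is entered onto a single stack, so unwinding restores every agent's state even if entering a later context fails. A minimal sketch of that pattern with toy context managers (not the real agent API):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def temp_value(store: dict, key: str, value: str):
    """Set store[key] temporarily, restoring the old value on exit."""
    old = store.get(key)
    store[key] = value
    try:
        yield store
    finally:
        store[key] = old

async def main() -> tuple[list, list]:
    agents = [{"model": "base"} for _ in range(3)]
    async with AsyncExitStack() as stack:
        # Enter every agent's temporary state onto one stack.
        for agent in agents:
            await stack.enter_async_context(temp_value(agent, "model", "override"))
        inside = [a["model"] for a in agents]
    # Leaving the stack restores all agents in reverse order.
    after = [a["model"] for a in agents]
    return inside, after

inside, after = asyncio.run(main())
print(inside)  # ['override', 'override', 'override']
print(after)   # ['base', 'base', 'base']
```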

                                    to_tool

                                    to_tool(*, name: str | None = None, description: str | None = None) -> Tool
                                    

                                    Create a tool from this agent.

                                    Parameters:

                                    | Name | Type | Description | Default |
                                    |---|---|---|---|
                                    | name | str \| None | Optional tool name override | None |
                                    | description | str \| None | Optional tool description override | None |
                                    Source code in src/llmling_agent/delegation/base_team.py
                                    def to_tool(self, *, name: str | None = None, description: str | None = None) -> Tool:
                                        """Create a tool from this agent.
                                    
                                        Args:
                                            name: Optional tool name override
                                            description: Optional tool description override
                                        """
                                        tool_name = name or f"ask_{self.name}"
                                    
                                        async def wrapped_tool(prompt: str) -> TResult:
                                            result = await self.run(prompt)
                                            return result.data
                                    
                                        docstring = description or f"Get expert answer from node {self.name}"
                                        if self.description:
                                            docstring = f"{docstring}\n\n{self.description}"
                                    
                                        wrapped_tool.__doc__ = docstring
                                        wrapped_tool.__name__ = tool_name
                                    
                                        return Tool.from_callable(
                                            wrapped_tool,
                                            name_override=tool_name,
                                            description_override=docstring,
                                        )
                                    

                                    wait async

                                    wait() -> ChatMessage[Any] | None
                                    

                                    Wait for background execution to complete and return last message.

                                    Source code in src/llmling_agent/delegation/base_team.py
                                    async def wait(self) -> ChatMessage[Any] | None:
                                        """Wait for background execution to complete and return last message."""
                                        if not self._main_task:
                                            msg = "No execution running"
                                            raise RuntimeError(msg)
                                        if self._infinite:
                                            msg = "Cannot wait on infinite execution"
                                            raise RuntimeError(msg)
                                        try:
                                            return await self._main_task
                                        finally:
                                            await self.task_manager.cleanup_tasks()
                                            self._main_task = None
                                    

                                    ChatMessage dataclass

                                    Common message format for all UI types.

                                    Generically typed as ChatMessage[Type of Content]. The content type can be either str or a BaseModel subclass.

                                    Source code in src/llmling_agent/messaging/messages.py
                                    @dataclass
                                    class ChatMessage[TContent]:
                                        """Common message format for all UI types.
                                    
                                        Generically typed with: ChatMessage[Type of Content]
                                        The type can either be str or a BaseModel subclass.
                                        """
                                    
                                        content: TContent
                                        """Message content, typed as TContent (either str or BaseModel)."""
                                    
                                        role: MessageRole
                                        """Role of the message sender (user/assistant)."""
                                    
                                        metadata: SimpleJsonType = field(default_factory=dict)
                                        """Additional metadata about the message."""
                                    
                                        timestamp: datetime = field(default_factory=get_now)
                                        """When this message was created."""
                                    
                                        cost_info: TokenCost | None = None
                                        """Token usage and costs for this specific message if available."""
                                    
                                        message_id: str = field(default_factory=lambda: str(uuid4()))
                                        """Unique identifier for this message."""
                                    
                                        conversation_id: str | None = None
                                        """ID of the conversation this message belongs to."""
                                    
                                        response_time: float | None = None
                                        """Time it took the LLM to respond."""
                                    
                                        tool_calls: list[ToolCallInfo] = field(default_factory=list)
                                        """List of tool calls made during message generation."""
                                    
                                        associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                                        """List of messages which were generated during the the creation of this messsage."""
                                    
                                        name: str | None = None
                                        """Display name for the message sender in UI."""
                                    
                                        forwarded_from: list[str] = field(default_factory=list)
                                        """List of agent names (the chain) that forwarded this message to the sender."""
                                    
                                        provider_details: dict[str, Any] = field(default_factory=dict)
                                        """Provider specific metadata / extra information."""
                                    
                                        parts: Sequence[ModelResponsePart | ModelRequestPart] = field(default_factory=list)
                                        """The parts of the model message."""
                                    
                                        usage: RequestUsage = field(default_factory=RequestUsage)
                                        """Usage information for the request.
                                    
                                        This has a default to make tests easier,
                                        and to support loading old messages where usage will be missing.
                                        """
                                    
                                        model_name: str | None = None
                                        """The name of the model that generated the response."""
                                    
                                        provider_name: str | None = None
                                        """The name of the LLM provider that generated the response."""
                                    
                                        provider_response_id: str | None = None
                                        """request ID as specified by the model provider.
                                    
                                        This can be used to track the specific request to the model."""
                                    
                                        finish_reason: FinishReason | None = None
                                        """Reason the model finished generating the response.
                                    
                                        Normalized to OpenTelemetry values."""
                                    
                                        @property
                                        def kind(self) -> Literal["request", "response"]:
                                            """Role of the message."""
                                            match self.role:
                                                case "assistant":
                                                    return "response"
                                                case "user":
                                                    return "request"
                                    
                                        def to_pydantic_ai(self) -> ModelRequest | ModelResponse:
                                            """Convert this message to a Pydantic model."""
                                            match self.kind:
                                                case "request":
                                                    return ModelRequest(parts=self.parts, instructions=None)  # type: ignore
                                                case "response":
                                                    return ModelResponse(
                                                        parts=self.parts,  # type: ignore
                                                        usage=self.usage,
                                                        model_name=self.model_name,
                                                        timestamp=self.timestamp,
                                                        provider_name=self.provider_name,
                                                        provider_details=self.provider_details,
                                                        finish_reason=self.finish_reason,
                                                        provider_response_id=self.provider_response_id,
                                                    )
                                    
                                        @classmethod
                                        def from_pydantic_ai[TContentType](
                                            cls,
                                            content: TContentType,
                                            message: ModelRequest | ModelResponse,
                                            conversation_id: str | None = None,
                                            name: str | None = None,
                                            message_id: str | None = None,
                                            forwarded_from: list[str] | None = None,
                                        ) -> ChatMessage[TContentType]:
                                            """Convert a Pydantic model to a ChatMessage."""
                                            match message:
                                                case ModelRequest(parts=parts, instructions=_instructions):
                                                    return ChatMessage(
                                                        parts=parts,
                                                        content=content,
                                                        role="user" if message.kind == "request" else "assistant",
                                                        message_id=message_id or str(uuid.uuid4()),
                                                        # instructions=instructions,
                                                        forwarded_from=forwarded_from or [],
                                                        name=name,
                                                    )
                                                case ModelResponse(
                                                    parts=parts,
                                                    usage=usage,
                                                    model_name=model_name,
                                                    timestamp=timestamp,
                                                    provider_name=provider_name,
                                                    provider_details=provider_details,
                                                    finish_reason=finish_reason,
                                                    provider_response_id=provider_response_id,
                                                ):
                                                    return ChatMessage(
                                                        role="user" if message.kind == "request" else "assistant",
                                                        content=content,
                                                        parts=parts,
                                                        usage=usage,
                                                        message_id=message_id or str(uuid.uuid4()),
                                                        conversation_id=conversation_id,
                                                        model_name=model_name,
                                                        timestamp=timestamp,
                                                        provider_name=provider_name,
                                                        provider_details=provider_details or {},
                                                        finish_reason=finish_reason,
                                                        provider_response_id=provider_response_id,
                                                        name=name,
                                                        forwarded_from=forwarded_from or [],
                                                    )
                                                case _:
                                                    msg = f"Unknown message kind: {message.kind}"
                                                    raise ValueError(msg)
                                    
                                        def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                                            """Create new message showing it was forwarded from another message.
                                    
                                            Args:
                                                previous_message: The message that led to this one's creation
                                    
                                            Returns:
                                                New message with updated chain showing the path through previous message
                                            """
                                            from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                                            return replace(self, forwarded_from=from_)
                                    
                                        def to_text_message(self) -> ChatMessage[str]:
                                            """Convert this message to a text-only version."""
                                            return dataclasses.replace(self, content=str(self.content))  # type: ignore
                                    
                                        def to_request(self) -> Self:
                                            """Convert this message to a request message.
                                    
                                            If the message is already a request (user role), this is a no-op.
                                            If it's a response (assistant role), converts response parts to user content.
                                    
                                            Returns:
                                                New ChatMessage with role='user' and converted parts
                                            """
                                            if self.role == "user":
                                                # Already a request, return as-is
                                                return self
                                    
                                            # Convert response parts to user content
                                            converted_parts: list[Any] = []
                                            user_content: list[UserContent] = []
                                    
                                            for part in self.parts:
                                                match part:
                                                    case TextPart(content=text_content):
                                                        # Text parts become user content strings
                                                        user_content.append(text_content)
                                                    case FilePart(content=binary_content):
                                                        # File parts (images, etc.) become user content directly
                                                        user_content.append(binary_content)
                                                    case _:
                                                        # Other parts (tool calls, etc.) are kept as-is for now
                                                        # Could be extended to handle more conversion cases
                                                        pass
                                    
                                            # Create new UserPromptPart with converted content
                                            if user_content:
                                                if len(user_content) == 1 and isinstance(user_content[0], str):
                                                    # Single string content
                                                    converted_parts = [UserPromptPart(content=user_content[0])]
                                                else:
                                                    # Multi-modal content
                                                    converted_parts = [UserPromptPart(content=user_content)]
                                            else:
                                                # Fallback to text representation if no convertible parts
                                                converted_parts = [UserPromptPart(content=str(self.content))]
                                    
                                            return replace(self, role="user", parts=converted_parts, cost_info=None)
                                    
                                        @property
                                        def data(self) -> TContent:
                                            """Get content as typed data. Provides compat to AgentRunResult."""
                                            return self.content
                                    
                                        def format(
                                            self,
                                            style: FormatStyle = "simple",
                                            *,
                                            template: str | None = None,
                                            variables: dict[str, Any] | None = None,
                                            show_metadata: bool = False,
                                            show_costs: bool = False,
                                        ) -> str:
                                            """Format message with configurable style.
                                    
                                            Args:
                                                style: Predefined style or "custom" for custom template
                                                template: Custom Jinja template (required if style="custom")
                                                variables: Additional variables for template rendering
                                                show_metadata: Whether to include metadata
                                                show_costs: Whether to include cost information
                                    
                                            Raises:
                                                ValueError: If style is "custom" but no template provided
                                                        or if style is invalid
                                            """
                                            from jinjarope import Environment
                                            import yamling
                                    
                                            env = Environment(trim_blocks=True, lstrip_blocks=True)
                                            env.filters["to_yaml"] = yamling.dump_yaml
                                    
                                            match style:
                                                case "custom":
                                                    if not template:
                                                        msg = "Custom style requires a template"
                                                        raise ValueError(msg)
                                                    template_str = template
                                                case _ if style in MESSAGE_TEMPLATES:
                                                    template_str = MESSAGE_TEMPLATES[style]
                                                case _:
                                                    msg = f"Invalid style: {style}"
                                                    raise ValueError(msg)
                                            template_obj = env.from_string(template_str)
                                            vars_ = {
                                                **(self.__dict__),
                                                "show_metadata": show_metadata,
                                                "show_costs": show_costs,
                                            }
                                            if variables:
                                                vars_.update(variables)
                                    
                                            return template_obj.render(**vars_)
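
The part-conversion step inside `to_request()` can be sketched in isolation. The `(kind, payload)` tuples below are hypothetical stand-ins for pydantic-ai `TextPart`/`FilePart` objects; the real method pattern-matches on those part types.

```python
# Sketch of the part conversion in to_request(): text and file parts collapse
# into user content; a single string stays a plain string, anything more
# becomes a list (multi-modal content).
parts = [("text", "Here is the summary."), ("file", b"\x89PNG...")]

user_content = [payload for kind, payload in parts if kind in ("text", "file")]
if len(user_content) == 1 and isinstance(user_content[0], str):
    prompt = user_content[0]   # single string content
else:
    prompt = user_content      # multi-modal content

print(prompt)
```
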
                                    

                                    associated_messages class-attribute instance-attribute

                                    associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                                    

                                    List of messages which were generated during the creation of this message.

                                    content instance-attribute

                                    content: TContent
                                    

                                    Message content, typed as TContent (either str or BaseModel).

                                    conversation_id class-attribute instance-attribute

                                    conversation_id: str | None = None
                                    

                                    ID of the conversation this message belongs to.

                                    cost_info class-attribute instance-attribute

                                    cost_info: TokenCost | None = None
                                    

                                    Token usage and costs for this specific message if available.

                                    data property

                                    data: TContent
                                    

                                    Get content as typed data. Provides compat to AgentRunResult.

                                    finish_reason class-attribute instance-attribute

                                    finish_reason: FinishReason | None = None
                                    

                                    Reason the model finished generating the response.

                                    Normalized to OpenTelemetry values.

                                    forwarded_from class-attribute instance-attribute

                                    forwarded_from: list[str] = field(default_factory=list)
                                    

                                    List of agent names (the chain) that forwarded this message to the sender.

                                    kind property

                                    kind: Literal['request', 'response']
                                    

                                    Message kind (request or response), derived from the role.

                                    message_id class-attribute instance-attribute

                                    message_id: str = field(default_factory=lambda: str(uuid4()))
                                    

                                    Unique identifier for this message.

                                    metadata class-attribute instance-attribute

                                    metadata: SimpleJsonType = field(default_factory=dict)
                                    

                                    Additional metadata about the message.

                                    model_name class-attribute instance-attribute

                                    model_name: str | None = None
                                    

                                    The name of the model that generated the response.

                                    name class-attribute instance-attribute

                                    name: str | None = None
                                    

                                    Display name for the message sender in UI.

                                    parts class-attribute instance-attribute

                                    parts: Sequence[ModelResponsePart | ModelRequestPart] = field(default_factory=list)
                                    

                                    The parts of the model message.

                                    provider_details class-attribute instance-attribute

                                    provider_details: dict[str, Any] = field(default_factory=dict)
                                    

                                    Provider specific metadata / extra information.

                                    provider_name class-attribute instance-attribute

                                    provider_name: str | None = None
                                    

                                    The name of the LLM provider that generated the response.

                                    provider_response_id class-attribute instance-attribute

                                    provider_response_id: str | None = None
                                    

                                    Request ID as specified by the model provider.

                                    This can be used to track the specific request to the model.

                                    response_time class-attribute instance-attribute

                                    response_time: float | None = None
                                    

                                    Time it took the LLM to respond.

                                    role instance-attribute

                                    role: MessageRole
                                    

                                    Role of the message sender (user/assistant).

                                    timestamp class-attribute instance-attribute

                                    timestamp: datetime = field(default_factory=get_now)
                                    

                                    When this message was created.

                                    tool_calls class-attribute instance-attribute

                                    tool_calls: list[ToolCallInfo] = field(default_factory=list)
                                    

                                    List of tool calls made during message generation.

                                    usage class-attribute instance-attribute

                                    usage: RequestUsage = field(default_factory=RequestUsage)
                                    

                                    Usage information for the request.

                                    This has a default to make tests easier, and to support loading old messages where usage will be missing.

                                    format

                                    format(
                                        style: FormatStyle = "simple",
                                        *,
                                        template: str | None = None,
                                        variables: dict[str, Any] | None = None,
                                        show_metadata: bool = False,
                                        show_costs: bool = False,
                                    ) -> str
                                    

                                    Format message with configurable style.

                                    Parameters:

                                        style (FormatStyle, default 'simple'):
                                            Predefined style or "custom" for custom template
                                        template (str | None, default None):
                                            Custom Jinja template (required if style="custom")
                                        variables (dict[str, Any] | None, default None):
                                            Additional variables for template rendering
                                        show_metadata (bool, default False):
                                            Whether to include metadata
                                        show_costs (bool, default False):
                                            Whether to include cost information

                                    Raises:

                                        ValueError:
                                            If style is "custom" but no template provided, or if style is invalid

                                    Source code in src/llmling_agent/messaging/messages.py
                                    def format(
                                        self,
                                        style: FormatStyle = "simple",
                                        *,
                                        template: str | None = None,
                                        variables: dict[str, Any] | None = None,
                                        show_metadata: bool = False,
                                        show_costs: bool = False,
                                    ) -> str:
                                        """Format message with configurable style.
                                    
                                        Args:
                                            style: Predefined style or "custom" for custom template
                                            template: Custom Jinja template (required if style="custom")
                                            variables: Additional variables for template rendering
                                            show_metadata: Whether to include metadata
                                            show_costs: Whether to include cost information
                                    
                                        Raises:
                                            ValueError: If style is "custom" but no template provided
                                                    or if style is invalid
                                        """
                                        from jinjarope import Environment
                                        import yamling
                                    
                                        env = Environment(trim_blocks=True, lstrip_blocks=True)
                                        env.filters["to_yaml"] = yamling.dump_yaml
                                    
                                        match style:
                                            case "custom":
                                                if not template:
                                                    msg = "Custom style requires a template"
                                                    raise ValueError(msg)
                                                template_str = template
                                            case _ if style in MESSAGE_TEMPLATES:
                                                template_str = MESSAGE_TEMPLATES[style]
                                            case _:
                                                msg = f"Invalid style: {style}"
                                                raise ValueError(msg)
                                        template_obj = env.from_string(template_str)
                                        vars_ = {
                                            **(self.__dict__),
                                            "show_metadata": show_metadata,
                                            "show_costs": show_costs,
                                        }
                                        if variables:
                                            vars_.update(variables)
                                    
                                        return template_obj.render(**vars_)
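
The custom-template path boils down to rendering the message's fields through a Jinja environment. A minimal sketch with plain `jinja2` (the library itself builds its environment via `jinjarope` and additionally registers a `to_yaml` filter):

```python
from jinja2 import Environment

# Minimal sketch of format(style="custom", template=...): the dataclass
# fields become template variables. Plain jinja2 stands in here for the
# jinjarope environment used by llmling-agent.
env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string("{{ name or role }}: {{ content }}")

vars_ = {"role": "assistant", "content": "Hello!", "name": "helper"}
rendered = template.render(**vars_)
print(rendered)  # helper: Hello!
```

Extra `variables` passed to `format()` are merged on top of the field dict, so they can override any of these template variables.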
                                    

                                    forwarded

                                    forwarded(previous_message: ChatMessage[Any]) -> Self
                                    

                                    Create new message showing it was forwarded from another message.

                                    Parameters:

                                        previous_message (ChatMessage[Any], required):
                                            The message that led to this one's creation

                                    Returns:

                                        Self:
                                            New message with updated chain showing the path through previous message

                                    Source code in src/llmling_agent/messaging/messages.py
                                    def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                                        """Create new message showing it was forwarded from another message.
                                    
                                        Args:
                                            previous_message: The message that led to this one's creation
                                    
                                        Returns:
                                            New message with updated chain showing the path through previous message
                                        """
                                        from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                                        return replace(self, forwarded_from=from_)
                                    

                                    from_pydantic_ai classmethod

                                    from_pydantic_ai(
                                        content: TContentType,
                                        message: ModelRequest | ModelResponse,
                                        conversation_id: str | None = None,
                                        name: str | None = None,
                                        message_id: str | None = None,
                                        forwarded_from: list[str] | None = None,
                                    ) -> ChatMessage[TContentType]
                                    

                                    Convert a Pydantic model to a ChatMessage.

                                    Source code in src/llmling_agent/messaging/messages.py
                                    @classmethod
                                    def from_pydantic_ai[TContentType](
                                        cls,
                                        content: TContentType,
                                        message: ModelRequest | ModelResponse,
                                        conversation_id: str | None = None,
                                        name: str | None = None,
                                        message_id: str | None = None,
                                        forwarded_from: list[str] | None = None,
                                    ) -> ChatMessage[TContentType]:
                                        """Convert a Pydantic model to a ChatMessage."""
                                        match message:
                                            case ModelRequest(parts=parts, instructions=_instructions):
                                                return ChatMessage(
                                                    parts=parts,
                                                    content=content,
                                                    role="user" if message.kind == "request" else "assistant",
                                                    message_id=message_id or str(uuid.uuid4()),
                                                    # instructions=instructions,
                                                    forwarded_from=forwarded_from or [],
                                                    name=name,
                                                )
                                            case ModelResponse(
                                                parts=parts,
                                                usage=usage,
                                                model_name=model_name,
                                                timestamp=timestamp,
                                                provider_name=provider_name,
                                                provider_details=provider_details,
                                                finish_reason=finish_reason,
                                                provider_response_id=provider_response_id,
                                            ):
                                                return ChatMessage(
                                                    role="user" if message.kind == "request" else "assistant",
                                                    content=content,
                                                    parts=parts,
                                                    usage=usage,
                                                    message_id=message_id or str(uuid.uuid4()),
                                                    conversation_id=conversation_id,
                                                    model_name=model_name,
                                                    timestamp=timestamp,
                                                    provider_name=provider_name,
                                                    provider_details=provider_details or {},
                                                    finish_reason=finish_reason,
                                                    provider_response_id=provider_response_id,
                                                    name=name,
                                                    forwarded_from=forwarded_from or [],
                                                )
                                            case _:
                                                msg = f"Unknown message kind: {message.kind}"
                                                raise ValueError(msg)
                                    

                                    to_pydantic_ai

                                    to_pydantic_ai() -> ModelRequest | ModelResponse
                                    

                                    Convert this message to a Pydantic model.

                                    Source code in src/llmling_agent/messaging/messages.py
                                    def to_pydantic_ai(self) -> ModelRequest | ModelResponse:
                                        """Convert this message to a Pydantic model."""
                                        match self.kind:
                                            case "request":
                                                return ModelRequest(parts=self.parts, instructions=None)  # type: ignore
                                            case "response":
                                                return ModelResponse(
                                                    parts=self.parts,  # type: ignore
                                                    usage=self.usage,
                                                    model_name=self.model_name,
                                                    timestamp=self.timestamp,
                                                    provider_name=self.provider_name,
                                                    provider_details=self.provider_details,
                                                    finish_reason=self.finish_reason,
                                                    provider_response_id=self.provider_response_id,
                                                )
                                    

                                    to_request

                                    to_request() -> Self
                                    

                                    Convert this message to a request message.

                                    If the message is already a request (user role), this is a no-op. If it's a response (assistant role), converts response parts to user content.

                                    Returns:

                                    Type  Description
                                    Self  New ChatMessage with role='user' and converted parts

                                    Source code in src/llmling_agent/messaging/messages.py
                                    def to_request(self) -> Self:
                                        """Convert this message to a request message.
                                    
                                        If the message is already a request (user role), this is a no-op.
                                        If it's a response (assistant role), converts response parts to user content.
                                    
                                        Returns:
                                            New ChatMessage with role='user' and converted parts
                                        """
                                        if self.role == "user":
                                            # Already a request, return as-is
                                            return self
                                    
                                        # Convert response parts to user content
                                        converted_parts: list[Any] = []
                                        user_content: list[UserContent] = []
                                    
                                        for part in self.parts:
                                            match part:
                                                case TextPart(content=text_content):
                                                    # Text parts become user content strings
                                                    user_content.append(text_content)
                                                case FilePart(content=binary_content):
                                                    # File parts (images, etc.) become user content directly
                                                    user_content.append(binary_content)
                                                case _:
                                                    # Other parts (tool calls, etc.) are kept as-is for now
                                                    # Could be extended to handle more conversion cases
                                                    pass
                                    
                                        # Create new UserPromptPart with converted content
                                        if user_content:
                                            if len(user_content) == 1 and isinstance(user_content[0], str):
                                                # Single string content
                                                converted_parts = [UserPromptPart(content=user_content[0])]
                                            else:
                                                # Multi-modal content
                                                converted_parts = [UserPromptPart(content=user_content)]
                                        else:
                                            # Fallback to text representation if no convertible parts
                                            converted_parts = [UserPromptPart(content=str(self.content))]
                                    
                                        return replace(self, role="user", parts=converted_parts, cost_info=None)
                                    

                                    to_text_message

                                    to_text_message() -> ChatMessage[str]
                                    

                                    Convert this message to a text-only version.

                                    Source code in src/llmling_agent/messaging/messages.py
                                    def to_text_message(self) -> ChatMessage[str]:
                                        """Convert this message to a text-only version."""
                                        return dataclasses.replace(self, content=str(self.content))  # type: ignore
                                    

                                    ImageBase64Content

                                    Bases: BaseImageContent

                                    Image from base64 data.

                                    Source code in src/llmling_agent/models/content.py
                                    class ImageBase64Content(BaseImageContent):
                                        """Image from base64 data."""
                                    
                                        type: Literal["image_base64"] = Field("image_base64", init=False)
                                        """Base64-encoded image."""
                                    
                                        data: str
                                        """Base64-encoded image data."""
                                    
                                        mime_type: str = "image/jpeg"
                                        """MIME type of the image."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for vision models."""
                                            data_url = f"data:{self.mime_type};base64,{self.data}"
                                            content = {"url": data_url, "detail": self.detail or "auto"}
                                            return {"type": "image_url", "image_url": content}
                                    
                                        @classmethod
                                        def from_bytes(
                                            cls,
                                            data: bytes,
                                            *,
                                            detail: DetailLevel | None = None,
                                            description: str | None = None,
                                        ) -> ImageBase64Content:
                                            """Create image content from raw bytes.
                                    
                                            Args:
                                                data: Raw image bytes
                                                detail: Optional detail level for processing
                                                description: Optional description of the image
                                            """
                                            content = base64.b64encode(data).decode()
                                            return cls(data=content, detail=detail, description=description)
                                    
                                        @classmethod
                                        def from_pil_image(cls, image: PIL.Image.Image) -> ImageBase64Content:
                                            """Create content from PIL Image."""
                                            with io.BytesIO() as buffer:
                                                image.save(buffer, format="PNG")
                                                return cls(data=base64.b64encode(buffer.getvalue()).decode())
                                    

                                    data instance-attribute

                                    data: str
                                    

                                    Base64-encoded image data.

                                    mime_type class-attribute instance-attribute

                                    mime_type: str = 'image/jpeg'
                                    

                                    MIME type of the image.

                                    type class-attribute instance-attribute

                                    type: Literal['image_base64'] = Field('image_base64', init=False)
                                    

                                    Base64-encoded image.

                                    from_bytes classmethod

                                    from_bytes(
                                        data: bytes, *, detail: DetailLevel | None = None, description: str | None = None
                                    ) -> ImageBase64Content
                                    

                                    Create image content from raw bytes.

                                    Parameters:

                                    Name         Type                Description                           Default
                                    data         bytes               Raw image bytes                       required
                                    detail       DetailLevel | None  Optional detail level for processing  None
                                    description  str | None          Optional description of the image     None
                                    Source code in src/llmling_agent/models/content.py
                                    @classmethod
                                    def from_bytes(
                                        cls,
                                        data: bytes,
                                        *,
                                        detail: DetailLevel | None = None,
                                        description: str | None = None,
                                    ) -> ImageBase64Content:
                                        """Create image content from raw bytes.
                                    
                                        Args:
                                            data: Raw image bytes
                                            detail: Optional detail level for processing
                                            description: Optional description of the image
                                        """
                                        content = base64.b64encode(data).decode()
                                        return cls(data=content, detail=detail, description=description)
                                    

                                    from_pil_image classmethod

                                    from_pil_image(image: Image) -> ImageBase64Content
                                    

                                    Create content from PIL Image.

                                    Source code in src/llmling_agent/models/content.py
                                    @classmethod
                                    def from_pil_image(cls, image: PIL.Image.Image) -> ImageBase64Content:
                                        """Create content from PIL Image."""
                                        with io.BytesIO() as buffer:
                                            image.save(buffer, format="PNG")
                                            return cls(data=base64.b64encode(buffer.getvalue()).decode())
                                    

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for vision models.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for vision models."""
                                        data_url = f"data:{self.mime_type};base64,{self.data}"
                                        content = {"url": data_url, "detail": self.detail or "auto"}
                                        return {"type": "image_url", "image_url": content}
                                    

                                    ImageURLContent

                                    Bases: BaseImageContent

                                    Image from URL.

                                    Source code in src/llmling_agent/models/content.py
                                    class ImageURLContent(BaseImageContent):
                                        """Image from URL."""
                                    
                                        type: Literal["image_url"] = Field("image_url", init=False)
                                        """URL-based image."""
                                    
                                        url: str
                                        """URL to the image."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for vision models."""
                                            content = {"url": self.url, "detail": self.detail or "auto"}
                                            return {"type": "image_url", "image_url": content}
                                    

                                    type class-attribute instance-attribute

                                    type: Literal['image_url'] = Field('image_url', init=False)
                                    

                                    URL-based image.

                                    url instance-attribute

                                    url: str
                                    

                                    URL to the image.

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for vision models.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for vision models."""
                                        content = {"url": self.url, "detail": self.detail or "auto"}
                                        return {"type": "image_url", "image_url": content}
                                    

                                    MessageNode

                                    Bases: MessageEmitter[TDeps, TResult]

                                    Base class for all message processing nodes.

                                    Source code in src/llmling_agent/messaging/messagenode.py
                                    145
                                    146
                                    147
                                    148
                                    149
                                    150
                                    151
                                    152
                                    153
                                    154
                                    155
                                    156
                                    157
                                    158
                                    159
                                    160
                                    161
                                    162
                                    163
                                    class MessageNode[TDeps, TResult](MessageEmitter[TDeps, TResult]):
                                        """Base class for all message processing nodes."""
                                    
                                        tool_used = Signal(ToolCallInfo)
                                        """Signal emitted when node uses a tool."""
                                    
                                        async def pre_run(
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                        ) -> tuple[ChatMessage[Any], list[Content | str]]:
                                            """Hook to prepare a MessgeNode run call.
                                    
                                            Args:
                                                *prompt: The prompt(s) to prepare.
                                    
                                            Returns:
                                                A tuple of:
                                                    - Either the incoming message, or one constructed
                                                      from the prompt(s).
                                                    - A list of prompts to be sent to the model.
                                            """
                                            if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                user_msg = prompt[0]
                                                prompts = await convert_prompts([user_msg.content])
                                                # Update received message's chain to show it came through its source
                                                user_msg = user_msg.forwarded(prompt[0]).to_request()
                                                # clear cost info to avoid double-counting
                                                final_prompt = "\n\n".join(str(p) for p in prompts)
                                            else:
                                                prompts = await convert_prompts(prompt)
                                                final_prompt = "\n\n".join(str(p) for p in prompts)
                                                # use format_prompts?
                                                user_msg = ChatMessage[str](
                                                    content=final_prompt,
                                                    role="user",
                                                    conversation_id=str(uuid4()),
                                                    parts=[
                                                        UserPromptPart(content=[content_to_pydantic_ai(i) for i in prompts])
                                                    ],
                                                )
                                            self.message_received.emit(user_msg)
                                            self.context.current_prompt = final_prompt
                                            return user_msg, prompts
                                    
                                        # async def post_run(
                                        #     self,
                                        #     message: ChatMessage[TResult],
                                        #     previous_message: ChatMessage[Any] | None,
                                        #     wait_for_connections: bool | None = None,
                                        # ) -> ChatMessage[Any]:
                                        #     # For chain processing, update the response's chain
                                        #     if previous_message:
                                        #         message = message.forwarded(previous_message)
                                        #         conversation_id = previous_message.conversation_id
                                        #     else:
                                        #         conversation_id = str(uuid4())
                                        #     # Set conversation_id on response message
                                        #     message = replace(message, conversation_id=conversation_id)
                                        #     self.message_sent.emit(message)
                                        #     await self.log_message(response_msg)
                                        #     await self.connections.route_message(message, wait=wait_for_connections)
                                        #     return message
                                    
                                        # @overload
                                        # async def run(
                                        #     self,
                                        #     *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                        #     wait_for_connections: bool | None = None,
                                        #     store_history: bool = True,
                                        #     output_type: None,
                                        #     **kwargs: Any,
                                        # ) -> ChatMessage[TResult]: ...
                                    
                                        # @overload
                                        # async def run[OutputTypeT](
                                        #     self,
                                        #     *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                        #     wait_for_connections: bool | None = None,
                                        #     store_history: bool = True,
                                        #     output_type: type[OutputTypeT],
                                        #     **kwargs: Any,
                                        # ) -> ChatMessage[OutputTypeT]: ...
                                    
                                        @method_spawner
                                        async def run[OutputTypeT](
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                            wait_for_connections: bool | None = None,
                                            store_history: bool = True,
                                            output_type: type[OutputTypeT] | None = None,
                                            **kwargs: Any,
                                        ) -> ChatMessage[Any]:
                                            """Execute node with prompts and handle message routing.
                                    
                                            Args:
                                                prompt: Input prompts
                                                wait_for_connections: Whether to wait for forwarded messages
                                                store_history: Whether to store in conversation history
                                                output_type: Type of output to expect
                                                **kwargs: Additional arguments for _run
                                            """
                                            from llmling_agent import Agent
                                    
                                            user_msg, prompts = await self.pre_run(*prompt)
                                            message = await self._run(
                                                *prompts,
                                                store_history=store_history,
                                                conversation_id=user_msg.conversation_id,
                                                output_type=output_type,
                                                **kwargs,
                                            )
                                    
                                            # For chain processing, update the response's chain
                                            if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                message = message.forwarded(prompt[0])
                                    
                                            if store_history and isinstance(self, Agent):
                                                self.conversation.add_chat_messages([user_msg, message])
                                            self.message_sent.emit(message)
                                            await self.connections.route_message(message, wait=wait_for_connections)
                                            return message
                                    
                                        @abstractmethod
                                        async def get_stats(self) -> MessageStats | AggregatedMessageStats:
                                            """Get message statistics for this node."""
                                    
                                        @abstractmethod
                                        def run_iter(
                                            self,
                                            *prompts: Any,
                                            **kwargs: Any,
                                        ) -> AsyncIterator[ChatMessage[Any]]:
                                            """Yield messages during execution."""
                                    

                                    tool_used class-attribute instance-attribute

                                    tool_used = Signal(ToolCallInfo)
                                    

                                    Signal emitted when node uses a tool.

                                    get_stats abstractmethod async

                                    Get message statistics for this node.

                                    Source code in src/llmling_agent/messaging/messagenode.py
                                    @abstractmethod
                                    async def get_stats(self) -> MessageStats | AggregatedMessageStats:
                                        """Get message statistics for this node."""
                                    

                                    pre_run async

                                    pre_run(
                                        *prompt: AnyPromptType | Image | PathLike[str] | ChatMessage,
                                    ) -> tuple[ChatMessage[Any], list[Content | str]]
                                    

                                     Hook to prepare a MessageNode run call.

                                     Parameters:

                                     *prompt (AnyPromptType | Image | PathLike[str] | ChatMessage, default ()):
                                         The prompt(s) to prepare.

                                     Returns:

                                     tuple[ChatMessage[Any], list[Content | str]]:
                                         A tuple of:
                                         - Either the incoming message, or one constructed from the prompt(s).
                                         - A list of prompts to be sent to the model.

                                    Source code in src/llmling_agent/messaging/messagenode.py
                                    async def pre_run(
                                        self,
                                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                    ) -> tuple[ChatMessage[Any], list[Content | str]]:
                                        """Hook to prepare a MessgeNode run call.
                                    
                                        Args:
                                            *prompt: The prompt(s) to prepare.
                                    
                                        Returns:
                                            A tuple of:
                                                - Either the incoming message, or one constructed
                                                  from the prompt(s).
                                                - A list of prompts to be sent to the model.
                                        """
                                        if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                            user_msg = prompt[0]
                                            prompts = await convert_prompts([user_msg.content])
                                            # Update received message's chain to show it came through its source
                                            user_msg = user_msg.forwarded(prompt[0]).to_request()
                                            # clear cost info to avoid double-counting
                                            final_prompt = "\n\n".join(str(p) for p in prompts)
                                        else:
                                            prompts = await convert_prompts(prompt)
                                            final_prompt = "\n\n".join(str(p) for p in prompts)
                                            # use format_prompts?
                                            user_msg = ChatMessage[str](
                                                content=final_prompt,
                                                role="user",
                                                conversation_id=str(uuid4()),
                                                parts=[
                                                    UserPromptPart(content=[content_to_pydantic_ai(i) for i in prompts])
                                                ],
                                            )
                                        self.message_received.emit(user_msg)
                                        self.context.current_prompt = final_prompt
                                        return user_msg, prompts
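
                                     The plain-prompt branch above joins the converted prompts into one final prompt and generates a fresh conversation id. A minimal standalone sketch of that step (stdlib only, not importing llmling_agent):

                                     ```python
                                     from uuid import uuid4

                                     # Sketch of pre_run's plain-prompt branch: each prompt is stringified
                                     # and joined with blank lines into the single final prompt.
                                     prompts = ["Summarize this report.", "Keep it under 100 words."]
                                     final_prompt = "\n\n".join(str(p) for p in prompts)

                                     # A fresh conversation id, as the constructed ChatMessage receives.
                                     conversation_id = str(uuid4())

                                     print(final_prompt)
                                     ```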
                                    

                                    run async

                                    run(
                                        *prompt: AnyPromptType | Image | PathLike[str] | ChatMessage,
                                        wait_for_connections: bool | None = None,
                                        store_history: bool = True,
                                        output_type: type[OutputTypeT] | None = None,
                                        **kwargs: Any,
                                    ) -> ChatMessage[Any]
                                    

                                    Execute node with prompts and handle message routing.

                                     Parameters:

                                     prompt (AnyPromptType | Image | PathLike[str] | ChatMessage, default ()):
                                         Input prompts

                                     wait_for_connections (bool | None, default None):
                                         Whether to wait for forwarded messages

                                     store_history (bool, default True):
                                         Whether to store in conversation history

                                     output_type (type[OutputTypeT] | None, default None):
                                         Type of output to expect

                                     **kwargs (Any, default {}):
                                         Additional arguments for _run
                                    Source code in src/llmling_agent/messaging/messagenode.py
                                    @method_spawner
                                    async def run[OutputTypeT](
                                        self,
                                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                        wait_for_connections: bool | None = None,
                                        store_history: bool = True,
                                        output_type: type[OutputTypeT] | None = None,
                                        **kwargs: Any,
                                    ) -> ChatMessage[Any]:
                                        """Execute node with prompts and handle message routing.
                                    
                                        Args:
                                            prompt: Input prompts
                                            wait_for_connections: Whether to wait for forwarded messages
                                            store_history: Whether to store in conversation history
                                            output_type: Type of output to expect
                                            **kwargs: Additional arguments for _run
                                        """
                                        from llmling_agent import Agent
                                    
                                        user_msg, prompts = await self.pre_run(*prompt)
                                        message = await self._run(
                                            *prompts,
                                            store_history=store_history,
                                            conversation_id=user_msg.conversation_id,
                                            output_type=output_type,
                                            **kwargs,
                                        )
                                    
                                        # For chain processing, update the response's chain
                                        if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                            message = message.forwarded(prompt[0])
                                    
                                        if store_history and isinstance(self, Agent):
                                            self.conversation.add_chat_messages([user_msg, message])
                                        self.message_sent.emit(message)
                                        await self.connections.route_message(message, wait=wait_for_connections)
                                        return message
                                    

                                    run_iter abstractmethod

                                    run_iter(*prompts: Any, **kwargs: Any) -> AsyncIterator[ChatMessage[Any]]
                                    

                                    Yield messages during execution.

                                    Source code in src/llmling_agent/messaging/messagenode.py
                                    @abstractmethod
                                    def run_iter(
                                        self,
                                        *prompts: Any,
                                        **kwargs: Any,
                                    ) -> AsyncIterator[ChatMessage[Any]]:
                                        """Yield messages during execution."""
                                    

                                    PDFBase64Content

                                    Bases: BasePDFContent

                                    PDF from base64 data.

                                    Source code in src/llmling_agent/models/content.py
                                    class PDFBase64Content(BasePDFContent):
                                        """PDF from base64 data."""
                                    
                                        type: Literal["pdf_base64"] = Field("pdf_base64", init=False)
                                        """Base64-data based PDF."""
                                    
                                        data: str
                                        """Base64-encoded PDF data."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for PDF handling."""
                                            data_url = f"data:application/pdf;base64,{self.data}"
                                            content = {"url": data_url, "detail": self.detail or "auto"}
                                            return {"type": "file", "file": content}
                                    
                                        @classmethod
                                        def from_bytes(
                                            cls,
                                            data: bytes,
                                            *,
                                            detail: DetailLevel | None = None,
                                            description: str | None = None,
                                        ) -> Self:
                                            """Create PDF content from raw bytes."""
                                            content = base64.b64encode(data).decode()
                                            return cls(data=content, detail=detail, description=description)
                                    

                                    data instance-attribute

                                    data: str
                                    

                                    Base64-encoded PDF data.

                                    type class-attribute instance-attribute

                                    type: Literal['pdf_base64'] = Field('pdf_base64', init=False)
                                    

                                    Base64-data based PDF.

                                    from_bytes classmethod

                                    from_bytes(
                                        data: bytes, *, detail: DetailLevel | None = None, description: str | None = None
                                    ) -> Self
                                    

                                    Create PDF content from raw bytes.

                                    Source code in src/llmling_agent/models/content.py
                                    @classmethod
                                    def from_bytes(
                                        cls,
                                        data: bytes,
                                        *,
                                        detail: DetailLevel | None = None,
                                        description: str | None = None,
                                    ) -> Self:
                                        """Create PDF content from raw bytes."""
                                        content = base64.b64encode(data).decode()
                                        return cls(data=content, detail=detail, description=description)
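
                                     The encoding step in from_bytes is plain stdlib base64; a standalone sketch (not importing llmling_agent):

                                     ```python
                                     import base64

                                     # Raw PDF bytes are base64-encoded and decoded to str before being
                                     # stored on the model's `data` field.
                                     raw = b"%PDF-1.4 minimal example"
                                     encoded = base64.b64encode(raw).decode()

                                     # The encoding round-trips losslessly back to the original bytes.
                                     assert base64.b64decode(encoded) == raw
                                     ```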
                                    

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for PDF handling.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for PDF handling."""
                                        data_url = f"data:application/pdf;base64,{self.data}"
                                        content = {"url": data_url, "detail": self.detail or "auto"}
                                        return {"type": "file", "file": content}
                                    

                                    PDFURLContent

                                    Bases: BasePDFContent

                                    PDF from URL.

                                    Source code in src/llmling_agent/models/content.py
                                    class PDFURLContent(BasePDFContent):
                                        """PDF from URL."""
                                    
                                        type: Literal["pdf_url"] = Field("pdf_url", init=False)
                                        """URL-based PDF."""
                                    
                                        url: str
                                        """URL to the PDF document."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for PDF handling."""
                                            content = {"url": self.url, "detail": self.detail or "auto"}
                                            return {"type": "file", "file": content}
                                    

                                    type class-attribute instance-attribute

                                    type: Literal['pdf_url'] = Field('pdf_url', init=False)
                                    

                                    URL-based PDF.

                                    url instance-attribute

                                    url: str
                                    

                                    URL to the PDF document.

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for PDF handling.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for PDF handling."""
                                        content = {"url": self.url, "detail": self.detail or "auto"}
                                        return {"type": "file", "file": content}
                                    

                                    Team

                                    Bases: BaseTeam[TDeps, Any]

                                    Group of agents that can execute together.

                                    Source code in src/llmling_agent/delegation/team.py
                                    class Team[TDeps = None](BaseTeam[TDeps, Any]):
                                        """Group of agents that can execute together."""
                                    
                                        async def execute(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            **kwargs: Any,
                                        ) -> TeamResponse:
                                            """Run all agents in parallel with monitoring."""
                                            from llmling_agent.talk.talk import Talk
                                    
                                            self._team_talk.clear()
                                    
                                            start_time = get_now()
                                            responses: list[AgentResponse[Any]] = []
                                            errors: dict[str, Exception] = {}
                                            final_prompt = list(prompts)
                                            if self.shared_prompt:
                                                final_prompt.insert(0, self.shared_prompt)
                                            combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                                            all_nodes = list(await self.pick_agents(combined_prompt))
                                            # Create Talk connections for monitoring this execution
                                            execution_talks: list[Talk[Any]] = []
                                            for node in all_nodes:
                                                talk = Talk[Any](
                                                    node,
                                                    [],  # No actual forwarding, just for tracking
                                                    connection_type="run",
                                                    queued=True,
                                                    queue_strategy="latest",
                                                )
                                                execution_talks.append(talk)
                                                self._team_talk.append(talk)  # Add to base class's TeamTalk
                                    
                                            async def _run(node: MessageNode[TDeps, Any]):
                                                try:
                                                    start = perf_counter()
                                                    message = await node.run(*final_prompt, **kwargs)
                                                    timing = perf_counter() - start
                                                    r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                                    responses.append(r)
                                    
                                                    # Update talk stats for this agent
                                                    talk = next(t for t in execution_talks if t.source == node)
                                                    talk._stats.messages.append(message)
                                    
                                                except Exception as e:  # noqa: BLE001
                                                    errors[node.name] = e
                                    
                                            # Run all agents in parallel
                                            await asyncio.gather(*[_run(node) for node in all_nodes])
                                    
                                            return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                    
                                        def __prompt__(self) -> str:
                                            """Format team info for prompts."""
                                            members = ", ".join(a.name for a in self.agents)
                                            desc = f" - {self.description}" if self.description else ""
                                            return f"Parallel Team {self.name!r}{desc}\nMembers: {members}"
                                    
                                        async def run_iter(
                                            self,
                                            *prompts: AnyPromptType,
                                            **kwargs: Any,
                                        ) -> AsyncIterator[ChatMessage[Any]]:
                                            """Yield messages as they arrive from parallel execution."""
                                            queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                                            failures: dict[str, Exception] = {}
                                    
                                            async def _run(node: MessageNode[TDeps, Any]):
                                                try:
                                                    message = await node.run(*prompts, **kwargs)
                                                    await queue.put(message)
                                                except Exception as e:
                                                    logger.exception("Error executing node", name=node.name)
                                                    failures[node.name] = e
                                                    # Put None to maintain queue count
                                                    await queue.put(None)
                                    
                                            # Get nodes to run
                                            combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                            all_nodes = list(await self.pick_agents(combined_prompt))
                                    
                                            # Start all agents
                                            tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                                    
                                            try:
                                                # Yield messages as they arrive
                                                for _ in all_nodes:
                                                    if msg := await queue.get():
                                                        yield msg
                                    
                                                # If any failures occurred, raise error with details
                                                if failures:
                                                    error_details = "\n".join(
                                                        f"- {name}: {error}" for name, error in failures.items()
                                                    )
                                                    error_msg = f"Some nodes failed to execute:\n{error_details}"
                                                    raise RuntimeError(error_msg)
                                    
                                            finally:
                                                # Clean up any remaining tasks
                                                for task in tasks:
                                                    if not task.done():
                                                        task.cancel()
                                    
                                        async def _run(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            wait_for_connections: bool | None = None,
                                            message_id: str | None = None,
                                            conversation_id: str | None = None,
                                            **kwargs: Any,
                                        ) -> ChatMessage[list[Any]]:
                                            """Run all agents in parallel and return combined message."""
                                            result: TeamResponse = await self.execute(*prompts, **kwargs)
                                            message_id = message_id or str(uuid4())
                                            return ChatMessage(
                                                content=[r.message.content for r in result if r.message],
                                                role="assistant",
                                                name=self.name,
                                                message_id=message_id,
                                                conversation_id=conversation_id,
                                                metadata={
                                                    "agent_names": [r.agent_name for r in result],
                                                    "errors": {name: str(error) for name, error in result.errors.items()},
                                                    "start_time": result.start_time.isoformat(),
                                                },
                                            )
                                    
                                        async def run_stream(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            **kwargs: Any,
                                        ) -> AsyncIterator[
                                            tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]
                                        ]:
                                            """Stream responses from all team members in parallel.
                                    
                                            Args:
                                                prompts: Input prompts to process in parallel
                                                kwargs: Additional arguments passed to each agent
                                    
                                            Yields:
                                                Tuples of (agent, event) where agent is the Agent instance
                                                and event is the streaming event from that agent.
                                            """
                                            # Get nodes to run
                                            combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                            all_nodes = list(await self.pick_agents(combined_prompt))
                                    
                                            # Create list of streams that yield (agent, event) tuples
                                            agent_streams = [
                                                normalize_stream_for_teams(agent, *prompts, **kwargs)
                                                for agent in all_nodes
                                                if hasattr(agent, "run_stream")
                                            ]
                                    
                                            # Merge all agent streams
                                            async for agent_event_tuple in as_generated(agent_streams):
                                                yield agent_event_tuple
                                    
                                        async def run_job[TJobResult](
                                            self,
                                            job: Job[TDeps, TJobResult],
                                            *,
                                            store_history: bool = True,
                                            include_agent_tools: bool = True,
                                        ) -> list[AgentResponse[TJobResult]]:
                                            """Execute a job across all team members in parallel.
                                    
                                            Args:
                                                job: Job configuration to execute
                                                store_history: Whether to add job execution to conversation history
                                                include_agent_tools: Whether to include agent's tools alongside job tools
                                    
                                            Returns:
                                                List of responses from all agents
                                    
                                            Raises:
                                                JobError: If job execution fails for any agent
                                                ValueError: If job configuration is invalid
                                            """
                                            from llmling_agent.agent import Agent
                                            from llmling_agent.tasks import JobError
                                    
                                            responses: list[AgentResponse[TJobResult]] = []
                                            errors: dict[str, Exception] = {}
                                            start_time = get_now()
                                    
                                            # Validate dependencies for all agents
                                            if job.required_dependency is not None:
                                                invalid_agents = [
                                                    agent.name
                                                    for agent in self.iter_agents()
                                                    if not isinstance(agent.context.data, job.required_dependency)
                                                ]
                                                if invalid_agents:
                                                    msg = (
                                                        f"Agents {', '.join(invalid_agents)} don't have required "
                                                        f"dependency type: {job.required_dependency}"
                                                    )
                                                    raise JobError(msg)
                                    
                                            try:
                                                # Load knowledge for all agents if provided
                                                if job.knowledge:
                                                    # TODO: resources
                                                    tools = [t.name for t in job.get_tools()]
                                                    await self.distribute(content="", tools=tools)
                                    
                                                prompt = await job.get_prompt()
                                    
                                                async def _run(agent: MessageNode[TDeps, TJobResult]):
                                                    assert isinstance(agent, Agent)
                                                    try:
                                                        with agent.tools.temporary_tools(
                                                            job.get_tools(), exclusive=not include_agent_tools
                                                        ):
                                                            start = perf_counter()
                                                            resp = AgentResponse(
                                                                agent_name=agent.name,
                                                                message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                                                timing=perf_counter() - start,
                                                            )
                                                            responses.append(resp)
                                                    except Exception as e:  # noqa: BLE001
                                                        errors[agent.name] = e
                                    
                                                # Run job in parallel on all agents
                                                await asyncio.gather(*[_run(node) for node in self.agents])
                                    
                                                return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                    
                                            except Exception as e:
                                                msg = "Job execution failed"
                                                logger.exception(msg)
                                                raise JobError(msg) from e
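
The fan-out pattern used by `execute` and `run_job` can be reduced to a stdlib-only sketch: run every node concurrently, record successes with timing, and collect failures per node name instead of letting one exception abort the batch (all names here are illustrative):

```python
import asyncio
from time import perf_counter

async def run_all(nodes, prompt):
    # Fan-out as in Team.execute: run every node concurrently,
    # recording successes (with timing) and failures separately.
    responses: list[tuple[str, object, float]] = []
    errors: dict[str, Exception] = {}

    async def _run(name, fn):
        try:
            start = perf_counter()
            result = await fn(prompt)
            responses.append((name, result, perf_counter() - start))
        except Exception as e:
            errors[name] = e

    await asyncio.gather(*[_run(name, fn) for name, fn in nodes])
    return responses, errors

async def upper(p):
    return p.upper()

async def boom(p):
    raise ValueError("boom")

responses, errors = asyncio.run(run_all([("a", upper), ("b", boom)], "hi"))
```

Because each node's exception is captured inside its own task, one failing member never cancels the others, which is why `TeamResponse` carries both `responses` and `errors`.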
                                    

                                    __prompt__

                                    __prompt__() -> str
                                    

                                    Format team info for prompts.

                                    Source code in src/llmling_agent/delegation/team.py
                                    def __prompt__(self) -> str:
                                        """Format team info for prompts."""
                                        members = ", ".join(a.name for a in self.agents)
                                        desc = f" - {self.description}" if self.description else ""
                                        return f"Parallel Team {self.name!r}{desc}\nMembers: {members}"
                                    

                                    execute async

                                    execute(
                                        *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                                    ) -> TeamResponse
                                    

                                    Run all agents in parallel with monitoring.

                                    Source code in src/llmling_agent/delegation/team.py
                                    async def execute(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                        **kwargs: Any,
                                    ) -> TeamResponse:
                                        """Run all agents in parallel with monitoring."""
                                        from llmling_agent.talk.talk import Talk
                                    
                                        self._team_talk.clear()
                                    
                                        start_time = get_now()
                                        responses: list[AgentResponse[Any]] = []
                                        errors: dict[str, Exception] = {}
                                        final_prompt = list(prompts)
                                        if self.shared_prompt:
                                            final_prompt.insert(0, self.shared_prompt)
                                        combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                                        all_nodes = list(await self.pick_agents(combined_prompt))
                                        # Create Talk connections for monitoring this execution
                                        execution_talks: list[Talk[Any]] = []
                                        for node in all_nodes:
                                            talk = Talk[Any](
                                                node,
                                                [],  # No actual forwarding, just for tracking
                                                connection_type="run",
                                                queued=True,
                                                queue_strategy="latest",
                                            )
                                            execution_talks.append(talk)
                                            self._team_talk.append(talk)  # Add to base class's TeamTalk
                                    
                                        async def _run(node: MessageNode[TDeps, Any]):
                                            try:
                                                start = perf_counter()
                                                message = await node.run(*final_prompt, **kwargs)
                                                timing = perf_counter() - start
                                                r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                                responses.append(r)
                                    
                                                # Update talk stats for this agent
                                                talk = next(t for t in execution_talks if t.source == node)
                                                talk._stats.messages.append(message)
                                    
                                            except Exception as e:  # noqa: BLE001
                                                errors[node.name] = e
                                    
                                        # Run all agents in parallel
                                        await asyncio.gather(*[_run(node) for node in all_nodes])
                                    
                                        return TeamResponse(responses=responses, start_time=start_time, errors=errors)
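The fan-out pattern `execute` relies on — run every member concurrently with `asyncio.gather` and record per-agent failures in a dict instead of failing fast — can be sketched with stdlib asyncio alone. `fan_out` and the worker names below are illustrative, not part of the library:

```python
import asyncio
from time import perf_counter
from typing import Any, Awaitable, Callable

async def fan_out(
    workers: dict[str, Callable[[], Awaitable[Any]]],
) -> tuple[list[tuple[str, Any, float]], dict[str, Exception]]:
    """Run all workers concurrently; collect per-worker results and errors."""
    responses: list[tuple[str, Any, float]] = []
    errors: dict[str, Exception] = {}

    async def _run(name: str, worker: Callable[[], Awaitable[Any]]) -> None:
        try:
            start = perf_counter()
            result = await worker()
            responses.append((name, result, perf_counter() - start))
        except Exception as e:  # one failure must not cancel the siblings
            errors[name] = e

    await asyncio.gather(*[_run(n, w) for n, w in workers.items()])
    return responses, errors

async def _ok() -> str:
    return "done"

async def _bad() -> str:
    raise ValueError("boom")

responses, errors = asyncio.run(fan_out({"a": _ok, "b": _bad}))
```

Because each `_run` swallows its own exception, `gather` always completes and the caller decides what to do with the error dict — the same trade-off `TeamResponse` makes by carrying `errors` alongside `responses`.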
                                    

                                    run_iter async

                                    run_iter(*prompts: AnyPromptType, **kwargs: Any) -> AsyncIterator[ChatMessage[Any]]
                                    

                                    Yield messages as they arrive from parallel execution.

                                    Source code in src/llmling_agent/delegation/team.py
                                    async def run_iter(
                                        self,
                                        *prompts: AnyPromptType,
                                        **kwargs: Any,
                                    ) -> AsyncIterator[ChatMessage[Any]]:
                                        """Yield messages as they arrive from parallel execution."""
                                        queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                                        failures: dict[str, Exception] = {}
                                    
                                        async def _run(node: MessageNode[TDeps, Any]):
                                            try:
                                                message = await node.run(*prompts, **kwargs)
                                                await queue.put(message)
                                            except Exception as e:
                                                logger.exception("Error executing node", name=node.name)
                                                failures[node.name] = e
                                                # Put None to maintain queue count
                                                await queue.put(None)
                                    
                                        # Get nodes to run
                                        combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                        all_nodes = list(await self.pick_agents(combined_prompt))
                                    
                                        # Start all agents
                                        tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                                    
                                        try:
                                            # Yield messages as they arrive
                                            for _ in all_nodes:
                                                if msg := await queue.get():
                                                    yield msg
                                    
                                            # If any failures occurred, raise error with details
                                            if failures:
                                                error_details = "\n".join(
                                                    f"- {name}: {error}" for name, error in failures.items()
                                                )
                                                error_msg = f"Some nodes failed to execute:\n{error_details}"
                                                raise RuntimeError(error_msg)
                                    
                                        finally:
                                            # Clean up any remaining tasks
                                            for task in tasks:
                                                if not task.done():
                                                    task.cancel()
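`run_iter`'s queue trick — every worker puts either its message or a `None` placeholder, so the consumer can await exactly one item per node and yield them in completion order — can be reproduced with plain asyncio. Names here are illustrative; this sketch checks `is not None` explicitly rather than relying on truthiness:

```python
import asyncio
from typing import Any, AsyncIterator, Awaitable, Callable

async def iter_as_completed(
    workers: dict[str, Callable[[], Awaitable[Any]]],
) -> AsyncIterator[Any]:
    """Yield one result per worker in completion order."""
    queue: asyncio.Queue[Any] = asyncio.Queue()
    failures: dict[str, Exception] = {}

    async def _run(name: str, worker: Callable[[], Awaitable[Any]]) -> None:
        try:
            await queue.put(await worker())
        except Exception as e:
            failures[name] = e
            await queue.put(None)  # placeholder so the consumer loop still ends

    tasks = [asyncio.create_task(_run(n, w)) for n, w in workers.items()]
    try:
        for _ in workers:  # exactly one queue item arrives per worker
            if (msg := await queue.get()) is not None:
                yield msg
        if failures:
            raise RuntimeError(f"some workers failed: {sorted(failures)}")
    finally:
        for task in tasks:
            if not task.done():
                task.cancel()

async def _collect() -> list[str]:
    async def fast() -> str:
        return "fast"

    async def slow() -> str:
        await asyncio.sleep(0.05)
        return "slow"

    return [m async for m in iter_as_completed({"f": fast, "s": slow})]

results = asyncio.run(_collect())
```

The fixed `for _ in workers` loop is what lets the consumer terminate without tracking task completion directly: the queue is guaranteed to receive exactly one item per worker, success or not.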
                                    

                                    run_job async

                                    run_job(
                                        job: Job[TDeps, TJobResult],
                                        *,
                                        store_history: bool = True,
                                        include_agent_tools: bool = True,
                                    ) -> list[AgentResponse[TJobResult]]
                                    

                                    Execute a job across all team members in parallel.

Parameters:

- job (Job[TDeps, TJobResult]): Job configuration to execute. Required.
- store_history (bool): Whether to add job execution to conversation history. Default: True.
- include_agent_tools (bool): Whether to include the agent's tools alongside job tools. Default: True.

Returns:

- list[AgentResponse[TJobResult]]: List of responses from all agents.

Raises:

- JobError: If job execution fails for any agent.
- ValueError: If job configuration is invalid.

                                    Source code in src/llmling_agent/delegation/team.py
                                    async def run_job[TJobResult](
                                        self,
                                        job: Job[TDeps, TJobResult],
                                        *,
                                        store_history: bool = True,
                                        include_agent_tools: bool = True,
                                    ) -> list[AgentResponse[TJobResult]]:
                                        """Execute a job across all team members in parallel.
                                    
                                        Args:
                                            job: Job configuration to execute
                                            store_history: Whether to add job execution to conversation history
                                            include_agent_tools: Whether to include agent's tools alongside job tools
                                    
                                        Returns:
                                            List of responses from all agents
                                    
                                        Raises:
                                            JobError: If job execution fails for any agent
                                            ValueError: If job configuration is invalid
                                        """
                                        from llmling_agent.agent import Agent
                                        from llmling_agent.tasks import JobError
                                    
                                        responses: list[AgentResponse[TJobResult]] = []
                                        errors: dict[str, Exception] = {}
                                        start_time = get_now()
                                    
                                        # Validate dependencies for all agents
                                        if job.required_dependency is not None:
                                            invalid_agents = [
                                                agent.name
                                                for agent in self.iter_agents()
                                                if not isinstance(agent.context.data, job.required_dependency)
                                            ]
                                            if invalid_agents:
                                                msg = (
                                                    f"Agents {', '.join(invalid_agents)} don't have required "
                                                    f"dependency type: {job.required_dependency}"
                                                )
                                                raise JobError(msg)
                                    
                                        try:
                                            # Load knowledge for all agents if provided
                                            if job.knowledge:
                                                # TODO: resources
                                                tools = [t.name for t in job.get_tools()]
                                                await self.distribute(content="", tools=tools)
                                    
                                            prompt = await job.get_prompt()
                                    
                                            async def _run(agent: MessageNode[TDeps, TJobResult]):
                                                assert isinstance(agent, Agent)
                                                try:
                                                    with agent.tools.temporary_tools(
                                                        job.get_tools(), exclusive=not include_agent_tools
                                                    ):
                                                        start = perf_counter()
                                                        resp = AgentResponse(
                                                            agent_name=agent.name,
                                                            message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                                            timing=perf_counter() - start,
                                                        )
                                                        responses.append(resp)
                                                except Exception as e:  # noqa: BLE001
                                                    errors[agent.name] = e
                                    
                                            # Run job in parallel on all agents
                                            await asyncio.gather(*[_run(node) for node in self.agents])
                                    
                                            return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                    
                                        except Exception as e:
                                            msg = "Job execution failed"
                                            logger.exception(msg)
                                            raise JobError(msg) from e
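`run_job` swaps the job's tools in through `agent.tools.temporary_tools(..., exclusive=not include_agent_tools)`. The save-and-restore pattern behind such a context manager can be sketched over a plain dict — this is a stand-in for illustration, not the library's actual tool manager:

```python
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def temporary_tools(
    registry: dict[str, object], extra: dict[str, object], *, exclusive: bool
) -> Iterator[dict[str, object]]:
    """Swap tools into a registry for the duration of a block, then restore."""
    saved = dict(registry)
    try:
        if exclusive:  # job tools replace, rather than extend, the agent's own
            registry.clear()
        registry.update(extra)
        yield registry
    finally:  # restore even if the job raised
        registry.clear()
        registry.update(saved)

tools: dict[str, object] = {"search": object()}
with temporary_tools(tools, {"job_tool": object()}, exclusive=True) as active:
    inside = set(active)
after = set(tools)
```

Restoring in `finally` mirrors why `run_job` can re-raise as `JobError` without leaving agents stuck with job-only toolsets.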
                                    

                                    run_stream async

                                    run_stream(
                                        *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                                    ) -> AsyncIterator[tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]]
                                    

                                    Stream responses from all team members in parallel.

Parameters:

- prompts (AnyPromptType | Image | PathLike[str]): Input prompts to process in parallel. Default: ().
- kwargs (Any): Additional arguments passed to each agent. Default: {}.

Yields:

- tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]: Tuples of (agent, event), where agent is the Agent instance and event is the streaming event from that agent.

                                    Source code in src/llmling_agent/delegation/team.py
                                    async def run_stream(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        **kwargs: Any,
                                    ) -> AsyncIterator[
                                        tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]
                                    ]:
                                        """Stream responses from all team members in parallel.
                                    
                                        Args:
                                            prompts: Input prompts to process in parallel
                                            kwargs: Additional arguments passed to each agent
                                    
                                        Yields:
                                            Tuples of (agent, event) where agent is the Agent instance
                                            and event is the streaming event from that agent.
                                        """
                                        # Get nodes to run
                                        combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                        all_nodes = list(await self.pick_agents(combined_prompt))
                                    
                                        # Create list of streams that yield (agent, event) tuples
                                        agent_streams = [
                                            normalize_stream_for_teams(agent, *prompts, **kwargs)
                                            for agent in all_nodes
                                            if hasattr(agent, "run_stream")
                                        ]
                                    
                                        # Merge all agent streams
                                        async for agent_event_tuple in as_generated(agent_streams):
                                            yield agent_event_tuple
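The merge step `as_generated` performs here — interleaving several async streams as each one yields — can be sketched with a shared queue and one sentinel per exhausted stream. This is illustrative code, not the library's implementation:

```python
import asyncio
from typing import Any, AsyncIterator

async def merge_streams(streams: list[AsyncIterator[Any]]) -> AsyncIterator[Any]:
    """Interleave items from several async iterators as each produces them."""
    queue: asyncio.Queue[Any] = asyncio.Queue()
    done = object()  # sentinel marking one exhausted stream

    async def _pump(stream: AsyncIterator[Any]) -> None:
        try:
            async for item in stream:
                await queue.put(item)
        finally:
            await queue.put(done)

    tasks = [asyncio.create_task(_pump(s)) for s in streams]
    finished = 0
    try:
        while finished < len(tasks):  # stop once every stream has sent done
            item = await queue.get()
            if item is done:
                finished += 1
            else:
                yield item
    finally:
        for task in tasks:
            task.cancel()

async def _demo() -> list[Any]:
    async def letters() -> AsyncIterator[str]:
        for c in "ab":
            yield c

    async def numbers() -> AsyncIterator[int]:
        for n in (1, 2):
            yield n

    return [x async for x in merge_streams([letters(), numbers()])]

merged = asyncio.run(_demo())
```

Pairing each item with its producer (as `run_stream` does with `(agent, event)` tuples) is a one-line change: have `_pump` put `(name, item)` instead of the bare item.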
                                    

                                    TeamRun

                                    Bases: BaseTeam[TDeps, TResult]

                                    Handles team operations with monitoring.

                                    Source code in src/llmling_agent/delegation/teamrun.py
                                    237
                                    238
                                    239
                                    240
                                    241
                                    242
                                    243
                                    244
                                    245
                                    246
                                    247
                                    248
                                    249
                                    250
                                    251
                                    252
                                    253
                                    254
                                    255
                                    256
                                    257
                                    258
                                    259
                                    260
                                    261
                                    262
                                    263
                                    264
                                    265
                                    266
                                    267
                                    268
                                    269
                                    270
                                    271
                                    272
                                    273
                                    274
                                    275
                                    276
                                    277
                                    278
                                    279
                                    280
                                    281
                                    282
                                    283
                                    284
                                    285
                                    286
                                    287
                                    288
                                    289
                                    290
                                    291
                                    292
                                    293
                                    294
                                    295
                                    296
                                    297
                                    298
                                    299
                                    300
                                    301
                                    302
                                    303
                                    304
                                    305
                                    306
                                    307
                                    308
                                    309
                                    310
                                    311
                                    312
                                    313
                                    314
                                    315
                                    316
                                    317
                                    318
                                    319
                                    320
                                    321
                                    class TeamRun[TDeps, TResult](BaseTeam[TDeps, TResult]):
                                        """Handles team operations with monitoring."""
                                    
                                        @overload  # validator set: it defines the output
                                        def __init__(
                                            self,
                                            agents: Sequence[MessageNode[TDeps, Any]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            validator: MessageNode[Any, TResult],
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ): ...
                                    
                                        @overload
                                        def __init__(  # no validator, but all nodes same output type.
                                            self,
                                            agents: Sequence[MessageNode[TDeps, TResult]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            validator: None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ): ...
                                    
                                        @overload
                                        def __init__(
                                            self,
                                            agents: Sequence[MessageNode[TDeps, Any]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            validator: MessageNode[Any, TResult] | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                        ): ...
                                    
                                        def __init__(
                                            self,
                                            agents: Sequence[MessageNode[TDeps, Any]],
                                            *,
                                            name: str | None = None,
                                            description: str | None = None,
                                            shared_prompt: str | None = None,
                                            validator: MessageNode[Any, TResult] | None = None,
                                            picker: Agent[Any, Any] | None = None,
                                            num_picks: int | None = None,
                                            pick_prompt: str | None = None,
                                            # result_mode: ResultMode = "last",
                                        ):
                                            super().__init__(
                                                agents,
                                                name=name,
                                                description=description,
                                                shared_prompt=shared_prompt,
                                                picker=picker,
                                                num_picks=num_picks,
                                                pick_prompt=pick_prompt,
                                            )
                                            self.validator = validator
                                            self.result_mode = "last"
                                    
                                        def __prompt__(self) -> str:
                                            """Format team info for prompts."""
                                            members = " -> ".join(a.name for a in self.agents)
                                            desc = f" - {self.description}" if self.description else ""
                                            return f"Sequential Team {self.name!r}{desc}\nPipeline: {members}"
                                    
                                        async def _run(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            wait_for_connections: bool | None = None,
                                            message_id: str | None = None,
                                            conversation_id: str | None = None,
                                            **kwargs: Any,
                                        ) -> ChatMessage[TResult]:
                                            """Run agents sequentially and return combined message.
                                    
                                            This method wraps execute() and extracts the final ChatMessage in order to
                                            fulfill the "message protocol".
                                            """
                                            message_id = message_id or str(uuid4())
                                    
                                            result = await self.execute(*prompts, **kwargs)
                                            all_messages = [r.message for r in result if r.message]
                                            assert all_messages, "Error during execution, returned None for TeamRun"
                                            # Determine content based on mode
                                            match self.result_mode:
                                                case "last":
                                                    content = all_messages[-1].content
                                                # case "concat":
                                                #     content = "\n".join(msg.format() for msg in all_messages)
                                                case _:
                                                    msg = f"Invalid result mode: {self.result_mode}"
                                                    raise ValueError(msg)
                                    
                                            return ChatMessage(
                                                content=content,
                                                role="assistant",
                                                name=self.name,
                                                associated_messages=all_messages,
                                                message_id=message_id,
                                                conversation_id=conversation_id,
                                                metadata={
                                                    "execution_order": [r.agent_name for r in result],
                                                    "start_time": result.start_time.isoformat(),
                                                    "errors": {name: str(error) for name, error in result.errors.items()},
                                                },
                                            )
                                    
                                        async def execute(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                            **kwargs: Any,
                                        ) -> TeamResponse[TResult]:
                                            """Start execution with optional monitoring."""
                                            self._team_talk.clear()
                                            start_time = get_now()
                                            final_prompt = list(prompts)
                                            if self.shared_prompt:
                                                final_prompt.insert(0, self.shared_prompt)
                                    
                                            responses = [
                                                i
                                                async for i in self.execute_iter(*final_prompt)
                                                if isinstance(i, AgentResponse)
                                            ]
                                            return TeamResponse(responses, start_time)
                                    
                                        async def run_iter(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            **kwargs: Any,
                                        ) -> AsyncIterator[ChatMessage[Any]]:
                                            """Yield messages from the execution chain."""
                                            async for item in self.execute_iter(*prompts, **kwargs):
                                                match item:
                                                    case AgentResponse():
                                                        if item.message:
                                                            yield item.message
                                                    case Talk():
                                                        pass
                                    
                                        async def execute_iter(
                                            self,
                                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            **kwargs: Any,
                                        ) -> AsyncIterator[Talk[Any] | AgentResponse[Any]]:
                                            from toprompt import to_prompt
                                    
                                            connections: list[Talk[Any]] = []
                                            try:
                                                combined_prompt = "\n".join([await to_prompt(p) for p in prompt])
                                                all_nodes = list(await self.pick_agents(combined_prompt))
                                                if self.validator:
                                                    all_nodes.append(self.validator)
                                                first = all_nodes[0]
                                                connections = [s.connect_to(t, queued=True) for s, t in pairwise(all_nodes)]
                                                for conn in connections:
                                                    self._team_talk.append(conn)
                                    
                                                # First agent
                                                start = perf_counter()
                                                message = await first.run(*prompt, **kwargs)
                                                timing = perf_counter() - start
                                                response = AgentResponse[Any](first.name, message=message, timing=timing)
                                                yield response
                                    
                                                # Process through chain
                                                for connection in connections:
                                                    target = connection.targets[0]
                                                    target_name = target.name
                                                    yield connection
                                    
                                                    # Let errors propagate - they break the chain
                                                    start = perf_counter()
                                                    messages = await connection.trigger()
                                    
                                                    if target == all_nodes[-1]:
                                                        last_talk = Talk[Any](target, [], connection_type="run")
                                                        if response.message:
                                                            last_talk.stats.messages.append(response.message)
                                                        self._team_talk.append(last_talk)
                                    
                                                    timing = perf_counter() - start
                                                    msg = messages[0]
                                                    response = AgentResponse[Any](target_name, message=msg, timing=timing)
                                                    yield response
                                    
                                            finally:  # Always clean up connections
                                                for connection in connections:
                                                    connection.disconnect()
                                    
                                        async def run_stream(
                                            self,
                                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                            require_all: bool = True,
                                            **kwargs: Any,
                                        ) -> AsyncIterator[
                                            tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]
                                        ]:
                                            """Stream responses through the chain of team members.
                                    
                                            Args:
                                                prompts: Input prompts to process through the chain
                                                require_all: If True, fail if any agent fails. If False,
                                                             continue with remaining agents.
                                                kwargs: Additional arguments passed to each agent
                                    
                                            Yields:
                                                Tuples of (agent, event) where agent is the Agent instance
                                                and event is the streaming event.
                                            """
                                            from pydantic_ai import PartDeltaEvent, TextPartDelta
                                    
                                            from llmling_agent.agent.agent import StreamCompleteEvent
                                    
                                            current_message = prompts
                                            collected_content = []
                                    
                                            for agent in self.agents:
                                                try:
                                                    agent_content = []
                                    
                                                    # Use wrapper to normalize all streaming nodes to (agent, event) tuples
                                                    def _raise_streaming_error(agent=agent):
                                                        msg = f"Agent {agent.name} does not support streaming"
                                                        raise ValueError(msg)  # noqa: TRY301
                                    
                                                    if hasattr(agent, "run_stream"):
                                                        stream = normalize_stream_for_teams(agent, *current_message, **kwargs)
                                                    else:
                                                        _raise_streaming_error()
                                    
                                                    async for agent_event_tuple in stream:
                                                        actual_agent, event = agent_event_tuple
                                                        match event:
                                                            case PartDeltaEvent(delta=TextPartDelta(content_delta=delta)):
                                                                agent_content.append(delta)
                                                                collected_content.append(delta)
                                                                yield (actual_agent, event)  # Yield tuple with agent context
                                                            case StreamCompleteEvent(message=message):
                                                                # Use complete response as input for next agent
                                                                current_message = (message.content,)
                                                                yield (actual_agent, event)  # Yield tuple with agent context
                                                            case _:
                                                                yield (actual_agent, event)  # Yield tuple with agent context
                                    
                                                except Exception as e:
                                                    if require_all:
                                                        msg = f"Chain broken at {agent.name}: {e}"
                                                        logger.exception(msg)
                                                        raise ValueError(msg) from e
                                                    logger.warning("Chain handler failed", name=agent.name, error=e)
                                    
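The chaining that `execute_iter` implements can be sketched in isolation: each member's output becomes the next member's input, and one `AgentResponse`-like result is produced per member. A minimal, self-contained model of that sequential pipeline (plain asyncio, with hypothetical `upper`/`exclaim` callables standing in for agents; not the actual llmling_agent API):

```python
import asyncio


async def upper(text: str) -> str:
    # Stand-in for the first agent's run(): uppercase the input
    return text.upper()


async def exclaim(text: str) -> str:
    # Stand-in for the next agent in the chain
    return text + "!"


async def run_chain(prompt: str, members) -> list[str]:
    """Run members sequentially, feeding each output to the next member."""
    outputs: list[str] = []
    current = prompt
    for member in members:
        current = await member(current)
        outputs.append(current)
    return outputs


print(asyncio.run(run_chain("hi", [upper, exclaim])))  # ['HI', 'HI!']
```

In the real class the hand-off between members happens through queued `Talk` connections (`connect_to(..., queued=True)` plus `trigger()`), but the data flow is the same: the last member's output is what `_run` returns when `result_mode` is `"last"`.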

                                    __prompt__

                                    __prompt__() -> str
                                    

                                    Format team info for prompts.

                                    Source code in src/llmling_agent/delegation/teamrun.py
                                    def __prompt__(self) -> str:
                                        """Format team info for prompts."""
                                        members = " -> ".join(a.name for a in self.agents)
                                        desc = f" - {self.description}" if self.description else ""
                                        return f"Sequential Team {self.name!r}{desc}\nPipeline: {members}"
                                    
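For reference, a quick sketch of the string this produces, using hypothetical team and member names:

```python
name = "pipeline"  # hypothetical team name
description = "analysis chain"  # hypothetical description
members = " -> ".join(["analyzer", "writer"])  # hypothetical member names

# Same formatting logic as __prompt__
desc = f" - {description}" if description else ""
prompt = f"Sequential Team {name!r}{desc}\nPipeline: {members}"
print(prompt)
# Sequential Team 'pipeline' - analysis chain
# Pipeline: analyzer -> writer
```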

                                    execute async

                                    execute(
                                        *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                                    ) -> TeamResponse[TResult]
                                    

                                    Start execution with optional monitoring.

                                    Source code in src/llmling_agent/delegation/teamrun.py
                                    async def execute(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                        **kwargs: Any,
                                    ) -> TeamResponse[TResult]:
                                        """Start execution with optional monitoring."""
                                        self._team_talk.clear()
                                        start_time = get_now()
                                        final_prompt = list(prompts)
                                        if self.shared_prompt:
                                            final_prompt.insert(0, self.shared_prompt)
                                    
                                        responses = [
                                            i
                                            async for i in self.execute_iter(*final_prompt)
                                            if isinstance(i, AgentResponse)
                                        ]
                                        return TeamResponse(responses, start_time)
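Note the filtering step: `execute_iter` yields both `AgentResponse` items and `Talk` connection objects, and `execute` keeps only the responses via an `isinstance` check in an async comprehension. A self-contained sketch of that pattern with minimal stand-in classes (not the real llmling_agent types):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class AgentResponse:
    # Minimal stand-in for llmling_agent's AgentResponse
    agent_name: str


@dataclass
class Talk:
    # Stand-in for a connection object yielded between responses
    source: str


async def execute_iter():
    # Mimics TeamRun.execute_iter: responses interleaved with Talk objects
    yield AgentResponse("analyzer")
    yield Talk("analyzer")
    yield AgentResponse("writer")


async def execute():
    # Same filtering pattern as TeamRun.execute
    return [i async for i in execute_iter() if isinstance(i, AgentResponse)]


responses = asyncio.run(execute())
print([r.agent_name for r in responses])  # ['analyzer', 'writer']
```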
                                    

                                    run_iter async

                                    run_iter(
                                        *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                                    ) -> AsyncIterator[ChatMessage[Any]]
                                    

                                    Yield messages from the execution chain.

                                    Source code in src/llmling_agent/delegation/teamrun.py
                                    async def run_iter(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        **kwargs: Any,
                                    ) -> AsyncIterator[ChatMessage[Any]]:
                                        """Yield messages from the execution chain."""
                                        async for item in self.execute_iter(*prompts, **kwargs):
                                            match item:
                                                case AgentResponse():
                                                    if item.message:
                                                        yield item.message
                                                case Talk():
                                                    pass
                                    

                                    run_stream async

                                    run_stream(
                                        *prompts: AnyPromptType | Image | PathLike[str],
                                        require_all: bool = True,
                                        **kwargs: Any,
                                    ) -> AsyncIterator[tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]]
                                    

                                    Stream responses through the chain of team members.

                                    Parameters:

                                        prompts (AnyPromptType | Image | PathLike[str], default: ()):
                                            Input prompts to process through the chain.
                                        require_all (bool, default: True):
                                            If True, fail if any agent fails. If False, continue with remaining agents.
                                        kwargs (Any, default: {}):
                                            Additional arguments passed to each agent.

                                    Yields:

                                        AsyncIterator[tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]]:
                                            Tuples of (agent, event) where agent is the Agent instance
                                            and event is the streaming event.

                                    Source code in src/llmling_agent/delegation/teamrun.py
                                    async def run_stream(
                                        self,
                                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                        require_all: bool = True,
                                        **kwargs: Any,
                                    ) -> AsyncIterator[
                                        tuple[MessageNode[Any, Any], AgentStreamEvent | StreamCompleteEvent]
                                    ]:
                                        """Stream responses through the chain of team members.
                                    
                                        Args:
                                            prompts: Input prompts to process through the chain
                                            require_all: If True, fail if any agent fails. If False,
                                                         continue with remaining agents.
                                            kwargs: Additional arguments passed to each agent
                                    
                                        Yields:
                                            Tuples of (agent, event) where agent is the Agent instance
                                            and event is the streaming event.
                                        """
                                        from pydantic_ai import PartDeltaEvent, TextPartDelta
                                    
                                        from llmling_agent.agent.agent import StreamCompleteEvent
                                    
                                        current_message = prompts
                                        collected_content = []
                                    
                                        for agent in self.agents:
                                            try:
                                                agent_content = []
                                    
                                                # Use wrapper to normalize all streaming nodes to (agent, event) tuples
                                                def _raise_streaming_error(agent=agent):
                                                    msg = f"Agent {agent.name} does not support streaming"
                                                    raise ValueError(msg)  # noqa: TRY301
                                    
                                                if hasattr(agent, "run_stream"):
                                                    stream = normalize_stream_for_teams(agent, *current_message, **kwargs)
                                                else:
                                                    _raise_streaming_error()
                                    
                                                async for agent_event_tuple in stream:
                                                    actual_agent, event = agent_event_tuple
                                                    match event:
                                                        case PartDeltaEvent(delta=TextPartDelta(content_delta=delta)):
                                                            agent_content.append(delta)
                                                            collected_content.append(delta)
                                                            yield (actual_agent, event)  # Yield tuple with agent context
                                                        case StreamCompleteEvent(message=message):
                                                            # Use complete response as input for next agent
                                                            current_message = (message.content,)
                                                            yield (actual_agent, event)  # Yield tuple with agent context
                                                        case _:
                                                            yield (actual_agent, event)  # Yield tuple with agent context
                                    
                                            except Exception as e:
                                                if require_all:
                                                    msg = f"Chain broken at {agent.name}: {e}"
                                                    logger.exception(msg)
                                                    raise ValueError(msg) from e
                                                logger.warning("Chain handler failed", name=agent.name, error=e)
                                    
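                                    The chaining behavior above can be sketched without the library: each member streams deltas, and once a member's stream completes, its full text becomes the next member's input. The async generators below are hypothetical stand-ins for real MessageNode members, and the plain string deltas stand in for PartDeltaEvent/StreamCompleteEvent.

                                    ```python
                                    import asyncio


                                    async def upper_agent(text: str):
                                        # Streams the uppercased input one character at a time.
                                        for ch in text.upper():
                                            yield ch  # corresponds to a text PartDeltaEvent


                                    async def exclaim_agent(text: str):
                                        # Streams the input with a trailing "!" appended.
                                        for ch in text + "!":
                                            yield ch


                                    async def run_chain(prompt: str, agents) -> str:
                                        current = prompt
                                        for agent in agents:
                                            parts = []
                                            async for delta in agent(current):
                                                parts.append(delta)       # collect deltas as they stream
                                            # Stream complete: the full message feeds the next agent,
                                            # mirroring the StreamCompleteEvent branch above.
                                            current = "".join(parts)
                                        return current


                                    result = asyncio.run(run_chain("hi", [upper_agent, exclaim_agent]))
                                    print(result)  # HI!
                                    ```

                                    The real method additionally yields each (agent, event) tuple to the caller and honors require_all on failures; this sketch keeps only the chaining logic.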

                                    Tool dataclass

                                    Information about a registered tool.

                                    Source code in src/llmling_agent/tools/base.py
                                    @dataclass
                                    class Tool:
                                        """Information about a registered tool."""
                                    
                                        callable: LLMCallableTool
                                        """The actual tool implementation"""
                                    
                                        enabled: bool = True
                                        """Whether the tool is currently enabled"""
                                    
                                        source: ToolSource = "runtime"
                                        """Where the tool came from."""
                                    
                                        priority: int = 100
                                        """Priority for tool execution (lower = higher priority)"""
                                    
                                        requires_confirmation: bool = False
                                        """Whether tool execution needs explicit confirmation"""
                                    
                                        requires_capability: str | None = None
                                        """Optional capability required to use this tool"""
                                    
                                        agent_name: str | None = None
                                        """The agent name as an identifier for agent-as-a-tool."""
                                    
                                        metadata: dict[str, str] = field(default_factory=dict)
                                        """Additional tool metadata"""
                                    
                                        cache_enabled: bool = False
                                        """Whether to enable caching for this tool."""
                                    
                                        category: ToolKind | None = None
                                        """The category of the tool."""
                                    
                                        @property
                                        def schema(self) -> schemez.OpenAIFunctionTool:
                                            """Get the OpenAI function schema for the tool."""
                                            return self.callable.get_schema()
                                    
                                        @property
                                        def name(self) -> str:
                                            """Get tool name."""
                                            return self.callable.name
                                    
                                        @property
                                        def description(self) -> str | None:
                                            """Get tool description."""
                                            return self.callable.description
                                    
                                        def matches_filter(self, state: ToolState) -> bool:
                                            """Check if tool matches state filter."""
                                            match state:
                                                case "all":
                                                    return True
                                                case "enabled":
                                                    return self.enabled
                                                case "disabled":
                                                    return not self.enabled
                                    
                                        @property
                                        def parameters(self) -> list[ToolParameter]:
                                            """Get information about tool parameters."""
                                            schema = self.schema["function"]
                                            properties: dict[str, Property] = schema.get("properties", {})  # type: ignore
                                            required: list[str] = schema.get("required", [])  # type: ignore
                                    
                                            return [
                                                ToolParameter(
                                                    name=name,
                                                    required=name in required,
                                                    type_info=details.get("type"),
                                                    description=details.get("description"),
                                                )
                                                for name, details in properties.items()
                                            ]
                                    
                                        def format_info(self, indent: str = "  ") -> str:
                                            """Format complete tool information."""
                                            lines = [f"{indent}{self.name}"]
                                            if self.description:
                                                lines.append(f"{indent}  {self.description}")
                                            if self.parameters:
                                                lines.append(f"{indent}  Parameters:")
                                                lines.extend(f"{indent}    {param}" for param in self.parameters)
                                            if self.metadata:
                                                lines.append(f"{indent}  Metadata:")
                                                lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                                            return "\n".join(lines)
                                    
                                        @logfire.instrument("Executing tool {self.name} with args={args}, kwargs={kwargs}")
                                        async def execute(self, *args: Any, **kwargs: Any) -> Any:
                                            """Execute tool, handling both sync and async cases."""
                                            return await execute(self.callable.callable, *args, **kwargs, use_thread=True)
                                    
                                        @classmethod
                                        def from_code(
                                            cls,
                                            code: str,
                                            name: str | None = None,
                                            description: str | None = None,
                                        ) -> Self:
                                            """Create a tool from a code string."""
                                            namespace: dict[str, Any] = {}
                                            exec(code, namespace)
                                            func = next((v for v in namespace.values() if callable(v)), None)
                                            if not func:
                                                msg = "No callable found in provided code"
                                                raise ValueError(msg)
                                            return cls.from_callable(
                                                func, name_override=name, description_override=description
                                            )
                                    
                                        @classmethod
                                        def from_callable(
                                            cls,
                                            fn: Callable[..., Any] | str,
                                            *,
                                            name_override: str | None = None,
                                            description_override: str | None = None,
                                            schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                            **kwargs: Any,
                                        ) -> Self:
                                            tool = LLMCallableTool.from_callable(
                                                fn,
                                                name_override=name_override,
                                                description_override=description_override,
                                                schema_override=schema_override,
                                            )
                                            return cls(tool, **kwargs)
                                    
                                        @classmethod
                                        def from_crewai_tool(
                                            cls,
                                            tool: Any,
                                            *,
                                            name_override: str | None = None,
                                            description_override: str | None = None,
                                            schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                            **kwargs: Any,
                                        ) -> Self:
                                            """Allows importing crewai tools."""
                                            # validate_import("crewai_tools", "crewai")
                                            try:
                                                from crewai.tools import BaseTool as CrewAiBaseTool  # pyright: ignore
                                            except ImportError as e:
                                                msg = "crewai package not found. Please install it with 'pip install crewai'"
                                                raise ImportError(msg) from e
                                    
                                            if not isinstance(tool, CrewAiBaseTool):
                                                msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                                                raise TypeError(msg)
                                    
                                            return cls.from_callable(
                                                tool._run,
                                                name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                                                description_override=description_override or tool.description,
                                                schema_override=schema_override,
                                                **kwargs,
                                            )
                                    
                                        @classmethod
                                        def from_langchain_tool(
                                            cls,
                                            tool: Any,
                                            *,
                                            name_override: str | None = None,
                                            description_override: str | None = None,
                                            schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                            **kwargs: Any,
                                        ) -> Self:
                                            """Create a tool from a LangChain tool."""
                                            # validate_import("langchain_core", "langchain")
                                            try:
                                                from langchain_core.tools import (  # pyright: ignore
                                                    BaseTool as LangChainBaseTool,
                                                )
                                            except ImportError as e:
                                                msg = "langchain-core package not found."
                                                raise ImportError(msg) from e
                                    
                                            if not isinstance(tool, LangChainBaseTool):
                                                msg = f"Expected LangChain BaseTool, got {type(tool)}"
                                                raise TypeError(msg)
                                    
                                            return cls.from_callable(
                                                tool.invoke,
                                                name_override=name_override or tool.name,
                                                description_override=description_override or tool.description,
                                                schema_override=schema_override,
                                                **kwargs,
                                            )
                                    
                                        @classmethod
                                        def from_autogen_tool(
                                            cls,
                                            tool: Any,
                                            *,
                                            name_override: str | None = None,
                                            description_override: str | None = None,
                                            schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                            **kwargs: Any,
                                        ) -> Self:
                                            """Create a tool from a AutoGen tool."""
                                            # vaidate_import("autogen_core", "autogen")
                                            try:
                                                from autogen_core import CancellationToken  # pyright: ignore
                                                from autogen_core.tools import BaseTool  # pyright: ignore
                                            except ImportError as e:
                                                msg = "autogent_core package not found."
                                                raise ImportError(msg) from e
                                    
                                            if not isinstance(tool, BaseTool):
                                                msg = f"Expected AutoGent BaseTool, got {type(tool)}"
                                                raise TypeError(msg)
                                            token = CancellationToken()
                                    
                                            input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                                    
                                            name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                                            description = (
                                                description_override
                                                or tool.description
                                                or inspect.getdoc(tool.__class__)
                                                or ""
                                            )
                                    
                                            async def wrapper(**kwargs: Any) -> Any:
                                                # Convert kwargs to the expected input model
                                                model = input_model(**kwargs)
                                                return await tool.run(model, cancellation_token=token)
                                    
                                            return cls.from_callable(
                                                wrapper,  # type: ignore
                                                name_override=name,
                                                description_override=description,
                                                schema_override=schema_override,
                                                **kwargs,
                                            )
                                    

                                    agent_name class-attribute instance-attribute

                                    agent_name: str | None = None
                                    

                                    The agent name as an identifier for agent-as-a-tool.

                                    cache_enabled class-attribute instance-attribute

                                    cache_enabled: bool = False
                                    

                                    Whether to enable caching for this tool.

                                    callable instance-attribute

                                    callable: LLMCallableTool
                                    

                                    The actual tool implementation

                                    category class-attribute instance-attribute

                                    category: ToolKind | None = None
                                    

                                    The category of the tool.

                                    description property

                                    description: str | None
                                    

                                    Get tool description.

                                    enabled class-attribute instance-attribute

                                    enabled: bool = True
                                    

                                    Whether the tool is currently enabled

                                    metadata class-attribute instance-attribute

                                    metadata: dict[str, str] = field(default_factory=dict)
                                    

                                    Additional tool metadata

                                    name property

                                    name: str
                                    

                                    Get tool name.

                                    parameters property

                                    parameters: list[ToolParameter]
                                    

                                    Get information about tool parameters.

                                    priority class-attribute instance-attribute

                                    priority: int = 100
                                    

                                    Priority for tool execution (lower = higher priority)

                                    requires_capability class-attribute instance-attribute

                                    requires_capability: str | None = None
                                    

                                    Optional capability required to use this tool

                                    requires_confirmation class-attribute instance-attribute

                                    requires_confirmation: bool = False
                                    

                                    Whether tool execution needs explicit confirmation

                                    schema property

                                    schema: OpenAIFunctionTool
                                    

                                    Get the OpenAI function schema for the tool.

                                    source class-attribute instance-attribute

                                    source: ToolSource = 'runtime'
                                    

                                    Where the tool came from.

                                    execute async

                                    execute(*args: Any, **kwargs: Any) -> Any
                                    

                                    Execute tool, handling both sync and async cases.

                                    Source code in src/llmling_agent/tools/base.py
                                    @logfire.instrument("Executing tool {self.name} with args={args}, kwargs={kwargs}")
                                    async def execute(self, *args: Any, **kwargs: Any) -> Any:
                                        """Execute tool, handling both sync and async cases."""
                                        return await execute(self.callable.callable, *args, **kwargs, use_thread=True)
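The sync/async unification above can be sketched with plain `asyncio` and `inspect`; `run_any` is a hypothetical helper, not the library's `execute`:

```python
# Hypothetical sketch of sync/async unification: call the function,
# then await the result only if it turned out to be awaitable.
import asyncio
import inspect

async def run_any(fn, *args, **kwargs):
    result = fn(*args, **kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_add(a: int, b: int) -> int:
    return a + b

async def async_add(a: int, b: int) -> int:
    return a + b

print(asyncio.run(run_any(sync_add, 1, 2)))   # 3
print(asyncio.run(run_any(async_add, 1, 2)))  # 3
```

Dispatching through a worker thread (`use_thread=True` in the source above) additionally keeps a blocking sync tool from stalling the event loop.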
                                    

                                    format_info

                                    format_info(indent: str = '  ') -> str
                                    

                                    Format complete tool information.

                                    Source code in src/llmling_agent/tools/base.py
                                    def format_info(self, indent: str = "  ") -> str:
                                        """Format complete tool information."""
                                        lines = [f"{indent}{self.name}"]
                                        if self.description:
                                            lines.append(f"{indent}  {self.description}")
                                        if self.parameters:
                                            lines.append(f"{indent}  Parameters:")
                                            lines.extend(f"{indent}    {param}" for param in self.parameters)
                                        if self.metadata:
                                            lines.append(f"{indent}  Metadata:")
                                            lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                                        return "\n".join(lines)
                                    

                                    from_autogen_tool classmethod

                                    from_autogen_tool(
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self
                                    

                                     Create a tool from an AutoGen tool.

                                    Source code in src/llmling_agent/tools/base.py
                                    @classmethod
                                    def from_autogen_tool(
                                        cls,
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self:
                                        """Create a tool from a AutoGen tool."""
                                        # vaidate_import("autogen_core", "autogen")
                                        try:
                                            from autogen_core import CancellationToken  # pyright: ignore
                                            from autogen_core.tools import BaseTool  # pyright: ignore
                                        except ImportError as e:
                                            msg = "autogent_core package not found."
                                            raise ImportError(msg) from e
                                    
                                        if not isinstance(tool, BaseTool):
                                            msg = f"Expected AutoGent BaseTool, got {type(tool)}"
                                            raise TypeError(msg)
                                        token = CancellationToken()
                                    
                                        input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                                    
                                        name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                                        description = (
                                            description_override
                                            or tool.description
                                            or inspect.getdoc(tool.__class__)
                                            or ""
                                        )
                                    
                                        async def wrapper(**kwargs: Any) -> Any:
                                            # Convert kwargs to the expected input model
                                            model = input_model(**kwargs)
                                            return await tool.run(model, cancellation_token=token)
                                    
                                        return cls.from_callable(
                                            wrapper,  # type: ignore
                                            name_override=name,
                                            description_override=description,
                                            schema_override=schema_override,
                                            **kwargs,
                                        )
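The `__orig_bases__` lookup above is the load-bearing trick: a class subclassing a parametrized generic base records the type argument, which is how the AutoGen tool's input model is recovered. A self-contained sketch with stand-in classes (`ToolBase`, `PointTool`, and `Point` are all hypothetical):

```python
# Hypothetical sketch of the __orig_bases__ introspection used above:
# the generic base remembers its type parameter at class-creation time.
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

class ToolBase(Generic[T]):
    """Stand-in for a generic tool base class parametrized by input model."""

@dataclass
class Point:
    x: int
    y: int

class PointTool(ToolBase[Point]):
    """Stand-in tool whose input model is Point."""

# Recover the input model from the parametrized base class.
input_model = PointTool.__orig_bases__[0].__args__[0]
print(input_model(1, 2))  # Point(x=1, y=2)
```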
                                    

                                    from_code classmethod

                                    from_code(code: str, name: str | None = None, description: str | None = None) -> Self
                                    

                                    Create a tool from a code string.

                                    Source code in src/llmling_agent/tools/base.py
                                    @classmethod
                                    def from_code(
                                        cls,
                                        code: str,
                                        name: str | None = None,
                                        description: str | None = None,
                                    ) -> Self:
                                        """Create a tool from a code string."""
                                        namespace: dict[str, Any] = {}
                                        exec(code, namespace)
                                        func = next((v for v in namespace.values() if callable(v)), None)
                                        if not func:
                                            msg = "No callable found in provided code"
                                            raise ValueError(msg)
                                        return cls.from_callable(
                                            func, name_override=name, description_override=description
                                        )
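The mechanism is straightforward to sketch with the standard library alone: `exec` the string into a fresh namespace and take the first callable defined there (which also means a class or helper defined before the target function would be picked up instead):

```python
# Minimal sketch of the from_code mechanism: execute a code string in a
# fresh namespace, then pick the first callable object it defined.
code = '''
def greet(name: str) -> str:
    """Return a greeting."""
    return f"Hello, {name}!"
'''

namespace: dict = {}
exec(code, namespace)
func = next((v for v in namespace.values() if callable(v)), None)
if func is None:
    raise ValueError("No callable found in provided code")
print(func("world"))  # Hello, world!
```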
                                    

                                    from_crewai_tool classmethod

                                    from_crewai_tool(
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self
                                    

                                    Allows importing crewai tools.

                                    Source code in src/llmling_agent/tools/base.py
                                    @classmethod
                                    def from_crewai_tool(
                                        cls,
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self:
                                        """Allows importing crewai tools."""
                                        # vaidate_import("crewai_tools", "crewai")
                                        try:
                                            from crewai.tools import BaseTool as CrewAiBaseTool  # pyright: ignore
                                        except ImportError as e:
                                            msg = "crewai package not found. Please install it with 'pip install crewai'"
                                            raise ImportError(msg) from e
                                    
                                        if not isinstance(tool, CrewAiBaseTool):
                                            msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                                            raise TypeError(msg)
                                    
                                        return cls.from_callable(
                                            tool._run,
                                            name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                                            description_override=description_override or tool.description,
                                            schema_override=schema_override,
                                            **kwargs,
                                        )
                                    

                                    from_langchain_tool classmethod

                                    from_langchain_tool(
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self
                                    

                                    Create a tool from a LangChain tool.

                                    Source code in src/llmling_agent/tools/base.py
                                    @classmethod
                                    def from_langchain_tool(
                                        cls,
                                        tool: Any,
                                        *,
                                        name_override: str | None = None,
                                        description_override: str | None = None,
                                        schema_override: schemez.OpenAIFunctionDefinition | None = None,
                                        **kwargs: Any,
                                    ) -> Self:
                                        """Create a tool from a LangChain tool."""
                                        # vaidate_import("langchain_core", "langchain")
                                        try:
                                            from langchain_core.tools import (  # pyright: ignore
                                                BaseTool as LangChainBaseTool,
                                            )
                                        except ImportError as e:
                                            msg = "langchain-core package not found."
                                            raise ImportError(msg) from e
                                    
                                        if not isinstance(tool, LangChainBaseTool):
                                            msg = f"Expected LangChain BaseTool, got {type(tool)}"
                                            raise TypeError(msg)
                                    
                                        return cls.from_callable(
                                            tool.invoke,
                                            name_override=name_override or tool.name,
                                            description_override=description_override or tool.description,
                                            schema_override=schema_override,
                                            **kwargs,
                                        )
                                    

                                    matches_filter

                                    matches_filter(state: ToolState) -> bool
                                    

                                    Check if tool matches state filter.

                                    Source code in src/llmling_agent/tools/base.py
                                    def matches_filter(self, state: ToolState) -> bool:
                                        """Check if tool matches state filter."""
                                        match state:
                                            case "all":
                                                return True
                                            case "enabled":
                                                return self.enabled
                                            case "disabled":
                                                return not self.enabled
                                    

                                    ToolCallInfo

                                    Bases: Schema

                                    Information about an executed tool call.

                                    Source code in src/llmling_agent/tools/tool_call_info.py
                                    class ToolCallInfo(Schema):
                                        """Information about an executed tool call."""
                                    
                                        tool_name: str
                                        """Name of the tool that was called."""
                                    
                                        args: dict[str, Any]
                                        """Arguments passed to the tool."""
                                    
                                        result: Any
                                        """Result returned by the tool."""
                                    
                                        agent_name: str
                                        """Name of the calling agent."""
                                    
                                        tool_call_id: str = Field(default_factory=lambda: str(uuid4()))
                                        """ID provided by the model (e.g. OpenAI function call ID)."""
                                    
                                        timestamp: datetime = Field(default_factory=get_now)
                                        """When the tool was called."""
                                    
                                        message_id: str | None = None
                                        """ID of the message that triggered this tool call."""
                                    
                                        error: str | None = None
                                        """Error message if the tool call failed."""
                                    
                                        timing: float | None = None
                                        """Time taken for this specific tool call in seconds."""
                                    
                                        agent_tool_name: str | None = None
                                        """If this tool is agent-based, the name of that agent."""
                                    
                                        def format(
                                            self,
                                            style: FormatStyle = "simple",
                                            *,
                                            template: str | None = None,
                                            variables: dict[str, Any] | None = None,
                                            show_timing: bool = True,
                                            show_ids: bool = False,
                                        ) -> str:
                                            """Format tool call information with configurable style.
                                    
                                            Args:
                                                style: Predefined style to use:
                                                    - simple: Compact single-line format
                                                    - detailed: Multi-line with all details
                                                    - markdown: Formatted markdown with syntax highlighting
                                                template: Optional custom template (required if style="custom")
                                                variables: Additional variables for template rendering
                                                show_timing: Whether to include execution timing
                                                show_ids: Whether to include tool_call_id and message_id
                                    
                                            Returns:
                                                Formatted tool call information
                                    
                                            Raises:
                                                ValueError: If style is invalid or custom template is missing
                                            """
                                            from jinjarope import Environment
                                    
                                            # Select template
                                            if template:
                                                template_str = template
                                            elif style in TEMPLATES:
                                                template_str = TEMPLATES[style]
                                            else:
                                                msg = f"Invalid style: {style}"
                                                raise ValueError(msg)
                                    
                                            # Prepare template variables
                                            vars_ = {
                                                "tool_name": self.tool_name,
                                                "args": self.args,  # No pre-formatting needed
                                                "result": self.result,
                                                "error": self.error,
                                                "agent_name": self.agent_name,
                                                "timestamp": self.timestamp,
                                                "timing": self.timing if show_timing else None,
                                                "agent_tool_name": self.agent_tool_name,
                                            }
                                    
                                            if show_ids:
                                                vars_.update({
                                                    "tool_call_id": self.tool_call_id,
                                                    "message_id": self.message_id,
                                                })
                                    
                                            if variables:
                                                vars_.update(variables)
                                    
                                            # Render template
                                            env = Environment(trim_blocks=True, lstrip_blocks=True)
                                            env.filters["repr"] = repr  # Add repr filter
                                            template_obj = env.from_string(template_str)
                                            return template_obj.render(**vars_)
                                    

                                    agent_name instance-attribute

                                    agent_name: str
                                    

                                    Name of the calling agent.

                                    agent_tool_name class-attribute instance-attribute

                                    agent_tool_name: str | None = None
                                    

                                    If this tool is agent-based, the name of that agent.

                                    args instance-attribute

                                    args: dict[str, Any]
                                    

                                    Arguments passed to the tool.

                                    error class-attribute instance-attribute

                                    error: str | None = None
                                    

                                    Error message if the tool call failed.

                                    message_id class-attribute instance-attribute

                                    message_id: str | None = None
                                    

                                    ID of the message that triggered this tool call.

                                    result instance-attribute

                                    result: Any
                                    

                                    Result returned by the tool.

                                    timestamp class-attribute instance-attribute

                                    timestamp: datetime = Field(default_factory=get_now)
                                    

                                    When the tool was called.

                                    timing class-attribute instance-attribute

                                    timing: float | None = None
                                    

                                    Time taken for this specific tool call in seconds.

                                    tool_call_id class-attribute instance-attribute

                                    tool_call_id: str = Field(default_factory=lambda: str(uuid4()))
                                    

                                    ID provided by the model (e.g. OpenAI function call ID).

                                    tool_name instance-attribute

                                    tool_name: str
                                    

                                    Name of the tool that was called.

                                    format

                                    format(
                                        style: FormatStyle = "simple",
                                        *,
                                        template: str | None = None,
                                        variables: dict[str, Any] | None = None,
                                        show_timing: bool = True,
                                        show_ids: bool = False,
                                    ) -> str
                                    

                                    Format tool call information with configurable style.

                                    Parameters:

                                    Name Type Description Default
                                    style FormatStyle

                                    Predefined style to use: - simple: Compact single-line format - detailed: Multi-line with all details - markdown: Formatted markdown with syntax highlighting

                                    'simple'
                                    template str | None

                                    Optional custom template (required if style="custom")

                                    None
                                    variables dict[str, Any] | None

                                    Additional variables for template rendering

                                    None
                                    show_timing bool

                                    Whether to include execution timing

                                    True
                                    show_ids bool

                                    Whether to include tool_call_id and message_id

                                    False

                                    Returns:

                                    Type Description
                                    str

                                    Formatted tool call information

                                    Raises:

                                    Type Description
                                    ValueError

                                    If style is invalid or custom template is missing

                                    Source code in src/llmling_agent/tools/tool_call_info.py
                                    def format(
                                        self,
                                        style: FormatStyle = "simple",
                                        *,
                                        template: str | None = None,
                                        variables: dict[str, Any] | None = None,
                                        show_timing: bool = True,
                                        show_ids: bool = False,
                                    ) -> str:
                                        """Format tool call information with configurable style.
                                    
                                        Args:
                                            style: Predefined style to use:
                                                - simple: Compact single-line format
                                                - detailed: Multi-line with all details
                                                - markdown: Formatted markdown with syntax highlighting
                                            template: Optional custom template (required if style="custom")
                                            variables: Additional variables for template rendering
                                            show_timing: Whether to include execution timing
                                            show_ids: Whether to include tool_call_id and message_id
                                    
                                        Returns:
                                            Formatted tool call information
                                    
                                        Raises:
                                            ValueError: If style is invalid or custom template is missing
                                        """
                                        from jinjarope import Environment
                                    
                                        # Select template
                                        if template:
                                            template_str = template
                                        elif style in TEMPLATES:
                                            template_str = TEMPLATES[style]
                                        else:
                                            msg = f"Invalid style: {style}"
                                            raise ValueError(msg)
                                    
                                        # Prepare template variables
                                        vars_ = {
                                            "tool_name": self.tool_name,
                                            "args": self.args,  # No pre-formatting needed
                                            "result": self.result,
                                            "error": self.error,
                                            "agent_name": self.agent_name,
                                            "timestamp": self.timestamp,
                                            "timing": self.timing if show_timing else None,
                                            "agent_tool_name": self.agent_tool_name,
                                        }
                                    
                                        if show_ids:
                                            vars_.update({
                                                "tool_call_id": self.tool_call_id,
                                                "message_id": self.message_id,
                                            })
                                    
                                        if variables:
                                            vars_.update(variables)
                                    
                                        # Render template
                                        env = Environment(trim_blocks=True, lstrip_blocks=True)
                                        env.filters["repr"] = repr  # Add repr filter
                                        template_obj = env.from_string(template_str)
                                        return template_obj.render(**vars_)
                                    

                                    VideoURLContent

                                    Bases: VideoContent

                                    Video from URL.

                                    Source code in src/llmling_agent/models/content.py
                                    class VideoURLContent(VideoContent):
                                        """Video from URL."""
                                    
                                        type: Literal["video_url"] = Field("video_url", init=False)
                                        """URL-based video."""
                                    
                                        url: str
                                        """URL to the video."""
                                    
                                        def to_openai_format(self) -> dict[str, Any]:
                                            """Convert to OpenAI API format for video models."""
                                            content = {"url": self.url, "format": self.format or "auto"}
                                            return {"type": "video", "video": content}
                                    

                                    type class-attribute instance-attribute

                                    type: Literal['video_url'] = Field('video_url', init=False)
                                    

                                    URL-based video.

                                    url instance-attribute

                                    url: str
                                    

                                    URL to the video.

                                    to_openai_format

                                    to_openai_format() -> dict[str, Any]
                                    

                                    Convert to OpenAI API format for video models.

                                    Source code in src/llmling_agent/models/content.py
                                    def to_openai_format(self) -> dict[str, Any]:
                                        """Convert to OpenAI API format for video models."""
                                        content = {"url": self.url, "format": self.format or "auto"}
                                        return {"type": "video", "video": content}