
llmling_agent

Class info

Classes

Name | Module | Description
Agent | llmling_agent.agent.agent | Agent for AI-powered interaction with LLMling resources and tools.
AgentConfig | llmling_agent.models.agents | Configuration for a single agent in the system. Inherits NodeConfig.
AgentContext | llmling_agent.agent.context | Runtime context for agent execution.
AgentPool | llmling_agent.delegation.pool | Pool managing message processing nodes (agents and teams).
AgentsManifest | llmling_agent.models.manifest | Complete agent configuration manifest defining all available agents.
AudioBase64Content | llmling_agent.models.content | Audio from base64 data.
AudioURLContent | llmling_agent.models.content | Audio from URL.
BaseTeam | llmling_agent.delegation.base_team | Base class for Team and TeamRun.
ChatMessage | llmling_agent.messaging.messages | Common message format for all UI types.
ImageBase64Content | llmling_agent.models.content | Image from base64 data.
ImageURLContent | llmling_agent.models.content | Image from URL.
JSONCode | llmling_agent.common_types | JSON with syntax validation.
MessageNode | llmling_agent.messaging.messagenode | Base class for all message processing nodes.
PDFBase64Content | llmling_agent.models.content | PDF from base64 data.
PDFURLContent | llmling_agent.models.content | PDF from URL.
PythonCode | llmling_agent.common_types | Python with syntax validation.
StructuredAgent | llmling_agent.agent.structured | Wrapper for Agent that enforces a specific result type.
TOMLCode | llmling_agent.common_types | TOML with syntax validation.
Team | llmling_agent.delegation.team | Group of agents that can execute together.
TeamRun | llmling_agent.delegation.teamrun | Handles team operations with monitoring.
Tool | llmling_agent.tools.base | Information about a registered tool.
ToolCallInfo | llmling_agent.tools.tool_call_info | Information about an executed tool call.
VideoURLContent | llmling_agent.models.content | Video from URL.
YAMLCode | llmling_agent.common_types | YAML with syntax validation.

DocStrings

Agent configuration and creation.

Agent

Bases: MessageNode[TDeps, str], TaskManagerMixin

Agent for AI-powered interaction with LLMling resources and tools.

Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

This agent integrates LLMling's resource system with PydanticAI's agent capabilities. It provides:

- Access to resources through RuntimeConfig
- Tool registration for resource operations
- System prompt customization
- Signals
- Message history management
- Database logging
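As a rough orientation for how these pieces fit together, a minimal usage sketch follows. The constructor keywords (`name`, `model`, `system_prompt`) and the assumption that `run()` returns a ChatMessage are inferred from the summaries above, not verified signatures; consult the source file referenced below for the exact API.

```python
# Hypothetical sketch only: keyword arguments and the return shape of run()
# are assumptions, not confirmed against src/llmling_agent/agent/agent.py.
import asyncio

from llmling_agent.agent.agent import Agent  # module path taken from the class table above


async def main() -> None:
    # Assumed constructor keywords; the real Agent may instead be driven by an
    # AgentConfig / AgentsManifest or used as an async context manager.
    agent = Agent(
        name="assistant",
        model="openai:gpt-4o-mini",
        system_prompt="You are a concise helper.",
    )
    # run() is assumed to return a ChatMessage-like object (see ChatMessage above).
    message = await agent.run("Summarize what LLMling agents do.")
    print(message)


asyncio.run(main())
```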

Source code in src/llmling_agent/agent/agent.py (lines 119-1215).
                                              1216
                                              1217
                                              1218
                                              1219
                                              1220
                                              1221
                                              1222
                                              1223
                                              1224
                                              1225
                                              1226
                                              1227
                                              1228
                                              1229
                                              1230
                                              1231
                                              1232
                                              1233
                                              1234
                                              1235
                                              1236
                                              1237
                                              1238
                                              1239
                                              1240
                                              1241
                                              1242
                                              1243
                                              1244
                                              1245
                                              1246
                                              1247
                                              1248
                                              1249
                                              1250
                                              1251
                                              1252
                                              1253
                                              1254
                                              1255
                                              1256
                                              1257
                                              1258
                                              1259
                                              1260
                                              1261
                                              1262
                                              1263
                                              1264
                                              1265
                                              1266
                                              @track_agent("Agent")
                                              class Agent[TDeps](MessageNode[TDeps, str], TaskManagerMixin):
                                                  """Agent for AI-powered interaction with LLMling resources and tools.
                                              
                                                   Generically typed with: Agent[Type of Dependencies]
                                              
                                                  This agent integrates LLMling's resource system with PydanticAI's agent capabilities.
                                                  It provides:
                                                  - Access to resources through RuntimeConfig
                                                  - Tool registration for resource operations
                                                  - System prompt customization
                                                  - Signals
                                                  - Message history management
                                                  - Database logging
                                                  """
                                              
                                                  @dataclass(frozen=True)
                                                  class AgentReset:
                                                      """Emitted when agent is reset."""
                                              
                                                      agent_name: AgentName
                                                      previous_tools: dict[str, bool]
                                                      new_tools: dict[str, bool]
                                                      timestamp: datetime = field(default_factory=get_now)
                                              
                                                   # explicit attribute declarations (works around a mypy inference issue)
                                                  conversation: ConversationManager
                                                  talk: Interactions
                                                  model_changed = Signal(object)  # Model | None
                                                  chunk_streamed = Signal(str, str)  # (chunk, message_id)
                                                  run_failed = Signal(str, Exception)
                                                  agent_reset = Signal(AgentReset)
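
                                                   # Usage sketch (hedged): these signals follow the .connect(callback) API used
                                                   # elsewhere in this class, so consumers can subscribe to them, e.g.:
                                                   #     agent.chunk_streamed.connect(lambda chunk, message_id: print(chunk, end=""))
                                                   #     agent.run_failed.connect(lambda message, exc: logger.error("%s: %s", message, exc))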
                                              
                                                  def __init__(  # noqa: PLR0915
                                                       # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                                                      self,
                                                      name: str = "llmling-agent",
                                                      provider: AgentType = "pydantic_ai",
                                                      *,
                                                      model: ModelType = None,
                                                      runtime: RuntimeConfig | Config | StrPath | None = None,
                                                      context: AgentContext[TDeps] | None = None,
                                                      session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                                      system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                                      description: str | None = None,
                                                      tools: Sequence[ToolType] | None = None,
                                                      capabilities: Capabilities | None = None,
                                                      mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                                      resources: Sequence[Resource | PromptType | str] = (),
                                                      retries: int = 1,
                                                      result_retries: int | None = None,
                                                      end_strategy: EndStrategy = "early",
                                                      defer_model_check: bool = False,
                                                      input_provider: InputProvider | None = None,
                                                      parallel_init: bool = True,
                                                      debug: bool = False,
                                                  ):
                                                      """Initialize agent with runtime configuration.
                                              
                                                      Args:
                                                          runtime: Runtime configuration providing access to resources/tools
                                                          context: Agent context with capabilities and configuration
                                                          provider: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                                          session: Memory configuration.
                                                              - None: Default memory config
                                                              - False: Disable message history (max_messages=0)
                                                              - int: Max tokens for memory
                                                              - str/UUID: Session identifier
                                                              - SessionQuery: Query to recover conversation
                                                              - MemoryConfig: Complete memory configuration
                                                          model: The default model to use (defaults to GPT-4)
                                                          system_prompt: Static system prompts to use for this agent
                                                          name: Name of the agent for logging
                                                          description: Description of the Agent ("what it can do")
                                                          tools: List of tools to register with the agent
                                                          capabilities: Capabilities for the agent
                                                          mcp_servers: MCP servers to connect to
                                                          resources: Additional resources to load
                                                          retries: Default number of retries for failed operations
                                                          result_retries: Max retries for result validation (defaults to retries)
                                                          end_strategy: Strategy for handling tool calls that are requested alongside
                                                                        a final result
                                                          defer_model_check: Whether to defer model evaluation until first run
                                                          input_provider: Provider for human input (tool confirmation / HumanProviders)
                                                          parallel_init: Whether to initialize resources in parallel
                                                          debug: Whether to enable debug mode
                                                      """
                                                      from llmling_agent.agent import AgentContext
                                                      from llmling_agent.agent.conversation import ConversationManager
                                                      from llmling_agent.agent.interactions import Interactions
                                                      from llmling_agent.agent.sys_prompts import SystemPrompts
                                                      from llmling_agent.resource_providers.capability_provider import (
                                                          CapabilitiesResourceProvider,
                                                      )
                                                      from llmling_agent_providers.base import AgentProvider
                                              
                                                      self._infinite = False
                                                       # save some state for async init
                                                      self._owns_runtime = False
                                                      # prepare context
                                                      ctx = context or AgentContext[TDeps].create_default(
                                                          name,
                                                          input_provider=input_provider,
                                                          capabilities=capabilities,
                                                      )
                                                      self._context = ctx
                                                      memory_cfg = (
                                                          session
                                                          if isinstance(session, MemoryConfig)
                                                          else MemoryConfig.from_value(session)
                                                      )
                                                      super().__init__(
                                                          name=name,
                                                          context=ctx,
                                                          description=description,
                                                          enable_logging=memory_cfg.enable,
                                                          mcp_servers=mcp_servers,
                                                      )
                                                      # Initialize runtime
                                                      match runtime:
                                                          case None:
                                                              ctx.runtime = RuntimeConfig.from_config(Config())
                                                          case Config() | str() | PathLike():
                                                              ctx.runtime = RuntimeConfig.from_config(runtime)
                                                          case RuntimeConfig():
                                                              ctx.runtime = runtime
                                              
                                                      runtime_provider = RuntimePromptProvider(ctx.runtime)
                                                      ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                                                      # Initialize tool manager
                                                      all_tools = list(tools or [])
                                                      self.tools = ToolManager(all_tools)
                                                      self.tools.add_provider(self.mcp)
                                                      if builtin_tools := ctx.config.get_tool_provider():
                                                          self.tools.add_provider(builtin_tools)
                                              
                                                      # Initialize conversation manager
                                                      resources = list(resources)
                                                      if ctx.config.knowledge:
                                                          resources.extend(ctx.config.knowledge.get_resources())
                                                      self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                                                      # Initialize provider
                                                      match provider:
                                                          case "pydantic_ai":
                                                              validate_import("pydantic_ai", "pydantic_ai")
                                                              from llmling_agent_providers.pydanticai import PydanticAIProvider
                                              
                                                              if model and not isinstance(model, str):
                                                                  from pydantic_ai import models
                                              
                                                                  assert isinstance(model, models.Model)
                                                              self._provider: AgentProvider = PydanticAIProvider(
                                                                  model=model,
                                                                  retries=retries,
                                                                  end_strategy=end_strategy,
                                                                  result_retries=result_retries,
                                                                  defer_model_check=defer_model_check,
                                                                  debug=debug,
                                                                  context=ctx,
                                                              )
                                                          case "human":
                                                              from llmling_agent_providers.human import HumanProvider
                                              
                                                              self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                                          case Callable():
                                                              from llmling_agent_providers.callback import CallbackProvider
                                              
                                                              self._provider = CallbackProvider(
                                                                  provider, name=name, debug=debug, context=ctx
                                                              )
                                                          case "litellm":
                                                              validate_import("litellm", "litellm")
                                                              from llmling_agent_providers.litellm_provider import LiteLLMProvider
                                              
                                                              self._provider = LiteLLMProvider(
                                                                  name=name,
                                                                  debug=debug,
                                                                  retries=retries,
                                                                  context=ctx,
                                                                  model=model,
                                                              )
                                                          case AgentProvider():
                                                              self._provider = provider
                                                              self._provider.context = ctx
                                                          case _:
                                                              msg = f"Invalid agent type: {type}"
                                                              raise ValueError(msg)
                                                      self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                                              
                                                      if ctx and ctx.definition:
                                                          from llmling_agent.observability import registry
                                              
                                                          registry.register_providers(ctx.definition.observability)
                                              
                                                      # init variables
                                                      self._debug = debug
                                                      self._result_type: type | None = None
                                                      self.parallel_init = parallel_init
                                                      self.name = name
                                                      self._background_task: asyncio.Task[Any] | None = None
                                              
                                                      # Forward provider signals
                                                      self._provider.chunk_streamed.connect(self.chunk_streamed)
                                                      self._provider.model_changed.connect(self.model_changed)
                                                      self._provider.tool_used.connect(self.tool_used)
                                              
                                                      self.talk = Interactions(self)
                                              
                                                      # Set up system prompts
                                                      config_prompts = ctx.config.system_prompts if ctx else []
                                                      all_prompts: list[AnyPromptType] = list(config_prompts)
                                                       if isinstance(system_prompt, (list, tuple)):
                                                          all_prompts.extend(system_prompt)
                                                      else:
                                                          all_prompts.append(system_prompt)
                                                      self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
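
                                                   # Usage sketch (hedged): constructing an agent with an explicit provider and model;
                                                   # the model identifier below is hypothetical and assumes the "pydantic_ai" extra is installed.
                                                   #     agent = Agent(
                                                   #         name="summarizer",
                                                   #         provider="pydantic_ai",
                                                   #         model="openai:gpt-4o-mini",  # hypothetical model id
                                                   #         system_prompt="Summarize the given text in two sentences.",
                                                   #     )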
                                              
                                                  def __repr__(self) -> str:
                                                      desc = f", {self.description!r}" if self.description else ""
                                                      tools = f", tools={len(self.tools)}" if self.tools else ""
                                                      return f"Agent({self.name!r}, provider={self._provider.NAME!r}{desc}{tools})"
                                              
                                                  def __prompt__(self) -> str:
                                                      typ = self._provider.__class__.__name__
                                                      model = self.model_name or "default"
                                                      parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                                                      if self.description:
                                                          parts.append(f"Description: {self.description}")
                                                      parts.extend([self.tools.__prompt__(), self.conversation.__prompt__()])
                                              
                                                      return "\n".join(parts)
                                              
                                                  async def __aenter__(self) -> Self:
                                                      """Enter async context and set up MCP servers."""
                                                      try:
                                                          # Collect all coroutines that need to be run
                                                          coros: list[Coroutine[Any, Any, Any]] = []
                                              
                                                          # Runtime initialization if needed
                                                          runtime_ref = self.context.runtime
                                                          if runtime_ref and not runtime_ref._initialized:
                                                              self._owns_runtime = True
                                                              coros.append(runtime_ref.__aenter__())
                                              
                                                          # Events initialization
                                                          coros.append(super().__aenter__())
                                              
                                                          # Get conversation init tasks directly
                                                          coros.extend(self.conversation.get_initialization_tasks())
                                              
                                                          # Execute coroutines either in parallel or sequentially
                                                          if self.parallel_init and coros:
                                                              await asyncio.gather(*coros)
                                                          else:
                                                              for coro in coros:
                                                                  await coro
                                                          if runtime_ref:
                                                              self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                                          for provider in await self.context.config.get_toolsets():
                                                              self.tools.add_provider(provider)
                                                      except Exception as e:
                                                          # Clean up in reverse order
                                                          if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                                              await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                                          msg = "Failed to initialize agent"
                                                          raise RuntimeError(msg) from e
                                                      else:
                                                          return self
                                              
                                                  async def __aexit__(
                                                      self,
                                                      exc_type: type[BaseException] | None,
                                                      exc_val: BaseException | None,
                                                      exc_tb: TracebackType | None,
                                                  ):
                                                      """Exit async context."""
                                                      await super().__aexit__(exc_type, exc_val, exc_tb)
                                                      try:
                                                          await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                                                      finally:
                                                          if self._owns_runtime and self.context.runtime:
                                                              self.tools.remove_provider("runtime")
                                                              await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                                          # for provider in await self.context.config.get_toolsets():
                                                          #     self.tools.remove_provider(provider.name)
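
                                                   # Usage sketch (hedged): the agent is an async context manager, so runtime,
                                                   # MCP servers and toolsets are initialized on entry and torn down on exit:
                                                   #     async with Agent(name="helper") as agent:
                                                   #         ...  # run prompts while resources are available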
                                              
                                                  @overload
                                                  def __and__(
                                                      self, other: Agent[TDeps] | StructuredAgent[TDeps, Any]
                                                  ) -> Team[TDeps]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: Team[TDeps]) -> Team[TDeps]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: ProcessorCallback[Any]) -> Team[TDeps]: ...
                                              
                                                  def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                                                      """Create agent group using | operator.
                                              
                                                      Example:
                                                          group = analyzer & planner & executor  # Create group of 3
                                                          group = analyzer & existing_group  # Add to existing group
                                                      """
                                                      from llmling_agent.agent import StructuredAgent
                                                      from llmling_agent.delegation.team import Team
                                              
                                                      match other:
                                                          case Team():
                                                              return Team([self, *other.agents])
                                                           case Callable():
                                                               if has_return_type(other, str):
                                                                   agent_2 = Agent.from_callback(other)
                                                               else:
                                                                   agent_2 = StructuredAgent.from_callback(other)
                                                               agent_2.context.pool = self.context.pool
                                                               return Team([self, agent_2])
                                                          case MessageNode():
                                                              return Team([self, other])
                                                          case _:
                                                              msg = f"Invalid agent type: {type(other)}"
                                                              raise ValueError(msg)
                                              
                                                  @overload
                                                  def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
                                              
                                                  @overload
                                                  def __or__[TOtherDeps](
                                                      self,
                                                      other: MessageNode[TOtherDeps, Any],
                                                  ) -> TeamRun[Any, Any]: ...
                                              
                                                  @overload
                                                  def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
                                              
                                                  def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun:
                                                      # Create new execution with sequential mode (for piping)
                                                      from llmling_agent import StructuredAgent, TeamRun
                                              
                                                      if callable(other):
                                                          if has_return_type(other, str):
                                                              other = Agent.from_callback(other)
                                                          else:
                                                              other = StructuredAgent.from_callback(other)
                                                          other.context.pool = self.context.pool
                                              
                                                      return TeamRun([self, other])
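
                                                   # Usage sketch (hedged): | chains nodes into a sequential TeamRun (piping);
                                                   # plain callables are wrapped into agents first, mirroring the & operator above:
                                                   #     pipeline = analyzer | (lambda text: text.upper()) | reporter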
                                              
                                                  @classmethod
                                                  def from_callback(
                                                      cls,
                                                      callback: ProcessorCallback[str],
                                                      *,
                                                      name: str | None = None,
                                                      debug: bool = False,
                                                      **kwargs: Any,
                                                  ) -> Agent[None]:
                                                      """Create an agent from a processing callback.
                                              
                                                      Args:
                                                          callback: Function to process messages. Can be:
                                                              - sync or async
                                                              - with or without context
                                                              - must return str for pipeline compatibility
                                                          name: Optional name for the agent
                                                          debug: Whether to enable debug mode
                                                          kwargs: Additional arguments for agent
                                                      """
                                                      from llmling_agent_providers.callback import CallbackProvider
                                              
                                                      name = name or callback.__name__ or "processor"
                                                      provider = CallbackProvider(callback, name=name)
                                                      return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
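
                                                   # Usage sketch (hedged): wrap a plain str-returning function as an agent:
                                                   #     def shout(text: str) -> str:
                                                   #         return text.upper()
                                                   #
                                                   #     shouter = Agent.from_callback(shout, name="shouter")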
                                              
                                                  @property
                                                  def name(self) -> str:
                                                      """Get agent name."""
                                                      return self._name or "llmling-agent"
                                              
                                                  @name.setter
                                                  def name(self, value: str):
                                                      self._provider.name = value
                                                      self._name = value
                                              
                                                  @property
                                                  def context(self) -> AgentContext[TDeps]:
                                                      """Get agent context."""
                                                      return self._context
                                              
                                                  @context.setter
                                                  def context(self, value: AgentContext[TDeps]):
                                                      """Set agent context and propagate to provider."""
                                                      self._provider.context = value
                                                      self.mcp.context = value
                                                      self._context = value
                                              
                                                  def set_result_type(
                                                      self,
                                                      result_type: type[TResult] | str | ResponseDefinition | None,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ):
                                                      """Set or update the result type for this agent.
                                              
                                                      Args:
                                                          result_type: New result type, can be:
                                                              - A Python type for validation
                                                              - Name of a response definition
                                                              - Response definition instance
                                                              - None to reset to unstructured mode
                                                          tool_name: Optional override for tool name
                                                          tool_description: Optional override for tool description
                                                      """
                                                      logger.debug("Setting result type to: %s for %r", result_type, self.name)
                                                      self._result_type = to_type(result_type)
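
                                                   # Usage sketch (hedged; Summary is a hypothetical pydantic model defined by the caller):
                                                   #     agent.set_result_type(Summary)  # validate future runs against Summary
                                                   #     agent.set_result_type(None)     # back to unstructured string output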
                                              
                                                  @property
                                                  def provider(self) -> AgentProvider:
                                                      """Get the underlying provider."""
                                                      return self._provider
                                              
                                                  @provider.setter
                                                  def provider(self, value: AgentProvider, model: ModelType = None):
                                                      """Set the underlying provider."""
                                                      from llmling_agent_providers.base import AgentProvider
                                              
                                                      name = self.name
                                                      debug = self._debug
                                                      self._provider.chunk_streamed.disconnect(self.chunk_streamed)
                                                      self._provider.model_changed.disconnect(self.model_changed)
                                                      self._provider.tool_used.disconnect(self.tool_used)
                                                      match value:
                                                          case AgentProvider():
                                                              self._provider = value
                                                          case "pydantic_ai":
                                                              validate_import("pydantic_ai", "pydantic_ai")
                                                              from llmling_agent_providers.pydanticai import PydanticAIProvider
                                              
                                                              self._provider = PydanticAIProvider(model=model, name=name, debug=debug)
                                                          case "human":
                                                              from llmling_agent_providers.human import HumanProvider
                                              
                                                              self._provider = HumanProvider(name=name, debug=debug)
                                                          case "litellm":
                                                              validate_import("litellm", "litellm")
                                                              from llmling_agent_providers.litellm_provider import LiteLLMProvider
                                              
                                                              self._provider = LiteLLMProvider(model=model, name=name, debug=debug)
                                                          case Callable():
                                                              from llmling_agent_providers.callback import CallbackProvider
                                              
                                                              self._provider = CallbackProvider(value, name=name, debug=debug)
                                                          case _:
                                                              msg = f"Invalid agent type: {type}"
                                                              raise ValueError(msg)
                                                      self._provider.chunk_streamed.connect(self.chunk_streamed)
                                                      self._provider.model_changed.connect(self.model_changed)
                                                      self._provider.tool_used.connect(self.tool_used)
                                                      self._provider.context = self._context
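
                                                   # Usage sketch (hedged): providers can be swapped at runtime; string shorthands
                                                   # map to the same provider classes as in __init__:
                                                   #     agent.provider = "human"  # route responses through human input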
                                              
                                                  @overload
                                                  def to_structured(
                                                      self,
                                                      result_type: None,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> Self: ...
                                              
                                                  @overload
                                                  def to_structured[TResult](
                                                      self,
                                                      result_type: type[TResult] | str | ResponseDefinition,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> StructuredAgent[TDeps, TResult]: ...
                                              
                                                  def to_structured[TResult](
                                                      self,
                                                      result_type: type[TResult] | str | ResponseDefinition | None,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> StructuredAgent[TDeps, TResult] | Self:
                                                      """Convert this agent to a structured agent.
                                              
                                                      If result_type is None, returns self unchanged (no wrapping).
                                                      Otherwise creates a StructuredAgent wrapper.
                                              
                                                      Args:
                                                          result_type: Type for structured responses. Can be:
                                                              - A Python type (Pydantic model)
                                                              - Name of response definition from context
                                                              - Complete response definition
                                                              - None to skip wrapping
                                                          tool_name: Optional override for result tool name
                                                          tool_description: Optional override for result tool description
                                              
                                                      Returns:
                                                          Either StructuredAgent wrapper or self unchanged
                                                      """
                                                      if result_type is None:
                                                          return self
                                              
                                                      from llmling_agent.agent import StructuredAgent
                                              
                                                      return StructuredAgent(
                                                          self,
                                                          result_type=result_type,
                                                          tool_name=tool_name,
                                                          tool_description=tool_description,
                                                      )
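
                                                   # Usage sketch (hedged; Summary is a hypothetical pydantic model):
                                                   #     structured = agent.to_structured(Summary)  # StructuredAgent wrapper
                                                   #     same_agent = agent.to_structured(None)     # returns self unchanged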
                                              
                                                  def is_busy(self) -> bool:
                                                      """Check if agent is currently processing tasks."""
                                                      return bool(self._pending_tasks or self._background_task)
                                              
                                                  @property
                                                  def model_name(self) -> str | None:
                                                      """Get the model name in a consistent format."""
                                                      return self._provider.model_name
                                              
                                                  def to_tool(
                                                      self,
                                                      *,
                                                      name: str | None = None,
                                                      reset_history_on_run: bool = True,
                                                      pass_message_history: bool = False,
                                                      share_context: bool = False,
                                                      parent: AnyAgent[Any, Any] | None = None,
                                                  ) -> Tool:
                                                      """Create a tool from this agent.
                                              
                                                      Args:
                                                          name: Optional tool name override
                                                          reset_history_on_run: Clear agent's history before each run
                                                          pass_message_history: Pass parent's message history to agent
                                                          share_context: Whether to pass parent's context/deps
                                                          parent: Optional parent agent for history/context sharing
                                                      """
                                                       tool_name = name or f"ask_{self.name}"
                                              
                                                      async def wrapped_tool(prompt: str) -> str:
                                                          if pass_message_history and not parent:
                                                              msg = "Parent agent required for message history sharing"
                                                              raise ToolError(msg)
                                              
                                                          if reset_history_on_run:
                                                              self.conversation.clear()
                                              
                                                          history = None
                                                          if pass_message_history and parent:
                                                              history = parent.conversation.get_history()
                                                              old = self.conversation.get_history()
                                                              self.conversation.set_history(history)
                                                          result = await self.run(prompt, result_type=self._result_type)
                                                          if history:
                                                              self.conversation.set_history(old)
                                                          return result.data
                                              
                                                      normalized_name = self.name.replace("_", " ").title()
                                                      docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                                      if self.description:
                                                          docstring = f"{docstring}\n\n{self.description}"
                                              
                                                      wrapped_tool.__doc__ = docstring
                                                      wrapped_tool.__name__ = tool_name
                                              
                                                      return Tool.from_callable(
                                                          wrapped_tool,
                                                          name_override=tool_name,
                                                          description_override=docstring,
                                                      )
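
                                                   # Usage sketch (hedged): expose this agent as a tool another agent can call;
                                                   # only parameters documented above are used:
                                                   #     reviewer_tool = reviewer.to_tool(parent=coder, pass_message_history=True)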
                                              
                                                  @track_action("Calling Agent.run: {prompts}:")
                                                  async def _run(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                                                      result_type: type[TResult] | None = None,
                                                      model: ModelType = None,
                                                      store_history: bool = True,
                                                      tool_choice: str | list[str] | None = None,
                                                      usage_limits: UsageLimits | None = None,
                                                      message_id: str | None = None,
                                                      conversation_id: str | None = None,
                                                      messages: list[ChatMessage[Any]] | None = None,
                                                      wait_for_connections: bool | None = None,
                                                  ) -> ChatMessage[TResult]:
                                                      """Run agent with prompt and get response.
                                              
                                                      Args:
                                                          prompts: User query or instruction
                                                          result_type: Optional type for structured responses
                                                          model: Optional model override
                                                          store_history: Whether the message exchange should be added to the
                                                                          context window
                                                          tool_choice: Filter tool choice by name
                                                          usage_limits: Optional usage limits for the model
                                                          message_id: Optional message id for the returned message.
                                                                      Automatically generated if not provided.
                                                          conversation_id: Optional conversation id for the returned message.
                                                          messages: Optional list of messages to replace the conversation history
                                                          wait_for_connections: Whether to wait for connected agents to complete
                                              
                                                      Returns:
                                                          Result containing response and run information
                                              
                                                      Raises:
                                                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                                      """
                                                      """Run agent with prompt and get response."""
                                                      message_id = message_id or str(uuid4())
                                                      tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                                      self.set_result_type(result_type)
                                                      start_time = time.perf_counter()
                                                      sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                              
                                                      message_history = (
                                                          messages if messages is not None else self.conversation.get_history()
                                                      )
                                                      try:
                                                          result = await self._provider.generate_response(
                                                              *await convert_prompts(prompts),
                                                              message_id=message_id,
                                                              message_history=message_history,
                                                              tools=tools,
                                                              result_type=result_type,
                                                              usage_limits=usage_limits,
                                                              model=model,
                                                              system_prompt=sys_prompt,
                                                          )
                                                      except Exception as e:
                                                          logger.exception("Agent run failed")
                                                          self.run_failed.emit("Agent run failed", e)
                                                          raise
                                                      else:
                                                          response_msg = ChatMessage[TResult](
                                                              content=result.content,
                                                              role="assistant",
                                                              name=self.name,
                                                              model=result.model_name,
                                                              message_id=message_id,
                                                              conversation_id=conversation_id,
                                                              tool_calls=result.tool_calls,
                                                              cost_info=result.cost_and_usage,
                                                              response_time=time.perf_counter() - start_time,
                                                              provider_extra=result.provider_extra or {},
                                                          )
                                                          if self._debug:
                                                              import devtools
                                              
                                                              devtools.debug(response_msg)
                                                          return response_msg
                                              
                                                  @asynccontextmanager
                                                  async def run_stream(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      result_type: type[TResult] | None = None,
                                                      model: ModelType = None,
                                                      tool_choice: str | list[str] | None = None,
                                                      store_history: bool = True,
                                                      usage_limits: UsageLimits | None = None,
                                                      message_id: str | None = None,
                                                      conversation_id: str | None = None,
                                                      messages: list[ChatMessage[Any]] | None = None,
                                                      wait_for_connections: bool | None = None,
                                                  ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                                                      """Run agent with prompt and get a streaming response.
                                              
                                                      Args:
                                                          prompt: User query or instruction
                                                          result_type: Optional type for structured responses
                                                          model: Optional model override
                                                          tool_choice: Filter tool choice by name
                                                          store_history: Whether the message exchange should be added to the
                                                                         context window
                                                          usage_limits: Optional usage limits for the model
                                                          message_id: Optional message id for the returned message.
                                                                      Automatically generated if not provided.
                                                          conversation_id: Optional conversation id for the returned message.
                                                          messages: Optional list of messages to replace the conversation history
                                                          wait_for_connections: Whether to wait for connected agents to complete
                                              
                                                      Returns:
                                                          A streaming result to iterate over.
                                              
                                                      Raises:
                                                          UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                                      """
                                                      message_id = message_id or str(uuid4())
                                                      user_msg, prompts = await self.pre_run(*prompt)
                                                      self.set_result_type(result_type)
                                                      start_time = time.perf_counter()
                                                      sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                                      tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                                      message_history = (
                                                          messages if messages is not None else self.conversation.get_history()
                                                      )
                                                      try:
                                                          async with self._provider.stream_response(
                                                              *prompts,
                                                              message_id=message_id,
                                                              message_history=message_history,
                                                              result_type=result_type,
                                                              model=model,
                                                              store_history=store_history,
                                                              tools=tools,
                                                              usage_limits=usage_limits,
                                                              system_prompt=sys_prompt,
                                                          ) as stream:
                                                              yield stream
                                                              usage = stream.usage()
                                                              cost_info = None
                                                              model_name = stream.model_name  # type: ignore
                                                              if model_name:
                                                                  cost_info = await TokenCost.from_usage(
                                                                      usage,
                                                                      model_name,
                                                                      str(user_msg.content),
                                                                      str(stream.formatted_content),  # type: ignore
                                                                  )
                                                              response_msg = ChatMessage[TResult](
                                                                  content=cast(TResult, stream.formatted_content),  # type: ignore
                                                                  role="assistant",
                                                                  name=self.name,
                                                                  model=model_name,
                                                                  message_id=message_id,
                                                                  conversation_id=user_msg.conversation_id,
                                                                  cost_info=cost_info,
                                                                  response_time=time.perf_counter() - start_time,
                                                                  # provider_extra=stream.provider_extra or {},
                                                              )
                                                              self.message_sent.emit(response_msg)
                                                              if store_history:
                                                                  self.conversation.add_chat_messages([user_msg, response_msg])
                                                              await self.connections.route_message(
                                                                  response_msg,
                                                                  wait=wait_for_connections,
                                                              )
                                              
                                                      except Exception as e:
                                                          logger.exception("Agent stream failed")
                                                          self.run_failed.emit("Agent stream failed", e)
                                                          raise
                                              
                                                  async def run_iter(
                                                      self,
                                                      *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                                      result_type: type[TResult] | None = None,
                                                      model: ModelType = None,
                                                      store_history: bool = True,
                                                      wait_for_connections: bool | None = None,
                                                  ) -> AsyncIterator[ChatMessage[TResult]]:
                                                      """Run agent sequentially on multiple prompt groups.
                                              
                                                      Args:
                                                          prompt_groups: Groups of prompts to process sequentially
                                                          result_type: Optional type for structured responses
                                                          model: Optional model override
                                                          store_history: Whether to store in conversation history
                                                          wait_for_connections: Whether to wait for connected agents
                                              
                                                      Yields:
                                                          Response messages in sequence
                                              
                                                      Example:
                                                          questions = [
                                                              ["What is your name?"],
                                                              ["How old are you?", image1],
                                                              ["Describe this image", image2],
                                                          ]
                                                          async for response in agent.run_iter(*questions):
                                                              print(response.content)
                                                      """
                                                      for prompts in prompt_groups:
                                                          response = await self.run(
                                                              *prompts,
                                                              result_type=result_type,
                                                              model=model,
                                                              store_history=store_history,
                                                              wait_for_connections=wait_for_connections,
                                                          )
                                                          yield response  # pyright: ignore
                                              
                                                  def run_sync(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      result_type: type[TResult] | None = None,
                                                      deps: TDeps | None = None,
                                                      model: ModelType = None,
                                                      store_history: bool = True,
                                                  ) -> ChatMessage[TResult]:
                                                      """Run agent synchronously (convenience wrapper).
                                              
                                                      Args:
                                                          prompt: User query or instruction
                                                          result_type: Optional type for structured responses
                                                          deps: Optional dependencies for the agent
                                                          model: Optional model override
                                                          store_history: Whether the message exchange should be added to the
                                                                         context window
                                                      Returns:
                                                          Result containing response and run information
                                                      """
                                                      coro = self.run(
                                                          *prompt,
                                                          model=model,
                                                          store_history=store_history,
                                                          result_type=result_type,
                                                      )
                                                      return self.run_task_sync(coro)  # type: ignore
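
                                                   # Usage sketch (illustrative values): run_sync wraps run() for
                                                   # synchronous callers.
                                                   #
                                                   #     message = agent.run_sync("What is 2 + 2?")
                                                   #     print(message.content)  # ChatMessage.content holds the reply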
                                              
                                                  async def run_job(
                                                      self,
                                                      job: Job[TDeps, str | None],
                                                      *,
                                                      store_history: bool = True,
                                                      include_agent_tools: bool = True,
                                                  ) -> ChatMessage[str]:
                                                      """Execute a pre-defined task.
                                              
                                                      Args:
                                                          job: Job configuration to execute
                                                          store_history: Whether the message exchange should be added to the
                                                                         context window
                                                          include_agent_tools: Whether to include agent tools
                                                      Returns:
                                                          Job execution result
                                              
                                                      Raises:
                                                          JobError: If task execution fails
                                                          ValueError: If task configuration is invalid
                                                      """
                                                      from llmling_agent.tasks import JobError
                                              
                                                      if job.required_dependency is not None:  # noqa: SIM102
                                                          if not isinstance(self.context.data, job.required_dependency):
                                                              msg = (
                                                                  f"Agent dependencies ({type(self.context.data)}) "
                                                                  f"don't match job requirement ({job.required_dependency})"
                                                              )
                                                              raise JobError(msg)
                                              
                                                      # Load task knowledge
                                                      if job.knowledge:
                                                          # Add knowledge sources to context
                                                          resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                              job.knowledge.resources
                                                          )
                                                          for source in resources:
                                                              await self.conversation.load_context_source(source)
                                                          for prompt in job.knowledge.prompts:
                                                              await self.conversation.load_context_source(prompt)
                                                      try:
                                                          # Register task tools temporarily
                                                          tools = job.get_tools()
                                                          with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                                              # Execute job with job-specific tools
                                                              return await self.run(await job.get_prompt(), store_history=store_history)
                                              
                                                      except Exception as e:
                                                          msg = f"Task execution failed: {e}"
                                                          logger.exception(msg)
                                                          raise JobError(msg) from e
                                              
                                                  async def run_in_background(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      max_count: int | None = None,
                                                      interval: float = 1.0,
                                                      block: bool = False,
                                                      **kwargs: Any,
                                                  ) -> ChatMessage[TResult] | None:
                                                      """Run agent continuously in background with prompt or dynamic prompt function.
                                              
                                                      Args:
                                                          prompt: Static prompt or function that generates prompts
                                                          max_count: Maximum number of runs (None = infinite)
                                                          interval: Seconds between runs
                                                          block: Whether to block until completion
                                                          **kwargs: Arguments passed to run()
                                                      """
                                                      self._infinite = max_count is None
                                              
                                                      async def _continuous():
                                                          count = 0
                                                          msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                                                          logger.debug(msg, self.name, max_count, interval, self.name)
                                                          latest = None
                                                          while max_count is None or count < max_count:
                                                              try:
                                                                  current_prompts = [
                                                                      call_with_context(p, self.context, **kwargs) if callable(p) else p
                                                                      for p in prompt
                                                                  ]
                                                                  msg = "%s: Generated prompt #%d: %s"
                                                                  logger.debug(msg, self.name, count, current_prompts)
                                              
                                                                  latest = await self.run(current_prompts, **kwargs)
                                                                  msg = "%s: Run continous result #%d"
                                                                  logger.debug(msg, self.name, count)
                                              
                                                                  count += 1
                                                                  await asyncio.sleep(interval)
                                                              except asyncio.CancelledError:
                                                                  logger.debug("%s: Continuous run cancelled", self.name)
                                                                  break
                                                              except Exception:
                                                                  logger.exception("%s: Background run failed", self.name)
                                                                  await asyncio.sleep(interval)
                                                          msg = "%s: Continuous run completed after %d iterations"
                                                          logger.debug(msg, self.name, count)
                                                          return latest
                                              
                                                      # Cancel any existing background task
                                                      await self.stop()
                                                      task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                                                      if block:
                                                          try:
                                                              return await task  # type: ignore
                                                          finally:
                                                              if not task.done():
                                                                  task.cancel()
                                                      else:
                                                          logger.debug("%s: Started background task %s", self.name, task.get_name())
                                                          self._background_task = task
                                                          return None
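
                                                   # Usage sketch (illustrative values): start a bounded background loop,
                                                   # then collect the final message with wait().
                                                   #
                                                   #     await agent.run_in_background("tick", max_count=3, interval=0.5)
                                                   #     final = await agent.wait()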
                                              
                                                  async def stop(self):
                                                      """Stop continuous execution if running."""
                                                      if self._background_task and not self._background_task.done():
                                                          self._background_task.cancel()
                                                          await self._background_task
                                                          self._background_task = None
                                              
                                                  async def wait(self) -> ChatMessage[TResult]:
                                                      """Wait for background execution to complete."""
                                                      if not self._background_task:
                                                          msg = "No background task running"
                                                          raise RuntimeError(msg)
                                                      if self._infinite:
                                                          msg = "Cannot wait on infinite execution"
                                                          raise RuntimeError(msg)
                                                      try:
                                                          return await self._background_task
                                                      finally:
                                                          self._background_task = None
                                              
                                                  def clear_history(self):
                                                      """Clear both internal and pydantic-ai history."""
                                                      self._logger.clear_state()
                                                      self.conversation.clear()
                                                      logger.debug("Cleared history and reset tool state")
                                              
                                                  async def share(
                                                      self,
                                                      target: AnyAgent[TDeps, Any],
                                                      *,
                                                      tools: list[str] | None = None,
                                                      resources: list[str] | None = None,
                                                      history: bool | int | None = None,  # bool or number of messages
                                                      token_limit: int | None = None,
                                                  ):
                                                      """Share capabilities and knowledge with another agent.
                                              
                                                      Args:
                                                          target: Agent to share with
                                                          tools: List of tool names to share
                                                          resources: List of resource names to share
                                                          history: Share conversation history:
                                                                  - True: Share full history
                                                                  - int: Number of most recent messages to share
                                                                  - None: Don't share history
                                                          token_limit: Optional max tokens for history
                                              
                                                      Raises:
                                                          ValueError: If requested items don't exist
                                                          RuntimeError: If runtime not available for resources
                                                      """
                                                      # Share tools if requested
                                                      for name in tools or []:
                                                          if tool := self.tools.get(name):
                                                              meta = {"shared_from": self.name}
                                                              target.tools.register_tool(tool.callable, metadata=meta)
                                                          else:
                                                              msg = f"Tool not found: {name}"
                                                              raise ValueError(msg)
                                              
                                                      # Share resources if requested
                                                      if resources:
                                                          if not self.runtime:
                                                              msg = "No runtime available for sharing resources"
                                                              raise RuntimeError(msg)
                                                          for name in resources:
                                                              if resource := self.runtime.get_resource(name):
                                                                  await target.conversation.load_context_source(resource)  # type: ignore
                                                              else:
                                                                  msg = f"Resource not found: {name}"
                                                                  raise ValueError(msg)
                                              
                                                      # Share history if requested
                                                      if history:
                                                          history_text = await self.conversation.format_history(
                                                              max_tokens=token_limit,
                                                              num_messages=history if isinstance(history, int) else None,
                                                          )
                                                          target.conversation.add_context_message(
                                                              history_text, source=self.name, metadata={"type": "shared_history"}
                                                          )
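
                                                   # Usage sketch (agent and tool names are placeholders):
                                                   #
                                                   #     await analyst.share(writer, tools=["some_tool"], history=5)
                                                   #     # shares one tool plus the five most recent messages with writer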
                                              
                                                  def register_worker(
                                                      self,
                                                      worker: MessageNode[Any, Any],
                                                      *,
                                                      name: str | None = None,
                                                      reset_history_on_run: bool = True,
                                                      pass_message_history: bool = False,
                                                      share_context: bool = False,
                                                  ) -> Tool:
                                                      """Register another agent as a worker tool."""
                                                      return self.tools.register_worker(
                                                          worker,
                                                          name=name,
                                                          reset_history_on_run=reset_history_on_run,
                                                          pass_message_history=pass_message_history,
                                                          share_context=share_context,
                                                          parent=self if (pass_message_history or share_context) else None,
                                                      )
                                              
                                                  def set_model(self, model: ModelType):
                                                      """Set the model for this agent.
                                              
                                                      Args:
                                                          model: New model to use (name or instance)
                                              
                                                      Emits:
                                                          model_changed signal with the new model
                                                      """
                                                      self._provider.set_model(model)
                                              
                                                  async def reset(self):
                                                      """Reset agent state (conversation history and tool states)."""
                                                      old_tools = await self.tools.list_tools()
                                                      self.conversation.clear()
                                                      self.tools.reset_states()
                                                      new_tools = await self.tools.list_tools()
                                              
                                                      event = self.AgentReset(
                                                          agent_name=self.name,
                                                          previous_tools=old_tools,
                                                          new_tools=new_tools,
                                                      )
                                                      self.agent_reset.emit(event)
                                              
                                                  @property
                                                  def runtime(self) -> RuntimeConfig:
                                                      """Get runtime configuration from context."""
                                                      assert self.context.runtime
                                                      return self.context.runtime
                                              
                                                  @runtime.setter
                                                  def runtime(self, value: RuntimeConfig):
                                                      """Set runtime configuration and update context."""
                                                      self.context.runtime = value
                                              
                                                   @property
                                                   def stats(self) -> MessageStats:
                                                       """Get message statistics for this agent."""
                                                       return MessageStats(messages=self._logger.message_history)
                                              
                                                  @asynccontextmanager
                                                  async def temporary_state(
                                                      self,
                                                      *,
                                                      system_prompts: list[AnyPromptType] | None = None,
                                                      replace_prompts: bool = False,
                                                      tools: list[ToolType] | None = None,
                                                      replace_tools: bool = False,
                                                      history: list[AnyPromptType] | SessionQuery | None = None,
                                                      replace_history: bool = False,
                                                      pause_routing: bool = False,
                                                      model: ModelType | None = None,
                                                      provider: AgentProvider | None = None,
                                                  ) -> AsyncIterator[Self]:
                                                      """Temporarily modify agent state.
                                              
                                                      Args:
                                                          system_prompts: Temporary system prompts to use
                                                          replace_prompts: Whether to replace existing prompts
                                                          tools: Temporary tools to make available
                                                          replace_tools: Whether to replace existing tools
                                                          history: Conversation history (prompts or query)
                                                          replace_history: Whether to replace existing history
                                                          pause_routing: Whether to pause message routing
                                                          model: Temporary model override
                                                          provider: Temporary provider override
                                                      """
                                                      old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                                                      old_provider = self._provider
                                              
                                                      async with AsyncExitStack() as stack:
                                                          # System prompts (async)
                                                          if system_prompts is not None:
                                                              await stack.enter_async_context(
                                                                  self.sys_prompts.temporary_prompt(
                                                                      system_prompts, exclusive=replace_prompts
                                                                  )
                                                              )
                                              
                                                          # Tools (sync)
                                                          if tools is not None:
                                                              stack.enter_context(
                                                                  self.tools.temporary_tools(tools, exclusive=replace_tools)
                                                              )
                                              
                                                          # History (async)
                                                          if history is not None:
                                                              await stack.enter_async_context(
                                                                  self.conversation.temporary_state(
                                                                      history, replace_history=replace_history
                                                                  )
                                                              )
                                              
                                                          # Routing (async)
                                                          if pause_routing:
                                                              await stack.enter_async_context(self.connections.paused_routing())
                                              
                                                          # Model/Provider
                                                          if provider is not None:
                                                              self._provider = provider
                                                          elif model is not None:
                                                              self._provider.set_model(model)
                                              
                                                          try:
                                                              yield self
                                                          finally:
                                                              # Restore model/provider
                                                              if provider is not None:
                                                                  self._provider = old_provider
                                                              elif model is not None and old_model:
                                                                  self._provider.set_model(old_model)
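
                                               A short usage sketch for temporary_state (illustrative; the prompt text and
                                               flags are examples, and an already-initialized agent is assumed):

                                                   async with agent.temporary_state(
                                                       system_prompts=["Answer in one sentence."],
                                                       pause_routing=True,
                                                   ) as scoped:
                                                       # Inside the block the temporary prompts apply and routing is paused.
                                                       await scoped.run("Summarize the previous discussion.")
                                                   # On exit the previous prompts, tools, history and routing are restored.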
                                              

                                              context property writable

                                              context: AgentContext[TDeps]
                                              

                                              Get agent context.

                                              model_name property

                                              model_name: str | None
                                              

                                              Get the model name in a consistent format.

                                              name property writable

                                              name: str
                                              

                                              Get agent name.

                                              provider property writable

                                              provider: AgentProvider
                                              

                                              Get the underlying provider.

                                              runtime property writable

                                              runtime: RuntimeConfig
                                              

                                              Get runtime configuration from context.

                                              AgentReset dataclass

                                              Emitted when agent is reset.

                                              Source code in src/llmling_agent/agent/agent.py
                                              @dataclass(frozen=True)
                                              class AgentReset:
                                                  """Emitted when agent is reset."""
                                              
                                                  agent_name: AgentName
                                                  previous_tools: dict[str, bool]
                                                  new_tools: dict[str, bool]
                                                  timestamp: datetime = field(default_factory=get_now)
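
                                               A minimal sketch of observing this event (illustrative handler; assumes the
                                               signal-style connect() used by the agent's other signals):

                                                   def on_reset(event: Agent.AgentReset) -> None:
                                                       # React to the reset, e.g. log which tools changed state.
                                                       print(f"{event.agent_name} reset at {event.timestamp}")

                                                   agent.agent_reset.connect(on_reset)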
                                              

                                              __aenter__ async

                                              __aenter__() -> Self
                                              

                                              Enter async context and set up MCP servers.

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def __aenter__(self) -> Self:
                                                  """Enter async context and set up MCP servers."""
                                                  try:
                                                      # Collect all coroutines that need to be run
                                                      coros: list[Coroutine[Any, Any, Any]] = []
                                              
                                                      # Runtime initialization if needed
                                                      runtime_ref = self.context.runtime
                                                      if runtime_ref and not runtime_ref._initialized:
                                                          self._owns_runtime = True
                                                          coros.append(runtime_ref.__aenter__())
                                              
                                                      # Events initialization
                                                      coros.append(super().__aenter__())
                                              
                                                      # Get conversation init tasks directly
                                                      coros.extend(self.conversation.get_initialization_tasks())
                                              
                                                      # Execute coroutines either in parallel or sequentially
                                                      if self.parallel_init and coros:
                                                          await asyncio.gather(*coros)
                                                      else:
                                                          for coro in coros:
                                                              await coro
                                                      if runtime_ref:
                                                          self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                                      for provider in await self.context.config.get_toolsets():
                                                          self.tools.add_provider(provider)
                                                  except Exception as e:
                                                      # Clean up in reverse order
                                                      if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                                          await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                                      msg = "Failed to initialize agent"
                                                      raise RuntimeError(msg) from e
                                                  else:
                                                      return self
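
                                               The usual way to drive this setup is the async context manager protocol
                                               (sketch; constructor arguments are illustrative):

                                                   async with Agent(name="assistant", model="openai:gpt-4o-mini") as agent:
                                                       # Runtime, MCP servers and toolsets are initialized at this point.
                                                       answer = await agent.run("Hello!")
                                                   # __aexit__ tears the same resources down again.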
                                              

                                              __aexit__ async

                                              __aexit__(
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              )
                                              

                                              Exit async context.

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def __aexit__(
                                                  self,
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              ):
                                                  """Exit async context."""
                                                  await super().__aexit__(exc_type, exc_val, exc_tb)
                                                  try:
                                                      await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                                                  finally:
                                                      if self._owns_runtime and self.context.runtime:
                                                          self.tools.remove_provider("runtime")
                                                          await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                              

                                              __and__

                                              __and__(other: Agent[TDeps] | StructuredAgent[TDeps, Any]) -> Team[TDeps]
                                              
                                              __and__(other: Team[TDeps]) -> Team[TDeps]
                                              
                                              __and__(other: ProcessorCallback[Any]) -> Team[TDeps]
                                              
                                              __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                                              

                                               Create agent group using the & operator.

                                              Example

                                               group = analyzer & planner & executor  # Create group of 3
                                               group = analyzer & existing_group      # Add to existing group

                                              Source code in src/llmling_agent/agent/agent.py
                                              def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                                                  """Create agent group using | operator.
                                              
                                                  Example:
                                                      group = analyzer & planner & executor  # Create group of 3
                                                      group = analyzer & existing_group  # Add to existing group
                                                  """
                                                  from llmling_agent.agent import StructuredAgent
                                                  from llmling_agent.delegation.team import Team
                                              
                                                  match other:
                                                      case Team():
                                                          return Team([self, *other.agents])
                                                      case Callable():
                                                          if callable(other):
                                                              if has_return_type(other, str):
                                                                  agent_2 = Agent.from_callback(other)
                                                              else:
                                                                  agent_2 = StructuredAgent.from_callback(other)
                                                          agent_2.context.pool = self.context.pool
                                                          return Team([self, agent_2])
                                                      case MessageNode():
                                                          return Team([self, other])
                                                      case _:
                                                          msg = f"Invalid agent type: {type(other)}"
                                                          raise ValueError(msg)
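
                                               A short composition sketch, assuming the agents were constructed as in __init__ below; the names, models, and helper callback are placeholders:

                                               analyzer = Agent(name="analyzer", model="openai:gpt-4o-mini")
                                               planner = Agent(name="planner", model="openai:gpt-4o-mini")
                                               executor = Agent(name="executor", model="openai:gpt-4o-mini")
                                               
                                               # Chaining & builds a single Team containing all three agents.
                                               team = analyzer & planner & executor
                                               
                                               
                                               def shorten(text: str) -> str:
                                                   """Toy post-processor; plain callables are wrapped via from_callback."""
                                                   return text[:200]
                                               
                                               
                                               # A callable joins the group as an automatically created agent.
                                               pipeline = analyzer & shorten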
                                              

                                              __init__

                                              __init__(
                                                  name: str = "llmling-agent",
                                                  provider: AgentType = "pydantic_ai",
                                                  *,
                                                  model: ModelType = None,
                                                  runtime: RuntimeConfig | Config | StrPath | None = None,
                                                  context: AgentContext[TDeps] | None = None,
                                                  session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                                  system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                                  description: str | None = None,
                                                  tools: Sequence[ToolType] | None = None,
                                                  capabilities: Capabilities | None = None,
                                                  mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                                  resources: Sequence[Resource | PromptType | str] = (),
                                                  retries: int = 1,
                                                  result_retries: int | None = None,
                                                  end_strategy: EndStrategy = "early",
                                                  defer_model_check: bool = False,
                                                  input_provider: InputProvider | None = None,
                                                  parallel_init: bool = True,
                                                  debug: bool = False,
                                              )
                                              

                                              Initialize agent with runtime configuration.

                                              Parameters:

                                               runtime (RuntimeConfig | Config | StrPath | None, default: None)
                                                   Runtime configuration providing access to resources/tools
                                               context (AgentContext[TDeps] | None, default: None)
                                                   Agent context with capabilities and configuration
                                               provider (AgentType, default: 'pydantic_ai')
                                                   Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                               session (SessionIdType | SessionQuery | MemoryConfig | bool | int, default: None)
                                                   Memory configuration.
                                                   - None: Default memory config
                                                   - False: Disable message history (max_messages=0)
                                                   - int: Max tokens for memory
                                                   - str/UUID: Session identifier
                                                   - SessionQuery: Query to recover conversation
                                                   - MemoryConfig: Complete memory configuration
                                               model (ModelType, default: None)
                                                   The default model to use (defaults to GPT-4)
                                               system_prompt (AnyPromptType | Sequence[AnyPromptType], default: ())
                                                   Static system prompts to use for this agent
                                               name (str, default: 'llmling-agent')
                                                   Name of the agent for logging
                                               description (str | None, default: None)
                                                   Description of the Agent ("what it can do")
                                               tools (Sequence[ToolType] | None, default: None)
                                                   List of tools to register with the agent
                                               capabilities (Capabilities | None, default: None)
                                                   Capabilities for the agent
                                               mcp_servers (Sequence[str | MCPServerConfig] | None, default: None)
                                                   MCP servers to connect to
                                               resources (Sequence[Resource | PromptType | str], default: ())
                                                   Additional resources to load
                                               retries (int, default: 1)
                                                   Default number of retries for failed operations
                                               result_retries (int | None, default: None)
                                                   Max retries for result validation (defaults to retries)
                                               end_strategy (EndStrategy, default: 'early')
                                                   Strategy for handling tool calls that are requested alongside a final result
                                               defer_model_check (bool, default: False)
                                                   Whether to defer model evaluation until first run
                                               input_provider (InputProvider | None, default: None)
                                                   Provider for human input (tool confirmation / HumanProviders)
                                               parallel_init (bool, default: True)
                                                   Whether to initialize resources in parallel
                                               debug (bool, default: False)
                                                   Whether to enable debug mode
                                              Source code in src/llmling_agent/agent/agent.py
                                              def __init__(  # noqa: PLR0915
                                                   # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                                                  self,
                                                  name: str = "llmling-agent",
                                                  provider: AgentType = "pydantic_ai",
                                                  *,
                                                  model: ModelType = None,
                                                  runtime: RuntimeConfig | Config | StrPath | None = None,
                                                  context: AgentContext[TDeps] | None = None,
                                                  session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                                                  system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                                                  description: str | None = None,
                                                  tools: Sequence[ToolType] | None = None,
                                                  capabilities: Capabilities | None = None,
                                                  mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                                                  resources: Sequence[Resource | PromptType | str] = (),
                                                  retries: int = 1,
                                                  result_retries: int | None = None,
                                                  end_strategy: EndStrategy = "early",
                                                  defer_model_check: bool = False,
                                                  input_provider: InputProvider | None = None,
                                                  parallel_init: bool = True,
                                                  debug: bool = False,
                                              ):
                                                  """Initialize agent with runtime configuration.
                                              
                                                  Args:
                                                      runtime: Runtime configuration providing access to resources/tools
                                                      context: Agent context with capabilities and configuration
                                                      provider: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                                                      session: Memory configuration.
                                                          - None: Default memory config
                                                          - False: Disable message history (max_messages=0)
                                                          - int: Max tokens for memory
                                                          - str/UUID: Session identifier
                                                          - SessionQuery: Query to recover conversation
                                                          - MemoryConfig: Complete memory configuration
                                                      model: The default model to use (defaults to GPT-4)
                                                      system_prompt: Static system prompts to use for this agent
                                                      name: Name of the agent for logging
                                                      description: Description of the Agent ("what it can do")
                                                      tools: List of tools to register with the agent
                                                      capabilities: Capabilities for the agent
                                                      mcp_servers: MCP servers to connect to
                                                      resources: Additional resources to load
                                                      retries: Default number of retries for failed operations
                                                      result_retries: Max retries for result validation (defaults to retries)
                                                      end_strategy: Strategy for handling tool calls that are requested alongside
                                                                    a final result
                                                      defer_model_check: Whether to defer model evaluation until first run
                                                      input_provider: Provider for human input (tool confirmation / HumanProviders)
                                                      parallel_init: Whether to initialize resources in parallel
                                                      debug: Whether to enable debug mode
                                                  """
                                                  from llmling_agent.agent import AgentContext
                                                  from llmling_agent.agent.conversation import ConversationManager
                                                  from llmling_agent.agent.interactions import Interactions
                                                  from llmling_agent.agent.sys_prompts import SystemPrompts
                                                  from llmling_agent.resource_providers.capability_provider import (
                                                      CapabilitiesResourceProvider,
                                                  )
                                                  from llmling_agent_providers.base import AgentProvider
                                              
                                                  self._infinite = False
                                                   # save some stuff for async init
                                                  self._owns_runtime = False
                                                  # prepare context
                                                  ctx = context or AgentContext[TDeps].create_default(
                                                      name,
                                                      input_provider=input_provider,
                                                      capabilities=capabilities,
                                                  )
                                                  self._context = ctx
                                                  memory_cfg = (
                                                      session
                                                      if isinstance(session, MemoryConfig)
                                                      else MemoryConfig.from_value(session)
                                                  )
                                                  super().__init__(
                                                      name=name,
                                                      context=ctx,
                                                      description=description,
                                                      enable_logging=memory_cfg.enable,
                                                      mcp_servers=mcp_servers,
                                                  )
                                                  # Initialize runtime
                                                  match runtime:
                                                      case None:
                                                          ctx.runtime = RuntimeConfig.from_config(Config())
                                                      case Config() | str() | PathLike():
                                                          ctx.runtime = RuntimeConfig.from_config(runtime)
                                                      case RuntimeConfig():
                                                          ctx.runtime = runtime
                                              
                                                  runtime_provider = RuntimePromptProvider(ctx.runtime)
                                                  ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                                                  # Initialize tool manager
                                                  all_tools = list(tools or [])
                                                  self.tools = ToolManager(all_tools)
                                                  self.tools.add_provider(self.mcp)
                                                  if builtin_tools := ctx.config.get_tool_provider():
                                                      self.tools.add_provider(builtin_tools)
                                              
                                                  # Initialize conversation manager
                                                  resources = list(resources)
                                                  if ctx.config.knowledge:
                                                      resources.extend(ctx.config.knowledge.get_resources())
                                                  self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                                                  # Initialize provider
                                                  match provider:
                                                      case "pydantic_ai":
                                                          validate_import("pydantic_ai", "pydantic_ai")
                                                          from llmling_agent_providers.pydanticai import PydanticAIProvider
                                              
                                                          if model and not isinstance(model, str):
                                                              from pydantic_ai import models
                                              
                                                              assert isinstance(model, models.Model)
                                                          self._provider: AgentProvider = PydanticAIProvider(
                                                              model=model,
                                                              retries=retries,
                                                              end_strategy=end_strategy,
                                                              result_retries=result_retries,
                                                              defer_model_check=defer_model_check,
                                                              debug=debug,
                                                              context=ctx,
                                                          )
                                                      case "human":
                                                          from llmling_agent_providers.human import HumanProvider
                                              
                                                          self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                                      case Callable():
                                                          from llmling_agent_providers.callback import CallbackProvider
                                              
                                                          self._provider = CallbackProvider(
                                                              provider, name=name, debug=debug, context=ctx
                                                          )
                                                      case "litellm":
                                                          validate_import("litellm", "litellm")
                                                          from llmling_agent_providers.litellm_provider import LiteLLMProvider
                                              
                                                          self._provider = LiteLLMProvider(
                                                              name=name,
                                                              debug=debug,
                                                              retries=retries,
                                                              context=ctx,
                                                              model=model,
                                                          )
                                                      case AgentProvider():
                                                          self._provider = provider
                                                          self._provider.context = ctx
                                                      case _:
                                                          msg = f"Invalid agent type: {type}"
                                                          raise ValueError(msg)
                                                  self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                                              
                                                  if ctx and ctx.definition:
                                                      from llmling_agent.observability import registry
                                              
                                                      registry.register_providers(ctx.definition.observability)
                                              
                                                  # init variables
                                                  self._debug = debug
                                                  self._result_type: type | None = None
                                                  self.parallel_init = parallel_init
                                                  self.name = name
                                                  self._background_task: asyncio.Task[Any] | None = None
                                              
                                                  # Forward provider signals
                                                  self._provider.chunk_streamed.connect(self.chunk_streamed)
                                                  self._provider.model_changed.connect(self.model_changed)
                                                  self._provider.tool_used.connect(self.tool_used)
                                              
                                                  self.talk = Interactions(self)
                                              
                                                  # Set up system prompts
                                                  config_prompts = ctx.config.system_prompts if ctx else []
                                                  all_prompts: list[AnyPromptType] = list(config_prompts)
                                                  if isinstance(system_prompt, list):
                                                      all_prompts.extend(system_prompt)
                                                  else:
                                                      all_prompts.append(system_prompt)
                                                  self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
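
                                               A constructor sketch using a subset of the keyword arguments above; the model string, system prompt, and tool function are placeholders, and it assumes plain callables are accepted as ToolType:

                                               from llmling_agent import Agent
                                               
                                               
                                               def word_count(text: str) -> int:
                                                   """Toy tool registered with the agent (assumed to qualify as ToolType)."""
                                                   return len(text.split())
                                               
                                               
                                               agent = Agent(
                                                   name="doc-helper",
                                                   provider="pydantic_ai",
                                                   model="openai:gpt-4o-mini",
                                                   system_prompt="You answer questions about the documentation.",
                                                   tools=[word_count],
                                                   session=False,       # disable message history (max_messages=0)
                                                   retries=2,
                                                   parallel_init=True,
                                               )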
                                              

                                              _run async

                                              _run(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | ChatMessage[Any],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                                  tool_choice: str | list[str] | None = None,
                                                  usage_limits: UsageLimits | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  messages: list[ChatMessage[Any]] | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> ChatMessage[TResult]
                                              

                                              Run agent with prompt and get response.

                                              Parameters:

                                               prompts (AnyPromptType | Image | PathLike[str] | ChatMessage[Any], default: ())
                                                   User query or instruction
                                               result_type (type[TResult] | None, default: None)
                                                   Optional type for structured responses
                                               model (ModelType, default: None)
                                                   Optional model override
                                               store_history (bool, default: True)
                                                   Whether the message exchange should be added to the context window
                                               tool_choice (str | list[str] | None, default: None)
                                                   Filter tool choice by name
                                               usage_limits (UsageLimits | None, default: None)
                                                   Optional usage limits for the model
                                               message_id (str | None, default: None)
                                                   Optional message id for the returned message. Automatically generated if not provided.
                                               conversation_id (str | None, default: None)
                                                   Optional conversation id for the returned message.
                                               messages (list[ChatMessage[Any]] | None, default: None)
                                                   Optional list of messages to replace the conversation history
                                               wait_for_connections (bool | None, default: None)
                                                   Whether to wait for connected agents to complete

                                              Returns:

                                               ChatMessage[TResult]
                                                   Result containing response and run information

                                              Raises:

                                               UnexpectedModelBehavior
                                                   If the model fails or behaves unexpectedly

                                              Source code in src/llmling_agent/agent/agent.py
                                              @track_action("Calling Agent.run: {prompts}:")
                                              async def _run(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                                  tool_choice: str | list[str] | None = None,
                                                  usage_limits: UsageLimits | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  messages: list[ChatMessage[Any]] | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> ChatMessage[TResult]:
                                                  """Run agent with prompt and get response.
                                              
                                                  Args:
                                                      prompts: User query or instruction
                                                      result_type: Optional type for structured responses
                                                      model: Optional model override
                                                      store_history: Whether the message exchange should be added to the
                                                                      context window
                                                      tool_choice: Filter tool choice by name
                                                      usage_limits: Optional usage limits for the model
                                                      message_id: Optional message id for the returned message.
                                                                  Automatically generated if not provided.
                                                      conversation_id: Optional conversation id for the returned message.
                                                      messages: Optional list of messages to replace the conversation history
                                                      wait_for_connections: Whether to wait for connected agents to complete
                                              
                                                  Returns:
                                                      Result containing response and run information
                                              
                                                  Raises:
                                                      UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                                  """
                                                  """Run agent with prompt and get response."""
                                                  message_id = message_id or str(uuid4())
                                                  tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                                  self.set_result_type(result_type)
                                                  start_time = time.perf_counter()
                                                  sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                              
                                                  message_history = (
                                                      messages if messages is not None else self.conversation.get_history()
                                                  )
                                                  try:
                                                      result = await self._provider.generate_response(
                                                          *await convert_prompts(prompts),
                                                          message_id=message_id,
                                                          message_history=message_history,
                                                          tools=tools,
                                                          result_type=result_type,
                                                          usage_limits=usage_limits,
                                                          model=model,
                                                          system_prompt=sys_prompt,
                                                      )
                                                  except Exception as e:
                                                      logger.exception("Agent run failed")
                                                      self.run_failed.emit("Agent run failed", e)
                                                      raise
                                                  else:
                                                      response_msg = ChatMessage[TResult](
                                                          content=result.content,
                                                          role="assistant",
                                                          name=self.name,
                                                          model=result.model_name,
                                                          message_id=message_id,
                                                          conversation_id=conversation_id,
                                                          tool_calls=result.tool_calls,
                                                          cost_info=result.cost_and_usage,
                                                          response_time=time.perf_counter() - start_time,
                                                          provider_extra=result.provider_extra or {},
                                                      )
                                                      if self._debug:
                                                          import devtools
                                              
                                                          devtools.debug(response_msg)
                                                      return response_msg
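
                                               _run is the internal entry point; a hedged usage sketch of the documented parameters, assuming the public run() forwards these keyword arguments unchanged (model, prompt, and the conversation id are placeholders):

                                               async with Agent(name="reviewer", model="openai:gpt-4o-mini") as agent:
                                                   message = await agent.run(
                                                       "List three risks in this deployment plan.",
                                                       store_history=True,           # keep the exchange in the context window
                                                       conversation_id="review-42",  # hypothetical conversation identifier
                                                   )
                                                   # The returned ChatMessage carries the content plus run metadata.
                                                   print(message.content, message.cost_info, message.response_time)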
                                              

                                              clear_history

                                              clear_history()
                                              

                                              Clear both internal and pydantic-ai history.

                                              Source code in src/llmling_agent/agent/agent.py
                                              def clear_history(self):
                                                  """Clear both internal and pydantic-ai history."""
                                                  self._logger.clear_state()
                                                  self.conversation.clear()
                                                  logger.debug("Cleared history and reset tool state")
                                              

                                              from_callback classmethod

                                              from_callback(
                                                  callback: ProcessorCallback[str],
                                                  *,
                                                  name: str | None = None,
                                                  debug: bool = False,
                                                  **kwargs: Any,
                                              ) -> Agent[None]
                                              

                                              Create an agent from a processing callback.

                                              Parameters:

                                               callback (ProcessorCallback[str], required)
                                                   Function to process messages. Can be:
                                                   - sync or async
                                                   - with or without context
                                                   - must return str for pipeline compatibility
                                               name (str | None, default: None)
                                                   Optional name for the agent
                                               debug (bool, default: False)
                                                   Whether to enable debug mode
                                               kwargs (Any, default: {})
                                                   Additional arguments for agent
                                              Source code in src/llmling_agent/agent/agent.py
                                              @classmethod
                                              def from_callback(
                                                  cls,
                                                  callback: ProcessorCallback[str],
                                                  *,
                                                  name: str | None = None,
                                                  debug: bool = False,
                                                  **kwargs: Any,
                                              ) -> Agent[None]:
                                                  """Create an agent from a processing callback.
                                              
                                                  Args:
                                                      callback: Function to process messages. Can be:
                                                          - sync or async
                                                          - with or without context
                                                          - must return str for pipeline compatibility
                                                      name: Optional name for the agent
                                                      debug: Whether to enable debug mode
                                                      kwargs: Additional arguments for agent
                                                  """
                                                  from llmling_agent_providers.callback import CallbackProvider
                                              
                                                  name = name or callback.__name__ or "processor"
                                                  provider = CallbackProvider(callback, name=name)
                                                  return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
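
A minimal usage sketch, assuming the package root re-exports Agent; the callback below is a made-up example:

from llmling_agent import Agent  # assumed top-level re-export of the Agent class

def truncate(text: str) -> str:
    # Toy processor for illustration; real callbacks may also be async
    # or accept a context argument, but must return str.
    return text[:200]

agent = Agent.from_callback(truncate, name="truncator", debug=True)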
                                              

                                              is_busy

                                              is_busy() -> bool
                                              

                                              Check if agent is currently processing tasks.

                                              Source code in src/llmling_agent/agent/agent.py
                                              def is_busy(self) -> bool:
                                                  """Check if agent is currently processing tasks."""
                                                  return bool(self._pending_tasks or self._background_task)
                                              

                                              register_worker

                                              register_worker(
                                                  worker: MessageNode[Any, Any],
                                                  *,
                                                  name: str | None = None,
                                                  reset_history_on_run: bool = True,
                                                  pass_message_history: bool = False,
                                                  share_context: bool = False,
                                              ) -> Tool
                                              

                                              Register another agent as a worker tool.

                                              Source code in src/llmling_agent/agent/agent.py
                                              def register_worker(
                                                  self,
                                                  worker: MessageNode[Any, Any],
                                                  *,
                                                  name: str | None = None,
                                                  reset_history_on_run: bool = True,
                                                  pass_message_history: bool = False,
                                                  share_context: bool = False,
                                              ) -> Tool:
                                                  """Register another agent as a worker tool."""
                                                  return self.tools.register_worker(
                                                      worker,
                                                      name=name,
                                                      reset_history_on_run=reset_history_on_run,
                                                      pass_message_history=pass_message_history,
                                                      share_context=share_context,
                                                      parent=self if (pass_message_history or share_context) else None,
                                                  )
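
A brief sketch of delegating to another agent as a tool; `writer` and `researcher` stand in for two pre-configured Agent instances:

tool = writer.register_worker(
    researcher,
    name="research",             # name under which the worker appears as a tool
    pass_message_history=True,   # worker gets the caller's conversation history
)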
                                              

                                              reset async

                                              reset()
                                              

                                              Reset agent state (conversation history and tool states).

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def reset(self):
                                                  """Reset agent state (conversation history and tool states)."""
                                                  old_tools = await self.tools.list_tools()
                                                  self.conversation.clear()
                                                  self.tools.reset_states()
                                                  new_tools = await self.tools.list_tools()
                                              
                                                  event = self.AgentReset(
                                                      agent_name=self.name,
                                                      previous_tools=old_tools,
                                                      new_tools=new_tools,
                                                  )
                                                  self.agent_reset.emit(event)
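
A small sketch for observing resets; connect() is assumed here (psygnal-style signals), since only emit() appears in the snippet above:

def on_reset(event) -> None:
    # AgentReset carries agent_name plus the previous and new tool lists.
    print(f"{event.agent_name} was reset")

agent.agent_reset.connect(on_reset)  # connect() is an assumption, not confirmed above
await agent.reset()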
                                              

                                              run_in_background async

                                              run_in_background(
                                                  *prompt: AnyPromptType | Image | PathLike[str],
                                                  max_count: int | None = None,
                                                  interval: float = 1.0,
                                                  block: bool = False,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult] | None
                                              

                                              Run agent continuously in background with prompt or dynamic prompt function.

                                              Parameters:

                                              Name Type Description Default
                                              prompt AnyPromptType | Image | PathLike[str]

                                              Static prompt or function that generates prompts

                                              ()
                                              max_count int | None

                                              Maximum number of runs (None = infinite)

                                              None
                                              interval float

                                              Seconds between runs

                                              1.0
                                              block bool

                                              Whether to block until completion

                                              False
                                              **kwargs Any

                                              Arguments passed to run()

                                              {}
                                              Source code in src/llmling_agent/agent/agent.py
                                              async def run_in_background(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  max_count: int | None = None,
                                                  interval: float = 1.0,
                                                  block: bool = False,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult] | None:
                                                  """Run agent continuously in background with prompt or dynamic prompt function.
                                              
                                                  Args:
                                                      prompt: Static prompt or function that generates prompts
                                                      max_count: Maximum number of runs (None = infinite)
                                                      interval: Seconds between runs
                                                      block: Whether to block until completion
                                                      **kwargs: Arguments passed to run()
                                                  """
                                                  self._infinite = max_count is None
                                              
                                                  async def _continuous():
                                                      count = 0
                                                      msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                                                      logger.debug(msg, self.name, max_count, interval, self.name)
                                                      latest = None
                                                      while max_count is None or count < max_count:
                                                          try:
                                                              current_prompts = [
                                                                  call_with_context(p, self.context, **kwargs) if callable(p) else p
                                                                  for p in prompt
                                                              ]
                                                              msg = "%s: Generated prompt #%d: %s"
                                                              logger.debug(msg, self.name, count, current_prompts)
                                              
                                                              latest = await self.run(current_prompts, **kwargs)
                                                              msg = "%s: Run continous result #%d"
                                                              logger.debug(msg, self.name, count)
                                              
                                                              count += 1
                                                              await asyncio.sleep(interval)
                                                          except asyncio.CancelledError:
                                                              logger.debug("%s: Continuous run cancelled", self.name)
                                                              break
                                                          except Exception:
                                                              logger.exception("%s: Background run failed", self.name)
                                                              await asyncio.sleep(interval)
                                                      msg = "%s: Continuous run completed after %d iterations"
                                                      logger.debug(msg, self.name, count)
                                                      return latest
                                              
                                                  # Cancel any existing background task
                                                  await self.stop()
                                                  task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                                                  if block:
                                                      try:
                                                          return await task  # type: ignore
                                                      finally:
                                                          if not task.done():
                                                              task.cancel()
                                                  else:
                                                      logger.debug("%s: Started background task %s", self.name, task.get_name())
                                                      self._background_task = task
                                                      return None
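
A sketch of a bounded periodic run, assuming `agent` is already configured; stop() and is_busy() are the methods documented on this class:

# Run a prompt every 30 seconds, at most 10 times, without blocking the caller.
await agent.run_in_background(
    "Check the task queue and summarize anything new",
    max_count=10,
    interval=30.0,
)
assert agent.is_busy()   # background task is registered
# ... later, cancel the background task:
await agent.stop()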
                                              

                                              run_iter async

                                              run_iter(
                                                  *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                                  wait_for_connections: bool | None = None,
                                              ) -> AsyncIterator[ChatMessage[TResult]]
                                              

                                              Run agent sequentially on multiple prompt groups.

                                              Parameters:

                                              Name Type Description Default
                                              prompt_groups Sequence[AnyPromptType | Image | PathLike[str]]

                                              Groups of prompts to process sequentially

                                              ()
                                              result_type type[TResult] | None

                                              Optional type for structured responses

                                              None
                                              model ModelType

                                              Optional model override

                                              None
                                              store_history bool

                                              Whether to store in conversation history

                                              True
                                              wait_for_connections bool | None

                                              Whether to wait for connected agents

                                              None

                                              Yields:

                                              Type Description
                                              AsyncIterator[ChatMessage[TResult]]

                                              Response messages in sequence

                                              Example

questions = [
    ["What is your name?"],
    ["How old are you?", image1],
    ["Describe this image", image2],
]
async for response in agent.run_iter(*questions):
    print(response.content)

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def run_iter(
                                                  self,
                                                  *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                                  wait_for_connections: bool | None = None,
                                              ) -> AsyncIterator[ChatMessage[TResult]]:
                                                  """Run agent sequentially on multiple prompt groups.
                                              
                                                  Args:
                                                      prompt_groups: Groups of prompts to process sequentially
                                                      result_type: Optional type for structured responses
                                                      model: Optional model override
                                                      store_history: Whether to store in conversation history
                                                      wait_for_connections: Whether to wait for connected agents
                                              
                                                  Yields:
                                                      Response messages in sequence
                                              
                                                  Example:
                                                      questions = [
                                                          ["What is your name?"],
                                                          ["How old are you?", image1],
                                                          ["Describe this image", image2],
                                                      ]
                                                      async for response in agent.run_iter(*questions):
                                                          print(response.content)
                                                  """
                                                  for prompts in prompt_groups:
                                                      response = await self.run(
                                                          *prompts,
                                                          result_type=result_type,
                                                          model=model,
                                                          store_history=store_history,
                                                          wait_for_connections=wait_for_connections,
                                                      )
                                                      yield response  # pyright: ignore
                                              

                                              run_job async

                                              run_job(
                                                  job: Job[TDeps, str | None],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> ChatMessage[str]
                                              

                                              Execute a pre-defined task.

                                              Parameters:

                                              Name Type Description Default
                                              job Job[TDeps, str | None]

                                              Job configuration to execute

                                              required
                                              store_history bool

                                              Whether the message exchange should be added to the context window

                                              True
                                              include_agent_tools bool

                                              Whether to include agent tools

                                              True

                                              Returns: Job execution result

                                              Raises:

                                              Type Description
                                              JobError

                                              If task execution fails

                                              ValueError

                                              If task configuration is invalid

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def run_job(
                                                  self,
                                                  job: Job[TDeps, str | None],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> ChatMessage[str]:
                                                  """Execute a pre-defined task.
                                              
                                                  Args:
                                                      job: Job configuration to execute
                                                      store_history: Whether the message exchange should be added to the
                                                                     context window
                                                      include_agent_tools: Whether to include agent tools
                                                  Returns:
                                                      Job execution result
                                              
                                                  Raises:
                                                      JobError: If task execution fails
                                                      ValueError: If task configuration is invalid
                                                  """
                                                  from llmling_agent.tasks import JobError
                                              
                                                  if job.required_dependency is not None:  # noqa: SIM102
                                                      if not isinstance(self.context.data, job.required_dependency):
                                                          msg = (
                                                              f"Agent dependencies ({type(self.context.data)}) "
                                                              f"don't match job requirement ({job.required_dependency})"
                                                          )
                                                          raise JobError(msg)
                                              
                                                  # Load task knowledge
                                                  if job.knowledge:
                                                      # Add knowledge sources to context
                                                      resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                          job.knowledge.resources
                                                      )
                                                      for source in resources:
                                                          await self.conversation.load_context_source(source)
                                                      for prompt in job.knowledge.prompts:
                                                          await self.conversation.load_context_source(prompt)
                                                  try:
                                                      # Register task tools temporarily
                                                      tools = job.get_tools()
                                                      with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                                          # Execute job with job-specific tools
                                                          return await self.run(await job.get_prompt(), store_history=store_history)
                                              
                                                  except Exception as e:
                                                      msg = f"Task execution failed: {e}"
                                                      logger.exception(msg)
                                                      raise JobError(msg) from e
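
A minimal sketch; `deploy_job` stands in for a Job instance configured elsewhere and is not defined here:

result = await agent.run_job(deploy_job, include_agent_tools=False)
print(result.content)  # ChatMessage[str] content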
                                              

                                              run_stream async

                                              run_stream(
                                                  *prompt: AnyPromptType | Image | PathLike[str],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  tool_choice: str | list[str] | None = None,
                                                  store_history: bool = True,
                                                  usage_limits: UsageLimits | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  messages: list[ChatMessage[Any]] | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> AsyncIterator[StreamingResponseProtocol[TResult]]
                                              

                                              Run agent with prompt and get a streaming response.

                                              Parameters:

                                              Name Type Description Default
                                              prompt AnyPromptType | Image | PathLike[str]

                                              User query or instruction

                                              ()
                                              result_type type[TResult] | None

                                              Optional type for structured responses

                                              None
                                              model ModelType

                                              Optional model override

                                              None
                                              tool_choice str | list[str] | None

                                              Filter tool choice by name

                                              None
                                              store_history bool

                                              Whether the message exchange should be added to the context window

                                              True
                                              usage_limits UsageLimits | None

                                              Optional usage limits for the model

                                              None
                                              message_id str | None

                                              Optional message id for the returned message. Automatically generated if not provided.

                                              None
                                              conversation_id str | None

                                              Optional conversation id for the returned message.

                                              None
                                              messages list[ChatMessage[Any]] | None

                                              Optional list of messages to replace the conversation history

                                              None
                                              wait_for_connections bool | None

                                              Whether to wait for connected agents to complete

                                              None

                                              Returns:

                                              Type Description
                                              AsyncIterator[StreamingResponseProtocol[TResult]]

                                              A streaming result to iterate over.

                                              Raises:

                                              Type Description
                                              UnexpectedModelBehavior

                                              If the model fails or behaves unexpectedly

                                              Source code in src/llmling_agent/agent/agent.py
                                              @asynccontextmanager
                                              async def run_stream(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  tool_choice: str | list[str] | None = None,
                                                  store_history: bool = True,
                                                  usage_limits: UsageLimits | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  messages: list[ChatMessage[Any]] | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                                                  """Run agent with prompt and get a streaming response.
                                              
                                                  Args:
                                                      prompt: User query or instruction
                                                      result_type: Optional type for structured responses
                                                      model: Optional model override
                                                      tool_choice: Filter tool choice by name
                                                      store_history: Whether the message exchange should be added to the
                                                                     context window
                                                      usage_limits: Optional usage limits for the model
                                                      message_id: Optional message id for the returned message.
                                                                  Automatically generated if not provided.
                                                      conversation_id: Optional conversation id for the returned message.
                                                      messages: Optional list of messages to replace the conversation history
                                                      wait_for_connections: Whether to wait for connected agents to complete
                                              
                                                  Returns:
                                                      A streaming result to iterate over.
                                              
                                                  Raises:
                                                      UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                                                  """
                                                  message_id = message_id or str(uuid4())
                                                  user_msg, prompts = await self.pre_run(*prompt)
                                                  self.set_result_type(result_type)
                                                  start_time = time.perf_counter()
                                                  sys_prompt = await self.sys_prompts.format_system_prompt(self)
                                                  tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                                                  message_history = (
                                                      messages if messages is not None else self.conversation.get_history()
                                                  )
                                                  try:
                                                      async with self._provider.stream_response(
                                                          *prompts,
                                                          message_id=message_id,
                                                          message_history=message_history,
                                                          result_type=result_type,
                                                          model=model,
                                                          store_history=store_history,
                                                          tools=tools,
                                                          usage_limits=usage_limits,
                                                          system_prompt=sys_prompt,
                                                      ) as stream:
                                                          yield stream
                                                          usage = stream.usage()
                                                          cost_info = None
                                                          model_name = stream.model_name  # type: ignore
                                                          if model_name:
                                                              cost_info = await TokenCost.from_usage(
                                                                  usage,
                                                                  model_name,
                                                                  str(user_msg.content),
                                                                  str(stream.formatted_content),  # type: ignore
                                                              )
                                                          response_msg = ChatMessage[TResult](
                                                              content=cast(TResult, stream.formatted_content),  # type: ignore
                                                              role="assistant",
                                                              name=self.name,
                                                              model=model_name,
                                                              message_id=message_id,
                                                              conversation_id=user_msg.conversation_id,
                                                              cost_info=cost_info,
                                                              response_time=time.perf_counter() - start_time,
                                                              # provider_extra=stream.provider_extra or {},
                                                          )
                                                          self.message_sent.emit(response_msg)
                                                          if store_history:
                                                              self.conversation.add_chat_messages([user_msg, response_msg])
                                                          await self.connections.route_message(
                                                              response_msg,
                                                              wait=wait_for_connections,
                                                          )
                                              
                                                  except Exception as e:
                                                      logger.exception("Agent stream failed")
                                                      self.run_failed.emit("Agent stream failed", e)
                                                      raise
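
A streaming sketch based on the context-manager signature above; how the yielded stream is consumed depends on StreamingResponseProtocol, so the inner loop below is an assumption rather than a confirmed API:

async with agent.run_stream("Explain the manifest format") as stream:
    # Assumption: the yielded object allows incremental text consumption;
    # stream_text() mirrors pydantic-ai-style streams and is not confirmed
    # by the snippet above, which only uses usage() and formatted_content.
    async for chunk in stream.stream_text(delta=True):
        print(chunk, end="", flush=True)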
                                              

                                              run_sync

                                              run_sync(
                                                  *prompt: AnyPromptType | Image | PathLike[str],
                                                  result_type: type[TResult] | None = None,
                                                  deps: TDeps | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                              ) -> ChatMessage[TResult]
                                              

                                              Run agent synchronously (convenience wrapper).

                                              Parameters:

                                              Name Type Description Default
                                              prompt AnyPromptType | Image | PathLike[str]

                                              User query or instruction

                                              ()
                                              result_type type[TResult] | None

                                              Optional type for structured responses

                                              None
                                              deps TDeps | None

                                              Optional dependencies for the agent

                                              None
                                              model ModelType

                                              Optional model override

                                              None
                                              store_history bool

                                              Whether the message exchange should be added to the context window

                                              True

                                              Returns: Result containing response and run information

                                              Source code in src/llmling_agent/agent/agent.py
                                              def run_sync(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  result_type: type[TResult] | None = None,
                                                  deps: TDeps | None = None,
                                                  model: ModelType = None,
                                                  store_history: bool = True,
                                              ) -> ChatMessage[TResult]:
                                                  """Run agent synchronously (convenience wrapper).
                                              
                                                  Args:
                                                      prompt: User query or instruction
                                                      result_type: Optional type for structured responses
                                                      deps: Optional dependencies for the agent
                                                      model: Optional model override
                                                      store_history: Whether the message exchange should be added to the
                                                                     context window
                                                  Returns:
                                                      Result containing response and run information
                                                  """
                                                  coro = self.run(
                                                      *prompt,
                                                      model=model,
                                                      store_history=store_history,
                                                      result_type=result_type,
                                                  )
                                                  return self.run_task_sync(coro)  # type: ignore
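
A convenience-wrapper sketch for scripts where no event loop is already running:

reply = agent.run_sync("List three follow-up questions", store_history=False)
print(reply.content)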
                                              

                                              set_model

                                              set_model(model: ModelType)
                                              

                                              Set the model for this agent.

                                              Parameters:

                                              Name Type Description Default
                                              model ModelType

                                              New model to use (name or instance)

                                              required
                                              Emits

                                              model_changed signal with the new model

                                              Source code in src/llmling_agent/agent/agent.py
                                              def set_model(self, model: ModelType):
                                                  """Set the model for this agent.
                                              
                                                  Args:
                                                      model: New model to use (name or instance)
                                              
                                                  Emits:
                                                      model_changed signal with the new model
                                                  """
                                                  self._provider.set_model(model)
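
A short sketch of switching the model at runtime; the model identifier string is an assumption.

# Replace the model for all subsequent runs; emits the model_changed signal
agent.set_model("openai:gpt-4o")  # identifier is an assumption

# ModelType also accepts a model instance instead of a name string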
                                              

                                              set_result_type

                                              set_result_type(
                                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              )
                                              

                                              Set or update the result type for this agent.

                                              Parameters:

    result_type (type[TResult] | str | ResponseDefinition | None): New result type. Can be:
        - A Python type for validation
        - Name of a response definition
        - Response definition instance
        - None to reset to unstructured mode
        Required.
    tool_name (str | None): Optional override for tool name. Default: None.
    tool_description (str | None): Optional override for tool description. Default: None.
                                              Source code in src/llmling_agent/agent/agent.py
                                              def set_result_type(
                                                  self,
                                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ):
                                                  """Set or update the result type for this agent.
                                              
                                                  Args:
                                                      result_type: New result type, can be:
                                                          - A Python type for validation
                                                          - Name of a response definition
                                                          - Response definition instance
                                                          - None to reset to unstructured mode
                                                      tool_name: Optional override for tool name
                                                      tool_description: Optional override for tool description
                                                  """
                                                  logger.debug("Setting result type to: %s for %r", result_type, self.name)
                                                  self._result_type = to_type(result_type)
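
A sketch of switching an existing agent into structured-output mode with a Pydantic model; the Summary model and the tool_name value are hypothetical.

from pydantic import BaseModel


class Summary(BaseModel):  # hypothetical response model
    title: str
    bullet_points: list[str]


# Subsequent runs validate against Summary; pass None later to return
# to unstructured mode.
agent.set_result_type(Summary, tool_name="return_summary")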
                                              

                                              share async

                                              share(
                                                  target: AnyAgent[TDeps, Any],
                                                  *,
                                                  tools: list[str] | None = None,
                                                  resources: list[str] | None = None,
                                                  history: bool | int | None = None,
                                                  token_limit: int | None = None,
                                              )
                                              

                                              Share capabilities and knowledge with another agent.

                                              Parameters:

    target (AnyAgent[TDeps, Any]): Agent to share with. Required.
    tools (list[str] | None): List of tool names to share. Default: None.
    resources (list[str] | None): List of resource names to share. Default: None.
    history (bool | int | None): Share conversation history:
        - True: Share full history
        - int: Number of most recent messages to share
        - None: Don't share history
        Default: None.
    token_limit (int | None): Optional max tokens for history. Default: None.

                                              Raises:

    ValueError: If requested items don't exist
    RuntimeError: If runtime not available for resources

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def share(
                                                  self,
                                                  target: AnyAgent[TDeps, Any],
                                                  *,
                                                  tools: list[str] | None = None,
                                                  resources: list[str] | None = None,
                                                  history: bool | int | None = None,  # bool or number of messages
                                                  token_limit: int | None = None,
                                              ):
                                                  """Share capabilities and knowledge with another agent.
                                              
                                                  Args:
                                                      target: Agent to share with
                                                      tools: List of tool names to share
                                                      resources: List of resource names to share
                                                      history: Share conversation history:
                                                              - True: Share full history
                                                              - int: Number of most recent messages to share
                                                              - None: Don't share history
                                                      token_limit: Optional max tokens for history
                                              
                                                  Raises:
                                                      ValueError: If requested items don't exist
                                                      RuntimeError: If runtime not available for resources
                                                  """
                                                  # Share tools if requested
                                                  for name in tools or []:
                                                      if tool := self.tools.get(name):
                                                          meta = {"shared_from": self.name}
                                                          target.tools.register_tool(tool.callable, metadata=meta)
                                                      else:
                                                          msg = f"Tool not found: {name}"
                                                          raise ValueError(msg)
                                              
                                                  # Share resources if requested
                                                  if resources:
                                                      if not self.runtime:
                                                          msg = "No runtime available for sharing resources"
                                                          raise RuntimeError(msg)
                                                      for name in resources:
                                                          if resource := self.runtime.get_resource(name):
                                                              await target.conversation.load_context_source(resource)  # type: ignore
                                                          else:
                                                              msg = f"Resource not found: {name}"
                                                              raise ValueError(msg)
                                              
                                                  # Share history if requested
                                                  if history:
                                                      history_text = await self.conversation.format_history(
                                                          max_tokens=token_limit,
                                                          num_messages=history if isinstance(history, int) else None,
                                                      )
                                                      target.conversation.add_context_message(
                                                          history_text, source=self.name, metadata={"type": "shared_history"}
                                                      )
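
A sketch of sharing selected capabilities between two agents; analyst and reviewer are hypothetical agents, and the tool and resource names are assumptions that must exist on the sharing agent, otherwise the errors listed above are raised.

# analyst and reviewer are hypothetical agents
await analyst.share(
    reviewer,
    tools=["search_docs"],       # missing tool names raise ValueError
    resources=["style_guide"],   # requires a runtime, else RuntimeError
    history=10,                  # share only the 10 most recent messages
    token_limit=2000,            # cap the shared history size
)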
                                              

                                              stop async

                                              stop()
                                              

                                              Stop continuous execution if running.

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def stop(self):
                                                  """Stop continuous execution if running."""
                                                  if self._background_task and not self._background_task.done():
                                                      self._background_task.cancel()
                                                      await self._background_task
                                                      self._background_task = None
                                              

                                              temporary_state async

                                              temporary_state(
                                                  *,
                                                  system_prompts: list[AnyPromptType] | None = None,
                                                  replace_prompts: bool = False,
                                                  tools: list[ToolType] | None = None,
                                                  replace_tools: bool = False,
                                                  history: list[AnyPromptType] | SessionQuery | None = None,
                                                  replace_history: bool = False,
                                                  pause_routing: bool = False,
                                                  model: ModelType | None = None,
                                                  provider: AgentProvider | None = None,
                                              ) -> AsyncIterator[Self]
                                              

                                              Temporarily modify agent state.

                                              Parameters:

    system_prompts (list[AnyPromptType] | None): Temporary system prompts to use. Default: None.
    replace_prompts (bool): Whether to replace existing prompts. Default: False.
    tools (list[ToolType] | None): Temporary tools to make available. Default: None.
    replace_tools (bool): Whether to replace existing tools. Default: False.
    history (list[AnyPromptType] | SessionQuery | None): Conversation history (prompts or query). Default: None.
    replace_history (bool): Whether to replace existing history. Default: False.
    pause_routing (bool): Whether to pause message routing. Default: False.
    model (ModelType | None): Temporary model override. Default: None.
    provider (AgentProvider | None): Temporary provider override. Default: None.
                                              Source code in src/llmling_agent/agent/agent.py
                                              @asynccontextmanager
                                              async def temporary_state(
                                                  self,
                                                  *,
                                                  system_prompts: list[AnyPromptType] | None = None,
                                                  replace_prompts: bool = False,
                                                  tools: list[ToolType] | None = None,
                                                  replace_tools: bool = False,
                                                  history: list[AnyPromptType] | SessionQuery | None = None,
                                                  replace_history: bool = False,
                                                  pause_routing: bool = False,
                                                  model: ModelType | None = None,
                                                  provider: AgentProvider | None = None,
                                              ) -> AsyncIterator[Self]:
                                                  """Temporarily modify agent state.
                                              
                                                  Args:
                                                      system_prompts: Temporary system prompts to use
                                                      replace_prompts: Whether to replace existing prompts
                                                      tools: Temporary tools to make available
                                                      replace_tools: Whether to replace existing tools
                                                      history: Conversation history (prompts or query)
                                                      replace_history: Whether to replace existing history
                                                      pause_routing: Whether to pause message routing
                                                      model: Temporary model override
                                                      provider: Temporary provider override
                                                  """
                                                  old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                                                  old_provider = self._provider
                                              
                                                  async with AsyncExitStack() as stack:
                                                      # System prompts (async)
                                                      if system_prompts is not None:
                                                          await stack.enter_async_context(
                                                              self.sys_prompts.temporary_prompt(
                                                                  system_prompts, exclusive=replace_prompts
                                                              )
                                                          )
                                              
                                                      # Tools (sync)
                                                      if tools is not None:
                                                          stack.enter_context(
                                                              self.tools.temporary_tools(tools, exclusive=replace_tools)
                                                          )
                                              
                                                      # History (async)
                                                      if history is not None:
                                                          await stack.enter_async_context(
                                                              self.conversation.temporary_state(
                                                                  history, replace_history=replace_history
                                                              )
                                                          )
                                              
                                                      # Routing (async)
                                                      if pause_routing:
                                                          await stack.enter_async_context(self.connections.paused_routing())
                                              
                                                      # Model/Provider
                                                      if provider is not None:
                                                          self._provider = provider
                                                      elif model is not None:
                                                          self._provider.set_model(model)
                                              
                                                      try:
                                                          yield self
                                                      finally:
                                                          # Restore model/provider
                                                          if provider is not None:
                                                              self._provider = old_provider
                                                          elif model is not None and old_model:
                                                              self._provider.set_model(old_model)
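
A sketch of a scoped override; prompts, tools, model, and routing are restored when the block exits. The prompt text and model name are assumptions.

async with agent.temporary_state(
    system_prompts=["Answer strictly in JSON."],
    replace_prompts=True,
    model="openai:gpt-4o-mini",  # assumption
    pause_routing=True,
) as tmp:
    # Inside the block the agent uses the temporary configuration
    answer = await tmp.run("List the supported content types.")
# Outside the block the previous prompts, model, and routing are back in place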
                                              

                                              to_structured

                                              to_structured(
                                                  result_type: None,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ) -> Self
                                              
                                              to_structured(
                                                  result_type: type[TResult] | str | ResponseDefinition,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ) -> StructuredAgent[TDeps, TResult]
                                              
                                              to_structured(
                                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ) -> StructuredAgent[TDeps, TResult] | Self
                                              

                                              Convert this agent to a structured agent.

                                              If result_type is None, returns self unchanged (no wrapping). Otherwise creates a StructuredAgent wrapper.

                                              Parameters:

    result_type (type[TResult] | str | ResponseDefinition | None): Type for structured responses. Can be:
        - A Python type (Pydantic model)
        - Name of response definition from context
        - Complete response definition
        - None to skip wrapping
        Required.
    tool_name (str | None): Optional override for result tool name. Default: None.
    tool_description (str | None): Optional override for result tool description. Default: None.

Returns:

    StructuredAgent[TDeps, TResult] | Self: Either StructuredAgent wrapper or self unchanged

                                              Source code in src/llmling_agent/agent/agent.py
                                              def to_structured[TResult](
                                                  self,
                                                  result_type: type[TResult] | str | ResponseDefinition | None,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ) -> StructuredAgent[TDeps, TResult] | Self:
                                                  """Convert this agent to a structured agent.
                                              
                                                  If result_type is None, returns self unchanged (no wrapping).
                                                  Otherwise creates a StructuredAgent wrapper.
                                              
                                                  Args:
                                                      result_type: Type for structured responses. Can be:
                                                          - A Python type (Pydantic model)
                                                          - Name of response definition from context
                                                          - Complete response definition
                                                          - None to skip wrapping
                                                      tool_name: Optional override for result tool name
                                                      tool_description: Optional override for result tool description
                                              
                                                  Returns:
                                                      Either StructuredAgent wrapper or self unchanged
                                                  """
                                                  if result_type is None:
                                                      return self
                                              
                                                  from llmling_agent.agent import StructuredAgent
                                              
                                                  return StructuredAgent(
                                                      self,
                                                      result_type=result_type,
                                                      tool_name=tool_name,
                                                      tool_description=tool_description,
                                                  )
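
A sketch of wrapping the agent for typed results; the Verdict model is hypothetical, and run() on the wrapper is assumed to return a ChatMessage carrying the validated instance.

from pydantic import BaseModel


class Verdict(BaseModel):  # hypothetical result model
    approved: bool
    reason: str


structured = agent.to_structured(Verdict)
message = await structured.run("Review the proposed config rename.")
print(message.data.approved, message.data.reason)

# Passing None skips wrapping and returns the agent unchanged
assert agent.to_structured(None) is agent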
                                              

                                              to_tool

                                              to_tool(
                                                  *,
                                                  name: str | None = None,
                                                  reset_history_on_run: bool = True,
                                                  pass_message_history: bool = False,
                                                  share_context: bool = False,
                                                  parent: AnyAgent[Any, Any] | None = None,
                                              ) -> Tool
                                              

                                              Create a tool from this agent.

                                              Parameters:

    name (str | None): Optional tool name override. Default: None.
    reset_history_on_run (bool): Clear agent's history before each run. Default: True.
    pass_message_history (bool): Pass parent's message history to agent. Default: False.
    share_context (bool): Whether to pass parent's context/deps. Default: False.
    parent (AnyAgent[Any, Any] | None): Optional parent agent for history/context sharing. Default: None.
                                              Source code in src/llmling_agent/agent/agent.py
                                              def to_tool(
                                                  self,
                                                  *,
                                                  name: str | None = None,
                                                  reset_history_on_run: bool = True,
                                                  pass_message_history: bool = False,
                                                  share_context: bool = False,
                                                  parent: AnyAgent[Any, Any] | None = None,
                                              ) -> Tool:
                                                  """Create a tool from this agent.
                                              
                                                  Args:
                                                      name: Optional tool name override
                                                      reset_history_on_run: Clear agent's history before each run
                                                      pass_message_history: Pass parent's message history to agent
                                                      share_context: Whether to pass parent's context/deps
                                                      parent: Optional parent agent for history/context sharing
                                                  """
                                                  tool_name = f"ask_{self.name}"
                                              
                                                  async def wrapped_tool(prompt: str) -> str:
                                                      if pass_message_history and not parent:
                                                          msg = "Parent agent required for message history sharing"
                                                          raise ToolError(msg)
                                              
                                                      if reset_history_on_run:
                                                          self.conversation.clear()
                                              
                                                      history = None
                                                      if pass_message_history and parent:
                                                          history = parent.conversation.get_history()
                                                          old = self.conversation.get_history()
                                                          self.conversation.set_history(history)
                                                      result = await self.run(prompt, result_type=self._result_type)
                                                      if history:
                                                          self.conversation.set_history(old)
                                                      return result.data
                                              
                                                  normalized_name = self.name.replace("_", " ").title()
                                                  docstring = f"Get expert answer from specialized agent: {normalized_name}"
                                                  if self.description:
                                                      docstring = f"{docstring}\n\n{self.description}"
                                              
                                                  wrapped_tool.__doc__ = docstring
                                                  wrapped_tool.__name__ = tool_name
                                              
                                                  return Tool.from_callable(
                                                      wrapped_tool,
                                                      name_override=tool_name,
                                                      description_override=docstring,
                                                  )
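
A sketch of exposing one agent as a tool on another; translator and coordinator are hypothetical agents, and the registration call mirrors the pattern used internally by share() above.

translator_tool = translator.to_tool(reset_history_on_run=True)

# Register on another agent, the same way share() registers shared tools
coordinator.tools.register_tool(
    translator_tool.callable,
    metadata={"shared_from": translator.name},
)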
                                              

                                              wait async

                                              wait() -> ChatMessage[TResult]
                                              

                                              Wait for background execution to complete.

                                              Source code in src/llmling_agent/agent/agent.py
                                              async def wait(self) -> ChatMessage[TResult]:
                                                  """Wait for background execution to complete."""
                                                  if not self._background_task:
                                                      msg = "No background task running"
                                                      raise RuntimeError(msg)
                                                  if self._infinite:
                                                      msg = "Cannot wait on infinite execution"
                                                      raise RuntimeError(msg)
                                                  try:
                                                      return await self._background_task
                                                  finally:
                                                      self._background_task = None
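
A sketch of finishing a finite background run. The run_in_background() call is not documented in this section and is an assumption; wait() raises RuntimeError if no background task is running or if the run is infinite, in which case stop() is the way to end it.

# Assumed API for starting background execution (not shown in this section)
await agent.run_in_background("Process the queued documents.")

final = await agent.wait()   # blocks until the background run completes
print(final.data)

# For an infinite run, cancel instead of waiting:
# await agent.stop()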
                                              

                                              AgentConfig

                                              Bases: NodeConfig

                                              Configuration for a single agent in the system.

                                              Defines an agent's complete configuration including its model, environment, capabilities, and behavior settings. Each agent can have its own: - Language model configuration - Environment setup (tools and resources) - Response type definitions - System prompts and default user prompts - Role-based capabilities

                                              The configuration can be loaded from YAML or created programmatically.
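
A sketch of creating the configuration programmatically; the field names used here (model, system_prompts) are assumptions based on the description above, so consult the full field documentation that follows.

from llmling_agent.models.agents import AgentConfig  # path taken from the class index

config = AgentConfig(
    model="openai:gpt-4o-mini",                                   # assumed field and value
    system_prompts=["You are a concise release-notes writer."],   # assumed field
)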

                                              Source code in src/llmling_agent/models/agents.py
                                              295
                                              296
                                              297
                                              298
                                              299
                                              300
                                              301
                                              302
                                              303
                                              304
                                              305
                                              306
                                              307
                                              308
                                              309
                                              310
                                              311
                                              312
                                              313
                                              314
                                              315
                                              316
                                              317
                                              318
                                              319
                                              320
                                              321
                                              322
                                              323
                                              324
                                              325
                                              326
                                              327
                                              328
                                              329
                                              330
                                              331
                                              332
                                              333
                                              334
                                              335
                                              336
                                              337
                                              338
                                              339
                                              340
                                              341
                                              342
                                              343
                                              344
                                              345
                                              346
                                              347
                                              348
                                              349
                                              350
                                              351
                                              352
                                              353
                                              class AgentConfig(NodeConfig):
                                                  """Configuration for a single agent in the system.
                                              
                                                  Defines an agent's complete configuration including its model, environment,
                                                  capabilities, and behavior settings. Each agent can have its own:
                                                  - Language model configuration
                                                  - Environment setup (tools and resources)
                                                  - Response type definitions
                                                  - System prompts and default user prompts
                                                  - Role-based capabilities
                                              
                                                  The configuration can be loaded from YAML or created programmatically.
                                                  """
                                              
                                                  provider: ProviderConfig | ProviderName = "pydantic_ai"
                                                  """Provider configuration or shorthand type"""
                                              
                                                  inherits: str | None = None
                                                  """Name of agent config to inherit from"""
                                              
                                                  model: str | AnyModelConfig | None = None
                                                  """The model to use for this agent. Can be either a simple model name
                                                  string (e.g. 'openai:gpt-4') or a structured model definition."""
                                              
                                                  tools: list[ToolConfig | str] = Field(default_factory=list)
                                                  """A list of tools to register with this agent."""
                                              
                                                  toolsets: list[ToolsetConfig] = Field(default_factory=list)
                                                  """Toolset configurations for extensible tool collections."""
                                              
                                                  environment: str | AgentEnvironment | None = None
                                                  """Environments configuration (path or object)"""
                                              
                                                  capabilities: Capabilities = Field(default_factory=Capabilities)
                                                  """Current agent's capabilities."""
                                              
                                                  session: str | SessionQuery | MemoryConfig | None = None
                                                  """Session configuration for conversation recovery."""
                                              
                                                  result_type: str | ResponseDefinition | None = None
                                                  """Name of the response definition to use"""
                                              
                                                  retries: int = 1
                                                  """Number of retries for failed operations (maps to pydantic-ai's retries)"""
                                              
                                                  result_tool_name: str = "final_result"
                                                  """Name of the tool used for structured responses"""
                                              
                                                  result_tool_description: str | None = None
                                                  """Custom description for the result tool"""
                                              
                                                  result_retries: int | None = None
                                                  """Max retries for result validation"""
                                              
                                                  end_strategy: EndStrategy = "early"
                                                  """The strategy for handling multiple tool calls when a final result is found"""
                                              
                                                  avatar: str | None = None
                                                  """URL or path to agent's avatar image"""
                                              
                                                  system_prompts: list[str] = Field(default_factory=list)
                                                  """System prompts for the agent"""
                                              
                                                  library_system_prompts: list[str] = Field(default_factory=list)
                                                  """System prompts for the agent from the library"""
                                              
                                                  user_prompts: list[str] = Field(default_factory=list)
                                                  """Default user prompts for the agent"""
                                              
                                                  # context_sources: list[ContextSource] = Field(default_factory=list)
                                                  # """Initial context sources to load"""
                                              
                                                  config_file_path: str | None = None
                                                  """Config file path for resolving environment."""
                                              
                                                  knowledge: Knowledge | None = None
                                                  """Knowledge sources for this agent."""
                                              
                                                  workers: list[WorkerConfig] = Field(default_factory=list)
                                                  """Worker agents which will be available as tools."""
                                              
                                                  requires_tool_confirmation: ToolConfirmationMode = "per_tool"
                                                  """How to handle tool confirmation:
                                                  - "always": Always require confirmation for all tools
                                                  - "never": Never require confirmation (ignore tool settings)
                                                  - "per_tool": Use individual tool settings
                                                  """
                                              
                                                  debug: bool = False
                                                  """Enable debug output for this agent."""
                                              
                                                  def is_structured(self) -> bool:
                                                      """Check if this config defines a structured agent."""
                                                      return self.result_type is not None
                                              
                                                  @model_validator(mode="before")
                                                  @classmethod
                                                  def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                      """Convert result type and apply its settings."""
                                                      result_type = data.get("result_type")
                                                      if isinstance(result_type, dict):
                                                          # Extract response-specific settings
                                                          tool_name = result_type.pop("result_tool_name", None)
                                                          tool_description = result_type.pop("result_tool_description", None)
                                                          retries = result_type.pop("result_retries", None)
                                              
                                                          # Convert remaining dict to ResponseDefinition
                                                          if "type" not in result_type:
                                                              result_type["type"] = "inline"
                                                          data["result_type"] = InlineResponseDefinition(**result_type)
                                              
                                                          # Apply extracted settings to agent config
                                                          if tool_name:
                                                              data["result_tool_name"] = tool_name
                                                          if tool_description:
                                                              data["result_tool_description"] = tool_description
                                                          if retries is not None:
                                                              data["result_retries"] = retries
                                              
                                                      return data
                                              
                                                  @model_validator(mode="before")
                                                  @classmethod
                                                  def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                      """Convert model inputs to appropriate format."""
                                                      model = data.get("model")
                                                      match model:
                                                          case str():
                                                              data["model"] = {"type": "string", "identifier": model}
                                                      return data
                                              
                                                  async def get_toolsets(self) -> list[ResourceProvider]:
                                                      """Get all resource providers for this agent."""
                                                      providers: list[ResourceProvider] = []
                                              
                                                      # Add providers from toolsets
                                                      for toolset_config in self.toolsets:
                                                          try:
                                                              provider = toolset_config.get_provider()
                                                              providers.append(provider)
                                                          except Exception as e:
                                                              logger.exception(
                                                                  "Failed to create provider for toolset: %r", toolset_config
                                                              )
                                                              msg = f"Failed to create provider for toolset: {e}"
                                                              raise ValueError(msg) from e
                                              
                                                      return providers
                                              
                                                  def get_tool_provider(self) -> ResourceProvider | None:
                                                      """Get tool provider for this agent."""
                                                      from llmling_agent.tools.base import Tool
                                              
                                                      # Create provider for static tools
                                                      if not self.tools:
                                                          return None
                                                      static_tools: list[Tool] = []
                                                      for tool_config in self.tools:
                                                          try:
                                                              match tool_config:
                                                                  case str():
                                                                      if tool_config.startswith("crewai_tools"):
                                                                          obj = import_class(tool_config)()
                                                                          static_tools.append(Tool.from_crewai_tool(obj))
                                                                      elif tool_config.startswith("langchain"):
                                                                          obj = import_class(tool_config)()
                                                                          static_tools.append(Tool.from_langchain_tool(obj))
                                                                      else:
                                                                          tool = Tool.from_callable(tool_config)
                                                                          static_tools.append(tool)
                                                                  case BaseToolConfig():
                                                                      static_tools.append(tool_config.get_tool())
                                                          except Exception:
                                                              logger.exception("Failed to load tool %r", tool_config)
                                                              continue
                                              
                                                      return StaticResourceProvider(name="builtin", tools=static_tools)
                                              
                                                  def get_session_config(self) -> MemoryConfig:
                                                      """Get resolved memory configuration."""
                                                      match self.session:
                                                          case str() | UUID():
                                                              return MemoryConfig(session=SessionQuery(name=str(self.session)))
                                                          case SessionQuery():
                                                              return MemoryConfig(session=self.session)
                                                          case MemoryConfig():
                                                              return self.session
                                                          case None:
                                                              return MemoryConfig()
                                              
                                                  def get_system_prompts(self) -> list[BasePrompt]:
                                                      """Get all system prompts as BasePrompts."""
                                                      prompts: list[BasePrompt] = []
                                                      for prompt in self.system_prompts:
                                                          match prompt:
                                                              case str():
                                                                  # Convert string to StaticPrompt
                                                                  static_prompt = StaticPrompt(
                                                                      name="system",
                                                                      description="System prompt",
                                                                      messages=[PromptMessage(role="system", content=prompt)],
                                                                  )
                                                                  prompts.append(static_prompt)
                                                              case BasePrompt():
                                                                  prompts.append(prompt)
                                                      return prompts
                                              
                                                  def get_provider(self) -> AgentProvider:
                                                      """Get resolved provider instance.
                                              
                                                      Creates provider instance based on configuration:
                                                      - Full provider config: Use as-is
                                                      - Shorthand type: Create default provider config
                                                      """
                                                      # If string shorthand is used, convert to default provider config
                                                      from llmling_agent_config.providers import (
                                                          CallbackProviderConfig,
                                                          HumanProviderConfig,
                                                          LiteLLMProviderConfig,
                                                          PydanticAIProviderConfig,
                                                      )
                                              
                                                      provider_config = self.provider
                                                      if isinstance(provider_config, str):
                                                          match provider_config:
                                                              case "pydantic_ai":
                                                                  provider_config = PydanticAIProviderConfig(model=self.model)
                                                              case "human":
                                                                  provider_config = HumanProviderConfig()
                                                              case "litellm":
                                                                  provider_config = LiteLLMProviderConfig(
                                                                      model=self.model if isinstance(self.model, str) else None
                                                                  )
                                                              case _:
                                                                  try:
                                                                      fn = import_callable(provider_config)
                                                                      provider_config = CallbackProviderConfig(fn=fn)
                                                                  except Exception:  # noqa: BLE001
                                                                      msg = f"Invalid provider type: {provider_config}"
                                                                      raise ValueError(msg)  # noqa: B904
                                              
                                                      # Create provider instance from config
                                                      return provider_config.get_provider()
                                              
                                                  def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                                      """Render system prompts with context."""
                                                      if not context:
                                                          # Default context
                                                          context = {"name": self.name, "id": 1, "model": self.model}
                                                      return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
                                              
                                                  def get_config(self) -> Config:
                                                      """Get configuration for this agent."""
                                                      match self.environment:
                                                          case None:
                                                              # Create minimal config
                                                              caps = LLMCapabilitiesConfig()
                                                              global_settings = GlobalSettings(llm_capabilities=caps)
                                                              return Config(global_settings=global_settings)
                                                          case str() as path:
                                                              # Backward compatibility: treat as file path
                                                              resolved = self._resolve_environment_path(path, self.config_file_path)
                                                              return Config.from_file(resolved)
                                                          case FileEnvironment(uri=uri) as env:
                                                              # Handle FileEnvironment instance
                                                              resolved = env.get_file_path()
                                                              return Config.from_file(resolved)
                                                          case {"type": "file", "uri": uri}:
                                                              # Handle raw dict matching file environment structure
                                                              return Config.from_file(uri)
                                                          case {"type": "inline", "config": config}:
                                                              return config
                                                          case InlineEnvironment() as config:
                                                              return config
                                                          case _:
                                                              msg = f"Invalid environment configuration: {self.environment}"
                                                              raise ValueError(msg)
                                              
                                                  def get_environment_path(self) -> str | None:
                                                      """Get environment file path if available."""
                                                      match self.environment:
                                                          case str() as path:
                                                              return self._resolve_environment_path(path, self.config_file_path)
                                                          case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                                              return uri
                                                          case _:
                                                              return None
                                              
                                                  @staticmethod
                                                  def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                                                      """Resolve environment path from config store or relative path."""
                                                      from upath import UPath
                                              
                                                      try:
                                                          config_store = ConfigStore()
                                                          return config_store.get_config(env)
                                                      except KeyError:
                                                          if config_file_path:
                                                              base_dir = UPath(config_file_path).parent
                                                              return str(base_dir / env)
                                                          return env
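
Since AgentConfig is a regular pydantic model, it can be created programmatically as well as loaded from YAML. The snippet below is a minimal sketch under a few assumptions: the import path matches the source location shown on this page, and the name field is provided by the inherited NodeConfig.

from llmling_agent.models.agents import AgentConfig

# Minimal programmatic configuration (illustrative values).
config = AgentConfig(
    name="assistant",      # assumed to be defined on NodeConfig
    model="openai:gpt-4",  # shorthand string, normalized by handle_model_types
    system_prompts=["You are a concise technical assistant."],
    retries=2,
)

print(config.is_structured())       # False: no result_type configured
print(config.get_session_config())  # default MemoryConfig, since session is None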
                                              

                                              avatar class-attribute instance-attribute

                                              avatar: str | None = None
                                              

                                              URL or path to agent's avatar image

                                              capabilities class-attribute instance-attribute

                                              capabilities: Capabilities = Field(default_factory=Capabilities)
                                              

                                              Current agent's capabilities.

                                              config_file_path class-attribute instance-attribute

                                              config_file_path: str | None = None
                                              

                                              Config file path for resolving environment.

                                              debug class-attribute instance-attribute

                                              debug: bool = False
                                              

                                              Enable debug output for this agent.

                                              end_strategy class-attribute instance-attribute

                                              end_strategy: EndStrategy = 'early'
                                              

                                              The strategy for handling multiple tool calls when a final result is found

                                              environment class-attribute instance-attribute

                                              environment: str | AgentEnvironment | None = None
                                              

Environment configuration (path or object)

                                              inherits class-attribute instance-attribute

                                              inherits: str | None = None
                                              

                                              Name of agent config to inherit from

                                              knowledge class-attribute instance-attribute

                                              knowledge: Knowledge | None = None
                                              

                                              Knowledge sources for this agent.

                                              library_system_prompts class-attribute instance-attribute

                                              library_system_prompts: list[str] = Field(default_factory=list)
                                              

                                              System prompts for the agent from the library

                                              model class-attribute instance-attribute

                                              model: str | AnyModelConfig | None = None
                                              

                                              The model to use for this agent. Can be either a simple model name string (e.g. 'openai:gpt-4') or a structured model definition.

                                              provider class-attribute instance-attribute

                                              provider: ProviderConfig | ProviderName = 'pydantic_ai'
                                              

                                              Provider configuration or shorthand type

                                              requires_tool_confirmation class-attribute instance-attribute

                                              requires_tool_confirmation: ToolConfirmationMode = 'per_tool'
                                              

How to handle tool confirmation:

- "always": Always require confirmation for all tools
- "never": Never require confirmation (ignore tool settings)
- "per_tool": Use individual tool settings

                                              result_retries class-attribute instance-attribute

                                              result_retries: int | None = None
                                              

                                              Max retries for result validation

                                              result_tool_description class-attribute instance-attribute

                                              result_tool_description: str | None = None
                                              

                                              Custom description for the result tool

                                              result_tool_name class-attribute instance-attribute

                                              result_tool_name: str = 'final_result'
                                              

                                              Name of the tool used for structured responses

                                              result_type class-attribute instance-attribute

                                              result_type: str | ResponseDefinition | None = None
                                              

                                              Name of the response definition to use

                                              retries class-attribute instance-attribute

                                              retries: int = 1
                                              

                                              Number of retries for failed operations (maps to pydantic-ai's retries)

                                              session class-attribute instance-attribute

                                              session: str | SessionQuery | MemoryConfig | None = None
                                              

                                              Session configuration for conversation recovery.

                                              system_prompts class-attribute instance-attribute

                                              system_prompts: list[str] = Field(default_factory=list)
                                              

                                              System prompts for the agent

                                              tools class-attribute instance-attribute

                                              tools: list[ToolConfig | str] = Field(default_factory=list)
                                              

                                              A list of tools to register with this agent.

                                              toolsets class-attribute instance-attribute

                                              toolsets: list[ToolsetConfig] = Field(default_factory=list)
                                              

                                              Toolset configurations for extensible tool collections.

                                              user_prompts class-attribute instance-attribute

                                              user_prompts: list[str] = Field(default_factory=list)
                                              

                                              Default user prompts for the agent

                                              workers class-attribute instance-attribute

                                              workers: list[WorkerConfig] = Field(default_factory=list)
                                              

                                              Worker agents which will be available as tools.

                                              _resolve_environment_path staticmethod

                                              _resolve_environment_path(env: str, config_file_path: str | None = None) -> str
                                              

                                              Resolve environment path from config store or relative path.

                                              Source code in src/llmling_agent/models/agents.py
                                              @staticmethod
                                              def _resolve_environment_path(env: str, config_file_path: str | None = None) -> str:
                                                  """Resolve environment path from config store or relative path."""
                                                  from upath import UPath
                                              
                                                  try:
                                                      config_store = ConfigStore()
                                                      return config_store.get_config(env)
                                                  except KeyError:
                                                      if config_file_path:
                                                          base_dir = UPath(config_file_path).parent
                                                          return str(base_dir / env)
                                                      return env
                                              

                                              get_config

                                              get_config() -> Config
                                              

                                              Get configuration for this agent.

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_config(self) -> Config:
                                                  """Get configuration for this agent."""
                                                  match self.environment:
                                                      case None:
                                                          # Create minimal config
                                                          caps = LLMCapabilitiesConfig()
                                                          global_settings = GlobalSettings(llm_capabilities=caps)
                                                          return Config(global_settings=global_settings)
                                                      case str() as path:
                                                          # Backward compatibility: treat as file path
                                                          resolved = self._resolve_environment_path(path, self.config_file_path)
                                                          return Config.from_file(resolved)
                                                      case FileEnvironment(uri=uri) as env:
                                                          # Handle FileEnvironment instance
                                                          resolved = env.get_file_path()
                                                          return Config.from_file(resolved)
                                                      case {"type": "file", "uri": uri}:
                                                          # Handle raw dict matching file environment structure
                                                          return Config.from_file(uri)
                                                      case {"type": "inline", "config": config}:
                                                          return config
                                                      case InlineEnvironment() as config:
                                                          return config
                                                      case _:
                                                          msg = f"Invalid environment configuration: {self.environment}"
                                                          raise ValueError(msg)
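
For illustration, a hedged sketch of the two most common cases (the file path below is hypothetical, so the commented-out call would attempt to read that file):

from llmling_agent.models.agents import AgentConfig

# No environment configured: get_config() builds a minimal Config
# with default LLM capabilities.
minimal = AgentConfig(name="bare").get_config()

# A string environment is treated as a file path and loaded with
# Config.from_file after resolution (hypothetical path).
file_based = AgentConfig(name="with_env", environment="environments/tools.yml")
# config = file_based.get_config()  # would read environments/tools.yml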
                                              

                                              get_environment_path

                                              get_environment_path() -> str | None
                                              

                                              Get environment file path if available.

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_environment_path(self) -> str | None:
                                                  """Get environment file path if available."""
                                                  match self.environment:
                                                      case str() as path:
                                                          return self._resolve_environment_path(path, self.config_file_path)
                                                      case {"type": "file", "uri": uri} | FileEnvironment(uri=uri):
                                                          return uri
                                                      case _:
                                                          return None
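
A small sketch of the path resolution (paths are hypothetical; assumes the environment name is not registered in the ConfigStore):

from llmling_agent.models.agents import AgentConfig

cfg = AgentConfig(
    name="assistant",
    environment="environments/tools.yml",    # hypothetical relative path
    config_file_path="/path/to/agents.yml",  # hypothetical manifest location
)
print(cfg.get_environment_path())  # -> /path/to/environments/tools.yml

print(AgentConfig(name="bare").get_environment_path())  # -> None (no environment)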
                                              

                                              get_provider

                                              get_provider() -> AgentProvider
                                              

                                              Get resolved provider instance.

Creates provider instance based on configuration:

- Full provider config: Use as-is
- Shorthand type: Create default provider config

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_provider(self) -> AgentProvider:
                                                  """Get resolved provider instance.
                                              
                                                  Creates provider instance based on configuration:
                                                  - Full provider config: Use as-is
                                                  - Shorthand type: Create default provider config
                                                  """
                                                  # If string shorthand is used, convert to default provider config
                                                  from llmling_agent_config.providers import (
                                                      CallbackProviderConfig,
                                                      HumanProviderConfig,
                                                      LiteLLMProviderConfig,
                                                      PydanticAIProviderConfig,
                                                  )
                                              
                                                  provider_config = self.provider
                                                  if isinstance(provider_config, str):
                                                      match provider_config:
                                                          case "pydantic_ai":
                                                              provider_config = PydanticAIProviderConfig(model=self.model)
                                                          case "human":
                                                              provider_config = HumanProviderConfig()
                                                          case "litellm":
                                                              provider_config = LiteLLMProviderConfig(
                                                                  model=self.model if isinstance(self.model, str) else None
                                                              )
                                                          case _:
                                                              try:
                                                                  fn = import_callable(provider_config)
                                                                  provider_config = CallbackProviderConfig(fn=fn)
                                                              except Exception:  # noqa: BLE001
                                                                  msg = f"Invalid provider type: {provider_config}"
                                                                  raise ValueError(msg)  # noqa: B904
                                              
                                                  # Create provider instance from config
                                                  return provider_config.get_provider()
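
A hedged sketch of the shorthand resolution; constructing real providers may require additional setup (for example model credentials), so treat this as illustrative only:

from llmling_agent.models.agents import AgentConfig

# "human" resolves to a HumanProviderConfig and then to its provider instance.
human_provider = AgentConfig(name="reviewer", provider="human").get_provider()

# The default "pydantic_ai" shorthand wraps the configured model.
ai_provider = AgentConfig(name="assistant", model="openai:gpt-4").get_provider()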
                                              

                                              get_session_config

                                              get_session_config() -> MemoryConfig
                                              

                                              Get resolved memory configuration.

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_session_config(self) -> MemoryConfig:
                                                  """Get resolved memory configuration."""
                                                  match self.session:
                                                      case str() | UUID():
                                                          return MemoryConfig(session=SessionQuery(name=str(self.session)))
                                                      case SessionQuery():
                                                          return MemoryConfig(session=self.session)
                                                      case MemoryConfig():
                                                          return self.session
                                                      case None:
                                                          return MemoryConfig()
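
A short sketch of how the different session values resolve (the session name is illustrative):

from llmling_agent.models.agents import AgentConfig

# A plain string is interpreted as a session name and wrapped in a SessionQuery.
memory = AgentConfig(name="assistant", session="support-chat").get_session_config()
print(memory.session.name)  # -> support-chat

# No session configured: a default MemoryConfig is returned.
default_memory = AgentConfig(name="bare").get_session_config()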
                                              

                                              get_system_prompts

                                              get_system_prompts() -> list[BasePrompt]
                                              

                                              Get all system prompts as BasePrompts.

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_system_prompts(self) -> list[BasePrompt]:
                                                  """Get all system prompts as BasePrompts."""
                                                  prompts: list[BasePrompt] = []
                                                  for prompt in self.system_prompts:
                                                      match prompt:
                                                          case str():
                                                              # Convert string to StaticPrompt
                                                              static_prompt = StaticPrompt(
                                                                  name="system",
                                                                  description="System prompt",
                                                                  messages=[PromptMessage(role="system", content=prompt)],
                                                              )
                                                              prompts.append(static_prompt)
                                                          case BasePrompt():
                                                              prompts.append(prompt)
                                                  return prompts
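
Usage sketch (assumes system_prompts can be passed as plain strings on construction):

from llmling_agent.models import AgentConfig

cfg = AgentConfig(name="assistant", system_prompts=["You are a concise assistant."])
prompts = cfg.get_system_prompts()
# Each plain string is wrapped into a StaticPrompt with a single system-role message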
                                              

                                              get_tool_provider

                                              get_tool_provider() -> ResourceProvider | None
                                              

                                              Get tool provider for this agent.

                                              Source code in src/llmling_agent/models/agents.py
                                              def get_tool_provider(self) -> ResourceProvider | None:
                                                  """Get tool provider for this agent."""
                                                  from llmling_agent.tools.base import Tool
                                              
                                                  # Create provider for static tools
                                                  if not self.tools:
                                                      return None
                                                  static_tools: list[Tool] = []
                                                  for tool_config in self.tools:
                                                      try:
                                                          match tool_config:
                                                              case str():
                                                                  if tool_config.startswith("crewai_tools"):
                                                                      obj = import_class(tool_config)()
                                                                      static_tools.append(Tool.from_crewai_tool(obj))
                                                                  elif tool_config.startswith("langchain"):
                                                                      obj = import_class(tool_config)()
                                                                      static_tools.append(Tool.from_langchain_tool(obj))
                                                                  else:
                                                                      tool = Tool.from_callable(tool_config)
                                                                      static_tools.append(tool)
                                                              case BaseToolConfig():
                                                                  static_tools.append(tool_config.get_tool())
                                                      except Exception:
                                                          logger.exception("Failed to load tool %r", tool_config)
                                                          continue
                                              
                                                  return StaticResourceProvider(name="builtin", tools=static_tools)
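
Usage sketch (the dotted import path is illustrative; per the str branch above, any importable callable is resolved via Tool.from_callable, while crewai_tools and langchain prefixes use their dedicated converters):

from llmling_agent.models import AgentConfig

cfg = AgentConfig(name="assistant", tools=["webbrowser.open"])
provider = cfg.get_tool_provider()  # StaticResourceProvider(name="builtin", ...)

# No configured tools -> no provider
assert AgentConfig(name="assistant").get_tool_provider() is None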
                                              

                                              get_toolsets async

                                              get_toolsets() -> list[ResourceProvider]
                                              

                                              Get all resource providers for this agent.

                                              Source code in src/llmling_agent/models/agents.py
                                              async def get_toolsets(self) -> list[ResourceProvider]:
                                                  """Get all resource providers for this agent."""
                                                  providers: list[ResourceProvider] = []
                                              
                                                  # Add providers from toolsets
                                                  for toolset_config in self.toolsets:
                                                      try:
                                                          provider = toolset_config.get_provider()
                                                          providers.append(provider)
                                                      except Exception as e:
                                                          logger.exception(
                                                              "Failed to create provider for toolset: %r", toolset_config
                                                          )
                                                          msg = f"Failed to create provider for toolset: {e}"
                                                          raise ValueError(msg) from e
                                              
                                                  return providers
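
Usage sketch (assumes toolsets defaults to an empty list when none are configured):

import asyncio

from llmling_agent.models import AgentConfig

async def main() -> None:
    cfg = AgentConfig(name="assistant")
    providers = await cfg.get_toolsets()
    # One ResourceProvider per configured toolset; empty here
    print(providers)

asyncio.run(main())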
                                              

                                              handle_model_types classmethod

                                              handle_model_types(data: dict[str, Any]) -> dict[str, Any]
                                              

                                              Convert model inputs to appropriate format.

                                              Source code in src/llmling_agent/models/agents.py
                                              @model_validator(mode="before")
                                              @classmethod
                                              def handle_model_types(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                  """Convert model inputs to appropriate format."""
                                                  model = data.get("model")
                                                  match model:
                                                      case str():
                                                          data["model"] = {"type": "string", "identifier": model}
                                                  return data
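
Because this is a before-mode validator, the conversion happens automatically on construction; the model identifier below is only an example value:

from llmling_agent.models import AgentConfig

# The raw string is normalized to {"type": "string", "identifier": "openai:gpt-4o-mini"}
# before field validation takes over
cfg = AgentConfig(name="assistant", model="openai:gpt-4o-mini")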
                                              

                                              is_structured

                                              is_structured() -> bool
                                              

                                              Check if this config defines a structured agent.

                                              Source code in src/llmling_agent/models/agents.py
                                              def is_structured(self) -> bool:
                                                  """Check if this config defines a structured agent."""
                                                  return self.result_type is not None
                                              

                                              render_system_prompts

                                              render_system_prompts(context: dict[str, Any] | None = None) -> list[str]
                                              

                                              Render system prompts with context.

                                              Source code in src/llmling_agent/models/agents.py
                                              def render_system_prompts(self, context: dict[str, Any] | None = None) -> list[str]:
                                                  """Render system prompts with context."""
                                                  if not context:
                                                      # Default context
                                                      context = {"name": self.name, "id": 1, "model": self.model}
                                                  return [render_prompt(p, {"agent": context}) for p in self.system_prompts]
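
Usage sketch (assumes render_prompt applies Jinja-style templating, so {{ agent.name }} refers to the context passed under the "agent" key; without an explicit context, the config's name, id and model are used):

from llmling_agent.models import AgentConfig

cfg = AgentConfig(name="assistant", system_prompts=["You are {{ agent.name }}."])
rendered = cfg.render_system_prompts()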
                                              

                                              validate_result_type classmethod

                                              validate_result_type(data: dict[str, Any]) -> dict[str, Any]
                                              

                                              Convert result type and apply its settings.

                                              Source code in src/llmling_agent/models/agents.py
                                              @model_validator(mode="before")
                                              @classmethod
                                              def validate_result_type(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                  """Convert result type and apply its settings."""
                                                  result_type = data.get("result_type")
                                                  if isinstance(result_type, dict):
                                                      # Extract response-specific settings
                                                      tool_name = result_type.pop("result_tool_name", None)
                                                      tool_description = result_type.pop("result_tool_description", None)
                                                      retries = result_type.pop("result_retries", None)
                                              
                                                      # Convert remaining dict to ResponseDefinition
                                                      if "type" not in result_type:
                                                          result_type["type"] = "inline"
                                                      data["result_type"] = InlineResponseDefinition(**result_type)
                                              
                                                      # Apply extracted settings to agent config
                                                      if tool_name:
                                                          data["result_tool_name"] = tool_name
                                                      if tool_description:
                                                          data["result_tool_description"] = tool_description
                                                      if retries is not None:
                                                          data["result_retries"] = retries
                                              
                                                  return data
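
Like handle_model_types, this validator runs on construction. The sketch below only illustrates which keys are pulled out of the dict onto the agent config; the remaining keys of the inline response definition are hypothetical placeholders:

from llmling_agent.models import AgentConfig

cfg = AgentConfig(
    name="assistant",
    result_type={
        # pulled out onto the agent config itself:
        "result_tool_name": "final_answer",
        "result_retries": 2,
        # everything else becomes an InlineResponseDefinition
        # ("type" defaults to "inline"); field layout is illustrative only
        "fields": {"answer": {"type": "str"}},
    },
)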
                                              

                                              AgentContext dataclass

                                              Bases: NodeContext[TDeps]

                                              Runtime context for agent execution.

                                              Generically typed with AgentContext[Type of Dependencies]

                                              Source code in src/llmling_agent/agent/context.py
                                              @dataclass(kw_only=True)
                                              class AgentContext[TDeps](NodeContext[TDeps]):
                                                  """Runtime context for agent execution.
                                              
                                                  Generically typed with AgentContext[Type of Dependencies]
                                                  """
                                              
                                                  capabilities: Capabilities
                                                  """Current agent's capabilities."""
                                              
                                                  config: AgentConfig
                                                  """Current agent's specific configuration."""
                                              
                                                  model_settings: dict[str, Any] = field(default_factory=dict)
                                                  """Model-specific settings."""
                                              
                                                  data: TDeps | None = None
                                                  """Custom context data."""
                                              
                                                  runtime: RuntimeConfig | None = None
                                                  """Reference to the runtime configuration."""
                                              
                                                  @classmethod
                                                  def create_default(
                                                      cls,
                                                      name: str,
                                                      capabilities: Capabilities | None = None,
                                                      deps: TDeps | None = None,
                                                      pool: AgentPool | None = None,
                                                      input_provider: InputProvider | None = None,
                                                  ) -> AgentContext[TDeps]:
                                                      """Create a default agent context with minimal privileges.
                                              
                                                      Args:
                                                          name: Name of the agent
                                                          capabilities: Optional custom capabilities (defaults to minimal access)
                                                          deps: Optional dependencies for the agent
                                                          pool: Optional pool the agent is part of
                                                          input_provider: Optional input provider for the agent
                                                      """
                                                      from llmling_agent.config.capabilities import Capabilities
                                                      from llmling_agent.models import AgentConfig, AgentsManifest
                                              
                                                      caps = capabilities or Capabilities()
                                                      defn = AgentsManifest()
                                                      cfg = AgentConfig(name=name)
                                                      return cls(
                                                          input_provider=input_provider,
                                                          node_name=name,
                                                          capabilities=caps,
                                                          definition=defn,
                                                          config=cfg,
                                                          data=deps,
                                                          pool=pool,
                                                      )
                                              
                                                  @cached_property
                                                  def converter(self) -> ConversionManager:
                                                      """Get conversion manager from global config."""
                                                      return ConversionManager(self.definition.conversion)
                                              
                                                  # TODO: perhaps add agent directly to context?
                                                  @property
                                                  def agent(self) -> AnyAgent[TDeps, Any]:
                                                      """Get the agent instance from the pool."""
                                                      assert self.pool, "No agent pool available"
                                                      assert self.node_name, "No agent name available"
                                                      return self.pool.agents[self.node_name]
                                              
                                                  async def handle_confirmation(
                                                      self,
                                                      tool: Tool,
                                                      args: dict[str, Any],
                                                  ) -> ConfirmationResult:
                                                      """Handle tool execution confirmation.
                                              
                                                      Returns True if:
                                                      - No confirmation handler is set
                                                      - Handler confirms the execution
                                                      """
                                                      provider = self.get_input_provider()
                                                      mode = self.config.requires_tool_confirmation
                                                      if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                                          return "allow"
                                                      history = self.agent.conversation.get_history() if self.pool else []
                                                      return await provider.get_tool_confirmation(self, tool, args, history)
                                              

                                              agent property

                                              agent: AnyAgent[TDeps, Any]
                                              

                                              Get the agent instance from the pool.

                                              capabilities instance-attribute

                                              capabilities: Capabilities
                                              

                                              Current agent's capabilities.

                                              config instance-attribute

                                              config: AgentConfig
                                              

                                              Current agent's specific configuration.

                                              converter cached property

                                              converter: ConversionManager
                                              

                                              Get conversion manager from global config.

                                              data class-attribute instance-attribute

                                              data: TDeps | None = None
                                              

                                              Custom context data.

                                              model_settings class-attribute instance-attribute

                                              model_settings: dict[str, Any] = field(default_factory=dict)
                                              

                                              Model-specific settings.

                                              runtime class-attribute instance-attribute

                                              runtime: RuntimeConfig | None = None
                                              

                                              Reference to the runtime configuration.

                                              create_default classmethod

                                              create_default(
                                                  name: str,
                                                  capabilities: Capabilities | None = None,
                                                  deps: TDeps | None = None,
                                                  pool: AgentPool | None = None,
                                                  input_provider: InputProvider | None = None,
                                              ) -> AgentContext[TDeps]
                                              

                                              Create a default agent context with minimal privileges.

                                              Parameters:

Name            Type                   Description                                                  Default
name            str                    Name of the agent                                            required
capabilities    Capabilities | None    Optional custom capabilities (defaults to minimal access)    None
deps            TDeps | None           Optional dependencies for the agent                          None
pool            AgentPool | None       Optional pool the agent is part of                           None
input_provider  InputProvider | None   Optional input provider for the agent                        None
                                              Source code in src/llmling_agent/agent/context.py
                                              @classmethod
                                              def create_default(
                                                  cls,
                                                  name: str,
                                                  capabilities: Capabilities | None = None,
                                                  deps: TDeps | None = None,
                                                  pool: AgentPool | None = None,
                                                  input_provider: InputProvider | None = None,
                                              ) -> AgentContext[TDeps]:
                                                  """Create a default agent context with minimal privileges.
                                              
                                                  Args:
                                                      name: Name of the agent
                                                      capabilities: Optional custom capabilities (defaults to minimal access)
                                                      deps: Optional dependencies for the agent
                                                      pool: Optional pool the agent is part of
                                                      input_provider: Optional input provider for the agent
                                                  """
                                                  from llmling_agent.config.capabilities import Capabilities
                                                  from llmling_agent.models import AgentConfig, AgentsManifest
                                              
                                                  caps = capabilities or Capabilities()
                                                  defn = AgentsManifest()
                                                  cfg = AgentConfig(name=name)
                                                  return cls(
                                                      input_provider=input_provider,
                                                      node_name=name,
                                                      capabilities=caps,
                                                      definition=defn,
                                                      config=cfg,
                                                      data=deps,
                                                      pool=pool,
                                                  )
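
Usage sketch, following the defaults shown above (empty manifest, default capabilities, no pool, no dependencies):

from llmling_agent.agent.context import AgentContext

ctx = AgentContext.create_default("assistant")
print(ctx.node_name, ctx.capabilities)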
                                              

                                              handle_confirmation async

                                              handle_confirmation(tool: Tool, args: dict[str, Any]) -> ConfirmationResult
                                              

                                              Handle tool execution confirmation.

Returns True if:
- No confirmation handler is set
- Handler confirms the execution

                                              Source code in src/llmling_agent/agent/context.py
                                              async def handle_confirmation(
                                                  self,
                                                  tool: Tool,
                                                  args: dict[str, Any],
                                              ) -> ConfirmationResult:
                                                  """Handle tool execution confirmation.
                                              
                                                  Returns True if:
                                                  - No confirmation handler is set
                                                  - Handler confirms the execution
                                                  """
                                                  provider = self.get_input_provider()
                                                  mode = self.config.requires_tool_confirmation
                                                  if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                                      return "allow"
                                                  history = self.agent.conversation.get_history() if self.pool else []
                                                  return await provider.get_tool_confirmation(self, tool, args, history)
                                              

                                              AgentPool

                                              Bases: BaseRegistry[NodeName, MessageEmitter[Any, Any]]

                                              Pool managing message processing nodes (agents and teams).

Acts as a unified registry for all nodes, providing:
- Centralized node management and lookup
- Shared dependency injection
- Connection management
- Resource coordination

Nodes can be accessed through:
- nodes: All registered nodes (agents and teams)
- agents: Only Agent instances
- teams: Only Team instances

                                              Source code in src/llmling_agent/delegation/pool.py
                                              486
                                              487
                                              488
                                              489
                                              490
                                              491
                                              492
                                              493
                                              494
                                              495
                                              496
                                              497
                                              498
                                              499
                                              500
                                              501
                                              502
                                              503
                                              504
                                              505
                                              506
                                              507
                                              508
                                              509
                                              510
                                              511
                                              512
                                              513
                                              514
                                              515
                                              516
                                              517
                                              518
                                              519
                                              520
                                              521
                                              522
                                              523
                                              524
                                              525
                                              526
                                              527
                                              528
                                              529
                                              530
                                              531
                                              532
                                              533
                                              534
                                              535
                                              536
                                              537
                                              538
                                              539
                                              540
                                              541
                                              542
                                              543
                                              544
                                              545
                                              546
                                              547
                                              548
                                              549
                                              550
                                              551
                                              552
                                              553
                                              554
                                              555
                                              556
                                              557
                                              558
                                              559
                                              560
                                              561
                                              562
                                              563
                                              564
                                              565
                                              566
                                              567
                                              568
                                              569
                                              570
                                              571
                                              572
                                              573
                                              574
                                              575
                                              576
                                              577
                                              578
                                              579
                                              580
                                              581
                                              582
                                              583
                                              584
                                              585
                                              586
                                              587
                                              588
                                              589
                                              590
                                              591
                                              592
                                              593
                                              594
                                              595
                                              596
                                              597
                                              598
                                              599
                                              600
                                              601
                                              602
                                              603
                                              604
                                              605
                                              606
                                              607
                                              608
                                              609
                                              610
                                              611
                                              612
                                              613
                                              614
                                              615
                                              616
                                              617
                                              618
                                              619
                                              620
                                              621
                                              622
                                              623
                                              624
                                              625
                                              626
                                              627
                                              628
                                              629
                                              630
                                              631
                                              632
                                              633
                                              634
                                              635
                                              636
                                              637
                                              638
                                              639
                                              640
                                              641
                                              642
                                              643
                                              644
                                              645
                                              646
                                              647
                                              648
                                              649
                                              650
                                              651
                                              652
                                              653
                                              654
                                              655
                                              656
                                              657
                                              658
                                              659
                                              660
                                              661
                                              662
                                              663
                                              664
                                              665
                                              666
                                              667
                                              668
                                              669
                                              670
                                              671
                                              672
                                              673
                                              674
                                              675
                                              676
                                              677
                                              678
                                              679
                                              680
                                              681
                                              682
                                              683
                                              684
                                              685
                                              686
                                              687
                                              688
                                              689
                                              690
                                              691
                                              692
                                              693
                                              694
                                              695
                                              696
                                              697
                                              698
                                              699
                                              700
                                              701
                                              702
                                              703
                                              704
                                              705
                                              706
                                              707
                                              708
                                              709
                                              710
                                              711
                                              712
                                              713
                                              714
                                              715
                                              716
                                              717
                                              718
                                              719
                                              720
                                              721
                                              722
                                              723
                                              724
                                              725
                                              726
                                              727
                                              728
                                              729
                                              730
                                              731
                                              732
                                              733
                                              734
                                              735
                                              736
                                              737
                                              738
                                              739
                                              740
                                              741
                                              742
                                              743
                                              744
                                              745
                                              746
                                              747
                                              748
                                              749
                                              750
                                              751
                                              752
                                              753
                                              754
                                              755
                                              756
                                              757
                                              758
                                              759
                                              760
                                              761
                                              762
                                              763
                                              764
                                              765
                                              766
                                              767
                                              768
                                              769
                                              770
                                              771
                                              772
                                              773
                                              774
                                              775
                                              776
                                              777
                                              778
                                              779
                                              780
                                              781
                                              782
                                              783
                                              784
                                              785
                                              786
                                              787
                                              788
                                              789
                                              790
                                              791
                                              792
                                              793
                                              794
                                              795
                                              796
                                              797
                                              798
                                              799
                                              800
                                              801
                                              802
                                              803
                                              804
                                              805
                                              806
                                              807
                                              808
                                              809
                                              810
                                              811
                                              812
                                              813
                                              814
                                              815
                                              816
                                              817
                                              818
                                              819
                                              820
                                              821
                                              822
                                              823
                                              824
                                              825
                                              826
                                              827
                                              828
                                              829
                                              830
                                              831
                                              832
                                              833
                                              834
                                              835
                                              836
                                              837
                                              838
                                              839
                                              840
                                              841
                                              842
                                              843
                                              844
                                              845
                                              846
                                              847
                                              848
                                              849
                                              850
                                              851
                                              852
                                              853
                                              854
                                              855
                                              856
                                              857
                                              858
                                              859
                                              860
                                              861
                                              862
                                              863
                                              864
                                              865
                                              866
                                              867
                                              868
                                              869
                                              870
                                              871
                                              872
                                              873
                                              874
                                              875
                                              876
                                              877
                                              878
                                              879
                                              880
                                              881
                                              882
                                              883
                                              884
                                              885
                                              886
                                              887
                                              888
                                              889
                                              890
                                              891
                                              892
                                              893
                                              894
                                              895
                                              896
                                              897
                                              898
                                              899
                                              900
                                              901
                                              902
                                              903
                                              904
                                              905
                                              906
                                              907
                                              908
                                              909
                                              910
                                              911
                                              912
                                              913
                                              914
                                              915
                                              916
                                              917
                                              918
                                              919
                                              920
                                              921
                                              922
                                              923
                                              924
                                              925
                                              926
                                              927
                                              928
                                              929
                                              930
                                              931
                                              class AgentPool[TPoolDeps](BaseRegistry[NodeName, MessageEmitter[Any, Any]]):
                                                  """Pool managing message processing nodes (agents and teams).
                                              
                                                  Acts as a unified registry for all nodes, providing:
                                                  - Centralized node management and lookup
                                                  - Shared dependency injection
                                                  - Connection management
                                                  - Resource coordination
                                              
                                                  Nodes can be accessed through:
                                                  - nodes: All registered nodes (agents and teams)
                                                  - agents: Only Agent instances
                                                  - teams: Only Team instances
                                                  """
                                              
                                                  def __init__(
                                                      self,
                                                      manifest: StrPath | AgentsManifest | None = None,
                                                      *,
                                                      shared_deps: TPoolDeps | None = None,
                                                      connect_nodes: bool = True,
                                                      input_provider: InputProvider | None = None,
                                                      parallel_load: bool = True,
                                                  ):
                                                      """Initialize agent pool with immediate agent creation.
                                              
                                                      Args:
                                                          manifest: Agent configuration manifest
                                                          shared_deps: Dependencies to share across all nodes
                                                          connect_nodes: Whether to set up forwarding connections
                                                          input_provider: Input provider for tool / step confirmations / HumanAgents
                                                          parallel_load: Whether to load nodes in parallel (async)
                                              
                                                      Raises:
                                                          ValueError: If manifest contains invalid node configurations
                                                          RuntimeError: If node initialization fails
                                                      """
                                                      super().__init__()
                                                      from llmling_agent.mcp_server.manager import MCPManager
                                                      from llmling_agent.models.manifest import AgentsManifest
                                                      from llmling_agent.storage import StorageManager
                                              
                                                      match manifest:
                                                          case None:
                                                              self.manifest = AgentsManifest()
                                                          case str():
                                                              self.manifest = AgentsManifest.from_file(manifest)
                                                          case AgentsManifest():
                                                              self.manifest = manifest
                                                          case _:
                                                              msg = f"Invalid config path: {manifest}"
                                                              raise ValueError(msg)
                                                      self.shared_deps = shared_deps
                                                      self._input_provider = input_provider
                                                      self.exit_stack = AsyncExitStack()
                                                      self.parallel_load = parallel_load
                                                      self.storage = StorageManager(self.manifest.storage)
                                                      self.connection_registry = ConnectionRegistry()
                                                      self.mcp = MCPManager(
                                                          name="pool_mcp", servers=self.manifest.get_mcp_servers(), owner="pool"
                                                      )
                                                      self._tasks = TaskRegistry()
                                                      # Register tasks from manifest
                                                      for name, task in self.manifest.jobs.items():
                                                          self._tasks.register(name, task)
                                                      self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                                                      if self.manifest.pool_server and self.manifest.pool_server.enabled:
                                                          from llmling_agent.resource_providers.pool import PoolResourceProvider
                                                          from llmling_agent_mcp.server import LLMLingServer
                                              
                                                          provider = PoolResourceProvider(
                                                              self, zed_mode=self.manifest.pool_server.zed_mode
                                                          )
                                                          self.server: LLMLingServer | None = LLMLingServer(
                                                              provider=provider,
                                                              config=self.manifest.pool_server,
                                                          )
                                                      else:
                                                          self.server = None
                                                      # Create requested agents immediately
                                                      for name in self.manifest.agents:
                                                          agent = self.manifest.get_agent(name, deps=shared_deps)
                                                          self.register(name, agent)
                                              
                                                      # Then set up worker relationships
                                                      for agent in self.agents.values():
                                                          self.setup_agent_workers(agent)
                                                      self._create_teams()
                                                      # Set up forwarding connections
                                                      if connect_nodes:
                                                          self._connect_nodes()
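
                                                   # Usage sketch: a minimal way to build a pool from a manifest file and run
                                                   # one of its agents. "agents.yml" and the agent name "assistant" are
                                                   # illustrative placeholders, not names defined by the library.
                                                   #
                                                   #     import asyncio
                                                   #     from llmling_agent.delegation.pool import AgentPool
                                                   #
                                                   #     async def main() -> None:
                                                   #         async with AgentPool("agents.yml") as pool:
                                                   #             agent = pool.get_agent("assistant")
                                                   #             print(await agent.run("Summarize the latest report"))
                                                   #
                                                   #     asyncio.run(main())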
                                              
                                                  async def __aenter__(self) -> Self:
                                                      """Enter async context and initialize all agents."""
                                                      try:
                                                          # Add MCP tool provider to all agents
                                                          agents = list(self.agents.values())
                                                          teams = list(self.teams.values())
                                                          for agent in agents:
                                                              agent.tools.add_provider(self.mcp)
                                              
                                                          # Collect all components to initialize
                                                          components: list[AbstractAsyncContextManager[Any]] = [
                                                              self.mcp,
                                                              *agents,
                                                              *teams,
                                                          ]
                                              
                                                          # Add MCP server if configured
                                                          if self.server:
                                                              components.append(self.server)
                                              
                                                          # Initialize all components
                                                          if self.parallel_load:
                                                              await asyncio.gather(
                                                                  *(self.exit_stack.enter_async_context(c) for c in components)
                                                              )
                                                          else:
                                                              for component in components:
                                                                  await self.exit_stack.enter_async_context(component)
                                              
                                                      except Exception as e:
                                                          await self.cleanup()
                                                          msg = "Failed to initialize agent pool"
                                                          logger.exception(msg, exc_info=e)
                                                          raise RuntimeError(msg) from e
                                                      return self
                                              
                                                  async def __aexit__(
                                                      self,
                                                      exc_type: type[BaseException] | None,
                                                      exc_val: BaseException | None,
                                                      exc_tb: TracebackType | None,
                                                  ):
                                                      """Exit async context."""
                                                      # Remove MCP tool provider from all agents
                                                      for agent in self.agents.values():
                                                          if self.mcp in agent.tools.providers:
                                                              agent.tools.remove_provider(self.mcp)
                                                      await self.cleanup()
                                              
                                                  async def cleanup(self):
                                                      """Clean up all agents."""
                                                      await self.exit_stack.aclose()
                                                      self.clear()
                                              
                                                  @overload
                                                  def create_team_run(
                                                      self,
                                                      agents: Sequence[str],
                                                      validator: MessageNode[Any, TResult] | None = None,
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> TeamRun[TPoolDeps, TResult]: ...
                                              
                                                  @overload
                                                  def create_team_run[TDeps, TResult](
                                                      self,
                                                      agents: Sequence[MessageNode[TDeps, Any]],
                                                      validator: MessageNode[Any, TResult] | None = None,
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> TeamRun[TDeps, TResult]: ...
                                              
                                                  @overload
                                                  def create_team_run(
                                                      self,
                                                      agents: Sequence[AgentName | MessageNode[Any, Any]],
                                                      validator: MessageNode[Any, TResult] | None = None,
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> TeamRun[Any, TResult]: ...
                                              
                                                  def create_team_run(
                                                      self,
                                                      agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                      validator: MessageNode[Any, TResult] | None = None,
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> TeamRun[Any, TResult]:
                                                      """Create a a sequential TeamRun from a list of Agents.
                                              
                                                      Args:
                                                          agents: List of agent names or team/agent instances (all if None)
                                                          validator: Node to validate the results of the TeamRun
                                                          name: Optional name for the team
                                                          description: Optional description for the team
                                                          shared_prompt: Optional prompt for all agents
                                                          picker: Agent to use for picking agents
                                                          num_picks: Number of agents to pick
                                                          pick_prompt: Prompt to use for picking agents
                                                      """
                                                      from llmling_agent.delegation.teamrun import TeamRun
                                              
                                                      if agents is None:
                                                          agents = list(self.agents.keys())
                                              
                                                      # First resolve/configure agents
                                                      resolved_agents: list[MessageNode[Any, Any]] = []
                                                      for agent in agents:
                                                          if isinstance(agent, str):
                                                              agent = self.get_agent(agent)
                                                          resolved_agents.append(agent)
                                                      team = TeamRun(
                                                          resolved_agents,
                                                          name=name,
                                                          description=description,
                                                          validator=validator,
                                                          shared_prompt=shared_prompt,
                                                          picker=picker,
                                                          num_picks=num_picks,
                                                          pick_prompt=pick_prompt,
                                                      )
                                                      if name:
                                                          self[name] = team
                                                      return team
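
                                                   # Usage sketch: building a sequential TeamRun from agent names that are
                                                   # already registered in the pool. The names are illustrative placeholders,
                                                   # and the final `run(...)` call assumes the usual MessageNode entry point.
                                                   #
                                                   #     run = pool.create_team_run(
                                                   #         ["researcher", "writer"],
                                                   #         name="research_pipeline",
                                                   #         shared_prompt="Work on the user's request step by step.",
                                                   #     )
                                                   #     result = await run.run("Write a short briefing on solar storms.")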
                                              
                                                  @overload
                                                  def create_team(self, agents: Sequence[str]) -> Team[TPoolDeps]: ...
                                              
                                                  @overload
                                                  def create_team[TDeps](
                                                      self,
                                                      agents: Sequence[MessageNode[TDeps, Any]],
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> Team[TDeps]: ...
                                              
                                                  @overload
                                                  def create_team(
                                                      self,
                                                      agents: Sequence[AgentName | MessageNode[Any, Any]],
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> Team[Any]: ...
                                              
                                                  def create_team(
                                                      self,
                                                      agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ) -> Team[Any]:
                                                      """Create a group from agent names or instances.
                                              
                                                      Args:
                                                          agents: List of agent names or instances (all if None)
                                                          name: Optional name for the team
                                                          description: Optional description for the team
                                                          shared_prompt: Optional prompt for all agents
                                                          picker: Agent to use for picking agents
                                                          num_picks: Number of agents to pick
                                                          pick_prompt: Prompt to use for picking agents
                                                      """
                                                      from llmling_agent.delegation.team import Team
                                              
                                                      if agents is None:
                                                          agents = list(self.agents.keys())
                                              
                                                      # First resolve/configure agents
                                                      resolved_agents: list[MessageNode[Any, Any]] = []
                                                      for agent in agents:
                                                          if isinstance(agent, str):
                                                              agent = self.get_agent(agent)
                                                          resolved_agents.append(agent)
                                              
                                                      team = Team(
                                                          name=name,
                                                          description=description,
                                                          agents=resolved_agents,
                                                          shared_prompt=shared_prompt,
                                                          picker=picker,
                                                          num_picks=num_picks,
                                                          pick_prompt=pick_prompt,
                                                      )
                                                      if name:
                                                          self[name] = team
                                                      return team
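
                                                   # Usage sketch: building a parallel Team. Omitting `agents` would include
                                                   # every agent in the pool; the names below are illustrative placeholders.
                                                   #
                                                   #     team = pool.create_team(
                                                   #         ["reviewer_a", "reviewer_b"],
                                                   #         name="review_board",
                                                   #         picker=pool.get_agent("coordinator"),
                                                   #         num_picks=1,
                                                   #     )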
                                              
                                                  @asynccontextmanager
                                                  async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                                                      """Track message flow during a context."""
                                                      tracker = MessageFlowTracker()
                                                      self.connection_registry.message_flow.connect(tracker.track)
                                                      try:
                                                          yield tracker
                                                      finally:
                                                          self.connection_registry.message_flow.disconnect(tracker.track)
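
                                                   # Usage sketch: recording message traffic while some work runs. The agent
                                                   # name is an illustrative placeholder; the tracker's reporting API is not
                                                   # shown in this listing.
                                                   #
                                                   #     async with pool.track_message_flow() as tracker:
                                                   #         await pool.get_agent("assistant").run("Hello")
                                                   #     # `tracker` now holds the messages observed during the block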
                                              
                                                  async def run_event_loop(self):
                                                      """Run pool in event-watching mode until interrupted."""
                                                      import sys
                                              
                                                      print("Starting event watch mode...")
                                                      print("Active nodes: ", ", ".join(self.list_nodes()))
                                                      print("Press Ctrl+C to stop")
                                              
                                                      stop_event = asyncio.Event()
                                              
                                                      if sys.platform != "win32":
                                                          # Unix: Use signal handlers
                                                          loop = asyncio.get_running_loop()
                                                          for sig in (signal.SIGINT, signal.SIGTERM):
                                                              loop.add_signal_handler(sig, stop_event.set)
                                                           # Block until a termination signal sets the stop event
                                                           await stop_event.wait()
                                                      else:
                                                          # Windows: Use keyboard interrupt
                                                          with suppress(KeyboardInterrupt):
                                                              while True:
                                                                  await asyncio.sleep(1)
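
                                                   # Usage sketch: keeping a configured pool alive so event-triggered nodes
                                                   # can react until the process is interrupted. "agents.yml" is a placeholder.
                                                   #
                                                   #     async with AgentPool("agents.yml") as pool:
                                                   #         await pool.run_event_loop()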
                                              
                                                  @property
                                                  def agents(self) -> dict[str, AnyAgent[Any, Any]]:
                                                      """Get agents dict (backward compatibility)."""
                                                      return {
                                                          i.name: i
                                                          for i in self._items.values()
                                                          if isinstance(i, Agent | StructuredAgent)
                                                      }
                                              
                                                  @property
                                                  def teams(self) -> dict[str, BaseTeam[Any, Any]]:
                                                      """Get agents dict (backward compatibility)."""
                                                      from llmling_agent.delegation.base_team import BaseTeam
                                              
                                                      return {i.name: i for i in self._items.values() if isinstance(i, BaseTeam)}
                                              
                                                  @property
                                                  def nodes(self) -> dict[str, MessageNode[Any, Any]]:
                                                      """Get agents dict (backward compatibility)."""
                                                      from llmling_agent import MessageNode
                                              
                                                      return {i.name: i for i in self._items.values() if isinstance(i, MessageNode)}
                                              
                                                  @property
                                                  def event_nodes(self) -> dict[str, EventNode[Any]]:
                                                      """Get agents dict (backward compatibility)."""
                                                      from llmling_agent.messaging.eventnode import EventNode
                                              
                                                      return {i.name: i for i in self._items.values() if isinstance(i, EventNode)}
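
                                                   # Usage sketch: the registry can be inspected by category; keys are node
                                                   # names as registered in the pool.
                                                   #
                                                   #     print(list(pool.agents))   # only Agent / StructuredAgent instances
                                                   #     print(list(pool.teams))    # only Team / TeamRun instances
                                                   #     print(list(pool.nodes))    # every registered MessageNode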
                                              
                                                  @property
                                                  def node_events(self) -> DictEvents:
                                                      """Get node events."""
                                                      return self._items.events
                                              
                                                  @property
                                                  def _error_class(self) -> type[LLMLingError]:
                                                      """Error class for agent operations."""
                                                      return LLMLingError
                                              
                                                  def _validate_item(
                                                      self, item: MessageEmitter[Any, Any] | Any
                                                  ) -> MessageEmitter[Any, Any]:
                                                      """Validate and convert items before registration.
                                              
                                                      Args:
                                                          item: Item to validate
                                              
                                                      Returns:
                                                          Validated Node
                                              
                                                      Raises:
                                                           LLMLingError: If item is not a valid node
                                                      """
                                                      if not isinstance(item, MessageEmitter):
                                                          msg = f"Item must be Agent or Team, got {type(item)}"
                                                          raise self._error_class(msg)
                                                      item.context.pool = self
                                                      return item
                                              
                                                  def _create_teams(self):
                                                      """Create all teams in two phases to allow nesting."""
                                                      # Phase 1: Create empty teams
                                              
                                                      empty_teams: dict[str, BaseTeam[Any, Any]] = {}
                                                      for name, config in self.manifest.teams.items():
                                                          if config.mode == "parallel":
                                                              empty_teams[name] = Team(
                                                                  [], name=name, shared_prompt=config.shared_prompt
                                                              )
                                                          else:
                                                              empty_teams[name] = TeamRun(
                                                                  [], name=name, shared_prompt=config.shared_prompt
                                                              )
                                              
                                                      # Phase 2: Resolve members
                                                      for name, config in self.manifest.teams.items():
                                                          team = empty_teams[name]
                                                          members: list[MessageNode[Any, Any]] = []
                                                          for member in config.members:
                                                              if member in self.agents:
                                                                  members.append(self.agents[member])
                                                              elif member in empty_teams:
                                                                  members.append(empty_teams[member])
                                                              else:
                                                                  msg = f"Unknown team member: {member}"
                                                                  raise ValueError(msg)
                                                          team.agents.extend(members)
                                                          self[name] = team
                                              
                                                  def _connect_nodes(self):
                                                      """Set up connections defined in manifest."""
                                                      # Merge agent and team configs into one dict of nodes with connections
                                                      for name, config in self.manifest.nodes.items():
                                                          source = self[name]
                                                          for target in config.connections or []:
                                                              match target:
                                                                  case NodeConnectionConfig():
                                                                      if target.name not in self:
                                                                          msg = f"Forward target {target.name} not found for {name}"
                                                                          raise ValueError(msg)
                                                                      target_node = self[target.name]
                                                                  case FileConnectionConfig() | CallableConnectionConfig():
                                                                      target_node = Agent(provider=target.get_provider())
                                                                  case _:
                                                                      msg = f"Invalid connection config: {target}"
                                                                      raise ValueError(msg)
                                              
                                                              source.connect_to(
                                                                  target_node,  # type: ignore  # recognized as "Any | BaseTeam[Any, Any]" by mypy?
                                                                  connection_type=target.connection_type,
                                                                  name=name,
                                                                  priority=target.priority,
                                                                  delay=target.delay,
                                                                  queued=target.queued,
                                                                  queue_strategy=target.queue_strategy,
                                                                  transform=target.transform,
                                                                  filter_condition=target.filter_condition.check
                                                                  if target.filter_condition
                                                                  else None,
                                                                  stop_condition=target.stop_condition.check
                                                                  if target.stop_condition
                                                                  else None,
                                                                  exit_condition=target.exit_condition.check
                                                                  if target.exit_condition
                                                                  else None,
                                                              )
                                                              source.connections.set_wait_state(
                                                                  target_node,
                                                                  wait=target.wait_for_completion,
                                                              )
                                              
                                                  @overload
                                                  async def clone_agent[TDeps](
                                                      self,
                                                      agent: AgentName | Agent[TDeps],
                                                      new_name: AgentName | None = None,
                                                      *,
                                                      system_prompts: list[str] | None = None,
                                                      template_context: dict[str, Any] | None = None,
                                                  ) -> Agent[TDeps]: ...
                                              
                                                  @overload
                                                  async def clone_agent[TDeps, TResult](
                                                      self,
                                                      agent: StructuredAgent[TDeps, TResult],
                                                      new_name: AgentName | None = None,
                                                      *,
                                                      system_prompts: list[str] | None = None,
                                                      template_context: dict[str, Any] | None = None,
                                                  ) -> StructuredAgent[TDeps, TResult]: ...
                                              
                                                  async def clone_agent[TDeps, TAgentResult](
                                                      self,
                                                      agent: AgentName | AnyAgent[TDeps, TAgentResult],
                                                      new_name: AgentName | None = None,
                                                      *,
                                                      system_prompts: list[str] | None = None,
                                                      template_context: dict[str, Any] | None = None,
                                                  ) -> AnyAgent[TDeps, TAgentResult]:
                                                      """Create a copy of an agent.
                                              
                                                      Args:
                                                          agent: Agent instance or name to clone
                                                          new_name: Optional name for the clone
                                                          system_prompts: Optional different prompts
                                                          template_context: Variables for template rendering
                                              
                                                      Returns:
                                                          The new agent instance
                                                      """
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                      # Get original config
                                                      if isinstance(agent, str):
                                                          if agent not in self.manifest.agents:
                                                              msg = f"Agent {agent} not found"
                                                              raise KeyError(msg)
                                                          config = self.manifest.agents[agent]
                                                          original_agent: AnyAgent[Any, Any] = self.get_agent(agent)
                                                      else:
                                                          config = agent.context.config  # type: ignore
                                                          original_agent = agent
                                              
                                                      # Create new config
                                                      new_config = config.model_copy(deep=True)
                                              
                                                      # Apply overrides
                                                      if system_prompts:
                                                          new_config.system_prompts = system_prompts
                                              
                                                      # Handle template rendering
                                                      if template_context:
                                                          new_config.system_prompts = new_config.render_system_prompts(template_context)
                                              
                                                      # Create new agent with same runtime
                                                      new_agent = Agent[TDeps](
                                                          runtime=original_agent.runtime,
                                                          context=original_agent.context,
                                                          # result_type=original_agent.actual_type,
                                                          provider=new_config.get_provider(),
                                                          system_prompt=new_config.system_prompts,
                                                          name=new_name or f"{config.name}_copy_{len(self.agents)}",
                                                      )
                                                      if isinstance(original_agent, StructuredAgent):
                                                          new_agent = new_agent.to_structured(original_agent.actual_type)
                                              
                                                      # Register in pool
                                                      agent_name = new_agent.name
                                                      self.manifest.agents[agent_name] = new_config
                                                      self.register(agent_name, new_agent)
                                                      return await self.exit_stack.enter_async_context(new_agent)
                                              
                                                  @overload
                                                  async def create_agent(
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      session: SessionIdType | SessionQuery = None,
                                                      name_override: str | None = None,
                                                  ) -> Agent[TPoolDeps]: ...
                                              
                                                  @overload
                                                  async def create_agent[TCustomDeps](
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      deps: TCustomDeps,
                                                      session: SessionIdType | SessionQuery = None,
                                                      name_override: str | None = None,
                                                  ) -> Agent[TCustomDeps]: ...
                                              
                                                  @overload
                                                  async def create_agent[TResult](
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      return_type: type[TResult],
                                                      session: SessionIdType | SessionQuery = None,
                                                      name_override: str | None = None,
                                                  ) -> StructuredAgent[TPoolDeps, TResult]: ...
                                              
                                                  @overload
                                                  async def create_agent[TCustomDeps, TResult](
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      deps: TCustomDeps,
                                                      return_type: type[TResult],
                                                      session: SessionIdType | SessionQuery = None,
                                                      name_override: str | None = None,
                                                  ) -> StructuredAgent[TCustomDeps, TResult]: ...
                                              
                                                  async def create_agent(
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      deps: Any | None = None,
                                                      return_type: Any | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                      name_override: str | None = None,
                                                  ) -> AnyAgent[Any, Any]:
                                                      """Create a new agent instance from configuration.
                                              
                                                      Args:
                                                          name: Name of the agent configuration to use
                                                          deps: Optional custom dependencies (overrides pool deps)
                                                          return_type: Optional type for structured responses
                                                          session: Optional session ID or query to recover conversation
                                                          name_override: Optional different name for this instance
                                              
                                                      Returns:
                                                          New agent instance with the specified configuration
                                              
                                                      Raises:
                                                          KeyError: If agent configuration not found
                                                          ValueError: If configuration is invalid
                                                      """
                                                      if name not in self.manifest.agents:
                                                          msg = f"Agent configuration {name!r} not found"
                                                          raise KeyError(msg)
                                              
                                                      # Use Manifest.get_agent for proper initialization
                                                      final_deps = deps if deps is not None else self.shared_deps
                                                      agent = self.manifest.get_agent(name, deps=final_deps)
                                                      # Override name if requested
                                                      if name_override:
                                                          agent.name = name_override
                                              
                                                      # Set pool reference
                                                      agent.context.pool = self
                                              
                                                      # Handle session if provided
                                                      if session:
                                                          agent.conversation.load_history_from_database(session=session)
                                              
                                                      # Initialize agent through exit stack
                                                      agent = await self.exit_stack.enter_async_context(agent)
                                              
                                                      # Override structured configuration if provided
                                                      if return_type is not None:
                                                          return agent.to_structured(return_type)
                                              
                                                      return agent
                                              
                                                  def setup_agent_workers(self, agent: AnyAgent[Any, Any]):
                                                      """Set up workers for an agent from configuration."""
                                                      for worker_config in agent.context.config.workers:
                                                          try:
                                                              worker = self.nodes[worker_config.name]
                                                              match worker_config:
                                                                  case TeamWorkerConfig():
                                                                      agent.register_worker(worker)
                                                                  case AgentWorkerConfig():
                                                                      agent.register_worker(
                                                                          worker,
                                                                          name=worker_config.name,
                                                                          reset_history_on_run=worker_config.reset_history_on_run,
                                                                          pass_message_history=worker_config.pass_message_history,
                                                                          share_context=worker_config.share_context,
                                                                      )
                                                          except KeyError as e:
                                                              msg = f"Worker agent {worker_config.name!r} not found"
                                                              raise ValueError(msg) from e
                                              
                                                  @overload
                                                  def get_agent(
                                                      self,
                                                      agent: AgentName | Agent[Any],
                                                      *,
                                                      model_override: str | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                  ) -> Agent[TPoolDeps]: ...
                                              
                                                  @overload
                                                  def get_agent[TResult](
                                                      self,
                                                      agent: AgentName | Agent[Any],
                                                      *,
                                                      return_type: type[TResult],
                                                      model_override: str | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                  ) -> StructuredAgent[TPoolDeps, TResult]: ...
                                              
                                                  @overload
                                                  def get_agent[TCustomDeps](
                                                      self,
                                                      agent: AgentName | Agent[Any],
                                                      *,
                                                      deps: TCustomDeps,
                                                      model_override: str | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                  ) -> Agent[TCustomDeps]: ...
                                              
                                                  @overload
                                                  def get_agent[TCustomDeps, TResult](
                                                      self,
                                                      agent: AgentName | Agent[Any],
                                                      *,
                                                      deps: TCustomDeps,
                                                      return_type: type[TResult],
                                                      model_override: str | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                  ) -> StructuredAgent[TCustomDeps, TResult]: ...
                                              
                                                  def get_agent(
                                                      self,
                                                      agent: AgentName | Agent[Any],
                                                      *,
                                                      deps: Any | None = None,
                                                      return_type: Any | None = None,
                                                      model_override: str | None = None,
                                                      session: SessionIdType | SessionQuery = None,
                                                  ) -> AnyAgent[Any, Any]:
                                                      """Get or configure an agent from the pool.
                                              
                                                      This method provides flexible agent configuration with dependency injection:
                                                      - Without deps: Agent uses pool's shared dependencies
                                                      - With deps: Agent uses provided custom dependencies
                                                      - With return_type: Returns a StructuredAgent with type validation
                                              
                                                      Args:
                                                          agent: Either agent name or instance
                                                          deps: Optional custom dependencies (overrides shared deps)
                                                          return_type: Optional type for structured responses
                                                          model_override: Optional model override
                                                          session: Optional session ID or query to recover conversation
                                              
                                                      Returns:
                                                          Either:
                                                          - Agent[TPoolDeps] when using pool's shared deps
                                                          - Agent[TCustomDeps] when custom deps provided
                                                          - StructuredAgent when return_type provided
                                              
                                                      Raises:
                                                          KeyError: If agent name not found
                                                          ValueError: If configuration is invalid
                                                      """
                                                      from llmling_agent.agent import Agent
                                                      from llmling_agent.agent.context import AgentContext
                                              
                                                      # Get base agent
                                                      base = agent if isinstance(agent, Agent) else self.agents[agent]
                                              
                                                      # Setup context and dependencies
                                                      if base.context is None:
                                                          base.context = AgentContext[Any].create_default(base.name)
                                              
                                                      # Use custom deps if provided, otherwise use shared deps
                                                      base.context.data = deps if deps is not None else self.shared_deps
                                                      base.context.pool = self
                                              
                                                      # Apply overrides
                                                      if model_override:
                                                          base.set_model(model_override)
                                              
                                                      if session:
                                                          base.conversation.load_history_from_database(session=session)
                                              
                                                      # Convert to structured if needed
                                                      if return_type is not None:
                                                          return base.to_structured(return_type)
                                              
                                                      return base
                                              
                                                  def list_nodes(self) -> list[str]:
                                                      """List available agent names."""
                                                      return list(self.list_items())
                                              
                                                  def get_job(self, name: str) -> Job[Any, Any]:
                                                      return self._tasks[name]
                                              
                                                  def register_task(self, name: str, task: Job[Any, Any]):
                                                      self._tasks.register(name, task)
                                              
                                                  @overload
                                                  async def add_agent(
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      result_type: None = None,
                                                      **kwargs: Unpack[AgentKwargs],
                                                  ) -> Agent[Any]: ...
                                              
                                                  @overload
                                                  async def add_agent[TResult](
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      result_type: type[TResult] | str | ResponseDefinition,
                                                      **kwargs: Unpack[AgentKwargs],
                                                  ) -> StructuredAgent[Any, TResult]: ...
                                              
                                                  async def add_agent(
                                                      self,
                                                      name: AgentName,
                                                      *,
                                                      result_type: type[Any] | str | ResponseDefinition | None = None,
                                                      **kwargs: Unpack[AgentKwargs],
                                                  ) -> Agent[Any] | StructuredAgent[Any, Any]:
                                                      """Add a new permanent agent to the pool.
                                              
                                                      Args:
                                                          name: Name for the new agent
                                                          result_type: Optional type for structured responses:
                                                              - None: Regular unstructured agent
                                                              - type: Python type for validation
                                                              - str: Name of response definition
                                                              - ResponseDefinition: Complete response definition
                                                          **kwargs: Additional agent configuration
                                              
                                                      Returns:
                                                          Either a regular Agent or StructuredAgent depending on result_type
                                                      """
                                                      from llmling_agent.agent import Agent
                                              
                                                      agent: AnyAgent[Any, Any] = Agent(name=name, **kwargs)
                                                      agent.tools.add_provider(self.mcp)
                                                      agent = await self.exit_stack.enter_async_context(agent)
                                                      # Convert to structured if needed
                                                      if result_type is not None:
                                                          agent = agent.to_structured(result_type)
                                                      self.register(name, agent)
                                                      return agent
                                              
                                                  def get_mermaid_diagram(
                                                      self,
                                                      include_details: bool = True,
                                                  ) -> str:
                                                      """Generate mermaid flowchart of all agents and their connections.
                                              
                                                      Args:
                                                          include_details: Whether to show connection details (types, queues, etc)
                                                      """
                                                      lines = ["flowchart LR"]
                                              
                                                      # Add all agents as nodes
                                                      for name in self.agents:
                                                          lines.append(f"    {name}[{name}]")  # noqa: PERF401
                                              
                                                      # Add all connections as edges
                                                      for agent in self.agents.values():
                                                          connections = agent.connections.get_connections()
                                                          for talk in connections:
                                                              talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                                              source = talk.source.name
                                                              for target in talk.targets:
                                                                  if include_details:
                                                                      details: list[str] = []
                                                                      details.append(talk.connection_type)
                                                                      if talk.queued:
                                                                          details.append(f"queued({talk.queue_strategy})")
                                                                      if fn := talk.filter_condition:  # type: ignore
                                                                          details.append(f"filter:{fn.__name__}")
                                                                      if fn := talk.stop_condition:  # type: ignore
                                                                          details.append(f"stop:{fn.__name__}")
                                                                      if fn := talk.exit_condition:  # type: ignore
                                                                          details.append(f"exit:{fn.__name__}")
                                              
                                                                      label = f"|{' '.join(details)}|" if details else ""
                                                                      lines.append(f"    {source}--{label}-->{target.name}")
                                                                  else:
                                                                      lines.append(f"    {source}-->{target.name}")
                                              
                                                      return "\n".join(lines)
                                              

                                              _error_class property

                                              _error_class: type[LLMLingError]
                                              

                                              Error class for agent operations.

                                              agents property

                                              agents: dict[str, AnyAgent[Any, Any]]
                                              

                                              Get agents dict (backward compatibility).

                                              event_nodes property

                                              event_nodes: dict[str, EventNode[Any]]
                                              

                                               Get event nodes dict (backward compatibility).

                                              node_events property

                                              node_events: DictEvents
                                              

                                              Get node events.

                                              nodes property

                                              nodes: dict[str, MessageNode[Any, Any]]
                                              

                                               Get nodes dict (backward compatibility).

                                              teams property

                                              teams: dict[str, BaseTeam[Any, Any]]
                                              

                                               Get teams dict (backward compatibility).
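
                                               A minimal sketch of using these typed views together, assuming an already-initialized pool (see __aenter__ below); node names are placeholders:

                                               from llmling_agent.delegation.pool import AgentPool


                                               def describe_pool(pool: AgentPool) -> None:
                                                   """Print which kind of node each registered name resolves to."""
                                                   for name in pool.list_nodes():
                                                       if name in pool.agents:        # Agent / StructuredAgent instances
                                                           kind = "agent"
                                                       elif name in pool.teams:       # Team / TeamRun instances
                                                           kind = "team"
                                                       else:                          # other MessageNode / EventNode types
                                                           kind = "other node"
                                                       print(f"{name}: {kind}")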

                                              __aenter__ async

                                              __aenter__() -> Self
                                              

                                              Enter async context and initialize all agents.

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def __aenter__(self) -> Self:
                                                  """Enter async context and initialize all agents."""
                                                  try:
                                                      # Add MCP tool provider to all agents
                                                      agents = list(self.agents.values())
                                                      teams = list(self.teams.values())
                                                      for agent in agents:
                                                          agent.tools.add_provider(self.mcp)
                                              
                                                      # Collect all components to initialize
                                                      components: list[AbstractAsyncContextManager[Any]] = [
                                                          self.mcp,
                                                          *agents,
                                                          *teams,
                                                      ]
                                              
                                                      # Add MCP server if configured
                                                      if self.server:
                                                          components.append(self.server)
                                              
                                                      # Initialize all components
                                                      if self.parallel_load:
                                                          await asyncio.gather(
                                                              *(self.exit_stack.enter_async_context(c) for c in components)
                                                          )
                                                      else:
                                                          for component in components:
                                                              await self.exit_stack.enter_async_context(component)
                                              
                                                  except Exception as e:
                                                      await self.cleanup()
                                                      msg = "Failed to initialize agent pool"
                                                      logger.exception(msg, exc_info=e)
                                                      raise RuntimeError(msg) from e
                                                  return self
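
                                               As a sketch (assuming "agents.yml" is a valid manifest path): entering the pool initializes the MCP manager, all agents and teams, and the optional pool server; with parallel_load=False the components are entered one at a time instead of via asyncio.gather:

                                               import asyncio

                                               from llmling_agent.delegation.pool import AgentPool


                                               async def main() -> None:
                                                   # Any initialization failure is cleaned up and re-raised as
                                                   # RuntimeError("Failed to initialize agent pool").
                                                   async with AgentPool("agents.yml", parallel_load=False) as pool:
                                                       print(f"initialized {len(pool.nodes)} nodes")


                                               asyncio.run(main())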
                                              

                                              __aexit__ async

                                              __aexit__(
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              )
                                              

                                              Exit async context.

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def __aexit__(
                                                  self,
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              ):
                                                  """Exit async context."""
                                                  # Remove MCP tool provider from all agents
                                                  for agent in self.agents.values():
                                                      if self.mcp in agent.tools.providers:
                                                          agent.tools.remove_provider(self.mcp)
                                                  await self.cleanup()
                                              

                                              __init__

                                              __init__(
                                                  manifest: StrPath | AgentsManifest | None = None,
                                                  *,
                                                  shared_deps: TPoolDeps | None = None,
                                                  connect_nodes: bool = True,
                                                  input_provider: InputProvider | None = None,
                                                  parallel_load: bool = True,
                                              )
                                              

                                              Initialize agent pool with immediate agent creation.

                                              Parameters:

                                               manifest (StrPath | AgentsManifest | None, default None): Agent configuration manifest
                                               shared_deps (TPoolDeps | None, default None): Dependencies to share across all nodes
                                               connect_nodes (bool, default True): Whether to set up forwarding connections
                                               input_provider (InputProvider | None, default None): Input provider for tool / step confirmations / HumanAgents
                                               parallel_load (bool, default True): Whether to load nodes in parallel (async)

                                              Raises:

                                               ValueError: If manifest contains invalid node configurations
                                               RuntimeError: If node initialization fails

                                              Source code in src/llmling_agent/delegation/pool.py
                                              def __init__(
                                                  self,
                                                  manifest: StrPath | AgentsManifest | None = None,
                                                  *,
                                                  shared_deps: TPoolDeps | None = None,
                                                  connect_nodes: bool = True,
                                                  input_provider: InputProvider | None = None,
                                                  parallel_load: bool = True,
                                              ):
                                                  """Initialize agent pool with immediate agent creation.
                                              
                                                  Args:
                                                      manifest: Agent configuration manifest
                                                      shared_deps: Dependencies to share across all nodes
                                                      connect_nodes: Whether to set up forwarding connections
                                                      input_provider: Input provider for tool / step confirmations / HumanAgents
                                                      parallel_load: Whether to load nodes in parallel (async)
                                              
                                                  Raises:
                                                      ValueError: If manifest contains invalid node configurations
                                                      RuntimeError: If node initialization fails
                                                  """
                                                  super().__init__()
                                                  from llmling_agent.mcp_server.manager import MCPManager
                                                  from llmling_agent.models.manifest import AgentsManifest
                                                  from llmling_agent.storage import StorageManager
                                              
                                                  match manifest:
                                                      case None:
                                                          self.manifest = AgentsManifest()
                                                      case str():
                                                          self.manifest = AgentsManifest.from_file(manifest)
                                                      case AgentsManifest():
                                                          self.manifest = manifest
                                                      case _:
                                                          msg = f"Invalid config path: {manifest}"
                                                          raise ValueError(msg)
                                                  self.shared_deps = shared_deps
                                                  self._input_provider = input_provider
                                                  self.exit_stack = AsyncExitStack()
                                                  self.parallel_load = parallel_load
                                                  self.storage = StorageManager(self.manifest.storage)
                                                  self.connection_registry = ConnectionRegistry()
                                                  self.mcp = MCPManager(
                                                      name="pool_mcp", servers=self.manifest.get_mcp_servers(), owner="pool"
                                                  )
                                                  self._tasks = TaskRegistry()
                                                  # Register tasks from manifest
                                                  for name, task in self.manifest.jobs.items():
                                                      self._tasks.register(name, task)
                                                  self.pool_talk = TeamTalk[Any].from_nodes(list(self.nodes.values()))
                                                  if self.manifest.pool_server and self.manifest.pool_server.enabled:
                                                      from llmling_agent.resource_providers.pool import PoolResourceProvider
                                                      from llmling_agent_mcp.server import LLMLingServer
                                              
                                                      provider = PoolResourceProvider(
                                                          self, zed_mode=self.manifest.pool_server.zed_mode
                                                      )
                                                      self.server: LLMLingServer | None = LLMLingServer(
                                                          provider=provider,
                                                          config=self.manifest.pool_server,
                                                      )
                                                  else:
                                                      self.server = None
                                                  # Create requested agents immediately
                                                  for name in self.manifest.agents:
                                                      agent = self.manifest.get_agent(name, deps=shared_deps)
                                                      self.register(name, agent)
                                              
                                                  # Then set up worker relationships
                                                  for agent in self.agents.values():
                                                      self.setup_agent_workers(agent)
                                                  self._create_teams()
                                                  # Set up forwarding connections
                                                  if connect_nodes:
                                                      self._connect_nodes()
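
For orientation, the sketch below constructs a pool from the signature above; the manifest filename, the attribute access, and the explicit cleanup are illustrative assumptions rather than an excerpt from the project.

    import asyncio

    from llmling_agent.delegation.pool import AgentPool


    async def main() -> None:
        # "agents.yml" is a hypothetical manifest file; an AgentsManifest
        # instance or None (for an empty manifest) is accepted as well.
        pool = AgentPool("agents.yml", connect_nodes=True, parallel_load=True)
        try:
            # Agents declared in the manifest are created eagerly in __init__.
            print(list(pool.agents.keys()))
        finally:
            await pool.cleanup()  # close the exit stack and clear the registry


    asyncio.run(main())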
                                              

                                              _connect_nodes

                                              _connect_nodes()
                                              

                                              Set up connections defined in manifest.

                                              Source code in src/llmling_agent/delegation/pool.py
                                              def _connect_nodes(self):
                                                  """Set up connections defined in manifest."""
                                                  # Merge agent and team configs into one dict of nodes with connections
                                                  for name, config in self.manifest.nodes.items():
                                                      source = self[name]
                                                      for target in config.connections or []:
                                                          match target:
                                                              case NodeConnectionConfig():
                                                                  if target.name not in self:
                                                                      msg = f"Forward target {target.name} not found for {name}"
                                                                      raise ValueError(msg)
                                                                  target_node = self[target.name]
                                                              case FileConnectionConfig() | CallableConnectionConfig():
                                                                  target_node = Agent(provider=target.get_provider())
                                                              case _:
                                                                  msg = f"Invalid connection config: {target}"
                                                                  raise ValueError(msg)
                                              
                                                          source.connect_to(
                                                              target_node,  # type: ignore  # recognized as "Any | BaseTeam[Any, Any]" by mypy?
                                                              connection_type=target.connection_type,
                                                              name=name,
                                                              priority=target.priority,
                                                              delay=target.delay,
                                                              queued=target.queued,
                                                              queue_strategy=target.queue_strategy,
                                                              transform=target.transform,
                                                              filter_condition=target.filter_condition.check
                                                              if target.filter_condition
                                                              else None,
                                                              stop_condition=target.stop_condition.check
                                                              if target.stop_condition
                                                              else None,
                                                              exit_condition=target.exit_condition.check
                                                              if target.exit_condition
                                                              else None,
                                                          )
                                                          source.connections.set_wait_state(
                                                              target_node,
                                                              wait=target.wait_for_completion,
                                                          )
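
A rough manual counterpart of one such manifest connection, assuming two nodes named "writer" and "editor" exist in the pool and that the keyword arguments shown above all have defaults on connect_to:

    source = pool["writer"]
    target = pool["editor"]
    # Minimal call; manifest-driven connections additionally pass priority,
    # delay, queueing and filter/stop/exit conditions as shown above.
    source.connect_to(target)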
                                              

                                              _create_teams

                                              _create_teams()
                                              

                                              Create all teams in two phases to allow nesting.

                                              Source code in src/llmling_agent/delegation/pool.py
                                              def _create_teams(self):
                                                  """Create all teams in two phases to allow nesting."""
                                                  # Phase 1: Create empty teams
                                              
                                                  empty_teams: dict[str, BaseTeam[Any, Any]] = {}
                                                  for name, config in self.manifest.teams.items():
                                                      if config.mode == "parallel":
                                                          empty_teams[name] = Team(
                                                              [], name=name, shared_prompt=config.shared_prompt
                                                          )
                                                      else:
                                                          empty_teams[name] = TeamRun(
                                                              [], name=name, shared_prompt=config.shared_prompt
                                                          )
                                              
                                                  # Phase 2: Resolve members
                                                  for name, config in self.manifest.teams.items():
                                                      team = empty_teams[name]
                                                      members: list[MessageNode[Any, Any]] = []
                                                      for member in config.members:
                                                          if member in self.agents:
                                                              members.append(self.agents[member])
                                                          elif member in empty_teams:
                                                              members.append(empty_teams[member])
                                                          else:
                                                              msg = f"Unknown team member: {member}"
                                                              raise ValueError(msg)
                                                      team.agents.extend(members)
                                                      self[name] = team
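
The two phases matter because a team may list another team as a member: creating every team as an empty shell first means any nested team can be looked up during member resolution, regardless of declaration order. A generic sketch of the idea (names are illustrative, not taken from the manifest schema):

    # Phase 1: empty shells so forward references can resolve.
    shells: dict[str, list[str]] = {"inner": [], "outer": []}

    # Phase 2: fill members; "outer" can reference "inner" even though
    # a single-pass construction would have hit the name too early.
    declared = {"outer": ["inner", "agent_b"], "inner": ["agent_a"]}
    for team_name, members in declared.items():
        shells[team_name].extend(members)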
                                              

                                              _validate_item

                                              _validate_item(item: MessageEmitter[Any, Any] | Any) -> MessageEmitter[Any, Any]
                                              

                                              Validate and convert items before registration.

                                              Parameters:

                                              Name Type Description Default
                                              item MessageEmitter[Any, Any] | Any

                                              Item to validate

                                              required

                                              Returns:

                                              Type Description
                                              MessageEmitter[Any, Any]

                                              Validated Node

                                              Raises:

                                              Type Description
                                              LLMlingError

                                              If item is not a valid node

                                              Source code in src/llmling_agent/delegation/pool.py
                                              def _validate_item(
                                                  self, item: MessageEmitter[Any, Any] | Any
                                              ) -> MessageEmitter[Any, Any]:
                                                  """Validate and convert items before registration.
                                              
                                                  Args:
                                                      item: Item to validate
                                              
                                                  Returns:
                                                      Validated Node
                                              
                                                  Raises:
                                                      LLMlingError: If item is not a valid node
                                                  """
                                                  if not isinstance(item, MessageEmitter):
                                                      msg = f"Item must be Agent or Team, got {type(item)}"
                                                      raise self._error_class(msg)
                                                  item.context.pool = self
                                                  return item
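
This hook runs whenever something is registered on the pool (for example via item assignment, as in _create_teams above), so non-node objects are rejected early. A small hedged illustration, assuming agents named "writer" and "editor" exist:

    review = pool.create_team(["writer", "editor"])
    pool["review_team"] = review      # a MessageNode, passes _validate_item

    pool["oops"] = object()           # not a MessageEmitter -> raises the
                                      # pool's error class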
                                              

                                              add_agent async

                                              add_agent(
                                                  name: AgentName, *, result_type: None = None, **kwargs: Unpack[AgentKwargs]
                                              ) -> Agent[Any]
                                              
                                              add_agent(
                                                  name: AgentName,
                                                  *,
                                                  result_type: type[TResult] | str | ResponseDefinition,
                                                  **kwargs: Unpack[AgentKwargs],
                                              ) -> StructuredAgent[Any, TResult]
                                              
                                              add_agent(
                                                  name: AgentName,
                                                  *,
                                                  result_type: type[Any] | str | ResponseDefinition | None = None,
                                                  **kwargs: Unpack[AgentKwargs],
                                              ) -> Agent[Any] | StructuredAgent[Any, Any]
                                              

                                              Add a new permanent agent to the pool.

                                              Parameters:

                                              Name Type Description Default
                                              name AgentName

                                              Name for the new agent

                                              required
                                              result_type type[Any] | str | ResponseDefinition | None

Optional type for structured responses:
- None: Regular unstructured agent
- type: Python type for validation
- str: Name of response definition
- ResponseDefinition: Complete response definition

                                              None
                                              **kwargs Unpack[AgentKwargs]

                                              Additional agent configuration

                                              {}

                                              Returns:

                                              Type Description
                                              Agent[Any] | StructuredAgent[Any, Any]

                                              Either a regular Agent or StructuredAgent depending on result_type

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def add_agent(
                                                  self,
                                                  name: AgentName,
                                                  *,
                                                  result_type: type[Any] | str | ResponseDefinition | None = None,
                                                  **kwargs: Unpack[AgentKwargs],
                                              ) -> Agent[Any] | StructuredAgent[Any, Any]:
                                                  """Add a new permanent agent to the pool.
                                              
                                                  Args:
                                                      name: Name for the new agent
                                                      result_type: Optional type for structured responses:
                                                          - None: Regular unstructured agent
                                                          - type: Python type for validation
                                                          - str: Name of response definition
                                                          - ResponseDefinition: Complete response definition
                                                      **kwargs: Additional agent configuration
                                              
                                                  Returns:
                                                      Either a regular Agent or StructuredAgent depending on result_type
                                                  """
                                                  from llmling_agent.agent import Agent
                                              
                                                  agent: AnyAgent[Any, Any] = Agent(name=name, **kwargs)
                                                  agent.tools.add_provider(self.mcp)
                                                  agent = await self.exit_stack.enter_async_context(agent)
                                                  # Convert to structured if needed
                                                  if result_type is not None:
                                                      agent = agent.to_structured(result_type)
                                                  self.register(name, agent)
                                                  return agent
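
A hedged usage sketch; the agent names are illustrative, any further Agent keyword arguments (for example a model identifier) are assumed and omitted, and str is used only to show a plain Python type as result_type:

    # Plain, unstructured agent.
    helper = await pool.add_agent("helper")

    # Structured agent: passing result_type returns a StructuredAgent.
    summarizer = await pool.add_agent("summarizer", result_type=str)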
                                              

                                              cleanup async

                                              cleanup()
                                              

                                              Clean up all agents.

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def cleanup(self):
                                                  """Clean up all agents."""
                                                  await self.exit_stack.aclose()
                                                  self.clear()
                                              

                                              clone_agent async

                                              clone_agent(
                                                  agent: AgentName | Agent[TDeps],
                                                  new_name: AgentName | None = None,
                                                  *,
                                                  system_prompts: list[str] | None = None,
                                                  template_context: dict[str, Any] | None = None,
                                              ) -> Agent[TDeps]
                                              
                                              clone_agent(
                                                  agent: StructuredAgent[TDeps, TResult],
                                                  new_name: AgentName | None = None,
                                                  *,
                                                  system_prompts: list[str] | None = None,
                                                  template_context: dict[str, Any] | None = None,
                                              ) -> StructuredAgent[TDeps, TResult]
                                              
                                              clone_agent(
                                                  agent: AgentName | AnyAgent[TDeps, TAgentResult],
                                                  new_name: AgentName | None = None,
                                                  *,
                                                  system_prompts: list[str] | None = None,
                                                  template_context: dict[str, Any] | None = None,
                                              ) -> AnyAgent[TDeps, TAgentResult]
                                              

                                              Create a copy of an agent.

                                              Parameters:

                                              Name Type Description Default
                                              agent AgentName | AnyAgent[TDeps, TAgentResult]

                                              Agent instance or name to clone

                                              required
                                              new_name AgentName | None

                                              Optional name for the clone

                                              None
                                              system_prompts list[str] | None

                                              Optional different prompts

                                              None
                                              template_context dict[str, Any] | None

                                              Variables for template rendering

                                              None

                                              Returns:

                                              Type Description
                                              AnyAgent[TDeps, TAgentResult]

                                              The new agent instance

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def clone_agent[TDeps, TAgentResult](
                                                  self,
                                                  agent: AgentName | AnyAgent[TDeps, TAgentResult],
                                                  new_name: AgentName | None = None,
                                                  *,
                                                  system_prompts: list[str] | None = None,
                                                  template_context: dict[str, Any] | None = None,
                                              ) -> AnyAgent[TDeps, TAgentResult]:
                                                  """Create a copy of an agent.
                                              
                                                  Args:
                                                      agent: Agent instance or name to clone
                                                      new_name: Optional name for the clone
                                                      system_prompts: Optional different prompts
                                                      template_context: Variables for template rendering
                                              
                                                  Returns:
                                                      The new agent instance
                                                  """
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                  # Get original config
                                                  if isinstance(agent, str):
                                                      if agent not in self.manifest.agents:
                                                          msg = f"Agent {agent} not found"
                                                          raise KeyError(msg)
                                                      config = self.manifest.agents[agent]
                                                      original_agent: AnyAgent[Any, Any] = self.get_agent(agent)
                                                  else:
                                                      config = agent.context.config  # type: ignore
                                                      original_agent = agent
                                              
                                                  # Create new config
                                                  new_config = config.model_copy(deep=True)
                                              
                                                  # Apply overrides
                                                  if system_prompts:
                                                      new_config.system_prompts = system_prompts
                                              
                                                  # Handle template rendering
                                                  if template_context:
                                                      new_config.system_prompts = new_config.render_system_prompts(template_context)
                                              
                                                  # Create new agent with same runtime
                                                  new_agent = Agent[TDeps](
                                                      runtime=original_agent.runtime,
                                                      context=original_agent.context,
                                                      # result_type=original_agent.actual_type,
                                                      provider=new_config.get_provider(),
                                                      system_prompt=new_config.system_prompts,
                                                      name=new_name or f"{config.name}_copy_{len(self.agents)}",
                                                  )
                                                  if isinstance(original_agent, StructuredAgent):
                                                      new_agent = new_agent.to_structured(original_agent.actual_type)
                                              
                                                  # Register in pool
                                                  agent_name = new_agent.name
                                                  self.manifest.agents[agent_name] = new_config
                                                  self.register(agent_name, new_agent)
                                                  return await self.exit_stack.enter_async_context(new_agent)
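
A usage sketch, assuming an agent named "writer" exists in the manifest; the replacement prompt and template variables are illustrative:

    formal_writer = await pool.clone_agent(
        "writer",
        new_name="writer_formal",
        system_prompts=["Respond in a formal tone."],
        template_context={"audience": "executives"},
    )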
                                              

                                              create_agent async

                                              create_agent(
                                                  name: AgentName,
                                                  *,
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> Agent[TPoolDeps]
                                              
                                              create_agent(
                                                  name: AgentName,
                                                  *,
                                                  deps: TCustomDeps,
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> Agent[TCustomDeps]
                                              
                                              create_agent(
                                                  name: AgentName,
                                                  *,
                                                  return_type: type[TResult],
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> StructuredAgent[TPoolDeps, TResult]
                                              
                                              create_agent(
                                                  name: AgentName,
                                                  *,
                                                  deps: TCustomDeps,
                                                  return_type: type[TResult],
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> StructuredAgent[TCustomDeps, TResult]
                                              
                                              create_agent(
                                                  name: AgentName,
                                                  *,
                                                  deps: Any | None = None,
                                                  return_type: Any | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> AnyAgent[Any, Any]
                                              

                                              Create a new agent instance from configuration.

                                              Parameters:

                                              Name Type Description Default
                                              name AgentName

                                              Name of the agent configuration to use

                                              required
                                              deps Any | None

                                              Optional custom dependencies (overrides pool deps)

                                              None
                                              return_type Any | None

                                              Optional type for structured responses

                                              None
                                              session SessionIdType | SessionQuery

                                              Optional session ID or query to recover conversation

                                              None
                                              name_override str | None

                                              Optional different name for this instance

                                              None

                                              Returns:

                                              Type Description
                                              AnyAgent[Any, Any]

                                              New agent instance with the specified configuration

                                              Raises:

                                              Type Description
                                              KeyError

                                              If agent configuration not found

                                              ValueError

                                              If configuration is invalid

                                              Source code in src/llmling_agent/delegation/pool.py
                                              async def create_agent(
                                                  self,
                                                  name: AgentName,
                                                  *,
                                                  deps: Any | None = None,
                                                  return_type: Any | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                                  name_override: str | None = None,
                                              ) -> AnyAgent[Any, Any]:
                                                  """Create a new agent instance from configuration.
                                              
                                                  Args:
                                                      name: Name of the agent configuration to use
                                                      deps: Optional custom dependencies (overrides pool deps)
                                                      return_type: Optional type for structured responses
                                                      session: Optional session ID or query to recover conversation
                                                      name_override: Optional different name for this instance
                                              
                                                  Returns:
                                                      New agent instance with the specified configuration
                                              
                                                  Raises:
                                                      KeyError: If agent configuration not found
                                                      ValueError: If configuration is invalid
                                                  """
                                                  if name not in self.manifest.agents:
                                                      msg = f"Agent configuration {name!r} not found"
                                                      raise KeyError(msg)
                                              
                                                  # Use Manifest.get_agent for proper initialization
                                                  final_deps = deps if deps is not None else self.shared_deps
                                                  agent = self.manifest.get_agent(name, deps=final_deps)
                                                  # Override name if requested
                                                  if name_override:
                                                      agent.name = name_override
                                              
                                                  # Set pool reference
                                                  agent.context.pool = self
                                              
                                                  # Handle session if provided
                                                  if session:
                                                      agent.conversation.load_history_from_database(session=session)
                                              
                                                  # Initialize agent through exit stack
                                                  agent = await self.exit_stack.enter_async_context(agent)
                                              
                                                  # Override structured configuration if provided
                                                  if return_type is not None:
                                                      return agent.to_structured(return_type)
                                              
                                                  return agent
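
A usage sketch, assuming an "analyst" configuration exists in the manifest; the session id and the structured return type are illustrative:

    # Fresh instance from the "analyst" configuration, recovering a session.
    analyst = await pool.create_agent(
        "analyst",
        session="session-123",
        name_override="analyst_followup",
    )

    # Same configuration, returned as a StructuredAgent producing ints.
    counter = await pool.create_agent("analyst", return_type=int)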
                                              

                                              create_team

                                              create_team(agents: Sequence[str]) -> Team[TPoolDeps]
                                              
                                              create_team(
                                                  agents: Sequence[MessageNode[TDeps, Any]],
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> Team[TDeps]
                                              
                                              create_team(
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]],
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> Team[Any]
                                              
                                              create_team(
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> Team[Any]
                                              

                                              Create a group from agent names or instances.

                                              Parameters:

                                              Name Type Description Default
                                              agents Sequence[AgentName | MessageNode[Any, Any]] | None

                                              List of agent names or instances (all if None)

                                              None
                                              name str | None

                                              Optional name for the team

                                              None
                                              description str | None

                                              Optional description for the team

                                              None
                                              shared_prompt str | None

                                              Optional prompt for all agents

                                              None
                                              picker AnyAgent[Any, Any] | None

                                              Agent to use for picking agents

                                              None
                                              num_picks int | None

                                              Number of agents to pick

                                              None
                                              pick_prompt str | None

                                              Prompt to use for picking agents

                                              None
                                              Source code in src/llmling_agent/delegation/pool.py
                                              def create_team(
                                                  self,
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> Team[Any]:
                                                  """Create a group from agent names or instances.
                                              
                                                  Args:
                                                      agents: List of agent names or instances (all if None)
                                                      name: Optional name for the team
                                                      description: Optional description for the team
                                                      shared_prompt: Optional prompt for all agents
                                                      picker: Agent to use for picking agents
                                                      num_picks: Number of agents to pick
                                                      pick_prompt: Prompt to use for picking agents
                                                  """
                                                  from llmling_agent.delegation.team import Team
                                              
                                                  if agents is None:
                                                      agents = list(self.agents.keys())
                                              
                                                  # First resolve/configure agents
                                                  resolved_agents: list[MessageNode[Any, Any]] = []
                                                  for agent in agents:
                                                      if isinstance(agent, str):
                                                          agent = self.get_agent(agent)
                                                      resolved_agents.append(agent)
                                              
                                                  team = Team(
                                                      name=name,
                                                      description=description,
                                                      agents=resolved_agents,
                                                      shared_prompt=shared_prompt,
                                                      picker=picker,
                                                      num_picks=num_picks,
                                                      pick_prompt=pick_prompt,
                                                  )
                                                  if name:
                                                      self[name] = team
                                                  return team
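
A brief usage sketch, assuming `pool` is an existing AgentPool and that agents named "researcher", "writer" and "coordinator" are defined in it (all names are illustrative):

    # Build a team from two named agents; a picker agent selects one of them per run.
    team = pool.create_team(
        ["researcher", "writer"],
        name="analysis_team",
        shared_prompt="Work on the quarterly report.",
        picker=pool.get_agent("coordinator"),
        num_picks=1,
    )

    # Omitting `agents` builds a team containing every agent in the pool.
    full_team = pool.create_team()

Passing a name also registers the team back into the pool, so it can later be looked up under that name.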
                                              

                                              create_team_run

                                              create_team_run(
                                                  agents: Sequence[str],
                                                  validator: MessageNode[Any, TResult] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> TeamRun[TPoolDeps, TResult]
                                              
                                              create_team_run(
                                                  agents: Sequence[MessageNode[TDeps, Any]],
                                                  validator: MessageNode[Any, TResult] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> TeamRun[TDeps, TResult]
                                              
                                              create_team_run(
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]],
                                                  validator: MessageNode[Any, TResult] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> TeamRun[Any, TResult]
                                              
                                              create_team_run(
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                  validator: MessageNode[Any, TResult] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> TeamRun[Any, TResult]
                                              

Create a sequential TeamRun from a list of Agents.

Parameters:

    agents (Sequence[AgentName | MessageNode[Any, Any]] | None, default None):
        List of agent names or team/agent instances (all if None)
    validator (MessageNode[Any, TResult] | None, default None):
        Node to validate the results of the TeamRun
    name (str | None, default None):
        Optional name for the team
    description (str | None, default None):
        Optional description for the team
    shared_prompt (str | None, default None):
        Optional prompt for all agents
    picker (AnyAgent[Any, Any] | None, default None):
        Agent to use for picking agents
    num_picks (int | None, default None):
        Number of agents to pick
    pick_prompt (str | None, default None):
        Prompt to use for picking agents
Source code in src/llmling_agent/delegation/pool.py, lines 249-296
                                              def create_team_run(
                                                  self,
                                                  agents: Sequence[AgentName | MessageNode[Any, Any]] | None = None,
                                                  validator: MessageNode[Any, TResult] | None = None,
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ) -> TeamRun[Any, TResult]:
                                                  """Create a a sequential TeamRun from a list of Agents.
                                              
                                                  Args:
                                                      agents: List of agent names or team/agent instances (all if None)
                                                      validator: Node to validate the results of the TeamRun
                                                      name: Optional name for the team
                                                      description: Optional description for the team
                                                      shared_prompt: Optional prompt for all agents
                                                      picker: Agent to use for picking agents
                                                      num_picks: Number of agents to pick
                                                      pick_prompt: Prompt to use for picking agents
                                                  """
                                                  from llmling_agent.delegation.teamrun import TeamRun
                                              
                                                  if agents is None:
                                                      agents = list(self.agents.keys())
                                              
                                                  # First resolve/configure agents
                                                  resolved_agents: list[MessageNode[Any, Any]] = []
                                                  for agent in agents:
                                                      if isinstance(agent, str):
                                                          agent = self.get_agent(agent)
                                                      resolved_agents.append(agent)
                                                  team = TeamRun(
                                                      resolved_agents,
                                                      name=name,
                                                      description=description,
                                                      validator=validator,
                                                      shared_prompt=shared_prompt,
                                                      picker=picker,
                                                      num_picks=num_picks,
                                                      pick_prompt=pick_prompt,
                                                  )
                                                  if name:
                                                      self[name] = team
                                                  return team
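
A hedged sketch of a sequential pipeline with a validating node, assuming `pool` is an existing AgentPool and the agent names are illustrative:

    # Agents run in sequence; the validator node checks the final result.
    pipeline = pool.create_team_run(
        ["extractor", "summarizer"],
        validator=pool.get_agent("reviewer"),
        name="report_pipeline",
        description="Extract facts, summarize them, then validate the summary.",
    )

As with create_team, supplying a name registers the resulting TeamRun in the pool.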
                                              

                                              get_agent

                                              get_agent(
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> Agent[TPoolDeps]
                                              
                                              get_agent(
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  return_type: type[TResult],
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> StructuredAgent[TPoolDeps, TResult]
                                              
                                              get_agent(
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  deps: TCustomDeps,
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> Agent[TCustomDeps]
                                              
                                              get_agent(
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  deps: TCustomDeps,
                                                  return_type: type[TResult],
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> StructuredAgent[TCustomDeps, TResult]
                                              
                                              get_agent(
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  deps: Any | None = None,
                                                  return_type: Any | None = None,
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> AnyAgent[Any, Any]
                                              

                                              Get or configure an agent from the pool.

This method provides flexible agent configuration with dependency injection:

- Without deps: Agent uses pool's shared dependencies
- With deps: Agent uses provided custom dependencies
- With return_type: Returns a StructuredAgent with type validation

Parameters:

    agent (AgentName | Agent[Any], required):
        Either agent name or instance
    deps (Any | None, default None):
        Optional custom dependencies (overrides shared deps)
    return_type (Any | None, default None):
        Optional type for structured responses
    model_override (str | None, default None):
        Optional model override
    session (SessionIdType | SessionQuery, default None):
        Optional session ID or query to recover conversation

Returns:

    AnyAgent[Any, Any], which is either:
    - Agent[TPoolDeps] when using pool's shared deps
    - Agent[TCustomDeps] when custom deps provided
    - StructuredAgent when return_type provided

Raises:

    KeyError: If agent name not found
    ValueError: If configuration is invalid

Source code in src/llmling_agent/delegation/pool.py, lines 772-830
                                              def get_agent(
                                                  self,
                                                  agent: AgentName | Agent[Any],
                                                  *,
                                                  deps: Any | None = None,
                                                  return_type: Any | None = None,
                                                  model_override: str | None = None,
                                                  session: SessionIdType | SessionQuery = None,
                                              ) -> AnyAgent[Any, Any]:
                                                  """Get or configure an agent from the pool.
                                              
                                                  This method provides flexible agent configuration with dependency injection:
                                                  - Without deps: Agent uses pool's shared dependencies
                                                  - With deps: Agent uses provided custom dependencies
                                                  - With return_type: Returns a StructuredAgent with type validation
                                              
                                                  Args:
                                                      agent: Either agent name or instance
                                                      deps: Optional custom dependencies (overrides shared deps)
                                                      return_type: Optional type for structured responses
                                                      model_override: Optional model override
                                                      session: Optional session ID or query to recover conversation
                                              
                                                  Returns:
                                                      Either:
                                                      - Agent[TPoolDeps] when using pool's shared deps
                                                      - Agent[TCustomDeps] when custom deps provided
                                                      - StructuredAgent when return_type provided
                                              
                                                  Raises:
                                                      KeyError: If agent name not found
                                                      ValueError: If configuration is invalid
                                                  """
                                                  from llmling_agent.agent import Agent
                                                  from llmling_agent.agent.context import AgentContext
                                              
                                                  # Get base agent
                                                  base = agent if isinstance(agent, Agent) else self.agents[agent]
                                              
                                                  # Setup context and dependencies
                                                  if base.context is None:
                                                      base.context = AgentContext[Any].create_default(base.name)
                                              
                                                  # Use custom deps if provided, otherwise use shared deps
                                                  base.context.data = deps if deps is not None else self.shared_deps
                                                  base.context.pool = self
                                              
                                                  # Apply overrides
                                                  if model_override:
                                                      base.set_model(model_override)
                                              
                                                  if session:
                                                      base.conversation.load_history_from_database(session=session)
                                              
                                                  # Convert to structured if needed
                                                  if return_type is not None:
                                                      return base.to_structured(return_type)
                                              
                                                  return base
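
A short usage sketch, assuming `pool` is an existing AgentPool; the agent name "analyst", the `Summary` model, the dependency object, the model identifier and the session id are all illustrative:

    from pydantic import BaseModel


    class Summary(BaseModel):
        title: str
        key_points: list[str]


    my_custom_deps = {"environment": "staging"}  # hypothetical custom dependencies

    # Plain agent using the pool's shared dependencies.
    agent = pool.get_agent("analyst")

    # Structured agent with custom deps, a model override and a recovered session.
    structured = pool.get_agent(
        "analyst",
        deps=my_custom_deps,                  # overrides the pool's shared deps
        return_type=Summary,                  # wraps the agent in a StructuredAgent
        model_override="openai:gpt-4o-mini",  # illustrative model identifier
        session="previous-session-id",        # illustrative session id
    )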
                                              

                                              get_mermaid_diagram

                                              get_mermaid_diagram(include_details: bool = True) -> str
                                              

                                              Generate mermaid flowchart of all agents and their connections.

Parameters:

    include_details (bool, default True):
        Whether to show connection details (types, queues, etc.)
Source code in src/llmling_agent/delegation/pool.py, lines 892-931
                                              def get_mermaid_diagram(
                                                  self,
                                                  include_details: bool = True,
                                              ) -> str:
                                                  """Generate mermaid flowchart of all agents and their connections.
                                              
                                                  Args:
                                                      include_details: Whether to show connection details (types, queues, etc)
                                                  """
                                                  lines = ["flowchart LR"]
                                              
                                                  # Add all agents as nodes
                                                  for name in self.agents:
                                                      lines.append(f"    {name}[{name}]")  # noqa: PERF401
                                              
                                                  # Add all connections as edges
                                                  for agent in self.agents.values():
                                                      connections = agent.connections.get_connections()
                                                      for talk in connections:
                                                          talk = cast(Talk[Any], talk)  # help mypy understand it's a Talk
                                                          source = talk.source.name
                                                          for target in talk.targets:
                                                              if include_details:
                                                                  details: list[str] = []
                                                                  details.append(talk.connection_type)
                                                                  if talk.queued:
                                                                      details.append(f"queued({talk.queue_strategy})")
                                                                  if fn := talk.filter_condition:  # type: ignore
                                                                      details.append(f"filter:{fn.__name__}")
                                                                  if fn := talk.stop_condition:  # type: ignore
                                                                      details.append(f"stop:{fn.__name__}")
                                                                  if fn := talk.exit_condition:  # type: ignore
                                                                      details.append(f"exit:{fn.__name__}")
                                              
                                                                  label = f"|{' '.join(details)}|" if details else ""
                                                                  lines.append(f"    {source}--{label}-->{target.name}")
                                                              else:
                                                                  lines.append(f"    {source}-->{target.name}")
                                              
                                                  return "\n".join(lines)
                                              

                                              list_nodes

                                              list_nodes() -> list[str]
                                              

                                              List available agent names.

Source code in src/llmling_agent/delegation/pool.py, lines 832-834
                                              def list_nodes(self) -> list[str]:
                                                  """List available agent names."""
                                                  return list(self.list_items())
                                              

                                              run_event_loop async

                                              run_event_loop()
                                              

                                              Run pool in event-watching mode until interrupted.

Source code in src/llmling_agent/delegation/pool.py, lines 384-405
                                              async def run_event_loop(self):
                                                  """Run pool in event-watching mode until interrupted."""
                                                  import sys
                                              
                                                  print("Starting event watch mode...")
                                                  print("Active nodes: ", ", ".join(self.list_nodes()))
                                                  print("Press Ctrl+C to stop")
                                              
                                                  stop_event = asyncio.Event()
                                              
                                                  if sys.platform != "win32":
                                                      # Unix: Use signal handlers
                                                      loop = asyncio.get_running_loop()
                                                      for sig in (signal.SIGINT, signal.SIGTERM):
                                                          loop.add_signal_handler(sig, stop_event.set)
                                                      while True:
                                                          await asyncio.sleep(1)
                                                  else:
                                                      # Windows: Use keyboard interrupt
                                                      with suppress(KeyboardInterrupt):
                                                          while True:
                                                              await asyncio.sleep(1)
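
A minimal sketch of entering event-watching mode, assuming `pool` is an already-constructed AgentPool (its construction is not shown here):

    import asyncio

    # Blocks until Ctrl+C (or SIGTERM on Unix) stops the loop.
    asyncio.run(pool.run_event_loop())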
                                              

                                              setup_agent_workers

                                              setup_agent_workers(agent: AnyAgent[Any, Any])
                                              

                                              Set up workers for an agent from configuration.

Source code in src/llmling_agent/delegation/pool.py, lines 712-730
                                              def setup_agent_workers(self, agent: AnyAgent[Any, Any]):
                                                  """Set up workers for an agent from configuration."""
                                                  for worker_config in agent.context.config.workers:
                                                      try:
                                                          worker = self.nodes[worker_config.name]
                                                          match worker_config:
                                                              case TeamWorkerConfig():
                                                                  agent.register_worker(worker)
                                                              case AgentWorkerConfig():
                                                                  agent.register_worker(
                                                                      worker,
                                                                      name=worker_config.name,
                                                                      reset_history_on_run=worker_config.reset_history_on_run,
                                                                      pass_message_history=worker_config.pass_message_history,
                                                                      share_context=worker_config.share_context,
                                                                  )
                                                      except KeyError as e:
                                                          msg = f"Worker agent {worker_config.name!r} not found"
                                                          raise ValueError(msg) from e
                                              

                                              track_message_flow async

                                              track_message_flow() -> AsyncIterator[MessageFlowTracker]
                                              

                                              Track message flow during a context.

Source code in src/llmling_agent/delegation/pool.py, lines 374-382
                                              @asynccontextmanager
                                              async def track_message_flow(self) -> AsyncIterator[MessageFlowTracker]:
                                                  """Track message flow during a context."""
                                                  tracker = MessageFlowTracker()
                                                  self.connection_registry.message_flow.connect(tracker.track)
                                                  try:
                                                      yield tracker
                                                  finally:
                                                      self.connection_registry.message_flow.disconnect(tracker.track)
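
Because this is an async context manager, messages exchanged inside the `async with` block are recorded by the yielded MessageFlowTracker. Its inspection API is not documented in this excerpt, so only the context-manager pattern is sketched; `pool`, the agent name and the prompt are illustrative assumptions:

    async def observe() -> None:
        async with pool.track_message_flow() as tracker:
            await pool.get_agent("analyst").run("Summarize the latest report")
        # `tracker` now holds the message-flow events observed inside the block.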
                                              

                                              AgentsManifest

                                              Bases: ConfigModel

                                              Complete agent configuration manifest defining all available agents.

This is the root configuration that:

- Defines available response types (both inline and imported)
- Configures all agent instances and their settings
- Sets up custom role definitions and capabilities
- Manages environment configurations

                                              A single manifest can define multiple agents that can work independently or collaborate through the orchestrator.

Source code in src/llmling_agent/models/manifest.py, lines 47-376
                                              377
                                              378
                                              379
                                              380
                                              381
                                              382
                                              383
                                              384
                                              385
                                              386
                                              387
                                              388
                                              389
                                              390
                                              391
                                              392
                                              393
                                              394
                                              395
                                              396
                                              397
                                              398
                                              399
                                              400
                                              401
                                              402
                                              403
                                              404
                                              405
                                              406
                                              407
                                              408
                                              409
                                              410
                                              411
                                              412
                                              413
                                              414
                                              415
                                              416
                                              417
                                              418
                                              419
                                              420
                                              421
                                              422
                                              423
                                              424
                                              425
                                              426
                                              427
                                              428
                                              429
                                              430
                                              431
                                              432
                                              433
                                              434
                                              435
                                              436
                                              437
                                              438
                                              439
                                              440
                                              441
                                              442
                                              443
                                              444
                                              445
                                              446
                                              447
                                              448
                                              449
                                              450
                                              451
                                              452
                                              453
                                              454
                                              455
                                              456
                                              457
                                              458
                                              459
                                              460
                                              461
                                              462
                                              463
                                              464
                                              465
                                              466
                                              467
                                              468
                                              469
                                              470
                                              471
                                              472
                                              473
                                              474
                                              475
                                              476
                                              477
                                              478
                                              479
                                              480
                                              481
                                              482
                                              483
                                              484
                                              485
                                              class AgentsManifest(ConfigModel):
                                                  """Complete agent configuration manifest defining all available agents.
                                              
                                                  This is the root configuration that:
                                                  - Defines available response types (both inline and imported)
                                                  - Configures all agent instances and their settings
                                                  - Sets up custom role definitions and capabilities
                                                  - Manages environment configurations
                                              
                                                  A single manifest can define multiple agents that can work independently
                                                  or collaborate through the orchestrator.
                                                  """
                                              
                                                  INHERIT: str | list[str] | None = None
                                                  """Inheritance references."""
                                              
                                                  resources: dict[str, ResourceConfig | str] = Field(default_factory=dict)
                                                  """Resource configurations defining available filesystems.
                                              
                                                  Supports both full config and URI shorthand:
                                                      resources:
                                                        docs: "file://./docs"  # shorthand
                                                        data:  # full config
                                                          type: "source"
                                                          uri: "s3://bucket/data"
                                                          cached: true
                                                  """
                                              
                                                  ui: UIConfig = Field(default_factory=StdlibUIConfig)
                                                  """UI configuration."""
                                              
                                                  agents: dict[str, AgentConfig] = Field(default_factory=dict)
                                                  """Mapping of agent IDs to their configurations"""
                                              
                                                  teams: dict[str, TeamConfig] = Field(default_factory=dict)
                                                  """Mapping of team IDs to their configurations"""
                                              
                                                  storage: StorageConfig = Field(default_factory=StorageConfig)
                                                  """Storage provider configuration."""
                                              
                                                  observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                                                  """Observability provider configuration."""
                                              
                                                  conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                                                  """Document conversion configuration."""
                                              
                                                  responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                                                  """Mapping of response names to their definitions"""
                                              
                                                  jobs: dict[str, Job] = Field(default_factory=dict)
                                                  """Pre-defined jobs, ready to be used by nodes."""
                                              
                                                  mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                                  """List of MCP server configurations:
                                              
                                                  These MCP servers are used to provide tools and other resources to the nodes.
                                                  """
                                                  pool_server: PoolServerConfig = Field(default_factory=PoolServerConfig)
                                                  """Pool server configuration.
                                              
                                                  This MCP server configuration is used for the pool MCP server,
                                                  which exposes pool functionality to other applications / clients."""
                                              
                                                  prompts: PromptConfig = Field(default_factory=PromptConfig)
                                              
                                                  model_config = ConfigDict(use_attribute_docstrings=True, extra="forbid")
                                              
                                                  @model_validator(mode="before")
                                                  @classmethod
                                                  def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                      """Convert string workers to appropriate WorkerConfig for all agents."""
                                                      teams = data.get("teams", {})
                                                      agents = data.get("agents", {})
                                              
                                                      # Process workers for all agents that have them
                                                      for agent_name, agent_config in agents.items():
                                                          if isinstance(agent_config, dict):
                                                              workers = agent_config.get("workers", [])
                                                          else:
                                                              workers = agent_config.workers
                                              
                                                          if workers:
                                                              normalized: list[BaseWorkerConfig] = []
                                              
                                                              for worker in workers:
                                                                  match worker:
                                                                      case str() as name:
                                                                          # Determine type based on presence in teams/agents
                                                                          if name in teams:
                                                                              normalized.append(TeamWorkerConfig(name=name))
                                                                          elif name in agents:
                                                                              normalized.append(AgentWorkerConfig(name=name))
                                                                          else:
                                                                              # Default to agent if type can't be determined
                                                                              normalized.append(AgentWorkerConfig(name=name))
                                              
                                                                      case dict() as config:
                                                                          # If type is explicitly specified, use it
                                                                          if worker_type := config.get("type"):
                                                                              match worker_type:
                                                                                  case "team":
                                                                                      normalized.append(TeamWorkerConfig(**config))
                                                                                  case "agent":
                                                                                      normalized.append(AgentWorkerConfig(**config))
                                                                                  case _:
                                                                                      msg = f"Invalid worker type: {worker_type}"
                                                                                      raise ValueError(msg)
                                                                          else:
                                                                              # Determine type based on worker name
                                                                              worker_name = config.get("name")
                                                                              if not worker_name:
                                                                                  msg = "Worker config missing name"
                                                                                  raise ValueError(msg)
                                              
                                                                              if worker_name in teams:
                                                                                  normalized.append(TeamWorkerConfig(**config))
                                                                              else:
                                                                                  normalized.append(AgentWorkerConfig(**config))
                                              
                                                                      case BaseWorkerConfig():  # Already normalized
                                                                          normalized.append(worker)
                                              
                                                                      case _:
                                                                          msg = f"Invalid worker configuration: {worker}"
                                                                          raise ValueError(msg)
                                              
                                                              if isinstance(agent_config, dict):
                                                                  agent_config["workers"] = normalized
                                                              else:
                                                                  # Need to create a new dict with updated workers
                                                                  agent_dict = agent_config.model_dump()
                                                                  agent_dict["workers"] = normalized
                                                                  agents[agent_name] = agent_dict
                                              
                                                      return data
                                              
                                                  @cached_property
                                                  def resource_registry(self) -> ResourceRegistry:
                                                      """Get registry with all configured resources."""
                                                      registry = ResourceRegistry()
                                                      for name, config in self.resources.items():
                                                          if isinstance(config, str):
                                                              # Convert URI shorthand to SourceResourceConfig
                                                              config = SourceResourceConfig(uri=config)
                                                          registry.register_from_config(name, config)
                                                      return registry
                                              
                                                  def clone_agent_config(
                                                      self,
                                                      name: str,
                                                      new_name: str | None = None,
                                                      *,
                                                      template_context: dict[str, Any] | None = None,
                                                      **overrides: Any,
                                                  ) -> str:
                                                      """Create a copy of an agent configuration.
                                              
                                                      Args:
                                                          name: Name of agent to clone
                                                          new_name: Optional new name (auto-generated if None)
                                                          template_context: Variables for template rendering
                                                          **overrides: Configuration overrides for the clone
                                              
                                                      Returns:
                                                          Name of the new agent
                                              
                                                      Raises:
                                                          KeyError: If original agent not found
                                                          ValueError: If new name already exists or if overrides invalid
                                                      """
                                                      if name not in self.agents:
                                                          msg = f"Agent {name} not found"
                                                          raise KeyError(msg)
                                              
                                                      actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                                      if actual_name in self.agents:
                                                          msg = f"Agent {actual_name} already exists"
                                                          raise ValueError(msg)
                                              
                                                      # Deep copy the configuration
                                                      config = self.agents[name].model_copy(deep=True)
                                              
                                                      # Apply overrides
                                                      for key, value in overrides.items():
                                                          if not hasattr(config, key):
                                                              msg = f"Invalid override: {key}"
                                                              raise ValueError(msg)
                                                          setattr(config, key, value)
                                              
                                                      # Handle template rendering if context provided
                                                      if template_context:
                                                          # Apply name from context if not explicitly overridden
                                                          if "name" in template_context and "name" not in overrides:
                                                              config.name = template_context["name"]
                                              
                                                          # Render system prompts
                                                          config.system_prompts = config.render_system_prompts(template_context)
                                              
                                                      self.agents[actual_name] = config
                                                      return actual_name
                                              
                                                  @model_validator(mode="before")
                                                  @classmethod
                                                  def resolve_inheritance(cls, data: dict) -> dict:
                                                      """Resolve agent inheritance chains."""
                                                      nodes = data.get("agents", {})
                                                      resolved: dict[str, dict] = {}
                                                      seen: set[str] = set()
                                              
                                                      def resolve_node(name: str) -> dict:
                                                          if name in resolved:
                                                              return resolved[name]
                                              
                                                          if name in seen:
                                                              msg = f"Circular inheritance detected: {name}"
                                                              raise ValueError(msg)
                                              
                                                          seen.add(name)
                                                          config = (
                                                              nodes[name].model_copy()
                                                              if hasattr(nodes[name], "model_copy")
                                                              else nodes[name].copy()
                                                          )
                                                          inherit = (
                                                              config.get("inherits") if isinstance(config, dict) else config.inherits
                                                          )
                                                          if inherit:
                                                              if inherit not in nodes:
                                                                  msg = f"Parent agent {inherit} not found"
                                                                  raise ValueError(msg)
                                              
                                                              # Get resolved parent config
                                                              parent = resolve_node(inherit)
                                                              # Merge parent with child (child overrides parent)
                                                              merged = parent.copy()
                                                              merged.update(config)
                                                              config = merged
                                              
                                                          seen.remove(name)
                                                          resolved[name] = config
                                                          return config
                                              
                                                      # Resolve all nodes
                                                      for name in nodes:
                                                          resolved[name] = resolve_node(name)
                                              
                                                      # Update nodes with resolved configs
                                                      data["agents"] = resolved
                                                      return data
                                              
                                                  @model_validator(mode="after")
                                                  def set_instrument_libraries(self) -> Self:
                                                      """Auto-set libraries to instrument based on used providers."""
                                                      if (
                                                          not self.observability.enabled
                                                          or self.observability.instrument_libraries is not None
                                                      ):
                                                          return self
                                                      self.observability.instrument_libraries = list(self.get_used_providers())
                                                      return self
                                              
                                                  @property
                                                  def node_names(self) -> list[str]:
                                                      """Get list of all agent and team names."""
                                                      return list(self.agents.keys()) + list(self.teams.keys())
                                              
                                                  @property
                                                  def nodes(self) -> dict[str, Any]:
                                                      """Get all agent and team configurations."""
                                                      return {**self.agents, **self.teams}
                                              
                                                  def get_mcp_servers(self) -> list[MCPServerConfig]:
                                                      """Get processed MCP server configurations.
                                              
                                                      Converts string entries to StdioMCPServerConfig configs by splitting
                                                      into command and arguments.
                                              
                                                      Returns:
                                                          List of MCPServerConfig instances
                                              
                                                      Raises:
                                                          ValueError: If string entry is empty
                                                      """
                                                      configs: list[MCPServerConfig] = []
                                              
                                                      for server in self.mcp_servers:
                                                          match server:
                                                              case str():
                                                                  parts = server.split()
                                                                  if not parts:
                                                                      msg = "Empty MCP server command"
                                                                      raise ValueError(msg)
                                              
                                                                  configs.append(StdioMCPServerConfig(command=parts[0], args=parts[1:]))
                                                              case BaseMCPServerConfig():
                                                                  configs.append(server)
                                              
                                                      return configs
                                              
                                                  @cached_property
                                                  def prompt_manager(self) -> PromptManager:
                                                      """Get prompt manager for this manifest."""
                                                      from llmling_agent.prompts.manager import PromptManager
                                              
                                                      return PromptManager(self.prompts)
                                              
                                                  # @model_validator(mode="after")
                                                  # def validate_response_types(self) -> AgentsManifest:
                                                  #     """Ensure all agent result_types exist in responses or are inline."""
                                                  #     for agent_id, agent in self.agents.items():
                                                  #         if (
                                                  #             isinstance(agent.result_type, str)
                                                  #             and agent.result_type not in self.responses
                                                  #         ):
                                                  #             msg = f"'{agent.result_type=}' for '{agent_id=}' not found in responses"
                                                  #             raise ValueError(msg)
                                                  #     return self
                                              
                                                  def get_agent[TAgentDeps](
                                                      self, name: str, deps: TAgentDeps | None = None
                                                  ) -> AnyAgent[TAgentDeps, Any]:
                                                      from llmling import RuntimeConfig
                                              
                                                      from llmling_agent import Agent, AgentContext
                                              
                                                      config = self.agents[name]
                                                      # Create runtime without async context
                                                      cfg = config.get_config()
                                                      runtime = RuntimeConfig.from_config(cfg)
                                              
                                                      # Create context with config path and capabilities
                                                      context = AgentContext[TAgentDeps](
                                                          node_name=name,
                                                          data=deps,
                                                          capabilities=config.capabilities,
                                                          definition=self,
                                                          config=config,
                                                          runtime=runtime,
                                                          # pool=self,
                                                          # confirmation_callback=confirmation_callback,
                                                      )
                                              
                                                      sys_prompts = config.system_prompts.copy()
                                                      # Library prompts
                                                      if config.library_system_prompts:
                                                          for prompt_ref in config.library_system_prompts:
                                                              try:
                                                                  content = self.prompt_manager.get_sync(prompt_ref)
                                                                  sys_prompts.append(content)
                                                              except Exception as e:
                                                                  msg = f"Failed to load library prompt {prompt_ref!r} for agent {name}"
                                                                  logger.exception(msg)
                                                                  raise ValueError(msg) from e
                                                      # Create agent with runtime and context
                                                      agent = Agent[Any](
                                                          runtime=runtime,
                                                          context=context,
                                                          provider=config.get_provider(),
                                                          system_prompt=sys_prompts,
                                                          name=name,
                                                          description=config.description,
                                                          retries=config.retries,
                                                          session=config.get_session_config(),
                                                          result_retries=config.result_retries,
                                                          end_strategy=config.end_strategy,
                                                          capabilities=config.capabilities,
                                                          debug=config.debug,
                                                          # name=config.name or name,
                                                      )
                                                      if result_type := self.get_result_type(name):
                                                          return agent.to_structured(result_type)
                                                      return agent
                                              
                                                  def get_used_providers(self) -> set[str]:
                                                      """Get all providers configured in this manifest."""
                                                      providers = set[str]()
                                              
                                                      for agent_config in self.agents.values():
                                                          match agent_config.provider:
                                                              case "pydantic_ai":
                                                                  providers.add("pydantic_ai")
                                                              case "litellm":
                                                                  providers.add("litellm")
                                                              case BaseProviderConfig():
                                                                  providers.add(agent_config.provider.type)
                                                      return providers
                                              
                                                  @classmethod
                                                  def from_file(cls, path: StrPath) -> Self:
                                                      """Load agent configuration from YAML file.
                                              
                                                      Args:
                                                          path: Path to the configuration file
                                              
                                                      Returns:
                                                          Loaded agent definition
                                              
                                                      Raises:
                                                          ValueError: If loading fails
                                                      """
                                                      import yamling
                                              
                                                      try:
                                                          data = yamling.load_yaml_file(path, resolve_inherit=True)
                                                          agent_def = cls.model_validate(data)
                                                          # Update all agents with the config file path and ensure names
                                                          agents = {
                                                              name: config.model_copy(update={"config_file_path": str(path)})
                                                              for name, config in agent_def.agents.items()
                                                          }
                                                          return agent_def.model_copy(update={"agents": agents})
                                                      except Exception as exc:
                                                          msg = f"Failed to load agent config from {path}"
                                                          raise ValueError(msg) from exc
                                              
                                                  @cached_property
                                                  def pool(self) -> AgentPool:
                                                      """Create an agent pool from this manifest.
                                              
                                                      Returns:
                                                          Configured agent pool
                                                      """
                                                      from llmling_agent import AgentPool
                                              
                                                      return AgentPool(manifest=self)
                                              
                                                  def get_result_type(self, agent_name: str) -> type[Any] | None:
                                                      """Get the resolved result type for an agent.
                                              
                                                      Returns None if no result type is configured.
                                                      """
                                                      agent_config = self.agents[agent_name]
                                                      if not agent_config.result_type:
                                                          return None
                                                      logger.debug("Building response model for %r", agent_config.result_type)
                                                      if isinstance(agent_config.result_type, str):
                                                          response_def = self.responses[agent_config.result_type]
                                                          return response_def.create_model()  # type: ignore
                                                      return agent_config.result_type.create_model()  # type: ignore
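
A minimal usage sketch (the manifest path and agent name are placeholders for your own configuration):

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.from_file("agents.yml")       # hypothetical file
    agent = manifest.get_agent("my_agent")                   # must exist in manifest.agents
    result_type = manifest.get_result_type("my_agent")       # None unless a result_type is configured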
                                              

                                              INHERIT class-attribute instance-attribute

                                              INHERIT: str | list[str] | None = None
                                              

                                              Inheritance references.

                                              agents class-attribute instance-attribute

                                              agents: dict[str, AgentConfig] = Field(default_factory=dict)
                                              

                                              Mapping of agent IDs to their configurations
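
String worker references are normalized by the manifest's validator into typed worker configs; a sketch, assuming the agent entries below are otherwise valid AgentConfig mappings:

    from llmling_agent.models.manifest import AgentsManifest

    data = {
        "agents": {
            "helper": {"description": "auxiliary agent"},
            "main": {"description": "primary agent", "workers": ["helper"]},
        }
    }
    manifest = AgentsManifest.model_validate(data)
    # "helper" was given as a bare string; normalize_workers resolves it to an
    # AgentWorkerConfig because the name matches a configured agent.
    print(manifest.agents["main"].workers)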

                                              conversion class-attribute instance-attribute

                                              conversion: ConversionConfig = Field(default_factory=ConversionConfig)
                                              

                                              Document conversion configuration.

                                              jobs class-attribute instance-attribute

                                              jobs: dict[str, Job] = Field(default_factory=dict)
                                              

                                              Pre-defined jobs, ready to be used by nodes.

                                              mcp_servers class-attribute instance-attribute

                                              mcp_servers: list[str | MCPServerConfig] = Field(default_factory=list)
                                              

                                              List of MCP server configurations:

                                              These MCP servers are used to provide tools and other resources to the nodes.
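
String entries are treated as shell-style commands and converted to stdio server configs by get_mcp_servers(); a sketch with a placeholder command:

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.model_validate(
        {"mcp_servers": ["uvx some-mcp-server --port 8000"]}  # placeholder command
    )
    configs = manifest.get_mcp_servers()
    # The string is split into command and arguments, roughly:
    # StdioMCPServerConfig(command="uvx", args=["some-mcp-server", "--port", "8000"])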

                                              node_names property

                                              node_names: list[str]
                                              

                                              Get list of all agent and team names.

                                              nodes property

                                              nodes: dict[str, Any]
                                              

                                              Get all agent and team configurations.

                                              observability class-attribute instance-attribute

                                              observability: ObservabilityConfig = Field(default_factory=ObservabilityConfig)
                                              

                                              Observability provider configuration.

                                              pool cached property

                                              pool: AgentPool
                                              

                                              Create an agent pool from this manifest.

Returns:

    AgentPool: Configured agent pool
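
Because this is a cached property, the pool is built lazily and then reused; a brief sketch (the manifest path is a placeholder):

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.from_file("agents.yml")
    pool = manifest.pool              # AgentPool created from this manifest
    assert pool is manifest.pool      # repeated access returns the same instance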

                                              pool_server class-attribute instance-attribute

                                              pool_server: PoolServerConfig = Field(default_factory=PoolServerConfig)
                                              

                                              Pool server configuration.

                                              This MCP server configuration is used for the pool MCP server, which exposes pool functionality to other applications / clients.

                                              prompt_manager cached property

                                              prompt_manager: PromptManager
                                              

                                              Get prompt manager for this manifest.
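
A minimal sketch, assuming "my_prompt" exists in your prompt configuration (the same get_sync lookup is used to resolve library_system_prompts):

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.from_file("agents.yml")   # placeholder path
    content = manifest.prompt_manager.get_sync("my_prompt")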

                                              resource_registry cached property

                                              resource_registry: ResourceRegistry
                                              

                                              Get registry with all configured resources.

                                              resources class-attribute instance-attribute

                                              resources: dict[str, ResourceConfig | str] = Field(default_factory=dict)
                                              

                                              Resource configurations defining available filesystems.

Supports both full config and URI shorthand:

    resources:
      docs: "file://./docs"  # shorthand
      data:  # full config
        type: "source"
        uri: "s3://bucket/data"
        cached: true
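
As a sketch, the shorthand form above is expanded to a SourceResourceConfig when the resource registry is built (the URI is a placeholder):

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.model_validate({"resources": {"docs": "file://./docs"}})
    registry = manifest.resource_registry  # "docs" registered via SourceResourceConfig(uri="file://./docs")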

                                              responses class-attribute instance-attribute

                                              responses: dict[str, ResponseDefinition] = Field(default_factory=dict)
                                              

                                              Mapping of response names to their definitions

                                              storage class-attribute instance-attribute

                                              storage: StorageConfig = Field(default_factory=StorageConfig)
                                              

                                              Storage provider configuration.

                                              teams class-attribute instance-attribute

                                              teams: dict[str, TeamConfig] = Field(default_factory=dict)
                                              

                                              Mapping of team IDs to their configurations

                                              ui class-attribute instance-attribute

                                              ui: UIConfig = Field(default_factory=StdlibUIConfig)
                                              

                                              UI configuration.

                                              clone_agent_config

                                              clone_agent_config(
                                                  name: str,
                                                  new_name: str | None = None,
                                                  *,
                                                  template_context: dict[str, Any] | None = None,
                                                  **overrides: Any,
                                              ) -> str
                                              

                                              Create a copy of an agent configuration.

Parameters:

    name (str): Name of agent to clone. Required.
    new_name (str | None): Optional new name (auto-generated if None). Default: None.
    template_context (dict[str, Any] | None): Variables for template rendering. Default: None.
    **overrides (Any): Configuration overrides for the clone. Default: {}.

Returns:

    str: Name of the new agent

Raises:

    KeyError: If original agent not found
    ValueError: If new name already exists or if overrides invalid
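
A usage sketch (the agent names and the description override are illustrative; any override must be an existing attribute of the agent's configuration):

    from llmling_agent.models.manifest import AgentsManifest

    manifest = AgentsManifest.from_file("agents.yml")
    clone_name = manifest.clone_agent_config(
        "analyst",                          # must already exist in manifest.agents
        new_name="analyst_eu",
        description="EU-focused copy of the analyst agent",
    )
    agent = manifest.get_agent(clone_name)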

                                              Source code in src/llmling_agent/models/manifest.py
                                              def clone_agent_config(
                                                  self,
                                                  name: str,
                                                  new_name: str | None = None,
                                                  *,
                                                  template_context: dict[str, Any] | None = None,
                                                  **overrides: Any,
                                              ) -> str:
                                                  """Create a copy of an agent configuration.
                                              
                                                  Args:
                                                      name: Name of agent to clone
                                                      new_name: Optional new name (auto-generated if None)
                                                      template_context: Variables for template rendering
                                                      **overrides: Configuration overrides for the clone
                                              
                                                  Returns:
                                                      Name of the new agent
                                              
                                                  Raises:
                                                      KeyError: If original agent not found
                                                      ValueError: If new name already exists or if overrides invalid
                                                  """
                                                  if name not in self.agents:
                                                      msg = f"Agent {name} not found"
                                                      raise KeyError(msg)
                                              
                                                  actual_name = new_name or f"{name}_copy_{len(self.agents)}"
                                                  if actual_name in self.agents:
                                                      msg = f"Agent {actual_name} already exists"
                                                      raise ValueError(msg)
                                              
                                                  # Deep copy the configuration
                                                  config = self.agents[name].model_copy(deep=True)
                                              
                                                  # Apply overrides
                                                  for key, value in overrides.items():
                                                      if not hasattr(config, key):
                                                          msg = f"Invalid override: {key}"
                                                          raise ValueError(msg)
                                                      setattr(config, key, value)
                                              
                                                  # Handle template rendering if context provided
                                                  if template_context:
                                                      # Apply name from context if not explicitly overridden
                                                      if "name" in template_context and "name" not in overrides:
                                                          config.name = template_context["name"]
                                              
                                                      # Render system prompts
                                                      config.system_prompts = config.render_system_prompts(template_context)
                                              
                                                  self.agents[actual_name] = config
                                                  return actual_name
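
A usage sketch (the config path, agent name, and overridden field are illustrative and not taken from this page):

from llmling_agent.models.manifest import AgentsManifest

manifest = AgentsManifest.from_file("agents.yml")  # hypothetical manifest file

# Clone an existing agent, give the copy a new name and override one field.
new_name = manifest.clone_agent_config(
    "writer",                                     # assumed to be defined in agents.yml
    new_name="writer_formal",
    template_context={"audience": "executives"},  # rendered into the system prompts
    description="Formal variant of the writer",   # hypothetical AgentConfig field; unknown keys raise ValueError
)
print(new_name)  # "writer_formal"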
                                              

                                              from_file classmethod

                                              from_file(path: StrPath) -> Self
                                              

                                              Load agent configuration from YAML file.

                                              Parameters:

                                              Name Type Description Default
                                              path StrPath

                                              Path to the configuration file

                                              required

                                              Returns:

                                              Type Description
                                              Self

                                              Loaded agent definition

                                              Raises:

                                              Type Description
                                              ValueError

                                              If loading fails

                                              Source code in src/llmling_agent/models/manifest.py
                                              @classmethod
                                              def from_file(cls, path: StrPath) -> Self:
                                                  """Load agent configuration from YAML file.
                                              
                                                  Args:
                                                      path: Path to the configuration file
                                              
                                                  Returns:
                                                      Loaded agent definition
                                              
                                                  Raises:
                                                      ValueError: If loading fails
                                                  """
                                                  import yamling
                                              
                                                  try:
                                                      data = yamling.load_yaml_file(path, resolve_inherit=True)
                                                      agent_def = cls.model_validate(data)
                                                      # Update all agents with the config file path and ensure names
                                                      agents = {
                                                          name: config.model_copy(update={"config_file_path": str(path)})
                                                          for name, config in agent_def.agents.items()
                                                      }
                                                      return agent_def.model_copy(update={"agents": agents})
                                                  except Exception as exc:
                                                      msg = f"Failed to load agent config from {path}"
                                                      raise ValueError(msg) from exc
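
A minimal loading sketch (the file name and its contents are illustrative):

# agents.yml (illustrative):
#
# agents:
#   assistant:
#     model: openai:gpt-4o-mini
#     system_prompts:
#       - You are a helpful assistant.

from llmling_agent.models.manifest import AgentsManifest

manifest = AgentsManifest.from_file("agents.yml")
print(list(manifest.agents))                          # ["assistant"]
print(manifest.agents["assistant"].config_file_path)  # "agents.yml"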
                                              

                                              get_mcp_servers

                                              get_mcp_servers() -> list[MCPServerConfig]
                                              

                                              Get processed MCP server configurations.

                                              Converts string entries to StdioMCPServerConfig configs by splitting into command and arguments.

                                              Returns:

                                              Type Description
                                              list[MCPServerConfig]

                                              List of MCPServerConfig instances

                                              Raises:

                                              Type Description
                                              ValueError

                                              If string entry is empty

                                              Source code in src/llmling_agent/models/manifest.py
                                              def get_mcp_servers(self) -> list[MCPServerConfig]:
                                                  """Get processed MCP server configurations.
                                              
                                                  Converts string entries to StdioMCPServerConfig configs by splitting
                                                  into command and arguments.
                                              
                                                  Returns:
                                                      List of MCPServerConfig instances
                                              
                                                  Raises:
                                                      ValueError: If string entry is empty
                                                  """
                                                  configs: list[MCPServerConfig] = []
                                              
                                                  for server in self.mcp_servers:
                                                      match server:
                                                          case str():
                                                              parts = server.split()
                                                              if not parts:
                                                                  msg = "Empty MCP server command"
                                                                  raise ValueError(msg)
                                              
                                                              configs.append(StdioMCPServerConfig(command=parts[0], args=parts[1:]))
                                                          case BaseMCPServerConfig():
                                                              configs.append(server)
                                              
                                                  return configs
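
For illustration (commands and field names are hypothetical), a plain string entry is split on whitespace into command and arguments, while an already-structured entry passes through unchanged:

# mcp_servers (illustrative YAML):
#   - "uvx my-mcp-server --port 8000"   # string entry -> StdioMCPServerConfig
#   - type: stdio                       # structured entry, kept as-is
#     command: python
#     args: ["-m", "my_server"]

from llmling_agent.models.manifest import AgentsManifest

manifest = AgentsManifest.from_file("agents.yml")  # assumed to contain the entries above
for server in manifest.get_mcp_servers():
    print(type(server).__name__, server)
# The string entry becomes roughly:
# StdioMCPServerConfig(command="uvx", args=["my-mcp-server", "--port", "8000"])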
                                              

                                              get_result_type

                                              get_result_type(agent_name: str) -> type[Any] | None
                                              

                                              Get the resolved result type for an agent.

                                              Returns None if no result type is configured.

                                              Source code in src/llmling_agent/models/manifest.py
                                              def get_result_type(self, agent_name: str) -> type[Any] | None:
                                                  """Get the resolved result type for an agent.
                                              
                                                  Returns None if no result type is configured.
                                                  """
                                                  agent_config = self.agents[agent_name]
                                                  if not agent_config.result_type:
                                                      return None
                                                  logger.debug("Building response model for %r", agent_config.result_type)
                                                  if isinstance(agent_config.result_type, str):
                                                      response_def = self.responses[agent_config.result_type]
                                                      return response_def.create_model()  # type: ignore
                                                  return agent_config.result_type.create_model()  # type: ignore
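
A small sketch of the resolution (the agent name is hypothetical): a string result_type is looked up in manifest.responses and turned into a model class, an inline definition builds its model directly, and agents without a result_type yield None.

from llmling_agent.models.manifest import AgentsManifest

manifest = AgentsManifest.from_file("agents.yml")
model_cls = manifest.get_result_type("analyzer")  # hypothetical agent name
if model_cls is None:
    print("agent returns plain text")
else:
    print("structured result model:", model_cls.__name__)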
                                              

                                              get_used_providers

                                              get_used_providers() -> set[str]
                                              

                                              Get all providers configured in this manifest.

                                              Source code in src/llmling_agent/models/manifest.py
                                              def get_used_providers(self) -> set[str]:
                                                  """Get all providers configured in this manifest."""
                                                  providers = set[str]()
                                              
                                                  for agent_config in self.agents.values():
                                                      match agent_config.provider:
                                                          case "pydantic_ai":
                                                              providers.add("pydantic_ai")
                                                          case "litellm":
                                                              providers.add("litellm")
                                                          case BaseProviderConfig():
                                                              providers.add(agent_config.provider.type)
                                                  return providers
                                              

                                              normalize_workers classmethod

                                              normalize_workers(data: dict[str, Any]) -> dict[str, Any]
                                              

                                              Convert string workers to appropriate WorkerConfig for all agents.

                                              Source code in src/llmling_agent/models/manifest.py
                                              @model_validator(mode="before")
                                              @classmethod
                                              def normalize_workers(cls, data: dict[str, Any]) -> dict[str, Any]:
                                                  """Convert string workers to appropriate WorkerConfig for all agents."""
                                                  teams = data.get("teams", {})
                                                  agents = data.get("agents", {})
                                              
                                                  # Process workers for all agents that have them
                                                  for agent_name, agent_config in agents.items():
                                                      if isinstance(agent_config, dict):
                                                          workers = agent_config.get("workers", [])
                                                      else:
                                                          workers = agent_config.workers
                                              
                                                      if workers:
                                                          normalized: list[BaseWorkerConfig] = []
                                              
                                                          for worker in workers:
                                                              match worker:
                                                                  case str() as name:
                                                                      # Determine type based on presence in teams/agents
                                                                      if name in teams:
                                                                          normalized.append(TeamWorkerConfig(name=name))
                                                                      elif name in agents:
                                                                          normalized.append(AgentWorkerConfig(name=name))
                                                                      else:
                                                                          # Default to agent if type can't be determined
                                                                          normalized.append(AgentWorkerConfig(name=name))
                                              
                                                                  case dict() as config:
                                                                      # If type is explicitly specified, use it
                                                                      if worker_type := config.get("type"):
                                                                          match worker_type:
                                                                              case "team":
                                                                                  normalized.append(TeamWorkerConfig(**config))
                                                                              case "agent":
                                                                                  normalized.append(AgentWorkerConfig(**config))
                                                                              case _:
                                                                                  msg = f"Invalid worker type: {worker_type}"
                                                                                  raise ValueError(msg)
                                                                      else:
                                                                          # Determine type based on worker name
                                                                          worker_name = config.get("name")
                                                                          if not worker_name:
                                                                              msg = "Worker config missing name"
                                                                              raise ValueError(msg)
                                              
                                                                          if worker_name in teams:
                                                                              normalized.append(TeamWorkerConfig(**config))
                                                                          else:
                                                                              normalized.append(AgentWorkerConfig(**config))
                                              
                                                                  case BaseWorkerConfig():  # Already normalized
                                                                      normalized.append(worker)
                                              
                                                                  case _:
                                                                      msg = f"Invalid worker configuration: {worker}"
                                                                      raise ValueError(msg)
                                              
                                                          if isinstance(agent_config, dict):
                                                              agent_config["workers"] = normalized
                                                          else:
                                                              # Need to create a new dict with updated workers
                                                              agent_dict = agent_config.model_dump()
                                                              agent_dict["workers"] = normalized
                                                              agents[agent_name] = agent_dict
                                              
                                                  return data
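
For illustration (all names are hypothetical), the validator turns the shorthand forms below into worker configs: a bare string becomes a TeamWorkerConfig when the name is defined under teams and an AgentWorkerConfig otherwise, while a dict entry keeps an explicit type if one is given:

# Illustrative manifest data before validation:
agents:
  planner:
    workers:
      - researcher               # bare string, defined under agents -> AgentWorkerConfig
      - reviewers                # bare string, defined under teams  -> TeamWorkerConfig
      - name: summarizer
        type: agent              # explicit type is used directly
  researcher:
    model: openai:gpt-4o-mini
  summarizer:
    model: openai:gpt-4o-mini
teams:
  reviewers: {}                  # team definition omitted in this sketch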
                                              

                                              resolve_inheritance classmethod

                                              resolve_inheritance(data: dict) -> dict
                                              

                                              Resolve agent inheritance chains.

                                              Source code in src/llmling_agent/models/manifest.py
                                              @model_validator(mode="before")
                                              @classmethod
                                              def resolve_inheritance(cls, data: dict) -> dict:
                                                  """Resolve agent inheritance chains."""
                                                  nodes = data.get("agents", {})
                                                  resolved: dict[str, dict] = {}
                                                  seen: set[str] = set()
                                              
                                                  def resolve_node(name: str) -> dict:
                                                      if name in resolved:
                                                          return resolved[name]
                                              
                                                      if name in seen:
                                                          msg = f"Circular inheritance detected: {name}"
                                                          raise ValueError(msg)
                                              
                                                      seen.add(name)
                                                      config = (
                                                          nodes[name].model_copy()
                                                          if hasattr(nodes[name], "model_copy")
                                                          else nodes[name].copy()
                                                      )
                                                      inherit = (
                                                          config.get("inherits") if isinstance(config, dict) else config.inherits
                                                      )
                                                      if inherit:
                                                          if inherit not in nodes:
                                                              msg = f"Parent agent {inherit} not found"
                                                              raise ValueError(msg)
                                              
                                                          # Get resolved parent config
                                                          parent = resolve_node(inherit)
                                                          # Merge parent with child (child overrides parent)
                                                          merged = parent.copy()
                                                          merged.update(config)
                                                          config = merged
                                              
                                                      seen.remove(name)
                                                      resolved[name] = config
                                                      return config
                                              
                                                  # Resolve all nodes
                                                  for name in nodes:
                                                      resolved[name] = resolve_node(name)
                                              
                                                  # Update nodes with resolved configs
                                                  data["agents"] = resolved
                                                  return data
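
A minimal inheritance sketch (agent names and fields are illustrative): the child starts from a copy of the resolved parent, and any key the child sets replaces the parent's value wholesale (top-level merge, no deep merge); circular inherits chains raise ValueError.

# Illustrative manifest data before validation:
agents:
  base_assistant:
    model: openai:gpt-4o-mini
    system_prompts:
      - You are a helpful assistant.
  reviewer:
    inherits: base_assistant              # pulls in model and system_prompts
    system_prompts:
      - You are a strict code reviewer.   # replaces the parent's prompts entirely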
                                              

                                              set_instrument_libraries

                                              set_instrument_libraries() -> Self
                                              

                                              Auto-set libraries to instrument based on used providers.

                                              Source code in src/llmling_agent/models/manifest.py
                                              @model_validator(mode="after")
                                              def set_instrument_libraries(self) -> Self:
                                                  """Auto-set libraries to instrument based on used providers."""
                                                  if (
                                                      not self.observability.enabled
                                                      or self.observability.instrument_libraries is not None
                                                  ):
                                                      return self
                                                  self.observability.instrument_libraries = list(self.get_used_providers())
                                                  return self
                                              

                                              AudioBase64Content

                                              Bases: AudioContent

                                              Audio from base64 data.

                                              Source code in src/llmling_agent/models/content.py
                                              class AudioBase64Content(AudioContent):
                                                  """Audio from base64 data."""
                                              
                                                  type: Literal["audio_base64"] = Field("audio_base64", init=False)
                                                  """Base64-encoded audio."""
                                              
                                                  data: str
                                                  """Audio data in base64 format."""
                                              
                                                  format: str | None = None  # mp3, wav, etc
                                                  """Audio format."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for audio models."""
                                                      data_url = f"data:audio/{self.format or 'mp3'};base64,{self.data}"
                                                      content = {"url": data_url, "format": self.format or "auto"}
                                                      return {"type": "audio", "audio": content}
                                              
                                                  @classmethod
                                                  def from_bytes(cls, data: bytes, audio_format: str = "mp3") -> Self:
                                                      """Create from raw bytes."""
                                                      return cls(data=base64.b64encode(data).decode(), format=audio_format)
                                              
                                                  @classmethod
                                                  def from_path(cls, path: StrPath) -> Self:
                                                      """Create from file path with auto format detection."""
                                                      import mimetypes
                                              
                                                      from upath import UPath
                                              
                                                      path_obj = UPath(path)
                                                      mime_type, _ = mimetypes.guess_type(str(path_obj))
                                                      fmt = (
                                                          mime_type.removeprefix("audio/")
                                                          if mime_type and mime_type.startswith("audio/")
                                                          else "mp3"
                                                      )
                                              
                                                      return cls(data=base64.b64encode(path_obj.read_bytes()).decode(), format=fmt)
                                              

                                              data instance-attribute

                                              data: str
                                              

                                              Audio data in base64 format.

                                              format class-attribute instance-attribute

                                              format: str | None = None
                                              

                                              Audio format.

                                              type class-attribute instance-attribute

                                              type: Literal['audio_base64'] = Field('audio_base64', init=False)
                                              

                                              Base64-encoded audio.

                                              from_bytes classmethod

                                              from_bytes(data: bytes, audio_format: str = 'mp3') -> Self
                                              

                                              Create from raw bytes.

                                              Source code in src/llmling_agent/models/content.py
                                              @classmethod
                                              def from_bytes(cls, data: bytes, audio_format: str = "mp3") -> Self:
                                                  """Create from raw bytes."""
                                                  return cls(data=base64.b64encode(data).decode(), format=audio_format)
                                              

                                              from_path classmethod

                                              from_path(path: StrPath) -> Self
                                              

                                              Create from file path with auto format detection.

                                              Source code in src/llmling_agent/models/content.py
                                              @classmethod
                                              def from_path(cls, path: StrPath) -> Self:
                                                  """Create from file path with auto format detection."""
                                                  import mimetypes
                                              
                                                  from upath import UPath
                                              
                                                  path_obj = UPath(path)
                                                  mime_type, _ = mimetypes.guess_type(str(path_obj))
                                                  fmt = (
                                                      mime_type.removeprefix("audio/")
                                                      if mime_type and mime_type.startswith("audio/")
                                                      else "mp3"
                                                  )
                                              
                                                  return cls(data=base64.b64encode(path_obj.read_bytes()).decode(), format=fmt)
                                              

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for audio models.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for audio models."""
                                                  data_url = f"data:audio/{self.format or 'mp3'};base64,{self.data}"
                                                  content = {"url": data_url, "format": self.format or "auto"}
                                                  return {"type": "audio", "audio": content}
                                              

                                              AudioURLContent

                                              Bases: AudioContent

                                              Audio from URL.

                                              Source code in src/llmling_agent/models/content.py
                                              class AudioURLContent(AudioContent):
                                                  """Audio from URL."""
                                              
                                                  type: Literal["audio_url"] = Field("audio_url", init=False)
                                                  """URL-based audio."""
                                              
                                                  url: str
                                                  """URL to the audio."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for audio models."""
                                                      content = {"url": self.url, "format": self.format or "auto"}
                                                      return {"type": "audio", "audio": content}
                                              

                                              type class-attribute instance-attribute

                                              type: Literal['audio_url'] = Field('audio_url', init=False)
                                              

                                              URL-based audio.

                                              url instance-attribute

                                              url: str
                                              

                                              URL to the audio.

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for audio models.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for audio models."""
                                                  content = {"url": self.url, "format": self.format or "auto"}
                                                  return {"type": "audio", "audio": content}
                                              

                                              BaseTeam

                                              Bases: MessageNode[TDeps, TResult]

                                              Base class for Team and TeamRun.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              267
                                              268
                                              269
                                              270
                                              271
                                              272
                                              273
                                              274
                                              275
                                              276
                                              277
                                              278
                                              279
                                              280
                                              281
                                              282
                                              283
                                              284
                                              285
                                              286
                                              287
                                              288
                                              289
                                              290
                                              291
                                              292
                                              293
                                              294
                                              295
                                              296
                                              297
                                              298
                                              299
                                              300
                                              301
                                              302
                                              303
                                              304
                                              305
                                              306
                                              307
                                              308
                                              309
                                              310
                                              311
                                              312
                                              313
                                              314
                                              315
                                              316
                                              317
                                              318
                                              319
                                              320
                                              321
                                              322
                                              323
                                              324
                                              325
                                              326
                                              327
                                              328
                                              329
                                              330
                                              331
                                              332
                                              333
                                              334
                                              335
                                              336
                                              337
                                              338
                                              339
                                              340
                                              341
                                              342
                                              343
                                              344
                                              345
                                              346
                                              347
                                              348
                                              349
                                              350
                                              351
                                              352
                                              353
                                              354
                                              355
                                              356
                                              357
                                              358
                                              359
                                              360
                                              361
                                              362
                                              363
                                              364
                                              365
                                              366
                                              367
                                              368
                                              369
                                              370
                                              371
                                              372
                                              373
                                              374
                                              375
                                              376
                                              377
                                              378
                                              379
                                              380
                                              381
                                              382
                                              383
                                              384
                                              385
                                              386
                                              387
                                              388
                                              389
                                              390
                                              391
                                              392
                                              393
                                              394
                                              395
                                              396
                                              397
                                              398
                                              399
                                              400
                                              401
                                              402
                                              403
                                              404
                                              405
                                              406
                                              407
                                              408
                                              409
                                              410
                                              411
                                              412
                                              413
                                              414
                                              415
                                              416
                                              417
                                              418
                                              419
                                              420
                                              421
                                              422
                                              423
                                              424
                                              425
                                              426
                                              427
                                              428
                                              429
                                              430
                                              431
                                              432
                                              433
                                              434
                                              435
                                              436
                                              437
                                              438
                                              439
                                              440
                                              441
                                              442
                                              443
                                              444
                                              445
                                              446
                                              447
                                              448
                                              449
                                              450
                                              451
                                              452
                                              453
                                              454
                                              455
                                              456
                                              457
                                              458
                                              459
                                              460
                                              461
                                              462
                                              463
                                              464
                                              465
                                              466
                                              467
                                              468
                                              469
                                              470
                                              471
                                              472
                                              473
                                              474
                                              475
                                              476
                                              477
                                              478
                                              479
                                              480
                                              481
                                              482
                                              483
                                              484
                                              485
                                              486
                                              487
                                              488
                                              489
                                              490
                                              491
                                              492
                                              493
                                              494
                                              495
                                              496
                                              497
                                              498
                                              499
                                              500
                                              501
                                              502
                                              503
                                              504
                                              505
                                              506
                                              507
                                              508
                                              509
                                              510
                                              511
                                              512
                                              513
                                              514
                                              515
                                              516
                                              517
                                              518
                                              519
                                              520
                                              521
                                              522
                                              523
                                              524
                                              525
                                              526
                                              527
                                              528
                                              529
                                              530
                                              531
                                              532
                                              533
                                              534
                                              535
                                              536
                                              537
                                              538
                                              539
                                              540
                                              541
                                              542
                                              543
                                              544
                                              545
                                              class BaseTeam[TDeps, TResult](MessageNode[TDeps, TResult]):
                                                  """Base class for Team and TeamRun."""
                                              
                                                  def __init__(
                                                      self,
                                                      agents: Sequence[MessageNode[TDeps, TResult]],
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      mcp_servers: list[str | MCPServerConfig] | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                  ):
                                                      """Common variables only for typing."""
                                                      from llmling_agent.delegation.teamrun import ExtendedTeamTalk
                                              
                                                      self._name = name or " & ".join([i.name for i in agents])
                                                      self.agents = EventedList[MessageNode]()
                                                      self.agents.events.inserted.connect(self._on_node_added)
                                                      self.agents.events.removed.connect(self._on_node_removed)
                                                      self.agents.events.changed.connect(self._on_node_changed)
                                                      super().__init__(
                                                          name=self._name,
                                                          context=self.context,
                                                          mcp_servers=mcp_servers,
                                                          description=description,
                                                      )
                                                      self.agents.extend(list(agents))
                                                      self._team_talk = ExtendedTeamTalk()
                                                      self.shared_prompt = shared_prompt
                                                      self._main_task: asyncio.Task[Any] | None = None
                                                      self._infinite = False
                                                      self.picker = picker
                                                      self.num_picks = num_picks
                                                      self.pick_prompt = pick_prompt
                                              
                                                  def to_tool(self, *, name: str | None = None, description: str | None = None) -> Tool:
                                                      """Create a tool from this agent.
                                              
                                                      Args:
                                                          name: Optional tool name override
                                                          description: Optional tool description override
                                                      """
                                                      tool_name = name or f"ask_{self.name}"
                                              
                                                      async def wrapped_tool(prompt: str) -> TResult:
                                                          result = await self.run(prompt)
                                                          return result.data
                                              
                                                      docstring = description or f"Get expert answer from node {self.name}"
                                                      if self.description:
                                                          docstring = f"{docstring}\n\n{self.description}"
                                              
                                                      wrapped_tool.__doc__ = docstring
                                                      wrapped_tool.__name__ = tool_name
                                              
                                                      return Tool.from_callable(
                                                          wrapped_tool,
                                                          name_override=tool_name,
                                                          description_override=docstring,
                                                      )
                                              
                                                  async def pick_agents(self, task: str) -> Sequence[MessageNode[Any, Any]]:
                                                      """Pick agents to run."""
                                                      if self.picker:
                                                          if self.num_picks == 1:
                                                              result = await self.picker.talk.pick(self, task, self.pick_prompt)
                                                              return [result.selection]
                                                          result = await self.picker.talk.pick_multiple(
                                                              self,
                                                              task,
                                                              min_picks=self.num_picks or 1,
                                                              max_picks=self.num_picks,
                                                              prompt=self.pick_prompt,
                                                          )
                                                          return result.selections
                                                      return list(self.agents)
                                              
                                                  def _on_node_changed(self, index: int, old: MessageNode, new: MessageNode):
                                                      """Handle node replacement in the agents list."""
                                                      self._on_node_removed(index, old)
                                                      self._on_node_added(index, new)
                                              
                                                  def _on_node_added(self, index: int, node: MessageNode[Any, Any]):
                                                      """Handler for adding nodes to the team."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                      if isinstance(node, Agent | StructuredAgent):
                                                          node.tools.add_provider(self.mcp)
                                                      # TODO: Right now connecting here is not desired since emission means db logging
                                                      # ideally db logging would not rely on the "public" agent signal.
                                              
                                                      # node.tool_used.connect(self.tool_used)
                                              
                                                  def _on_node_removed(self, index: int, node: MessageNode[Any, Any]):
                                                      """Handler for removing nodes from the team."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                      if isinstance(node, Agent | StructuredAgent):
                                                          node.tools.remove_provider(self.mcp)
                                                      # node.tool_used.disconnect(self.tool_used)
                                              
                                                  def __repr__(self) -> str:
                                                      """Create readable representation."""
                                                      members = ", ".join(agent.name for agent in self.agents)
                                                      name = f" ({self.name})" if self.name else ""
                                                      return f"{self.__class__.__name__}[{len(self.agents)}]{name}: {members}"
                                              
                                                  def __len__(self) -> int:
                                                      """Get number of team members."""
                                                      return len(self.agents)
                                              
                                                  def __iter__(self) -> Iterator[MessageNode[TDeps, TResult]]:
                                                      """Iterate over team members."""
                                                      return iter(self.agents)
                                              
                                                  def __getitem__(self, index_or_name: int | str) -> MessageNode[TDeps, TResult]:
                                                      """Get team member by index or name."""
                                                      if isinstance(index_or_name, str):
                                                          return next(agent for agent in self.agents if agent.name == index_or_name)
                                                      return self.agents[index_or_name]
                                              
                                                  def __or__(
                                                      self,
                                                      other: AnyAgent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                                  ) -> TeamRun[Any, Any]:
                                                      """Create a sequential pipeline."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                                      from llmling_agent.delegation.teamrun import TeamRun
                                              
                                                      # Handle conversion of callables first
                                                      if callable(other):
                                                          if has_return_type(other, str):
                                                              other = Agent.from_callback(other)
                                                          else:
                                                              other = StructuredAgent.from_callback(other)
                                                          other.context.pool = self.context.pool
                                              
                                                      # If we're already a TeamRun, extend it
                                                      if isinstance(self, TeamRun):
                                                          if self.validator:
                                                              # If we have a validator, create new TeamRun to preserve validation
                                                              return TeamRun([self, other])
                                                          self.agents.append(other)
                                                          return self
                                                      # Otherwise create new TeamRun
                                                      return TeamRun([self, other])
                                              
                                                  @overload
                                                  def __and__(self, other: Team[None]) -> Team[None]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: Team[TDeps]) -> Team[TDeps]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: Team[Any]) -> Team[Any]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: AnyAgent[TDeps, Any]) -> Team[TDeps]: ...
                                              
                                                  @overload
                                                  def __and__(self, other: AnyAgent[Any, Any]) -> Team[Any]: ...
                                              
                                                  def __and__(
                                                      self, other: Team[Any] | AnyAgent[Any, Any] | ProcessorCallback[Any]
                                                  ) -> Team[Any]:
                                                      """Combine teams, preserving type safety for same types."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                                      from llmling_agent.delegation.team import Team
                                              
                                                      if callable(other):
                                                          if has_return_type(other, str):
                                                              other = Agent.from_callback(other)
                                                          else:
                                                              other = StructuredAgent.from_callback(other)
                                                          other.context.pool = self.context.pool
                                              
                                                      match other:
                                                          case Team():
                                                              # Flatten when combining Teams
                                                              return Team([*self.agents, *other.agents])
                                                          case _:
                                                              # Everything else just becomes a member
                                                              return Team([*self.agents, other])
                                              
                                                  @property
                                                  def stats(self) -> AggregatedMessageStats:
                                                      """Get aggregated stats from all team members."""
                                                      return AggregatedMessageStats(stats=[agent.stats for agent in self.agents])
                                              
                                                  @property
                                                  def is_running(self) -> bool:
                                                      """Whether execution is currently running."""
                                                      return bool(self._main_task and not self._main_task.done())
                                              
                                                  def is_busy(self) -> bool:
                                                      """Check if team is processing any tasks."""
                                                      return bool(self._pending_tasks or self._main_task)
                                              
                                                  async def stop(self):
                                                      """Stop background execution if running."""
                                                      if self._main_task and not self._main_task.done():
                                                          self._main_task.cancel()
                                                          await self._main_task
                                                      self._main_task = None
                                                      await self.cleanup_tasks()
                                              
                                                  async def wait(self) -> ChatMessage[Any] | None:
                                                      """Wait for background execution to complete and return last message."""
                                                      if not self._main_task:
                                                          msg = "No execution running"
                                                          raise RuntimeError(msg)
                                                      if self._infinite:
                                                          msg = "Cannot wait on infinite execution"
                                                          raise RuntimeError(msg)
                                                      try:
                                                          return await self._main_task
                                                      finally:
                                                          await self.cleanup_tasks()
                                                          self._main_task = None
                                              
                                                  async def run_in_background(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      max_count: int | None = 1,  # 1 = single execution, None = indefinite
                                                      interval: float = 1.0,
                                                      **kwargs: Any,
                                                  ) -> ExtendedTeamTalk:
                                                      """Start execution in background.
                                              
                                                      Args:
                                                          prompts: Prompts to execute
                                                          max_count: Maximum number of executions (None = run indefinitely)
                                                          interval: Seconds between executions
                                                          **kwargs: Additional args for execute()
                                                      """
                                                      if self._main_task:
                                                          msg = "Execution already running"
                                                          raise RuntimeError(msg)
                                                      self._infinite = max_count is None
                                              
                                                      async def _continuous() -> ChatMessage[Any] | None:
                                                          count = 0
                                                          last_message = None
                                                          while max_count is None or count < max_count:
                                                              try:
                                                                  result = await self.execute(*prompts, **kwargs)
                                                                  last_message = result[-1].message if result else None
                                                                  count += 1
                                                                  if max_count is None or count < max_count:
                                                                      await asyncio.sleep(interval)
                                                              except asyncio.CancelledError:
                                                                  logger.debug("Background execution cancelled")
                                                                  break
                                                          return last_message
                                              
                                                      self._main_task = self.create_task(_continuous(), name="main_execution")
                                                      return self._team_talk
                                              
                                                  @property
                                                  def execution_stats(self) -> AggregatedTalkStats:
                                                      """Get current execution statistics."""
                                                      return self._team_talk.stats
                                              
                                                  @property
                                                  def talk(self) -> ExtendedTeamTalk:
                                                      """Get current connection."""
                                                      return self._team_talk
                                              
                                                  @property
                                                  def events(self) -> ListEvents:
                                                      """Get events for the team."""
                                                      return self.agents.events
                                              
                                                  async def cancel(self):
                                                      """Cancel execution and cleanup."""
                                                      if self._main_task:
                                                          self._main_task.cancel()
                                                      await self.cleanup_tasks()
                                              
                                                  def get_structure_diagram(self) -> str:
                                                      """Generate mermaid flowchart of node hierarchy."""
                                                      lines = ["flowchart TD"]
                                              
                                                      def add_node(node: MessageNode[Any, Any], parent: str | None = None):
                                                          """Recursively add node and its members to diagram."""
                                                          node_id = f"node_{id(node)}"
                                                          lines.append(f"    {node_id}[{node.name}]")
                                                          if parent:
                                                              lines.append(f"    {parent} --> {node_id}")
                                              
                                                          # If it's a team, recursively add its members
                                                          from llmling_agent.delegation.base_team import BaseTeam
                                              
                                                          if isinstance(node, BaseTeam):
                                                              for member in node.agents:
                                                                  add_node(member, node_id)
                                              
                                                      # Start with root nodes (team members)
                                                      for node in self.agents:
                                                          add_node(node)
                                              
                                                      return "\n".join(lines)
                                              
                                                  def iter_agents(self) -> Iterator[AnyAgent[Any, Any]]:
                                                      """Recursively iterate over all child agents."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                      for node in self.agents:
                                                          match node:
                                                              case BaseTeam():
                                                                  yield from node.iter_agents()
                                                              case Agent() | StructuredAgent():
                                                                  yield node
                                                              case _:
                                                                  msg = f"Invalid node type: {type(node)}"
                                                                  raise ValueError(msg)
                                              
                                                  @property
                                                  def context(self) -> TeamContext:
                                                      """Get shared pool from team members.
                                              
                                                      Raises:
                                                          ValueError: If team members belong to different pools
                                                      """
                                                      from llmling_agent.delegation.team import Team
                                              
                                                      pool_ids: set[int] = set()
                                                      shared_pool: AgentPool | None = None
                                                      team_config: TeamConfig | None = None
                                              
                                                      for agent in self.iter_agents():
                                                          if agent.context and agent.context.pool:
                                                              pool_id = id(agent.context.pool)
                                                              if pool_id not in pool_ids:
                                                                  pool_ids.add(pool_id)
                                                                  shared_pool = agent.context.pool
                                                                  if shared_pool.manifest.teams:
                                                                      team_config = shared_pool.manifest.teams.get(self.name)
                                                      if not team_config:
                                                          mode = "parallel" if isinstance(self, Team) else "sequential"
                                                          team_config = TeamConfig(name=self.name, mode=mode, members=[])
                                                      if not pool_ids:
                                                          logger.info("No pool found for team %s.", self.name)
                                                          return TeamContext(
                                                              node_name=self.name,
                                                              pool=shared_pool,
                                                              config=team_config,
                                                              definition=shared_pool.manifest if shared_pool else AgentsManifest(),
                                                          )
                                              
                                                      if len(pool_ids) > 1:
                                                          msg = f"Team members in {self.name} belong to different pools"
                                                          raise ValueError(msg)
                                                      return TeamContext(
                                                          node_name=self.name,
                                                          pool=shared_pool,
                                                          config=team_config,
                                                          definition=shared_pool.manifest if shared_pool else AgentsManifest(),
                                                      )
                                              
                                                  @context.setter
                                                  def context(self, value: NodeContext):
                                                      msg = "Cannot set context on BaseTeam"
                                                      raise RuntimeError(msg)
                                              
                                                  async def distribute(
                                                      self,
                                                      content: str,
                                                      *,
                                                      tools: list[str] | None = None,
                                                      resources: list[str] | None = None,
                                                      metadata: dict[str, Any] | None = None,
                                                  ):
                                                      """Distribute content and capabilities to all team members."""
                                                      for agent in self.iter_agents():
                                                          # Add context message
                                                          agent.conversation.add_context_message(
                                                              content, source="distribution", metadata=metadata
                                                          )
                                              
                                                          # Register tools if provided
                                                          if tools:
                                                              for tool in tools:
                                                                  agent.tools.register_tool(tool)
                                              
                                                          # Load resources if provided
                                                          if resources:
                                                              for resource in resources:
                                                                  await agent.conversation.load_context_source(resource)
                                              
                                                  @asynccontextmanager
                                                  async def temporary_state(
                                                      self,
                                                      *,
                                                      system_prompts: list[AnyPromptType] | None = None,
                                                      replace_prompts: bool = False,
                                                      tools: list[ToolType] | None = None,
                                                      replace_tools: bool = False,
                                                      history: list[AnyPromptType] | SessionQuery | None = None,
                                                      replace_history: bool = False,
                                                      pause_routing: bool = False,
                                                      model: ModelType | None = None,
                                                      provider: AgentProvider | None = None,
                                                  ) -> AsyncIterator[Self]:
                                                      """Temporarily modify state of all agents in the team.
                                              
                                                      All agents in the team will enter their temporary state simultaneously.
                                              
                                                      Args:
                                                          system_prompts: Temporary system prompts to use
                                                          replace_prompts: Whether to replace existing prompts
                                                          tools: Temporary tools to make available
                                                          replace_tools: Whether to replace existing tools
                                                          history: Conversation history (prompts or query)
                                                          replace_history: Whether to replace existing history
                                                          pause_routing: Whether to pause message routing
                                                          model: Temporary model override
                                                          provider: Temporary provider override
                                                      """
                                                      # Get all agents (flattened) before entering context
                                                      agents = list(self.iter_agents())
                                              
                                                      async with AsyncExitStack() as stack:
                                                          if pause_routing:
                                                              await stack.enter_async_context(self.connections.paused_routing())
                                                          # Enter temporary state for all agents
                                                          for agent in agents:
                                                              await stack.enter_async_context(
                                                                  agent.temporary_state(
                                                                      system_prompts=system_prompts,
                                                                      replace_prompts=replace_prompts,
                                                                      tools=tools,
                                                                      replace_tools=replace_tools,
                                                                      history=history,
                                                                      replace_history=replace_history,
                                                                      pause_routing=pause_routing,
                                                                      model=model,
                                                                      provider=provider,
                                                                  )
                                                              )
                                                          try:
                                                              yield self
                                                          finally:
                                                              # AsyncExitStack will handle cleanup of all states
                                                              pass
                                              
                                                  @abstractmethod
                                                  async def execute(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      **kwargs: Any,
                                                  ) -> TeamResponse: ...
                                              
                                                  def run_sync(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      store_history: bool = True,
                                                  ) -> ChatMessage[TResult]:
                                                      """Run agent synchronously (convenience wrapper).
                                              
                                                      Args:
                                                          prompt: User query or instruction
                                                          store_history: Whether the message exchange should be added to the
                                                                         context window
                                                      Returns:
                                                          Result containing response and run information
                                                      """
                                                      coro = self.run(*prompt, store_history=store_history)
                                                      return self.run_task_sync(coro)
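
A minimal usage sketch (hypothetical, not part of the library source): it assumes an existing Team or TeamRun `team` and an Agent `extra_agent` from the same pool, and only exercises members defined above (`&`, `|`, `to_tool`, `run_in_background`, `wait`).

    async def demo(team, extra_agent):
        # team: an existing Team/TeamRun; extra_agent: an Agent in the same pool
        bigger = team & extra_agent        # BaseTeam.__and__: add another member
        pipeline = team | extra_agent      # BaseTeam.__or__: sequential TeamRun
        print(repr(bigger), len(bigger))   # readable repr plus member count

        # Expose the whole team as a single tool for another agent
        review_tool = team.to_tool(name="ask_review_team")

        # Run three times in the background, five seconds apart, then wait
        talk = await team.run_in_background("Summarize the report", max_count=3, interval=5.0)
        last_message = await team.wait()
        return review_tool, talk, pipeline, last_message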
                                              

                                              context property writable

                                              context: TeamContext
                                              

                                              Get shared pool from team members.

Raises:

    ValueError: If team members belong to different pools

                                              events property

                                              events: ListEvents
                                              

                                              Get events for the team.

                                              execution_stats property

                                              execution_stats: AggregatedTalkStats
                                              

                                              Get current execution statistics.

                                              is_running property

                                              is_running: bool
                                              

                                              Whether execution is currently running.

                                              stats property

                                              Get aggregated stats from all team members.

                                              talk property

                                              Get current connection.

                                              __and__

                                              __and__(other: Team[None]) -> Team[None]
                                              
                                              __and__(other: Team[TDeps]) -> Team[TDeps]
                                              
                                              __and__(other: Team[Any]) -> Team[Any]
                                              
                                              __and__(other: AnyAgent[TDeps, Any]) -> Team[TDeps]
                                              
                                              __and__(other: AnyAgent[Any, Any]) -> Team[Any]
                                              
                                              __and__(other: Team[Any] | AnyAgent[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                                              

                                              Combine teams, preserving type safety for same types.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __and__(
                                                  self, other: Team[Any] | AnyAgent[Any, Any] | ProcessorCallback[Any]
                                              ) -> Team[Any]:
                                                  """Combine teams, preserving type safety for same types."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                                  from llmling_agent.delegation.team import Team
                                              
                                                  if callable(other):
                                                      if has_return_type(other, str):
                                                          other = Agent.from_callback(other)
                                                      else:
                                                          other = StructuredAgent.from_callback(other)
                                                      other.context.pool = self.context.pool
                                              
                                                  match other:
                                                      case Team():
                                                          # Flatten when combining Teams
                                                          return Team([*self.agents, *other.agents])
                                                      case _:
                                                          # Everything else just becomes a member
                                                          return Team([*self.agents, other])
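
A minimal usage sketch (the agents and second team here are assumptions, not objects defined in this document); combining with & keeps a flat member list rather than nesting teams:

    # `team` and `other_team` are existing Team instances, `writer` an agent in the same pool.
    combined = team & writer            # writer becomes an additional member
    merged = team & other_team          # members of both teams are flattened into one Team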
                                              

                                              __getitem__

                                              __getitem__(index_or_name: int | str) -> MessageNode[TDeps, TResult]
                                              

                                              Get team member by index or name.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __getitem__(self, index_or_name: int | str) -> MessageNode[TDeps, TResult]:
                                                  """Get team member by index or name."""
                                                  if isinstance(index_or_name, str):
                                                      return next(agent for agent in self.agents if agent.name == index_or_name)
                                                  return self.agents[index_or_name]
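
Members can be looked up either way; a small sketch (a member named "analyzer" is assumed):

    first = team[0]               # by position
    analyzer = team["analyzer"]   # by node name; raises StopIteration if no member matches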
                                              

                                              __init__

                                              __init__(
                                                  agents: Sequence[MessageNode[TDeps, TResult]],
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  mcp_servers: list[str | MCPServerConfig] | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              )
                                              

                                              Common variables only for typing.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __init__(
                                                  self,
                                                  agents: Sequence[MessageNode[TDeps, TResult]],
                                                  *,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                                  shared_prompt: str | None = None,
                                                  mcp_servers: list[str | MCPServerConfig] | None = None,
                                                  picker: AnyAgent[Any, Any] | None = None,
                                                  num_picks: int | None = None,
                                                  pick_prompt: str | None = None,
                                              ):
                                                  """Common variables only for typing."""
                                                  from llmling_agent.delegation.teamrun import ExtendedTeamTalk
                                              
                                                  self._name = name or " & ".join([i.name for i in agents])
                                                  self.agents = EventedList[MessageNode]()
                                                  self.agents.events.inserted.connect(self._on_node_added)
                                                  self.agents.events.removed.connect(self._on_node_removed)
                                                  self.agents.events.changed.connect(self._on_node_changed)
                                                  super().__init__(
                                                      name=self._name,
                                                      context=self.context,
                                                      mcp_servers=mcp_servers,
                                                      description=description,
                                                  )
                                                  self.agents.extend(list(agents))
                                                  self._team_talk = ExtendedTeamTalk()
                                                  self.shared_prompt = shared_prompt
                                                  self._main_task: asyncio.Task[Any] | None = None
                                                  self._infinite = False
                                                  self.picker = picker
                                                  self.num_picks = num_picks
                                                  self.pick_prompt = pick_prompt
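
BaseTeam itself is abstract; a construction sketch using the concrete Team subclass (the member agents and the picker agent are assumed to exist already):

    from llmling_agent.delegation.team import Team

    team = Team(
        [analyzer, reviewer, writer],      # hypothetical MessageNode members
        name="editorial",
        shared_prompt="Work on the same draft.",
        picker=router_agent,               # optional agent that selects members per task
        num_picks=2,
    )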
                                              

                                              __iter__

                                              __iter__() -> Iterator[MessageNode[TDeps, TResult]]
                                              

                                              Iterate over team members.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __iter__(self) -> Iterator[MessageNode[TDeps, TResult]]:
                                                  """Iterate over team members."""
                                                  return iter(self.agents)
                                              

                                              __len__

                                              __len__() -> int
                                              

                                              Get number of team members.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __len__(self) -> int:
                                                  """Get number of team members."""
                                                  return len(self.agents)
                                              

                                              __or__

                                              __or__(
                                                  other: AnyAgent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                              ) -> TeamRun[Any, Any]
                                              

                                              Create a sequential pipeline.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __or__(
                                                  self,
                                                  other: AnyAgent[Any, Any] | ProcessorCallback[Any] | BaseTeam[Any, Any],
                                              ) -> TeamRun[Any, Any]:
                                                  """Create a sequential pipeline."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                                  from llmling_agent.delegation.teamrun import TeamRun
                                              
                                                  # Handle conversion of callables first
                                                  if callable(other):
                                                      if has_return_type(other, str):
                                                          other = Agent.from_callback(other)
                                                      else:
                                                          other = StructuredAgent.from_callback(other)
                                                      other.context.pool = self.context.pool
                                              
                                                  # If we're already a TeamRun, extend it
                                                  if isinstance(self, TeamRun):
                                                      if self.validator:
                                                          # If we have a validator, create new TeamRun to preserve validation
                                                          return TeamRun([self, other])
                                                      self.agents.append(other)
                                                      return self
                                                  # Otherwise create new TeamRun
                                                  return TeamRun([self, other])
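
A pipeline sketch (hypothetical nodes; plain callables are wrapped via Agent.from_callback as shown in the source above):

    def summarize(text: str) -> str:
        """Plain callable; becomes an agent node in the pipeline."""
        return text[:200]

    pipeline = team | reviewer | summarize   # TeamRun that executes the steps sequentially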
                                              

                                              __repr__

                                              __repr__() -> str
                                              

                                              Create readable representation.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def __repr__(self) -> str:
                                                  """Create readable representation."""
                                                  members = ", ".join(agent.name for agent in self.agents)
                                                  name = f" ({self.name})" if self.name else ""
                                                  return f"{self.__class__.__name__}[{len(self.agents)}]{name}: {members}"
                                              

                                              _on_node_added

                                              _on_node_added(index: int, node: MessageNode[Any, Any])
                                              

                                              Handler for adding nodes to the team.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def _on_node_added(self, index: int, node: MessageNode[Any, Any]):
                                                  """Handler for adding nodes to the team."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                  if isinstance(node, Agent | StructuredAgent):
                                                      node.tools.add_provider(self.mcp)
                                              

                                              _on_node_changed

                                              _on_node_changed(index: int, old: MessageNode, new: MessageNode)
                                              

                                              Handle node replacement in the agents list.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def _on_node_changed(self, index: int, old: MessageNode, new: MessageNode):
                                                  """Handle node replacement in the agents list."""
                                                  self._on_node_removed(index, old)
                                                  self._on_node_added(index, new)
                                              

                                              _on_node_removed

                                              _on_node_removed(index: int, node: MessageNode[Any, Any])
                                              

                                              Handler for removing nodes from the team.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def _on_node_removed(self, index: int, node: MessageNode[Any, Any]):
                                                  """Handler for removing nodes from the team."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                  if isinstance(node, Agent | StructuredAgent):
                                                      node.tools.remove_provider(self.mcp)
                                              

                                              cancel async

                                              cancel()
                                              

                                              Cancel execution and cleanup.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def cancel(self):
                                                  """Cancel execution and cleanup."""
                                                  if self._main_task:
                                                      self._main_task.cancel()
                                                  await self.cleanup_tasks()
                                              

                                              distribute async

                                              distribute(
                                                  content: str,
                                                  *,
                                                  tools: list[str] | None = None,
                                                  resources: list[str] | None = None,
                                                  metadata: dict[str, Any] | None = None,
                                              )
                                              

                                              Distribute content and capabilities to all team members.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def distribute(
                                                  self,
                                                  content: str,
                                                  *,
                                                  tools: list[str] | None = None,
                                                  resources: list[str] | None = None,
                                                  metadata: dict[str, Any] | None = None,
                                              ):
                                                  """Distribute content and capabilities to all team members."""
                                                  for agent in self.iter_agents():
                                                      # Add context message
                                                      agent.conversation.add_context_message(
                                                          content, source="distribution", metadata=metadata
                                                      )
                                              
                                                      # Register tools if provided
                                                      if tools:
                                                          for tool in tools:
                                                              agent.tools.register_tool(tool)
                                              
                                                      # Load resources if provided
                                                      if resources:
                                                          for resource in resources:
                                                              await agent.conversation.load_context_source(resource)
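
A sketch of seeding every member with the same context (the tool import path and resource are assumptions):

    await team.distribute(
        "Project brief: build a CLI for log analysis.",
        tools=["my_pkg.tools.search_logs"],   # hypothetical tool import path
        resources=["file:///tmp/spec.md"],    # hypothetical resource identifier
        metadata={"source": "kickoff"},
    )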
                                              

                                              get_structure_diagram

                                              get_structure_diagram() -> str
                                              

                                              Generate mermaid flowchart of node hierarchy.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def get_structure_diagram(self) -> str:
                                                  """Generate mermaid flowchart of node hierarchy."""
                                                  lines = ["flowchart TD"]
                                              
                                                  def add_node(node: MessageNode[Any, Any], parent: str | None = None):
                                                      """Recursively add node and its members to diagram."""
                                                      node_id = f"node_{id(node)}"
                                                      lines.append(f"    {node_id}[{node.name}]")
                                                      if parent:
                                                          lines.append(f"    {parent} --> {node_id}")
                                              
                                                      # If it's a team, recursively add its members
                                                      from llmling_agent.delegation.base_team import BaseTeam
                                              
                                                      if isinstance(node, BaseTeam):
                                                          for member in node.agents:
                                                              add_node(member, node_id)
                                              
                                                  # Start with root nodes (team members)
                                                  for node in self.agents:
                                                      add_node(node)
                                              
                                                  return "\n".join(lines)
                                              

                                              is_busy

                                              is_busy() -> bool
                                              

                                              Check if team is processing any tasks.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def is_busy(self) -> bool:
                                                  """Check if team is processing any tasks."""
                                                  return bool(self._pending_tasks or self._main_task)
                                              

                                              iter_agents

                                              iter_agents() -> Iterator[AnyAgent[Any, Any]]
                                              

                                              Recursively iterate over all child agents.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def iter_agents(self) -> Iterator[AnyAgent[Any, Any]]:
                                                  """Recursively iterate over all child agents."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                              
                                                  for node in self.agents:
                                                      match node:
                                                          case BaseTeam():
                                                              yield from node.iter_agents()
                                                          case Agent() | StructuredAgent():
                                                              yield node
                                                          case _:
                                                              msg = f"Invalid node type: {type(node)}"
                                                              raise ValueError(msg)
                                              

                                              pick_agents async

                                              pick_agents(task: str) -> Sequence[MessageNode[Any, Any]]
                                              

                                              Pick agents to run.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def pick_agents(self, task: str) -> Sequence[MessageNode[Any, Any]]:
                                                  """Pick agents to run."""
                                                  if self.picker:
                                                      if self.num_picks == 1:
                                                          result = await self.picker.talk.pick(self, task, self.pick_prompt)
                                                          return [result.selection]
                                                      result = await self.picker.talk.pick_multiple(
                                                          self,
                                                          task,
                                                          min_picks=self.num_picks or 1,
                                                          max_picks=self.num_picks,
                                                          prompt=self.pick_prompt,
                                                      )
                                                      return result.selections
                                                  return list(self.agents)
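
When a picker is configured, only the selected members take part; a sketch (team and task are assumptions):

    selected = await team.pick_agents("Summarize the latest incident report")
    print([node.name for node in selected])   # e.g. ['analyzer', 'writer']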
                                              

                                              run_in_background async

                                              run_in_background(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None,
                                                  max_count: int | None = 1,
                                                  interval: float = 1.0,
                                                  **kwargs: Any,
                                              ) -> ExtendedTeamTalk
                                              

                                              Start execution in background.

Parameters:

    prompts (AnyPromptType | Image | PathLike[str] | None): Prompts to execute. Default: ()
    max_count (int | None): Maximum number of executions (None = run indefinitely). Default: 1
    interval (float): Seconds between executions. Default: 1.0
    **kwargs (Any): Additional args for execute(). Default: {}
                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def run_in_background(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  max_count: int | None = 1,  # 1 = single execution, None = indefinite
                                                  interval: float = 1.0,
                                                  **kwargs: Any,
                                              ) -> ExtendedTeamTalk:
                                                  """Start execution in background.
                                              
                                                  Args:
                                                      prompts: Prompts to execute
                                                      max_count: Maximum number of executions (None = run indefinitely)
                                                      interval: Seconds between executions
                                                      **kwargs: Additional args for execute()
                                                  """
                                                  if self._main_task:
                                                      msg = "Execution already running"
                                                      raise RuntimeError(msg)
                                                  self._infinite = max_count is None
                                              
                                                  async def _continuous() -> ChatMessage[Any] | None:
                                                      count = 0
                                                      last_message = None
                                                      while max_count is None or count < max_count:
                                                          try:
                                                              result = await self.execute(*prompts, **kwargs)
                                                              last_message = result[-1].message if result else None
                                                              count += 1
                                                              if max_count is None or count < max_count:
                                                                  await asyncio.sleep(interval)
                                                          except asyncio.CancelledError:
                                                              logger.debug("Background execution cancelled")
                                                              break
                                                      return last_message
                                              
                                                  self._main_task = self.create_task(_continuous(), name="main_execution")
                                                  return self._team_talk
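
A background-execution sketch (prompt text is illustrative); stop() cancels the loop and cleans up:

    talk = await team.run_in_background(
        "Poll the queue and report anomalies",
        max_count=None,   # run until stopped
        interval=30.0,
    )
    # ... do other work, optionally inspect `talk` ...
    await team.stop()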
                                              

                                              run_sync

                                              run_sync(
                                                  *prompt: AnyPromptType | Image | PathLike[str], store_history: bool = True
                                              ) -> ChatMessage[TResult]
                                              

                                              Run agent synchronously (convenience wrapper).

Parameters:

    prompt (AnyPromptType | Image | PathLike[str]): User query or instruction. Default: ()
    store_history (bool): Whether the message exchange should be added to the context window. Default: True

Returns:

    Result containing response and run information

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def run_sync(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  store_history: bool = True,
                                              ) -> ChatMessage[TResult]:
                                                  """Run agent synchronously (convenience wrapper).
                                              
                                                  Args:
                                                      prompt: User query or instruction
                                                      store_history: Whether the message exchange should be added to the
                                                                     context window
                                                  Returns:
                                                      Result containing response and run information
                                                  """
                                                  coro = self.run(*prompt, store_history=store_history)
                                                  return self.run_task_sync(coro)
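
A synchronous call sketch for scripts without a running event loop (prompt text is illustrative):

    message = team.run_sync("Review this changelog for breaking changes")
    print(message)   # ChatMessage with the final response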
                                              

                                              stop async

                                              stop()
                                              

                                              Stop background execution if running.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def stop(self):
                                                  """Stop background execution if running."""
                                                  if self._main_task and not self._main_task.done():
                                                      self._main_task.cancel()
                                                      await self._main_task
                                                  self._main_task = None
                                                  await self.cleanup_tasks()
                                              

                                              temporary_state async

                                              temporary_state(
                                                  *,
                                                  system_prompts: list[AnyPromptType] | None = None,
                                                  replace_prompts: bool = False,
                                                  tools: list[ToolType] | None = None,
                                                  replace_tools: bool = False,
                                                  history: list[AnyPromptType] | SessionQuery | None = None,
                                                  replace_history: bool = False,
                                                  pause_routing: bool = False,
                                                  model: ModelType | None = None,
                                                  provider: AgentProvider | None = None,
                                              ) -> AsyncIterator[Self]
                                              

                                              Temporarily modify state of all agents in the team.

                                              All agents in the team will enter their temporary state simultaneously.

Parameters:

    system_prompts (list[AnyPromptType] | None): Temporary system prompts to use. Default: None
    replace_prompts (bool): Whether to replace existing prompts. Default: False
    tools (list[ToolType] | None): Temporary tools to make available. Default: None
    replace_tools (bool): Whether to replace existing tools. Default: False
    history (list[AnyPromptType] | SessionQuery | None): Conversation history (prompts or query). Default: None
    replace_history (bool): Whether to replace existing history. Default: False
    pause_routing (bool): Whether to pause message routing. Default: False
    model (ModelType | None): Temporary model override. Default: None
    provider (AgentProvider | None): Temporary provider override. Default: None
                                              Source code in src/llmling_agent/delegation/base_team.py
                                              @asynccontextmanager
                                              async def temporary_state(
                                                  self,
                                                  *,
                                                  system_prompts: list[AnyPromptType] | None = None,
                                                  replace_prompts: bool = False,
                                                  tools: list[ToolType] | None = None,
                                                  replace_tools: bool = False,
                                                  history: list[AnyPromptType] | SessionQuery | None = None,
                                                  replace_history: bool = False,
                                                  pause_routing: bool = False,
                                                  model: ModelType | None = None,
                                                  provider: AgentProvider | None = None,
                                              ) -> AsyncIterator[Self]:
                                                  """Temporarily modify state of all agents in the team.
                                              
                                                  All agents in the team will enter their temporary state simultaneously.
                                              
                                                  Args:
                                                      system_prompts: Temporary system prompts to use
                                                      replace_prompts: Whether to replace existing prompts
                                                      tools: Temporary tools to make available
                                                      replace_tools: Whether to replace existing tools
                                                      history: Conversation history (prompts or query)
                                                      replace_history: Whether to replace existing history
                                                      pause_routing: Whether to pause message routing
                                                      model: Temporary model override
                                                      provider: Temporary provider override
                                                  """
                                                  # Get all agents (flattened) before entering context
                                                  agents = list(self.iter_agents())
                                              
                                                  async with AsyncExitStack() as stack:
                                                      if pause_routing:
                                                          await stack.enter_async_context(self.connections.paused_routing())
                                                      # Enter temporary state for all agents
                                                      for agent in agents:
                                                          await stack.enter_async_context(
                                                              agent.temporary_state(
                                                                  system_prompts=system_prompts,
                                                                  replace_prompts=replace_prompts,
                                                                  tools=tools,
                                                                  replace_tools=replace_tools,
                                                                  history=history,
                                                                  replace_history=replace_history,
                                                                  pause_routing=pause_routing,
                                                                  model=model,
                                                                  provider=provider,
                                                              )
                                                          )
                                                      try:
                                                          yield self
                                                      finally:
                                                          # AsyncExitStack will handle cleanup of all states
                                                          pass
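
                                               Usage sketch (review_team and lint_file are illustrative placeholders; the run() call is assumed from the node API): all agents temporarily share the overrides, and the previous state is restored when the context exits.

                                               async def lint_file(path: str) -> str:
                                                   """Illustrative placeholder tool."""
                                                   return f"no issues found in {path}"

                                               async with review_team.temporary_state(
                                                   system_prompts=["Review code changes strictly."],
                                                   tools=[lint_file],
                                                   pause_routing=True,
                                               ) as team:
                                                   result = await team.run("Please review src/app.py")
                                               # On exit the AsyncExitStack restores each agent's prompts, tools and routing.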
                                              

                                              to_tool

                                              to_tool(*, name: str | None = None, description: str | None = None) -> Tool
                                              

                                              Create a tool from this agent.

                                               Parameters:

                                               Name         Type        Description                          Default
                                               name         str | None  Optional tool name override          None
                                               description  str | None  Optional tool description override   None
                                              Source code in src/llmling_agent/delegation/base_team.py
                                              def to_tool(self, *, name: str | None = None, description: str | None = None) -> Tool:
                                                  """Create a tool from this agent.
                                              
                                                  Args:
                                                      name: Optional tool name override
                                                      description: Optional tool description override
                                                  """
                                                  tool_name = name or f"ask_{self.name}"
                                              
                                                  async def wrapped_tool(prompt: str) -> TResult:
                                                      result = await self.run(prompt)
                                                      return result.data
                                              
                                                  docstring = description or f"Get expert answer from node {self.name}"
                                                  if self.description:
                                                      docstring = f"{docstring}\n\n{self.description}"
                                              
                                                  wrapped_tool.__doc__ = docstring
                                                  wrapped_tool.__name__ = tool_name
                                              
                                                  return Tool.from_callable(
                                                      wrapped_tool,
                                                      name_override=tool_name,
                                                      description_override=docstring,
                                                  )
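
                                               A hedged usage sketch: wrap one node as a tool and hand it to another node, here via temporary_state (research_team and coordinator are illustrative names; passing Tool instances through the tools parameter is an assumption).

                                               ask_research = research_team.to_tool(
                                                   name="ask_research_team",
                                                   description="Delegate research questions to the research team.",
                                               )

                                               # Hypothetical coordinator node; accepting Tool instances as ToolType is assumed.
                                               async with coordinator.temporary_state(tools=[ask_research]):
                                                   answer = await coordinator.run("Summarize the current findings.")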
                                              

                                              wait async

                                              wait() -> ChatMessage[Any] | None
                                              

                                              Wait for background execution to complete and return last message.

                                              Source code in src/llmling_agent/delegation/base_team.py
                                              async def wait(self) -> ChatMessage[Any] | None:
                                                  """Wait for background execution to complete and return last message."""
                                                  if not self._main_task:
                                                      msg = "No execution running"
                                                      raise RuntimeError(msg)
                                                  if self._infinite:
                                                      msg = "Cannot wait on infinite execution"
                                                      raise RuntimeError(msg)
                                                  try:
                                                      return await self._main_task
                                                  finally:
                                                      await self.cleanup_tasks()
                                                      self._main_task = None
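
                                               A brief sketch, assuming a background execution was started earlier through the team's background-run API (not shown in this excerpt):

                                               last = await team.wait()  # raises RuntimeError if no run is active or it is infinite
                                               if last is not None:
                                                   print(last.format(style="simple"))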
                                              

                                              ChatMessage dataclass

                                              Common message format for all UI types.

                                               Generically typed with: ChatMessage[Type of Content]. The type can be either str or a BaseModel subclass.

                                              Source code in src/llmling_agent/messaging/messages.py
                                              @dataclass
                                              class ChatMessage[TContent]:
                                                  """Common message format for all UI types.
                                              
                                                  Generically typed with: ChatMessage[Type of Content]
                                                  The type can either be str or a BaseModel subclass.
                                                  """
                                              
                                                  content: TContent
                                                  """Message content, typed as TContent (either str or BaseModel)."""
                                              
                                                  role: MessageRole
                                                  """Role of the message sender (user/assistant/system)."""
                                              
                                                  model: str | None = None
                                                  """Name of the model that generated this message."""
                                              
                                                  metadata: SimpleJsonType = field(default_factory=dict)
                                                  """Additional metadata about the message."""
                                              
                                                  timestamp: datetime = field(default_factory=get_now)
                                                  """When this message was created."""
                                              
                                                  cost_info: TokenCost | None = None
                                                  """Token usage and costs for this specific message if available."""
                                              
                                                  message_id: str = field(default_factory=lambda: str(uuid4()))
                                                  """Unique identifier for this message."""
                                              
                                                  conversation_id: str | None = None
                                                  """ID of the conversation this message belongs to."""
                                              
                                                  response_time: float | None = None
                                                  """Time it took the LLM to respond."""
                                              
                                                  tool_calls: list[ToolCallInfo] = field(default_factory=list)
                                                  """List of tool calls made during message generation."""
                                              
                                                  associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                                                  """List of messages which were generated during the the creation of this messsage."""
                                              
                                                  name: str | None = None
                                                  """Display name for the message sender in UI."""
                                              
                                                  forwarded_from: list[str] = field(default_factory=list)
                                                  """List of agent names (the chain) that forwarded this message to the sender."""
                                              
                                                  provider_extra: dict[str, Any] = field(default_factory=dict)
                                                  """Provider specific metadata / extra information."""
                                              
                                                  @classmethod
                                                  def from_openai_format(
                                                      cls,
                                                      message: dict[str, Any],
                                                      conversation_id: str | None = None,
                                                  ) -> ChatMessage[str]:
                                                      """Create ChatMessage from OpenAI message format.
                                              
                                                      Args:
                                                          message: OpenAI format message dict with role, content etc.
                                                          conversation_id: Optional conversation ID to assign
                                              
                                                      Returns:
                                                          Converted ChatMessage
                                              
                                                      Example:
                                                          >>> msg = ChatMessage.from_openai_format({
                                                          ...     "role": "user",
                                                          ...     "content": "Hello!",
                                                          ...     "name": "john"
                                                          ... })
                                                      """
                                                      # Handle multimodal content lists (OpenAI vision format)
                                                      if isinstance(message["content"], list):
                                                          # Combine text parts
                                                          content = "\n".join(
                                                              part["text"] for part in message["content"] if part["type"] == "text"
                                                          )
                                                      else:
                                                          content = message["content"] or ""
                                              
                                                      return ChatMessage[str](
                                                          content=str(content),
                                                          role=message["role"],
                                                          name=message.get("name"),
                                                          conversation_id=conversation_id,
                                                          tool_calls=[
                                                              ToolCallInfo(
                                                                  agent_name=message.get("name") or "",
                                                                  tool_call_id=tc["id"],
                                                                  tool_name=tc["function"]["name"],
                                                                  args=tc["function"]["arguments"],
                                                                  result=None,  # OpenAI format doesn't include results
                                                              )
                                                              for tc in message.get("tool_calls", [])
                                                          ]
                                                          if message.get("tool_calls")
                                                          else [],
                                                          metadata={"function_call": message["function_call"]}
                                                          if "function_call" in message
                                                          else {},
                                                      )
                                              
                                                  def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                                                      """Create new message showing it was forwarded from another message.
                                              
                                                      Args:
                                                          previous_message: The message that led to this one's creation
                                              
                                                      Returns:
                                                          New message with updated chain showing the path through previous message
                                                      """
                                                      from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                                                      return replace(self, forwarded_from=from_)
                                              
                                                  def to_text_message(self) -> ChatMessage[str]:
                                                      """Convert this message to a text-only version."""
                                                      return dataclasses.replace(self, content=str(self.content))  # type: ignore
                                              
                                                  def _get_content_str(self) -> str:
                                                      """Get string representation of content."""
                                                      match self.content:
                                                          case str():
                                                              return self.content
                                                          case BaseModel():
                                                              return self.content.model_dump_json(indent=2)
                                                          case _:
                                                              msg = f"Unexpected content type: {type(self.content)}"
                                                              raise ValueError(msg)
                                              
                                                  @property
                                                  def data(self) -> TContent:
                                                      """Get content as typed data. Provides compat to AgentRunResult."""
                                                      return self.content
                                              
                                                  def format(
                                                      self,
                                                      style: FormatStyle = "simple",
                                                      *,
                                                      template: str | None = None,
                                                      variables: dict[str, Any] | None = None,
                                                      show_metadata: bool = False,
                                                      show_costs: bool = False,
                                                  ) -> str:
                                                      """Format message with configurable style.
                                              
                                                      Args:
                                                          style: Predefined style or "custom" for custom template
                                                          template: Custom Jinja template (required if style="custom")
                                                          variables: Additional variables for template rendering
                                                          show_metadata: Whether to include metadata
                                                          show_costs: Whether to include cost information
                                              
                                                      Raises:
                                                          ValueError: If style is "custom" but no template provided
                                                                  or if style is invalid
                                                      """
                                                      from jinjarope import Environment
                                                      import yamling
                                              
                                                      env = Environment(trim_blocks=True, lstrip_blocks=True)
                                                      env.filters["to_yaml"] = yamling.dump_yaml
                                              
                                                      match style:
                                                          case "custom":
                                                              if not template:
                                                                  msg = "Custom style requires a template"
                                                                  raise ValueError(msg)
                                                              template_str = template
                                                          case _ if style in MESSAGE_TEMPLATES:
                                                              template_str = MESSAGE_TEMPLATES[style]
                                                          case _:
                                                              msg = f"Invalid style: {style}"
                                                              raise ValueError(msg)
                                              
                                                      template_obj = env.from_string(template_str)
                                                      vars_ = {**asdict(self), "show_metadata": show_metadata, "show_costs": show_costs}
                                                      if variables:
                                                          vars_.update(variables)
                                              
                                                      return template_obj.render(**vars_)
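
                                               A minimal construction sketch (importing from llmling_agent.messaging.messages, matching the source path above):

                                               from llmling_agent.messaging.messages import ChatMessage

                                               msg = ChatMessage[str](content="Hello!", role="user", name="alice")
                                               print(msg.data)                    # typed content, here the plain string
                                               print(msg.format(style="simple"))  # rendered through the built-in "simple" template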
                                              

                                              associated_messages class-attribute instance-attribute

                                              associated_messages: list[ChatMessage[Any]] = field(default_factory=list)
                                              

                                               List of messages which were generated during the creation of this message.

                                              content instance-attribute

                                              content: TContent
                                              

                                              Message content, typed as TContent (either str or BaseModel).

                                              conversation_id class-attribute instance-attribute

                                              conversation_id: str | None = None
                                              

                                              ID of the conversation this message belongs to.

                                              cost_info class-attribute instance-attribute

                                              cost_info: TokenCost | None = None
                                              

                                              Token usage and costs for this specific message if available.

                                              data property

                                              data: TContent
                                              

                                              Get content as typed data. Provides compat to AgentRunResult.

                                              forwarded_from class-attribute instance-attribute

                                              forwarded_from: list[str] = field(default_factory=list)
                                              

                                              List of agent names (the chain) that forwarded this message to the sender.

                                              message_id class-attribute instance-attribute

                                              message_id: str = field(default_factory=lambda: str(uuid4()))
                                              

                                              Unique identifier for this message.

                                              metadata class-attribute instance-attribute

                                              metadata: SimpleJsonType = field(default_factory=dict)
                                              

                                              Additional metadata about the message.

                                              model class-attribute instance-attribute

                                              model: str | None = None
                                              

                                              Name of the model that generated this message.

                                              name class-attribute instance-attribute

                                              name: str | None = None
                                              

                                              Display name for the message sender in UI.

                                              provider_extra class-attribute instance-attribute

                                              provider_extra: dict[str, Any] = field(default_factory=dict)
                                              

                                              Provider specific metadata / extra information.

                                              response_time class-attribute instance-attribute

                                              response_time: float | None = None
                                              

                                              Time it took the LLM to respond.

                                              role instance-attribute

                                              role: MessageRole
                                              

                                              Role of the message sender (user/assistant/system).

                                              timestamp class-attribute instance-attribute

                                              timestamp: datetime = field(default_factory=get_now)
                                              

                                              When this message was created.

                                              tool_calls class-attribute instance-attribute

                                              tool_calls: list[ToolCallInfo] = field(default_factory=list)
                                              

                                              List of tool calls made during message generation.

                                              _get_content_str

                                              _get_content_str() -> str
                                              

                                              Get string representation of content.

                                              Source code in src/llmling_agent/messaging/messages.py
                                              def _get_content_str(self) -> str:
                                                  """Get string representation of content."""
                                                  match self.content:
                                                      case str():
                                                          return self.content
                                                      case BaseModel():
                                                          return self.content.model_dump_json(indent=2)
                                                      case _:
                                                          msg = f"Unexpected content type: {type(self.content)}"
                                                          raise ValueError(msg)
                                              

                                              format

                                              format(
                                                  style: FormatStyle = "simple",
                                                  *,
                                                  template: str | None = None,
                                                  variables: dict[str, Any] | None = None,
                                                  show_metadata: bool = False,
                                                  show_costs: bool = False,
                                              ) -> str
                                              

                                              Format message with configurable style.

                                               Parameters:

                                               Name           Type                   Description                                         Default
                                               style          FormatStyle            Predefined style or "custom" for custom template    'simple'
                                               template       str | None             Custom Jinja template (required if style="custom")  None
                                               variables      dict[str, Any] | None  Additional variables for template rendering         None
                                               show_metadata  bool                   Whether to include metadata                         False
                                               show_costs     bool                   Whether to include cost information                 False

                                               Raises:

                                               Type        Description
                                               ValueError  If style is "custom" but no template was provided, or if the style is invalid

                                              Source code in src/llmling_agent/messaging/messages.py
                                              def format(
                                                  self,
                                                  style: FormatStyle = "simple",
                                                  *,
                                                  template: str | None = None,
                                                  variables: dict[str, Any] | None = None,
                                                  show_metadata: bool = False,
                                                  show_costs: bool = False,
                                              ) -> str:
                                                  """Format message with configurable style.
                                              
                                                  Args:
                                                      style: Predefined style or "custom" for custom template
                                                      template: Custom Jinja template (required if style="custom")
                                                      variables: Additional variables for template rendering
                                                      show_metadata: Whether to include metadata
                                                      show_costs: Whether to include cost information
                                              
                                                  Raises:
                                                      ValueError: If style is "custom" but no template provided
                                                              or if style is invalid
                                                  """
                                                  from jinjarope import Environment
                                                  import yamling
                                              
                                                  env = Environment(trim_blocks=True, lstrip_blocks=True)
                                                  env.filters["to_yaml"] = yamling.dump_yaml
                                              
                                                  match style:
                                                      case "custom":
                                                          if not template:
                                                              msg = "Custom style requires a template"
                                                              raise ValueError(msg)
                                                          template_str = template
                                                      case _ if style in MESSAGE_TEMPLATES:
                                                          template_str = MESSAGE_TEMPLATES[style]
                                                      case _:
                                                          msg = f"Invalid style: {style}"
                                                          raise ValueError(msg)
                                              
                                                  template_obj = env.from_string(template_str)
                                                  vars_ = {**asdict(self), "show_metadata": show_metadata, "show_costs": show_costs}
                                                  if variables:
                                                      vars_.update(variables)
                                              
                                                  return template_obj.render(**vars_)
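
                                               A sketch of the "custom" style: the template sees the dataclass fields (via asdict) plus show_metadata and show_costs, so names such as name, role and content are available as variables.

                                               msg = ChatMessage[str](content="Hi there", role="assistant", name="bot")
                                               line = msg.format(
                                                   style="custom",
                                                   template="{{ name }} ({{ role }}): {{ content }}",
                                               )
                                               # line == "bot (assistant): Hi there"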
                                              

                                              forwarded

                                              forwarded(previous_message: ChatMessage[Any]) -> Self
                                              

                                              Create new message showing it was forwarded from another message.

                                               Parameters:

                                               Name              Type              Description                                  Default
                                               previous_message  ChatMessage[Any]  The message that led to this one's creation  required

                                               Returns:

                                               Type  Description
                                               Self  New message with updated chain showing the path through previous message

                                              Source code in src/llmling_agent/messaging/messages.py
                                              def forwarded(self, previous_message: ChatMessage[Any]) -> Self:
                                                  """Create new message showing it was forwarded from another message.
                                              
                                                  Args:
                                                      previous_message: The message that led to this one's creation
                                              
                                                  Returns:
                                                      New message with updated chain showing the path through previous message
                                                  """
                                                  from_ = [*previous_message.forwarded_from, previous_message.name or "unknown"]
                                                  return replace(self, forwarded_from=from_)
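
                                               A short sketch of how the forwarding chain grows:

                                               question = ChatMessage[str](content="What changed?", role="user", name="alice")
                                               reply = ChatMessage[str](content="The config format.", role="assistant", name="bob")

                                               chained = reply.forwarded(question)
                                               print(chained.forwarded_from)  # ["alice"]: question's chain plus its sender name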
                                              

                                              from_openai_format classmethod

                                              from_openai_format(
                                                  message: dict[str, Any], conversation_id: str | None = None
                                              ) -> ChatMessage[str]
                                              

                                              Create ChatMessage from OpenAI message format.

                                               Parameters:

                                               Name             Type            Description                                          Default
                                               message          dict[str, Any]  OpenAI format message dict with role, content etc.   required
                                               conversation_id  str | None      Optional conversation ID to assign                   None

                                               Returns:

                                               Type              Description
                                               ChatMessage[str]  Converted ChatMessage

                                               Example

                                               >>> msg = ChatMessage.from_openai_format({
                                               ...     "role": "user",
                                               ...     "content": "Hello!",
                                               ...     "name": "john"
                                               ... })

                                              Source code in src/llmling_agent/messaging/messages.py
                                              @classmethod
                                              def from_openai_format(
                                                  cls,
                                                  message: dict[str, Any],
                                                  conversation_id: str | None = None,
                                              ) -> ChatMessage[str]:
                                                  """Create ChatMessage from OpenAI message format.
                                              
                                                  Args:
                                                      message: OpenAI format message dict with role, content etc.
                                                      conversation_id: Optional conversation ID to assign
                                              
                                                  Returns:
                                                      Converted ChatMessage
                                              
                                                  Example:
                                                      >>> msg = ChatMessage.from_openai_format({
                                                      ...     "role": "user",
                                                      ...     "content": "Hello!",
                                                      ...     "name": "john"
                                                      ... })
                                                  """
                                                  # Handle multimodal content lists (OpenAI vision format)
                                                  if isinstance(message["content"], list):
                                                      # Combine text parts
                                                      content = "\n".join(
                                                          part["text"] for part in message["content"] if part["type"] == "text"
                                                      )
                                                  else:
                                                      content = message["content"] or ""
                                              
                                                  return ChatMessage[str](
                                                      content=str(content),
                                                      role=message["role"],
                                                      name=message.get("name"),
                                                      conversation_id=conversation_id,
                                                      tool_calls=[
                                                          ToolCallInfo(
                                                              agent_name=message.get("name") or "",
                                                              tool_call_id=tc["id"],
                                                              tool_name=tc["function"]["name"],
                                                              args=tc["function"]["arguments"],
                                                              result=None,  # OpenAI format doesn't include results
                                                          )
                                                          for tc in message.get("tool_calls", [])
                                                      ]
                                                      if message.get("tool_calls")
                                                      else [],
                                                      metadata={"function_call": message["function_call"]}
                                                      if "function_call" in message
                                                      else {},
                                                  )
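
                                               A minimal usage sketch for the multimodal branch (the URL and prompt text are illustrative); text parts of an OpenAI vision-style content list are joined, while non-text parts are dropped:

                                               from llmling_agent.messaging.messages import ChatMessage

                                               openai_msg = {
                                                   "role": "user",
                                                   "content": [
                                                       {"type": "text", "text": "What is shown in this image?"},
                                                       {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                                                   ],
                                               }
                                               msg = ChatMessage.from_openai_format(openai_msg, conversation_id="conv-1")
                                               print(msg.role, msg.content)  # user What is shown in this image?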
                                              

                                              to_text_message

                                              to_text_message() -> ChatMessage[str]
                                              

                                              Convert this message to a text-only version.

                                              Source code in src/llmling_agent/messaging/messages.py
                                              def to_text_message(self) -> ChatMessage[str]:
                                                  """Convert this message to a text-only version."""
                                                  return dataclasses.replace(self, content=str(self.content))  # type: ignore
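
                                               A small sketch (assuming content and role are the only required fields):

                                               from llmling_agent.messaging.messages import ChatMessage

                                               msg = ChatMessage(content={"answer": 42}, role="assistant")
                                               text_msg = msg.to_text_message()
                                               print(type(text_msg.content), text_msg.content)  # <class 'str'> {'answer': 42}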
                                              

                                              ImageBase64Content

                                              Bases: BaseImageContent

                                              Image from base64 data.

                                              Source code in src/llmling_agent/models/content.py
                                              class ImageBase64Content(BaseImageContent):
                                                  """Image from base64 data."""
                                              
                                                  type: Literal["image_base64"] = Field("image_base64", init=False)
                                                  """Base64-encoded image."""
                                              
                                                  data: str
                                                  """Base64-encoded image data."""
                                              
                                                  mime_type: str = "image/jpeg"
                                                  """MIME type of the image."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for vision models."""
                                                      data_url = f"data:{self.mime_type};base64,{self.data}"
                                                      content = {"url": data_url, "detail": self.detail or "auto"}
                                                      return {"type": "image_url", "image_url": content}
                                              
                                                  @classmethod
                                                  def from_bytes(
                                                      cls,
                                                      data: bytes,
                                                      *,
                                                      detail: DetailLevel | None = None,
                                                      description: str | None = None,
                                                  ) -> ImageBase64Content:
                                                      """Create image content from raw bytes.
                                              
                                                      Args:
                                                          data: Raw image bytes
                                                          detail: Optional detail level for processing
                                                          description: Optional description of the image
                                                      """
                                                      content = base64.b64encode(data).decode()
                                                      return cls(data=content, detail=detail, description=description)
                                              
                                                  @classmethod
                                                  def from_pil_image(cls, image: PIL.Image.Image) -> ImageBase64Content:
                                                      """Create content from PIL Image."""
                                                      with io.BytesIO() as buffer:
                                                          image.save(buffer, format="PNG")
                                                          return cls(data=base64.b64encode(buffer.getvalue()).decode())
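
                                               A short usage sketch (the file path is illustrative; detail defaults to "auto" in the emitted payload):

                                               from pathlib import Path

                                               from llmling_agent.models.content import ImageBase64Content

                                               raw = Path("photo.jpg").read_bytes()  # hypothetical local file
                                               image = ImageBase64Content.from_bytes(raw, description="A sample photo")
                                               payload = image.to_openai_format()
                                               # {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,...", "detail": "auto"}}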
                                              

                                              data instance-attribute

                                              data: str
                                              

                                              Base64-encoded image data.

                                              mime_type class-attribute instance-attribute

                                              mime_type: str = 'image/jpeg'
                                              

                                              MIME type of the image.

                                              type class-attribute instance-attribute

                                              type: Literal['image_base64'] = Field('image_base64', init=False)
                                              

                                              Base64-encoded image.

                                              from_bytes classmethod

                                              from_bytes(
                                                  data: bytes, *, detail: DetailLevel | None = None, description: str | None = None
                                              ) -> ImageBase64Content
                                              

                                              Create image content from raw bytes.

                                               Parameters:

                                               data (bytes): Raw image bytes. Required.
                                               detail (DetailLevel | None): Optional detail level for processing. Default: None
                                               description (str | None): Optional description of the image. Default: None
                                              Source code in src/llmling_agent/models/content.py
                                              @classmethod
                                              def from_bytes(
                                                  cls,
                                                  data: bytes,
                                                  *,
                                                  detail: DetailLevel | None = None,
                                                  description: str | None = None,
                                              ) -> ImageBase64Content:
                                                  """Create image content from raw bytes.
                                              
                                                  Args:
                                                      data: Raw image bytes
                                                      detail: Optional detail level for processing
                                                      description: Optional description of the image
                                                  """
                                                  content = base64.b64encode(data).decode()
                                                  return cls(data=content, detail=detail, description=description)
                                              

                                              from_pil_image classmethod

                                              from_pil_image(image: Image) -> ImageBase64Content
                                              

                                              Create content from PIL Image.

                                              Source code in src/llmling_agent/models/content.py
                                              @classmethod
                                              def from_pil_image(cls, image: PIL.Image.Image) -> ImageBase64Content:
                                                  """Create content from PIL Image."""
                                                  with io.BytesIO() as buffer:
                                                      image.save(buffer, format="PNG")
                                                      return cls(data=base64.b64encode(buffer.getvalue()).decode())
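
                                               For instance (a sketch; requires Pillow). Note that the data is PNG-encoded while mime_type keeps its "image/jpeg" default:

                                               import PIL.Image

                                               from llmling_agent.models.content import ImageBase64Content

                                               img = PIL.Image.new("RGB", (64, 64), color="red")  # any PIL image works
                                               content = ImageBase64Content.from_pil_image(img)
                                               print(content.mime_type)  # image/jpeg (default), even though the bytes are PNG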
                                              

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for vision models.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for vision models."""
                                                  data_url = f"data:{self.mime_type};base64,{self.data}"
                                                  content = {"url": data_url, "detail": self.detail or "auto"}
                                                  return {"type": "image_url", "image_url": content}
                                              

                                              ImageURLContent

                                              Bases: BaseImageContent

                                              Image from URL.

                                              Source code in src/llmling_agent/models/content.py
                                              class ImageURLContent(BaseImageContent):
                                                  """Image from URL."""
                                              
                                                  type: Literal["image_url"] = Field("image_url", init=False)
                                                  """URL-based image."""
                                              
                                                  url: str
                                                  """URL to the image."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for vision models."""
                                                      content = {"url": self.url, "detail": self.detail or "auto"}
                                                      return {"type": "image_url", "image_url": content}
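
                                               A usage sketch (URL is illustrative; description and detail come from the shared image base class):

                                               from llmling_agent.models.content import ImageURLContent

                                               image = ImageURLContent(url="https://example.com/diagram.png", description="Architecture diagram")
                                               print(image.to_openai_format())
                                               # {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png", "detail": "auto"}}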
                                              

                                              type class-attribute instance-attribute

                                              type: Literal['image_url'] = Field('image_url', init=False)
                                              

                                              URL-based image.

                                              url instance-attribute

                                              url: str
                                              

                                              URL to the image.

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for vision models.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for vision models."""
                                                  content = {"url": self.url, "detail": self.detail or "auto"}
                                                  return {"type": "image_url", "image_url": content}
                                              

                                              JSONCode

                                              Bases: BaseCode

                                              JSON with syntax validation.

                                              Source code in src/llmling_agent/common_types.py
                                              class JSONCode(BaseCode):
                                                  """JSON with syntax validation."""
                                              
                                                  @field_validator("code")
                                                  @classmethod
                                                  def validate_syntax(cls, code: str) -> str:
                                                      import yamling
                                              
                                                      try:
                                                          yamling.load(code, mode="json")
                                                      except yamling.ParsingError as e:
                                                          msg = f"Invalid JSON syntax: {e}"
                                                          raise ValueError(msg) from e
                                                      else:
                                                          return code
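
                                               A validation sketch (assuming code is the only required field; malformed JSON is rejected at model construction):

                                               from llmling_agent.common_types import JSONCode

                                               snippet = JSONCode(code='{"name": "demo", "value": 1}')  # passes validation

                                               try:
                                                   JSONCode(code='{"name": }')  # malformed JSON
                                               except ValueError as exc:  # pydantic wraps the "Invalid JSON syntax" error
                                                   print(exc)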
                                              

                                              MessageNode

                                              Bases: MessageEmitter[TDeps, TResult]

                                              Base class for all message processing nodes.

                                              Source code in src/llmling_agent/messaging/messagenode.py
                                              class MessageNode[TDeps, TResult](MessageEmitter[TDeps, TResult]):
                                                  """Base class for all message processing nodes."""
                                              
                                                  tool_used = Signal(ToolCallInfo)
                                                  """Signal emitted when node uses a tool."""
                                              
                                                  async def pre_run(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                                  ) -> tuple[ChatMessage[Any], list[Content | str]]:
                                                      """Hook to prepare a MessgeNode run call.
                                              
                                                      Args:
                                                          *prompt: The prompt(s) to prepare.
                                              
                                                      Returns:
                                                          A tuple of:
                                                              - Either incoming message, or a constructed incoming message based
                                                                on the prompt(s).
                                                              - A list of prompts to be sent to the model.
                                                      """
                                                      if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                          user_msg = prompt[0]
                                                          prompts = await convert_prompts([user_msg.content])
                                                          # Update received message's chain to show it came through its source
                                                          user_msg = user_msg.forwarded(prompt[0])
                                                          # clear cost info to avoid double-counting
                                                          user_msg = replace(user_msg, role="user", cost_info=None)
                                                          final_prompt = "\n\n".join(str(p) for p in prompts)
                                                      else:
                                                          prompts = await convert_prompts(prompt)
                                                          final_prompt = "\n\n".join(str(p) for p in prompts)
                                                          # use format_prompts?
                                                          user_msg = ChatMessage[str](
                                                              content=final_prompt,
                                                              role="user",
                                                              conversation_id=str(uuid4()),
                                                          )
                                                      self.message_received.emit(user_msg)
                                                      self.context.current_prompt = final_prompt
                                                      return user_msg, prompts
                                              
                                                  # async def post_run(
                                                  #     self,
                                                  #     message: ChatMessage[TResult],
                                                  #     previous_message: ChatMessage[Any] | None,
                                                  #     wait_for_connections: bool | None = None,
                                                  # ) -> ChatMessage[Any]:
                                                  #     # For chain processing, update the response's chain
                                                  #     if previous_message:
                                                  #         message = message.forwarded(previous_message)
                                                  #         conversation_id = previous_message.conversation_id
                                                  #     else:
                                                  #         conversation_id = str(uuid4())
                                                  #     # Set conversation_id on response message
                                                  #     message = replace(message, conversation_id=conversation_id)
                                                  #     self.message_sent.emit(message)
                                                  #     await self.connections.route_message(message, wait=wait_for_connections)
                                                  #     return message
                                              
                                                  async def run(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                                      wait_for_connections: bool | None = None,
                                                      store_history: bool = True,
                                                      **kwargs: Any,
                                                  ) -> ChatMessage[TResult]:
                                                      """Execute node with prompts and handle message routing.
                                              
                                                      Args:
                                                          prompt: Input prompts
                                                          wait_for_connections: Whether to wait for forwarded messages
                                                          store_history: Whether to store in conversation history
                                                          **kwargs: Additional arguments for _run
                                                      """
                                                      from llmling_agent import Agent, StructuredAgent
                                              
                                                      user_msg, prompts = await self.pre_run(*prompt)
                                                      message = await self._run(
                                                          *prompts,
                                                          store_history=store_history,
                                                          conversation_id=user_msg.conversation_id,
                                                          **kwargs,
                                                      )
                                              
                                                      # For chain processing, update the response's chain
                                                      if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                          message = message.forwarded(prompt[0])
                                              
                                                      if store_history and isinstance(self, Agent | StructuredAgent):
                                                          self.conversation.add_chat_messages([user_msg, message])
                                                      self.message_sent.emit(message)
                                                      await self.connections.route_message(message, wait=wait_for_connections)
                                                      return message
                                              
                                                  @abstractmethod
                                                  def run_iter(
                                                      self,
                                                      *prompts: Any,
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[ChatMessage[Any]]:
                                                      """Yield messages during execution."""
                                              

                                              tool_used class-attribute instance-attribute

                                              tool_used = Signal(ToolCallInfo)
                                              

                                              Signal emitted when node uses a tool.

                                              pre_run async

                                              pre_run(
                                                  *prompt: AnyPromptType | Image | PathLike[str] | ChatMessage,
                                              ) -> tuple[ChatMessage[Any], list[Content | str]]
                                              

                                               Hook to prepare a MessageNode run call.

                                               Parameters:

                                               *prompt (AnyPromptType | Image | PathLike[str] | ChatMessage): The prompt(s) to prepare. Default: ()

                                               Returns:

                                               tuple[ChatMessage[Any], list[Content | str]]: A tuple of the incoming message (either the one passed in, or one constructed from the prompt(s)) and the list of prompts to be sent to the model.

                                              Source code in src/llmling_agent/messaging/messagenode.py
                                              async def pre_run(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                              ) -> tuple[ChatMessage[Any], list[Content | str]]:
                                                  """Hook to prepare a MessgeNode run call.
                                              
                                                  Args:
                                                      *prompt: The prompt(s) to prepare.
                                              
                                                  Returns:
                                                      A tuple of:
                                                          - Either incoming message, or a constructed incoming message based
                                                            on the prompt(s).
                                                          - A list of prompts to be sent to the model.
                                                  """
                                                  if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                      user_msg = prompt[0]
                                                      prompts = await convert_prompts([user_msg.content])
                                                      # Update received message's chain to show it came through its source
                                                      user_msg = user_msg.forwarded(prompt[0])
                                                      # clear cost info to avoid double-counting
                                                      user_msg = replace(user_msg, role="user", cost_info=None)
                                                      final_prompt = "\n\n".join(str(p) for p in prompts)
                                                  else:
                                                      prompts = await convert_prompts(prompt)
                                                      final_prompt = "\n\n".join(str(p) for p in prompts)
                                                      # use format_prompts?
                                                      user_msg = ChatMessage[str](
                                                          content=final_prompt,
                                                          role="user",
                                                          conversation_id=str(uuid4()),
                                                      )
                                                  self.message_received.emit(user_msg)
                                                  self.context.current_prompt = final_prompt
                                                  return user_msg, prompts
                                              

                                              run async

                                              run(
                                                  *prompt: AnyPromptType | Image | PathLike[str] | ChatMessage,
                                                  wait_for_connections: bool | None = None,
                                                  store_history: bool = True,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult]
                                              

                                              Execute node with prompts and handle message routing.

                                               Parameters:

                                               prompt (AnyPromptType | Image | PathLike[str] | ChatMessage): Input prompts. Default: ()
                                               wait_for_connections (bool | None): Whether to wait for forwarded messages. Default: None
                                               store_history (bool): Whether to store in conversation history. Default: True
                                               **kwargs (Any): Additional arguments for _run. Default: {}
                                              Source code in src/llmling_agent/messaging/messagenode.py
                                              async def run(
                                                  self,
                                                  *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage,
                                                  wait_for_connections: bool | None = None,
                                                  store_history: bool = True,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult]:
                                                  """Execute node with prompts and handle message routing.
                                              
                                                  Args:
                                                      prompt: Input prompts
                                                      wait_for_connections: Whether to wait for forwarded messages
                                                      store_history: Whether to store in conversation history
                                                      **kwargs: Additional arguments for _run
                                                  """
                                                  from llmling_agent import Agent, StructuredAgent
                                              
                                                  user_msg, prompts = await self.pre_run(*prompt)
                                                  message = await self._run(
                                                      *prompts,
                                                      store_history=store_history,
                                                      conversation_id=user_msg.conversation_id,
                                                      **kwargs,
                                                  )
                                              
                                                  # For chain processing, update the response's chain
                                                  if len(prompt) == 1 and isinstance(prompt[0], ChatMessage):
                                                      message = message.forwarded(prompt[0])
                                              
                                                  if store_history and isinstance(self, Agent | StructuredAgent):
                                                      self.conversation.add_chat_messages([user_msg, message])
                                                  self.message_sent.emit(message)
                                                  await self.connections.route_message(message, wait=wait_for_connections)
                                                  return message
                                              

                                              run_iter abstractmethod

                                              run_iter(*prompts: Any, **kwargs: Any) -> AsyncIterator[ChatMessage[Any]]
                                              

                                              Yield messages during execution.

                                              Source code in src/llmling_agent/messaging/messagenode.py
                                              @abstractmethod
                                              def run_iter(
                                                  self,
                                                  *prompts: Any,
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[ChatMessage[Any]]:
                                                  """Yield messages during execution."""
                                              

                                              PDFBase64Content

                                              Bases: BasePDFContent

                                              PDF from base64 data.

                                              Source code in src/llmling_agent/models/content.py
                                              class PDFBase64Content(BasePDFContent):
                                                  """PDF from base64 data."""
                                              
                                                  type: Literal["pdf_base64"] = Field("pdf_base64", init=False)
                                                  """Base64-data based PDF."""
                                              
                                                  data: str
                                                  """Base64-encoded PDF data."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for PDF handling."""
                                                      data_url = f"data:application/pdf;base64,{self.data}"
                                                      content = {"url": data_url, "detail": self.detail or "auto"}
                                                      return {"type": "file", "file": content}
                                              
                                                  @classmethod
                                                  def from_bytes(
                                                      cls,
                                                      data: bytes,
                                                      *,
                                                      detail: DetailLevel | None = None,
                                                      description: str | None = None,
                                                  ) -> Self:
                                                      """Create PDF content from raw bytes."""
                                                      content = base64.b64encode(data).decode()
                                                      return cls(data=content, detail=detail, description=description)
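
                                               A short sketch (the file path is illustrative):

                                               from pathlib import Path

                                               from llmling_agent.models.content import PDFBase64Content

                                               pdf_bytes = Path("report.pdf").read_bytes()  # hypothetical local file
                                               pdf = PDFBase64Content.from_bytes(pdf_bytes, description="Quarterly report")
                                               print(pdf.to_openai_format()["type"])  # file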
                                              

                                              data instance-attribute

                                              data: str
                                              

                                              Base64-encoded PDF data.

                                              type class-attribute instance-attribute

                                              type: Literal['pdf_base64'] = Field('pdf_base64', init=False)
                                              

                                              Base64-data based PDF.

                                              from_bytes classmethod

                                              from_bytes(
                                                  data: bytes, *, detail: DetailLevel | None = None, description: str | None = None
                                              ) -> Self
                                              

                                              Create PDF content from raw bytes.

                                              Source code in src/llmling_agent/models/content.py
                                              @classmethod
                                              def from_bytes(
                                                  cls,
                                                  data: bytes,
                                                  *,
                                                  detail: DetailLevel | None = None,
                                                  description: str | None = None,
                                              ) -> Self:
                                                  """Create PDF content from raw bytes."""
                                                  content = base64.b64encode(data).decode()
                                                  return cls(data=content, detail=detail, description=description)
                                              

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for PDF handling.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for PDF handling."""
                                                  data_url = f"data:application/pdf;base64,{self.data}"
                                                  content = {"url": data_url, "detail": self.detail or "auto"}
                                                  return {"type": "file", "file": content}
                                              

                                              PDFURLContent

                                              Bases: BasePDFContent

                                              PDF from URL.

                                              Source code in src/llmling_agent/models/content.py
                                              class PDFURLContent(BasePDFContent):
                                                  """PDF from URL."""
                                              
                                                  type: Literal["pdf_url"] = Field("pdf_url", init=False)
                                                  """URL-based PDF."""
                                              
                                                  url: str
                                                  """URL to the PDF document."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for PDF handling."""
                                                      content = {"url": self.url, "detail": self.detail or "auto"}
                                                      return {"type": "file", "file": content}
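Example (an illustrative sketch; the URL is a placeholder):

```python
from llmling_agent.models.content import PDFURLContent

# URL is a placeholder; detail falls back to "auto" in to_openai_format().
content = PDFURLContent(url="https://example.com/whitepaper.pdf")
payload = content.to_openai_format()
# -> {"type": "file", "file": {"url": "https://example.com/whitepaper.pdf", "detail": "auto"}}
```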
                                              

                                              type class-attribute instance-attribute

                                              type: Literal['pdf_url'] = Field('pdf_url', init=False)
                                              

                                              URL-based PDF.

                                              url instance-attribute

                                              url: str
                                              

                                              URL to the PDF document.

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for PDF handling.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for PDF handling."""
                                                  content = {"url": self.url, "detail": self.detail or "auto"}
                                                  return {"type": "file", "file": content}
                                              

                                              PythonCode

                                              Bases: BaseCode

                                              Python with syntax validation.

                                              Source code in src/llmling_agent/common_types.py
                                              142
                                              143
                                              144
                                              145
                                              146
                                              147
                                              148
                                              149
                                              150
                                              151
                                              152
                                              153
                                              154
                                              class PythonCode(BaseCode):
                                                  """Python with syntax validation."""
                                              
                                                  @field_validator("code")
                                                  @classmethod
                                                  def validate_syntax(cls, code: str) -> str:
                                                      try:
                                                          ast.parse(code)
                                                      except SyntaxError as e:
                                                          msg = f"Invalid Python syntax: {e}"
                                                          raise ValueError(msg) from e
                                                      else:
                                                          return code
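Example of the validation behavior (a brief sketch; it assumes `code` is the string field declared on `BaseCode`, as referenced by the validator above):

```python
from pydantic import ValidationError

from llmling_agent.common_types import PythonCode

# Valid Python parses cleanly.
snippet = PythonCode(code="def add(a: int, b: int) -> int:\n    return a + b")

# Invalid syntax is rejected by the field validator.
try:
    PythonCode(code="def broken(:")
except ValidationError as exc:
    print(exc)  # includes "Invalid Python syntax: ..."
```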
                                              

                                              StructuredAgent

                                              Bases: MessageNode[TDeps, TResult]

                                              Wrapper for Agent that enforces a specific result type.

This wrapper ensures the agent always returns results of the specified type. The type can be provided as:

- A Python type for validation
- A response definition name from the manifest
- A complete response definition instance
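A minimal usage sketch. The model string and the `model=` keyword on `Agent`, as well as the result model, are illustrative assumptions, not part of this reference:

```python
from pydantic import BaseModel

from llmling_agent.agent.agent import Agent
from llmling_agent.agent.structured import StructuredAgent


class Summary(BaseModel):
    title: str
    bullet_points: list[str]


async def main() -> None:
    # Agent construction shown with an assumed model= keyword; adjust to your setup.
    base = Agent[None](model="openai:gpt-4o-mini", name="summarizer")
    structured = StructuredAgent(base, result_type=Summary)

    async with structured:
        message = await structured.run("Summarize the release notes.")
        print(message.content.title)  # content is validated as Summary
```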

                                              Source code in src/llmling_agent/agent/structured.py
                                              class StructuredAgent[TDeps, TResult](MessageNode[TDeps, TResult]):
                                                  """Wrapper for Agent that enforces a specific result type.
                                              
                                                  This wrapper ensures the agent always returns results of the specified type.
                                                  The type can be provided as:
                                                  - A Python type for validation
                                                  - A response definition name from the manifest
                                                  - A complete response definition instance
                                                  """
                                              
                                                  def __init__(
                                                      self,
                                                      agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                                                      result_type: type[TResult] | str | ResponseDefinition,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ):
                                                      """Initialize structured agent wrapper.
                                              
                                                      Args:
                                                          agent: Base agent to wrap
                                                          result_type: Expected result type:
                                                              - BaseModel / dataclasses
                                                              - Name of response definition in manifest
                                                              - Complete response definition instance
                                                          tool_name: Optional override for tool name
                                                          tool_description: Optional override for tool description
                                              
                                                      Raises:
                                                          ValueError: If named response type not found in manifest
                                                      """
                                                      from llmling_agent.agent.agent import Agent
                                              
                                                      logger.debug("StructuredAgent.run result_type = %s", result_type)
                                                      match agent:
                                                          case StructuredAgent():
                                                              self._agent: Agent[TDeps] = agent._agent
                                                          case Callable():
                                                              self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                                                          case Agent():
                                                              self._agent = agent
                                                          case _:
                                                              msg = "Invalid agent type"
                                                              raise ValueError(msg)
                                              
                                                      super().__init__(name=self._agent.name)
                                              
                                                      self._result_type = to_type(result_type)
                                                      agent.set_result_type(result_type)
                                              
                                                      match result_type:
                                                          case type() | str():
                                                              # For types and named definitions, use overrides if provided
                                                              self._agent.set_result_type(
                                                                  result_type,
                                                                  tool_name=tool_name,
                                                                  tool_description=tool_description,
                                                              )
                                                          case BaseResponseDefinition():
                                                              # For response definitions, use as-is
                                                              # (overrides don't apply to complete definitions)
                                                              self._agent.set_result_type(result_type)
                                              
                                                  async def __aenter__(self) -> Self:
                                                      """Enter async context and set up MCP servers.
                                              
                                                      Called when agent enters its async context. Sets up any configured
                                                      MCP servers and their tools.
                                                      """
                                                      await self._agent.__aenter__()
                                                      return self
                                              
                                                  async def __aexit__(
                                                      self,
                                                      exc_type: type[BaseException] | None,
                                                      exc_val: BaseException | None,
                                                      exc_tb: TracebackType | None,
                                                  ):
                                                      """Exit async context."""
                                                      await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                                              
                                                  def __and__(
                                                      self, other: AnyAgent[Any, Any] | Team[Any] | ProcessorCallback[TResult]
                                                  ) -> Team[TDeps]:
                                                      return self._agent.__and__(other)
                                              
                                                  def __or__(self, other: Agent | ProcessorCallback | BaseTeam) -> TeamRun:
                                                      return self._agent.__or__(other)
                                              
                                                  async def _run(
                                                      self,
                                                      *prompt: AnyPromptType | TResult,
                                                      result_type: type[TResult] | None = None,
                                                      model: ModelType = None,
                                                      tool_choice: str | list[str] | None = None,
                                                      store_history: bool = True,
                                                      message_id: str | None = None,
                                                      conversation_id: str | None = None,
                                                      wait_for_connections: bool | None = None,
                                                  ) -> ChatMessage[TResult]:
                                                      """Run with fixed result type.
                                              
                                                      Args:
                                                          prompt: Any prompt-compatible object or structured objects of type TResult
                                                          result_type: Expected result type:
                                                              - BaseModel / dataclasses
                                                              - Name of response definition in manifest
                                                              - Complete response definition instance
                                                          model: Optional model override
                                                          tool_choice: Filter available tools by name
                                                          store_history: Whether the message exchange should be added to the
                                                                         context window
                                                          message_id: Optional message id for the returned message.
                                                                      Automatically generated if not provided.
                                                          conversation_id: Optional conversation id for the returned message.
                                                          wait_for_connections: Whether to wait for all connections to complete
                                                      """
                                                      typ = result_type or self._result_type
                                                      return await self._agent._run(
                                                          *prompt,
                                                          result_type=typ,
                                                          model=model,
                                                          store_history=store_history,
                                                          tool_choice=tool_choice,
                                                          message_id=message_id,
                                                          conversation_id=conversation_id,
                                                          wait_for_connections=wait_for_connections,
                                                      )
                                              
                                                  async def validate_against(
                                                      self,
                                                      prompt: str,
                                                      criteria: type[TResult],
                                                      **kwargs: Any,
                                                  ) -> bool:
                                                      """Check if agent's response satisfies stricter criteria."""
                                                      result = await self.run(prompt, **kwargs)
                                                      try:
                                                          criteria.model_validate(result.content.model_dump())  # type: ignore
                                                      except ValidationError:
                                                          return False
                                                      else:
                                                          return True
                                              
                                                  def __repr__(self) -> str:
                                                      type_name = getattr(self._result_type, "__name__", str(self._result_type))
                                                      return f"StructuredAgent({self._agent!r}, result_type={type_name})"
                                              
                                                  def __prompt__(self) -> str:
                                                      type_name = getattr(self._result_type, "__name__", str(self._result_type))
                                                      base_info = self._agent.__prompt__()
                                                      return f"{base_info}\nStructured output type: {type_name}"
                                              
                                                  def __getattr__(self, name: str) -> Any:
                                                      return getattr(self._agent, name)
                                              
                                                  @property
                                                  def context(self) -> AgentContext[TDeps]:
                                                      return self._agent.context
                                              
                                                  @context.setter
                                                  def context(self, value: Any):
                                                      self._agent.context = value
                                              
                                                  @property
                                                  def name(self) -> str:
                                                      return self._agent.name
                                              
                                                  @name.setter
                                                  def name(self, value: str):
                                                      self._agent.name = value
                                              
                                                  @property
                                                  def tools(self) -> ToolManager:
                                                      return self._agent.tools
                                              
                                                  @property
                                                  def conversation(self) -> ConversationManager:
                                                      return self._agent.conversation
                                              
                                                  @overload
                                                  def to_structured(
                                                      self,
                                                      result_type: None,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> Agent[TDeps]: ...
                                              
                                                  @overload
                                                  def to_structured[TNewResult](
                                                      self,
                                                      result_type: type[TNewResult] | str | ResponseDefinition,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> StructuredAgent[TDeps, TNewResult]: ...
                                              
                                                  def to_structured[TNewResult](
                                                      self,
                                                      result_type: type[TNewResult] | str | ResponseDefinition | None,
                                                      *,
                                                      tool_name: str | None = None,
                                                      tool_description: str | None = None,
                                                  ) -> Agent[TDeps] | StructuredAgent[TDeps, TNewResult]:
                                                      if result_type is None:
                                                          return self._agent
                                              
                                                      return StructuredAgent(
                                                          self._agent,
                                                          result_type=result_type,
                                                          tool_name=tool_name,
                                                          tool_description=tool_description,
                                                      )
                                              
                                                  @property
                                                  def stats(self) -> MessageStats:
                                                      return self._agent.stats
                                              
                                                  async def run_iter(
                                                      self,
                                                      *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[ChatMessage[Any]]:
                                                      """Forward run_iter to wrapped agent."""
                                                      async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                                                          yield message
                                              
                                                  async def run_job(
                                                      self,
                                                      job: Job[TDeps, TResult],
                                                      *,
                                                      store_history: bool = True,
                                                      include_agent_tools: bool = True,
                                                  ) -> ChatMessage[TResult]:
                                                      """Execute a pre-defined job ensuring type compatibility.
                                              
                                                      Args:
                                                          job: Job configuration to execute
                                                          store_history: Whether to add job execution to conversation history
                                                          include_agent_tools: Whether to include agent's tools alongside job tools
                                              
                                                      Returns:
                                                          Task execution result
                                              
                                                      Raises:
                                                          JobError: If job execution fails or types don't match
                                                          ValueError: If job configuration is invalid
                                                      """
                                                      from llmling_agent.tasks import JobError
                                              
                                                      # Validate dependency requirement
                                                      if job.required_dependency is not None:  # noqa: SIM102
                                                          if not isinstance(self.context.data, job.required_dependency):
                                                              msg = (
                                                                  f"Agent dependencies ({type(self.context.data)}) "
                                                                  f"don't match job requirement ({job.required_dependency})"
                                                              )
                                                              raise JobError(msg)
                                              
                                                      # Validate return type requirement
                                                      if job.required_return_type != self._result_type:
                                                          msg = (
                                                              f"Agent result type ({self._result_type}) "
                                                              f"doesn't match job requirement ({job.required_return_type})"
                                                          )
                                                          raise JobError(msg)
                                              
                                                      # Load task knowledge if provided
                                                      if job.knowledge:
                                                          # Add knowledge sources to context
                                                          resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                              job.knowledge.resources
                                                          )
                                                          for source in resources:
                                                              await self.conversation.load_context_source(source)
                                                          for prompt in job.knowledge.prompts:
                                                              await self.conversation.load_context_source(prompt)
                                              
                                                      try:
                                                          # Register task tools temporarily
                                                          tools = job.get_tools()
                                              
                                                          # Use temporary tools
                                                          with self._agent.tools.temporary_tools(
                                                              tools, exclusive=not include_agent_tools
                                                          ):
                                                              # Execute job using StructuredAgent's run to maintain type safety
                                                              return await self.run(await job.get_prompt(), store_history=store_history)
                                              
                                                      except Exception as e:
                                                          msg = f"Task execution failed: {e}"
                                                          logger.exception(msg)
                                                          raise JobError(msg) from e
                                              
                                                  @classmethod
                                                  def from_callback(
                                                      cls,
                                                      callback: ProcessorCallback[TResult],
                                                      *,
                                                      name: str | None = None,
                                                      **kwargs: Any,
                                                  ) -> StructuredAgent[None, TResult]:
                                                      """Create a structured agent from a processing callback.
                                              
                                                      Args:
                                                          callback: Function to process messages. Can be:
                                                              - sync or async
                                                              - with or without context
                                                              - with explicit return type
                                                          name: Optional name for the agent
                                                          **kwargs: Additional arguments for agent
                                              
                                                      Example:
                                                          ```python
                                                          class AnalysisResult(BaseModel):
                                                              sentiment: float
                                                              topics: list[str]
                                              
                                                          def analyze(msg: str) -> AnalysisResult:
                                                              return AnalysisResult(sentiment=0.8, topics=["tech"])
                                              
                                                          analyzer = StructuredAgent.from_callback(analyze)
                                                          ```
                                                      """
                                                      from llmling_agent.agent.agent import Agent
                                                      from llmling_agent_providers.callback import CallbackProvider
                                              
                                                      name = name or callback.__name__ or "processor"
                                                      provider = CallbackProvider(callback, name=name)
                                                      agent = Agent[None](provider=provider, name=name, **kwargs)
                                                      # Get return type from signature for validation
                                                      hints = get_type_hints(callback)
                                                      return_type = hints.get("return")
                                              
                                                      # If async, unwrap from Awaitable
                                                      if (
                                                          return_type
                                                          and hasattr(return_type, "__origin__")
                                                          and return_type.__origin__ is Awaitable
                                                      ):
                                                          return_type = return_type.__args__[0]
                                                      return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
                                              
                                                  def is_busy(self) -> bool:
                                                      """Check if agent is currently processing tasks."""
                                                      return bool(self._pending_tasks or self._background_task)
                                              
                                                  def run_sync(self, *args, **kwargs):
                                                      """Run agent synchronously."""
                                                      return self._agent.run_sync(*args, result_type=self._result_type, **kwargs)
                                              

                                              __aenter__ async

                                              __aenter__() -> Self
                                              

                                              Enter async context and set up MCP servers.

                                              Called when agent enters its async context. Sets up any configured MCP servers and their tools.

                                              Source code in src/llmling_agent/agent/structured.py
                                              async def __aenter__(self) -> Self:
                                                  """Enter async context and set up MCP servers.
                                              
                                                  Called when agent enters its async context. Sets up any configured
                                                  MCP servers and their tools.
                                                  """
                                                  await self._agent.__aenter__()
                                                  return self
                                              

                                              __aexit__ async

                                              __aexit__(
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              )
                                              

                                              Exit async context.

                                              Source code in src/llmling_agent/agent/structured.py
                                              async def __aexit__(
                                                  self,
                                                  exc_type: type[BaseException] | None,
                                                  exc_val: BaseException | None,
                                                  exc_tb: TracebackType | None,
                                              ):
                                                  """Exit async context."""
                                                  await self._agent.__aexit__(exc_type, exc_val, exc_tb)
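Both hooks delegate to the wrapped agent, so the wrapper is typically used as an async context manager; a brief sketch:

```python
from llmling_agent.agent.structured import StructuredAgent


async def run_once(structured: StructuredAgent) -> None:
    # __aenter__ sets up any configured MCP servers on the wrapped agent;
    # __aexit__ tears them down when the block exits.
    async with structured:
        message = await structured.run("Hello")
        print(message.content)
```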
                                              

                                              __init__

                                              __init__(
                                                  agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                                                  result_type: type[TResult] | str | ResponseDefinition,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              )
                                              

                                              Initialize structured agent wrapper.

Parameters:

- agent (Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult], required):
  Base agent to wrap
- result_type (type[TResult] | str | ResponseDefinition, required):
  Expected result type: BaseModel / dataclasses, name of response definition in manifest, or complete response definition instance
- tool_name (str | None, default None):
  Optional override for tool name
- tool_description (str | None, default None):
  Optional override for tool description

Raises:

- ValueError: If named response type not found in manifest

                                              Source code in src/llmling_agent/agent/structured.py
                                              def __init__(
                                                  self,
                                                  agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                                                  result_type: type[TResult] | str | ResponseDefinition,
                                                  *,
                                                  tool_name: str | None = None,
                                                  tool_description: str | None = None,
                                              ):
                                                  """Initialize structured agent wrapper.
                                              
                                                  Args:
                                                      agent: Base agent to wrap
                                                      result_type: Expected result type:
                                                          - BaseModel / dataclasses
                                                          - Name of response definition in manifest
                                                          - Complete response definition instance
                                                      tool_name: Optional override for tool name
                                                      tool_description: Optional override for tool description
                                              
                                                  Raises:
                                                      ValueError: If named response type not found in manifest
                                                  """
                                                  from llmling_agent.agent.agent import Agent
                                              
                                                  logger.debug("StructuredAgent.run result_type = %s", result_type)
                                                  match agent:
                                                      case StructuredAgent():
                                                          self._agent: Agent[TDeps] = agent._agent
                                                      case Callable():
                                                          self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                                                      case Agent():
                                                          self._agent = agent
                                                      case _:
                                                          msg = "Invalid agent type"
                                                          raise ValueError(msg)
                                              
                                                  super().__init__(name=self._agent.name)
                                              
                                                  self._result_type = to_type(result_type)
                                                  agent.set_result_type(result_type)
                                              
                                                  match result_type:
                                                      case type() | str():
                                                          # For types and named definitions, use overrides if provided
                                                          self._agent.set_result_type(
                                                              result_type,
                                                              tool_name=tool_name,
                                                              tool_description=tool_description,
                                                          )
                                                      case BaseResponseDefinition():
                                                          # For response definitions, use as-is
                                                          # (overrides don't apply to complete definitions)
                                                          self._agent.set_result_type(result_type)
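
A minimal usage sketch of the constructor, wrapping an existing Agent with a result model. AnalysisResult and the Agent constructor keywords (model name in particular) are illustrative assumptions, not confirmed by this page.

from pydantic import BaseModel

from llmling_agent.agent.agent import Agent
from llmling_agent.agent.structured import StructuredAgent


class AnalysisResult(BaseModel):
    sentiment: float
    topics: list[str]


# Model identifier and constructor keywords are assumptions for illustration.
base = Agent[None](model="openai:gpt-4o-mini", name="analyzer")

# Wrap it with a fixed result type; tool_name/tool_description are optional overrides.
structured = StructuredAgent[None, AnalysisResult](
    base,
    AnalysisResult,
    tool_name="analyze_text",
)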
                                              

                                              _run async

                                              _run(
                                                  *prompt: AnyPromptType | TResult,
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  tool_choice: str | list[str] | None = None,
                                                  store_history: bool = True,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> ChatMessage[TResult]
                                              

                                              Run with fixed result type.

                                              Parameters:

                                              Name Type Description Default
                                              prompt AnyPromptType | TResult

                                              Any prompt-compatible object or structured objects of type TResult

                                              ()
                                              result_type type[TResult] | None

Expected result type: a BaseModel or dataclass, the name of a response definition in the manifest, or a complete response definition instance

                                              None
                                              model ModelType

                                              Optional model override

                                              None
                                              tool_choice str | list[str] | None

                                              Filter available tools by name

                                              None
                                              store_history bool

                                              Whether the message exchange should be added to the context window

                                              True
                                              message_id str | None

                                              Optional message id for the returned message. Automatically generated if not provided.

                                              None
                                              conversation_id str | None

                                              Optional conversation id for the returned message.

                                              None
                                              wait_for_connections bool | None

                                              Whether to wait for all connections to complete

                                              None
                                              Source code in src/llmling_agent/agent/structured.py
                                              async def _run(
                                                  self,
                                                  *prompt: AnyPromptType | TResult,
                                                  result_type: type[TResult] | None = None,
                                                  model: ModelType = None,
                                                  tool_choice: str | list[str] | None = None,
                                                  store_history: bool = True,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  wait_for_connections: bool | None = None,
                                              ) -> ChatMessage[TResult]:
                                                  """Run with fixed result type.
                                              
                                                  Args:
                                                      prompt: Any prompt-compatible object or structured objects of type TResult
                                                      result_type: Expected result type:
                                                          - BaseModel / dataclasses
                                                          - Name of response definition in manifest
                                                          - Complete response definition instance
                                                      model: Optional model override
                                                      tool_choice: Filter available tools by name
                                                      store_history: Whether the message exchange should be added to the
                                                                     context window
                                                      message_id: Optional message id for the returned message.
                                                                  Automatically generated if not provided.
                                                      conversation_id: Optional conversation id for the returned message.
                                                      wait_for_connections: Whether to wait for all connections to complete
                                                  """
                                                  typ = result_type or self._result_type
                                                  return await self._agent._run(
                                                      *prompt,
                                                      result_type=typ,
                                                      model=model,
                                                      store_history=store_history,
                                                      tool_choice=tool_choice,
                                                      message_id=message_id,
                                                      conversation_id=conversation_id,
                                                      wait_for_connections=wait_for_connections,
                                                  )
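
Callers normally use the public run() wrapper rather than _run() directly; run() forwards here with the fixed result type. A hedged sketch, reusing the illustrative AnalysisResult model and structured agent from the constructor example and assuming run() accepts the keyword arguments shown above:

import asyncio


async def main() -> None:
    message = await structured.run(
        "Analyze this review: battery life is excellent",
        store_history=False,
    )
    result = message.content  # typed as AnalysisResult
    print(result.sentiment, result.topics)


asyncio.run(main())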
                                              

                                              from_callback classmethod

                                              from_callback(
                                                  callback: ProcessorCallback[TResult], *, name: str | None = None, **kwargs: Any
                                              ) -> StructuredAgent[None, TResult]
                                              

                                              Create a structured agent from a processing callback.

                                              Parameters:

                                              Name Type Description Default
                                              callback ProcessorCallback[TResult]

Function to process messages. Can be sync or async, with or without context, and with an explicit return type

                                              required
                                              name str | None

                                              Optional name for the agent

                                              None
                                              **kwargs Any

                                              Additional arguments for agent

                                              {}
                                              Example
                                              class AnalysisResult(BaseModel):
                                                  sentiment: float
                                                  topics: list[str]
                                              
                                              def analyze(msg: str) -> AnalysisResult:
                                                  return AnalysisResult(sentiment=0.8, topics=["tech"])
                                              
                                              analyzer = StructuredAgent.from_callback(analyze)
                                              
                                              Source code in src/llmling_agent/agent/structured.py
                                              @classmethod
                                              def from_callback(
                                                  cls,
                                                  callback: ProcessorCallback[TResult],
                                                  *,
                                                  name: str | None = None,
                                                  **kwargs: Any,
                                              ) -> StructuredAgent[None, TResult]:
                                                  """Create a structured agent from a processing callback.
                                              
                                                  Args:
                                                      callback: Function to process messages. Can be:
                                                          - sync or async
                                                          - with or without context
                                                          - with explicit return type
                                                      name: Optional name for the agent
                                                      **kwargs: Additional arguments for agent
                                              
                                                  Example:
                                                      ```python
                                                      class AnalysisResult(BaseModel):
                                                          sentiment: float
                                                          topics: list[str]
                                              
                                                      def analyze(msg: str) -> AnalysisResult:
                                                          return AnalysisResult(sentiment=0.8, topics=["tech"])
                                              
                                                      analyzer = StructuredAgent.from_callback(analyze)
                                                      ```
                                                  """
                                                  from llmling_agent.agent.agent import Agent
                                                  from llmling_agent_providers.callback import CallbackProvider
                                              
                                                  name = name or callback.__name__ or "processor"
                                                  provider = CallbackProvider(callback, name=name)
                                                  agent = Agent[None](provider=provider, name=name, **kwargs)
                                                  # Get return type from signature for validation
                                                  hints = get_type_hints(callback)
                                                  return_type = hints.get("return")
                                              
                                                  # If async, unwrap from Awaitable
                                                  if (
                                                      return_type
                                                      and hasattr(return_type, "__origin__")
                                                      and return_type.__origin__ is Awaitable
                                                  ):
                                                      return_type = return_type.__args__[0]
                                                  return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
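
Async callbacks work as well; the classmethod reads the return annotation for validation (unwrapping Awaitable where annotated). A small hedged sketch; Summary is an illustrative model, not part of the documented API:

from pydantic import BaseModel

from llmling_agent.agent.structured import StructuredAgent


class Summary(BaseModel):
    text: str


async def summarize(msg: str) -> Summary:
    # The return annotation is picked up for result validation.
    return Summary(text=msg[:100])


summarizer = StructuredAgent.from_callback(summarize, name="summarizer")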
                                              

                                              is_busy

                                              is_busy() -> bool
                                              

                                              Check if agent is currently processing tasks.

                                              Source code in src/llmling_agent/agent/structured.py
                                              def is_busy(self) -> bool:
                                                  """Check if agent is currently processing tasks."""
                                                  return bool(self._pending_tasks or self._background_task)
                                              

                                              run_iter async

                                              run_iter(
                                                  *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]], **kwargs: Any
                                              ) -> AsyncIterator[ChatMessage[Any]]
                                              

                                              Forward run_iter to wrapped agent.

                                              Source code in src/llmling_agent/agent/structured.py
                                              async def run_iter(
                                                  self,
                                                  *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[ChatMessage[Any]]:
                                                  """Forward run_iter to wrapped agent."""
                                                  async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                                                      yield message
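
A hedged sketch of iterating over results for several prompt groups, reusing the structured agent from the earlier examples; per the signature above, each positional argument is one prompt group:

async def stream_reviews() -> None:
    async for message in structured.run_iter(
        ["Review A: loved it"],
        ["Review B: not great"],
    ):
        # Messages are yielded as the wrapped agent produces them.
        print(message.content)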
                                              

                                              run_job async

                                              run_job(
                                                  job: Job[TDeps, TResult],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> ChatMessage[TResult]
                                              

                                              Execute a pre-defined job ensuring type compatibility.

                                              Parameters:

                                              Name Type Description Default
                                              job Job[TDeps, TResult]

                                              Job configuration to execute

                                              required
                                              store_history bool

                                              Whether to add job execution to conversation history

                                              True
                                              include_agent_tools bool

                                              Whether to include agent's tools alongside job tools

                                              True

                                              Returns:

                                              Type Description
                                              ChatMessage[TResult]

                                              Task execution result

                                              Raises:

                                              Type Description
                                              JobError

                                              If job execution fails or types don't match

                                              ValueError

                                              If job configuration is invalid

                                              Source code in src/llmling_agent/agent/structured.py
                                              async def run_job(
                                                  self,
                                                  job: Job[TDeps, TResult],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> ChatMessage[TResult]:
                                                  """Execute a pre-defined job ensuring type compatibility.
                                              
                                                  Args:
                                                      job: Job configuration to execute
                                                      store_history: Whether to add job execution to conversation history
                                                      include_agent_tools: Whether to include agent's tools alongside job tools
                                              
                                                  Returns:
                                                      Task execution result
                                              
                                                  Raises:
                                                      JobError: If job execution fails or types don't match
                                                      ValueError: If job configuration is invalid
                                                  """
                                                  from llmling_agent.tasks import JobError
                                              
                                                  # Validate dependency requirement
                                                  if job.required_dependency is not None:  # noqa: SIM102
                                                      if not isinstance(self.context.data, job.required_dependency):
                                                          msg = (
                                                              f"Agent dependencies ({type(self.context.data)}) "
                                                              f"don't match job requirement ({job.required_dependency})"
                                                          )
                                                          raise JobError(msg)
                                              
                                                  # Validate return type requirement
                                                  if job.required_return_type != self._result_type:
                                                      msg = (
                                                          f"Agent result type ({self._result_type}) "
                                                          f"doesn't match job requirement ({job.required_return_type})"
                                                      )
                                                      raise JobError(msg)
                                              
                                                  # Load task knowledge if provided
                                                  if job.knowledge:
                                                      # Add knowledge sources to context
                                                      resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                                          job.knowledge.resources
                                                      )
                                                      for source in resources:
                                                          await self.conversation.load_context_source(source)
                                                      for prompt in job.knowledge.prompts:
                                                          await self.conversation.load_context_source(prompt)
                                              
                                                  try:
                                                      # Register task tools temporarily
                                                      tools = job.get_tools()
                                              
                                                      # Use temporary tools
                                                      with self._agent.tools.temporary_tools(
                                                          tools, exclusive=not include_agent_tools
                                                      ):
                                                          # Execute job using StructuredAgent's run to maintain type safety
                                                          return await self.run(await job.get_prompt(), store_history=store_history)
                                              
                                                  except Exception as e:
                                                      msg = f"Task execution failed: {e}"
                                                      logger.exception(msg)
                                                      raise JobError(msg) from e
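
A hedged sketch of executing a pre-defined job with the structured agent from the earlier examples. How the Job instance is created or loaded is not covered on this page, and the import path for Job is an assumption (the source above imports JobError from llmling_agent.tasks).

from llmling_agent.tasks import Job  # import path assumed; JobError lives here per the source above


async def run_review_job(job: Job[None, AnalysisResult]) -> AnalysisResult:
    # The job's required return type must match the agent's result type,
    # otherwise run_job raises JobError.
    message = await structured.run_job(job, include_agent_tools=False)
    return message.content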
                                              

                                              run_sync

                                              run_sync(*args, **kwargs)
                                              

                                              Run agent synchronously.

                                              Source code in src/llmling_agent/agent/structured.py
                                              def run_sync(self, *args, **kwargs):
                                                  """Run agent synchronously."""
                                                  return self._agent.run_sync(*args, result_type=self._result_type, **kwargs)
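
run_sync forwards to the wrapped agent's synchronous entry point with the fixed result type; a one-line sketch using the structured agent from the earlier examples:

message = structured.run_sync("Classify this review: works as advertised")
print(message.content)  # AnalysisResult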
                                              

                                              validate_against async

                                              validate_against(prompt: str, criteria: type[TResult], **kwargs: Any) -> bool
                                              

                                              Check if agent's response satisfies stricter criteria.

                                              Source code in src/llmling_agent/agent/structured.py
                                              async def validate_against(
                                                  self,
                                                  prompt: str,
                                                  criteria: type[TResult],
                                                  **kwargs: Any,
                                              ) -> bool:
                                                  """Check if agent's response satisfies stricter criteria."""
                                                  result = await self.run(prompt, **kwargs)
                                                  try:
                                                      criteria.model_validate(result.content.model_dump())  # type: ignore
                                                  except ValidationError:
                                                      return False
                                                  else:
                                                      return True
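
A hedged sketch: run the prompt, then check whether the structured result also validates against a stricter model. StrictAnalysis is a hypothetical tightening of the illustrative AnalysisResult model; field constraints use pydantic v2 syntax.

from pydantic import BaseModel, Field


class StrictAnalysis(BaseModel):
    sentiment: float = Field(ge=0)            # non-negative sentiment required
    topics: list[str] = Field(min_length=1)   # at least one topic required


async def meets_strict_criteria() -> bool:
    return await structured.validate_against(
        "Analyze: the battery life is fantastic",
        StrictAnalysis,
    )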
                                              

                                              TOMLCode

                                              Bases: BaseCode

                                              TOML with syntax validation.

                                              Source code in src/llmling_agent/common_types.py
                                              class TOMLCode(BaseCode):
                                                  """TOML with syntax validation."""
                                              
                                                  @field_validator("code")
                                                  @classmethod
                                                  def validate_syntax(cls, code: str) -> str:
                                                      import yamling
                                              
                                                      try:
                                                          yamling.load(code, mode="toml")
                                                      except yamling.ParsingError as e:
                                                          msg = f"Invalid TOML syntax: {e}"
                                                          raise ValueError(msg) from e
                                                      else:
                                                          return code
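
A hedged sketch of the validator's behavior, assuming code is the only required field on BaseCode (as the validator above suggests):

from pydantic import ValidationError

from llmling_agent.common_types import TOMLCode

# Valid TOML passes through unchanged.
snippet = TOMLCode(code='title = "example"\n[server]\nport = 8080\n')

# Invalid TOML is rejected; the ValueError surfaces as a ValidationError.
try:
    TOMLCode(code="not = valid = toml")
except ValidationError as exc:
    print(exc)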
                                              

                                              Team

                                              Bases: BaseTeam[TDeps, Any]

                                              Group of agents that can execute together.

                                              Source code in src/llmling_agent/delegation/team.py
                                              class Team[TDeps](BaseTeam[TDeps, Any]):
                                                  """Group of agents that can execute together."""
                                              
                                                  async def execute(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      **kwargs: Any,
                                                  ) -> TeamResponse:
                                                      """Run all agents in parallel with monitoring."""
                                                      from llmling_agent.talk.talk import Talk
                                              
                                                      self._team_talk.clear()
                                              
                                                      start_time = get_now()
                                                      responses: list[AgentResponse[Any]] = []
                                                      errors: dict[str, Exception] = {}
                                                      final_prompt = list(prompts)
                                                      if self.shared_prompt:
                                                          final_prompt.insert(0, self.shared_prompt)
                                                      combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                                                      all_nodes = list(await self.pick_agents(combined_prompt))
                                                      # Create Talk connections for monitoring this execution
                                                      execution_talks: list[Talk[Any]] = []
                                                      for node in all_nodes:
                                                          talk = Talk[Any](
                                                              node,
                                                              [],  # No actual forwarding, just for tracking
                                                              connection_type="run",
                                                              queued=True,
                                                              queue_strategy="latest",
                                                          )
                                                          execution_talks.append(talk)
                                                          self._team_talk.append(talk)  # Add to base class's TeamTalk
                                              
                                                      async def _run(node: MessageNode[TDeps, Any]):
                                                          try:
                                                              start = perf_counter()
                                                              message = await node.run(*final_prompt, **kwargs)
                                                              timing = perf_counter() - start
                                                              r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                                              responses.append(r)
                                              
                                                              # Update talk stats for this agent
                                                              talk = next(t for t in execution_talks if t.source == node)
                                                              talk._stats.messages.append(message)
                                              
                                                          except Exception as e:  # noqa: BLE001
                                                              errors[node.name] = e
                                              
                                                      # Run all agents in parallel
                                                      await asyncio.gather(*[_run(node) for node in all_nodes])
                                              
                                                      return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                              
                                                  def __prompt__(self) -> str:
                                                      """Format team info for prompts."""
                                                      members = ", ".join(a.name for a in self.agents)
                                                      desc = f" - {self.description}" if self.description else ""
                                                      return f"Parallel Team '{self.name}'{desc}\nMembers: {members}"
                                              
                                                  async def run_iter(
                                                      self,
                                                      *prompts: AnyPromptType,
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[ChatMessage[Any]]:
                                                      """Yield messages as they arrive from parallel execution."""
                                                      queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                                                      failures: dict[str, Exception] = {}
                                              
                                                      async def _run(node: MessageNode[TDeps, Any]):
                                                          try:
                                                              message = await node.run(*prompts, **kwargs)
                                                              await queue.put(message)
                                                          except Exception as e:
                                                              logger.exception("Error executing node %s", node.name)
                                                              failures[node.name] = e
                                                              # Put None to maintain queue count
                                                              await queue.put(None)
                                              
                                                      # Get nodes to run
                                                      combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                                      all_nodes = list(await self.pick_agents(combined_prompt))
                                              
                                                      # Start all agents
                                                      tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                                              
                                                      try:
                                                          # Yield messages as they arrive
                                                          for _ in all_nodes:
                                                              if msg := await queue.get():
                                                                  yield msg
                                              
                                                          # If any failures occurred, raise error with details
                                                          if failures:
                                                              error_details = "\n".join(
                                                                  f"- {name}: {error}" for name, error in failures.items()
                                                              )
                                                              error_msg = f"Some nodes failed to execute:\n{error_details}"
                                                              raise RuntimeError(error_msg)
                                              
                                                      finally:
                                                          # Clean up any remaining tasks
                                                          for task in tasks:
                                                              if not task.done():
                                                                  task.cancel()
                                              
                                                  async def _run(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      wait_for_connections: bool | None = None,
                                                      message_id: str | None = None,
                                                      conversation_id: str | None = None,
                                                      **kwargs: Any,
                                                  ) -> ChatMessage[list[Any]]:
                                                      """Run all agents in parallel and return combined message."""
                                                      result: TeamResponse = await self.execute(*prompts, **kwargs)
                                                      message_id = message_id or str(uuid4())
                                                      return ChatMessage(
                                                          content=[r.message.content for r in result if r.message],
                                                          role="assistant",
                                                          name=self.name,
                                                          message_id=message_id,
                                                          conversation_id=conversation_id,
                                                          metadata={
                                                              "agent_names": [r.agent_name for r in result],
                                                              "errors": {name: str(error) for name, error in result.errors.items()},
                                                              "start_time": result.start_time.isoformat(),
                                                          },
                                                      )
                                              
                                                  async def run_job[TJobResult](
                                                      self,
                                                      job: Job[TDeps, TJobResult],
                                                      *,
                                                      store_history: bool = True,
                                                      include_agent_tools: bool = True,
                                                  ) -> list[AgentResponse[TJobResult]]:
                                                      """Execute a job across all team members in parallel.
                                              
                                                      Args:
                                                          job: Job configuration to execute
                                                          store_history: Whether to add job execution to conversation history
                                                          include_agent_tools: Whether to include agent's tools alongside job tools
                                              
                                                      Returns:
                                                          List of responses from all agents
                                              
                                                      Raises:
                                                          JobError: If job execution fails for any agent
                                                          ValueError: If job configuration is invalid
                                                      """
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                                      from llmling_agent.tasks import JobError
                                              
                                                      responses: list[AgentResponse[TJobResult]] = []
                                                      errors: dict[str, Exception] = {}
                                                      start_time = get_now()
                                              
                                                      # Validate dependencies for all agents
                                                      if job.required_dependency is not None:
                                                          invalid_agents = [
                                                              agent.name
                                                              for agent in self.iter_agents()
                                                              if not isinstance(agent.context.data, job.required_dependency)
                                                          ]
                                                          if invalid_agents:
                                                              msg = (
                                                                  f"Agents {', '.join(invalid_agents)} don't have required "
                                                                  f"dependency type: {job.required_dependency}"
                                                              )
                                                              raise JobError(msg)
                                              
                                                      try:
                                                          # Load knowledge for all agents if provided
                                                          if job.knowledge:
                                                              # TODO: resources
                                                              tools = [t.name for t in job.get_tools()]
                                                              await self.distribute(content="", tools=tools)
                                              
                                                          prompt = await job.get_prompt()
                                              
                                                          async def _run(agent: MessageNode[TDeps, TJobResult]):
                                                              assert isinstance(agent, Agent | StructuredAgent)
                                                              try:
                                                                  with agent.tools.temporary_tools(
                                                                      job.get_tools(), exclusive=not include_agent_tools
                                                                  ):
                                                                      start = perf_counter()
                                                                      resp = AgentResponse(
                                                                          agent_name=agent.name,
                                                                          message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                                                          timing=perf_counter() - start,
                                                                      )
                                                                      responses.append(resp)
                                                              except Exception as e:  # noqa: BLE001
                                                                  errors[agent.name] = e
                                              
                                                          # Run job in parallel on all agents
                                                          await asyncio.gather(*[_run(node) for node in self.agents])
                                              
                                                          return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                              
                                                      except Exception as e:
                                                          msg = "Job execution failed"
                                                          logger.exception(msg)
                                                          raise JobError(msg) from e
                                              

                                              __prompt__

                                              __prompt__() -> str
                                              

                                              Format team info for prompts.
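
Example (illustrative only; the team name and member names are hypothetical, and the output format follows the source below):

    prompt_text = team.__prompt__()
    # For a team named "writers" with members "alice" and "bob" and no description:
    # "Parallel Team 'writers'\nMembers: alice, bob"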

                                              Source code in src/llmling_agent/delegation/team.py
                                              def __prompt__(self) -> str:
                                                  """Format team info for prompts."""
                                                  members = ", ".join(a.name for a in self.agents)
                                                  desc = f" - {self.description}" if self.description else ""
                                                  return f"Parallel Team '{self.name}'{desc}\nMembers: {members}"
                                              

                                              _run async

                                              _run(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None,
                                                  wait_for_connections: bool | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[list[Any]]
                                              

                                              Run all agents in parallel and return combined message.

                                              Source code in src/llmling_agent/delegation/team.py
                                              async def _run(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  wait_for_connections: bool | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[list[Any]]:
                                                  """Run all agents in parallel and return combined message."""
                                                  result: TeamResponse = await self.execute(*prompts, **kwargs)
                                                  message_id = message_id or str(uuid4())
                                                  return ChatMessage(
                                                      content=[r.message.content for r in result if r.message],
                                                      role="assistant",
                                                      name=self.name,
                                                      message_id=message_id,
                                                      conversation_id=conversation_id,
                                                      metadata={
                                                          "agent_names": [r.agent_name for r in result],
                                                          "errors": {name: str(error) for name, error in result.errors.items()},
                                                          "start_time": result.start_time.isoformat(),
                                                      },
                                                  )
                                              

                                              execute async

                                              execute(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                                              ) -> TeamResponse
                                              

                                              Run all agents in parallel with monitoring.
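
Example (a minimal sketch; `analyst` and `reviewer` are hypothetical, already-configured agents, and the constructor call assumes Team shares the BaseTeam signature shown elsewhere on this page):

    from llmling_agent.delegation.team import Team

    async def demo(analyst, reviewer) -> None:
        # Any MessageNode (Agent, StructuredAgent, nested team) can be a member.
        team = Team([analyst, reviewer], name="analysis-team")
        result = await team.execute("Summarize the quarterly report.")

        # TeamResponse is iterable; each AgentResponse carries name, message and timing.
        for response in result:
            print(f"{response.agent_name} ({response.timing:.2f}s): {response.message.content}")

        # Per-member failures are collected in result.errors rather than raised.
        for name, error in result.errors.items():
            print(f"{name} failed: {error}")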

                                              Source code in src/llmling_agent/delegation/team.py
                                              async def execute(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  **kwargs: Any,
                                              ) -> TeamResponse:
                                                  """Run all agents in parallel with monitoring."""
                                                  from llmling_agent.talk.talk import Talk
                                              
                                                  self._team_talk.clear()
                                              
                                                  start_time = get_now()
                                                  responses: list[AgentResponse[Any]] = []
                                                  errors: dict[str, Exception] = {}
                                                  final_prompt = list(prompts)
                                                  if self.shared_prompt:
                                                      final_prompt.insert(0, self.shared_prompt)
                                                  combined_prompt = "\n".join([await to_prompt(p) for p in final_prompt])
                                                  all_nodes = list(await self.pick_agents(combined_prompt))
                                                  # Create Talk connections for monitoring this execution
                                                  execution_talks: list[Talk[Any]] = []
                                                  for node in all_nodes:
                                                      talk = Talk[Any](
                                                          node,
                                                          [],  # No actual forwarding, just for tracking
                                                          connection_type="run",
                                                          queued=True,
                                                          queue_strategy="latest",
                                                      )
                                                      execution_talks.append(talk)
                                                      self._team_talk.append(talk)  # Add to base class's TeamTalk
                                              
                                                  async def _run(node: MessageNode[TDeps, Any]):
                                                      try:
                                                          start = perf_counter()
                                                          message = await node.run(*final_prompt, **kwargs)
                                                          timing = perf_counter() - start
                                                          r = AgentResponse(agent_name=node.name, message=message, timing=timing)
                                                          responses.append(r)
                                              
                                                          # Update talk stats for this agent
                                                          talk = next(t for t in execution_talks if t.source == node)
                                                          talk._stats.messages.append(message)
                                              
                                                      except Exception as e:  # noqa: BLE001
                                                          errors[node.name] = e
                                              
                                                  # Run all agents in parallel
                                                  await asyncio.gather(*[_run(node) for node in all_nodes])
                                              
                                                  return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                              

                                              run_iter async

                                              run_iter(*prompts: AnyPromptType, **kwargs: Any) -> AsyncIterator[ChatMessage[Any]]
                                              

                                              Yield messages as they arrive from parallel execution.
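
Example (a short sketch; `team` is assumed to be an already-constructed Team):

    async def stream_team(team) -> None:
        # Messages are yielded in completion order, not member order.
        async for message in team.run_iter("Review the design document"):
            print(f"{message.name}: {message.content}")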

                                              Source code in src/llmling_agent/delegation/team.py
                                              async def run_iter(
                                                  self,
                                                  *prompts: AnyPromptType,
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[ChatMessage[Any]]:
                                                  """Yield messages as they arrive from parallel execution."""
                                                  queue: asyncio.Queue[ChatMessage[Any] | None] = asyncio.Queue()
                                                  failures: dict[str, Exception] = {}
                                              
                                                  async def _run(node: MessageNode[TDeps, Any]):
                                                      try:
                                                          message = await node.run(*prompts, **kwargs)
                                                          await queue.put(message)
                                                      except Exception as e:
                                                          logger.exception("Error executing node %s", node.name)
                                                          failures[node.name] = e
                                                          # Put None to maintain queue count
                                                          await queue.put(None)
                                              
                                                  # Get nodes to run
                                                  combined_prompt = "\n".join([await to_prompt(p) for p in prompts])
                                                  all_nodes = list(await self.pick_agents(combined_prompt))
                                              
                                                  # Start all agents
                                                  tasks = [asyncio.create_task(_run(n), name=f"run_{n.name}") for n in all_nodes]
                                              
                                                  try:
                                                      # Yield messages as they arrive
                                                      for _ in all_nodes:
                                                          if msg := await queue.get():
                                                              yield msg
                                              
                                                      # If any failures occurred, raise error with details
                                                      if failures:
                                                          error_details = "\n".join(
                                                              f"- {name}: {error}" for name, error in failures.items()
                                                          )
                                                          error_msg = f"Some nodes failed to execute:\n{error_details}"
                                                          raise RuntimeError(error_msg)
                                              
                                                  finally:
                                                      # Clean up any remaining tasks
                                                      for task in tasks:
                                                          if not task.done():
                                                              task.cancel()
                                              

                                              run_job async

                                              run_job(
                                                  job: Job[TDeps, TJobResult],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> list[AgentResponse[TJobResult]]
                                              

                                              Execute a job across all team members in parallel.

Parameters:

Name                 Type                    Description                                           Default
job                  Job[TDeps, TJobResult]  Job configuration to execute                          required
store_history        bool                    Whether to add job execution to conversation history  True
include_agent_tools  bool                    Whether to include agent's tools alongside job tools  True

Returns:

Type                             Description
list[AgentResponse[TJobResult]]  List of responses from all agents

Raises:

Type        Description
JobError    If job execution fails for any agent
ValueError  If job configuration is invalid
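
Example (a hedged sketch; the Job instance and the team are assumed to be configured elsewhere, since their construction is not shown in this section):

    from llmling_agent.tasks import JobError

    async def run_review_job(team, review_job) -> None:
        try:
            responses = await team.run_job(review_job, include_agent_tools=False)
        except JobError as exc:
            print(f"Job failed: {exc}")
            return

        for response in responses:
            print(f"{response.agent_name}: {response.message.content}")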

                                              Source code in src/llmling_agent/delegation/team.py
                                              async def run_job[TJobResult](
                                                  self,
                                                  job: Job[TDeps, TJobResult],
                                                  *,
                                                  store_history: bool = True,
                                                  include_agent_tools: bool = True,
                                              ) -> list[AgentResponse[TJobResult]]:
                                                  """Execute a job across all team members in parallel.
                                              
                                                  Args:
                                                      job: Job configuration to execute
                                                      store_history: Whether to add job execution to conversation history
                                                      include_agent_tools: Whether to include agent's tools alongside job tools
                                              
                                                  Returns:
                                                      List of responses from all agents
                                              
                                                  Raises:
                                                      JobError: If job execution fails for any agent
                                                      ValueError: If job configuration is invalid
                                                  """
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                                  from llmling_agent.tasks import JobError
                                              
                                                  responses: list[AgentResponse[TJobResult]] = []
                                                  errors: dict[str, Exception] = {}
                                                  start_time = get_now()
                                              
                                                  # Validate dependencies for all agents
                                                  if job.required_dependency is not None:
                                                      invalid_agents = [
                                                          agent.name
                                                          for agent in self.iter_agents()
                                                          if not isinstance(agent.context.data, job.required_dependency)
                                                      ]
                                                      if invalid_agents:
                                                          msg = (
                                                              f"Agents {', '.join(invalid_agents)} don't have required "
                                                              f"dependency type: {job.required_dependency}"
                                                          )
                                                          raise JobError(msg)
                                              
                                                  try:
                                                      # Load knowledge for all agents if provided
                                                      if job.knowledge:
                                                          # TODO: resources
                                                          tools = [t.name for t in job.get_tools()]
                                                          await self.distribute(content="", tools=tools)
                                              
                                                      prompt = await job.get_prompt()
                                              
                                                      async def _run(agent: MessageNode[TDeps, TJobResult]):
                                                          assert isinstance(agent, Agent | StructuredAgent)
                                                          try:
                                                              with agent.tools.temporary_tools(
                                                                  job.get_tools(), exclusive=not include_agent_tools
                                                              ):
                                                                  start = perf_counter()
                                                                  resp = AgentResponse(
                                                                      agent_name=agent.name,
                                                                      message=await agent.run(prompt, store_history=store_history),  # pyright: ignore
                                                                      timing=perf_counter() - start,
                                                                  )
                                                                  responses.append(resp)
                                                          except Exception as e:  # noqa: BLE001
                                                              errors[agent.name] = e
                                              
                                                      # Run job in parallel on all agents
                                                      await asyncio.gather(*[_run(node) for node in self.agents])
                                              
                                                      return TeamResponse(responses=responses, start_time=start_time, errors=errors)
                                              
                                                  except Exception as e:
                                                      msg = "Job execution failed"
                                                      logger.exception(msg)
                                                      raise JobError(msg) from e
                                              

                                              TeamRun

                                              Bases: BaseTeam[TDeps, TResult]

                                              Handles team operations with monitoring.
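
Example (a construction sketch based on the __init__ signature shown below; `planner` and `writer` are hypothetical, already-configured agents):

    from llmling_agent.delegation.teamrun import TeamRun

    def build_pipeline(planner, writer) -> TeamRun:
        # Members execute as a sequential pipeline (see __prompt__ below).
        return TeamRun(
            [planner, writer],
            name="draft-pipeline",
            description="Plan the answer, then write it",
        )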

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              class TeamRun[TDeps, TResult](BaseTeam[TDeps, TResult]):
                                                  """Handles team operations with monitoring."""
                                              
                                                  def __init__(
                                                      self,
                                                      agents: Sequence[MessageNode[TDeps, Any]],
                                                      *,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                      shared_prompt: str | None = None,
                                                      validator: MessageNode[Any, TResult] | None = None,
                                                      picker: AnyAgent[Any, Any] | None = None,
                                                      num_picks: int | None = None,
                                                      pick_prompt: str | None = None,
                                                      # result_mode: ResultMode = "last",
                                                  ):
                                                      super().__init__(
                                                          agents,
                                                          name=name,
                                                          description=description,
                                                          shared_prompt=shared_prompt,
                                                          picker=picker,
                                                          num_picks=num_picks,
                                                          pick_prompt=pick_prompt,
                                                      )
                                                      self.validator = validator
                                                      self.result_mode = "last"
                                              
                                                  def __prompt__(self) -> str:
                                                      """Format team info for prompts."""
                                                      members = " -> ".join(a.name for a in self.agents)
                                                      desc = f" - {self.description}" if self.description else ""
                                                      return f"Sequential Team '{self.name}'{desc}\nPipeline: {members}"
                                              
                                                  async def _run(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      wait_for_connections: bool | None = None,
                                                      message_id: str | None = None,
                                                      conversation_id: str | None = None,
                                                      **kwargs: Any,
                                                  ) -> ChatMessage[TResult]:
                                                      """Run agents sequentially and return combined message.
                                              
                                                       This method wraps execute and extracts the ChatMessage in order to fulfill
                                                      the "message protocol".
                                                      """
                                                      message_id = message_id or str(uuid4())
                                              
                                                      result = await self.execute(*prompts, **kwargs)
                                                      all_messages = [r.message for r in result if r.message]
                                                      assert all_messages, "Error during execution, returned None for TeamRun"
                                                      # Determine content based on mode
                                                      match self.result_mode:
                                                          case "last":
                                                              content = all_messages[-1].content
                                                          # case "concat":
                                                          #     content = "\n".join(msg.format() for msg in all_messages)
                                                          case _:
                                                              msg = f"Invalid result mode: {self.result_mode}"
                                                              raise ValueError(msg)
                                              
                                                      return ChatMessage(
                                                          content=content,
                                                          role="assistant",
                                                          name=self.name,
                                                          associated_messages=all_messages,
                                                          message_id=message_id,
                                                          conversation_id=conversation_id,
                                                          metadata={
                                                              "execution_order": [r.agent_name for r in result],
                                                              "start_time": result.start_time.isoformat(),
                                                              "errors": {name: str(error) for name, error in result.errors.items()},
                                                          },
                                                      )
                                              
                                                  async def execute(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      **kwargs: Any,
                                                  ) -> TeamResponse[TResult]:
                                                      """Start execution with optional monitoring."""
                                                      self._team_talk.clear()
                                                      start_time = get_now()
                                                      final_prompt = list(prompts)
                                                      if self.shared_prompt:
                                                          final_prompt.insert(0, self.shared_prompt)
                                              
                                                      responses = [
                                                          i
                                                          async for i in self.execute_iter(*final_prompt)
                                                          if isinstance(i, AgentResponse)
                                                      ]
                                                      return TeamResponse(responses, start_time)
                                              
                                                  async def run_iter(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[ChatMessage[Any]]:
                                                      """Yield messages from the execution chain."""
                                                      async for item in self.execute_iter(*prompts, **kwargs):
                                                          match item:
                                                              case AgentResponse():
                                                                  if item.message:
                                                                      yield item.message
                                                              case Talk():
                                                                  pass
                                              
                                                  async def execute_iter(
                                                      self,
                                                      *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[Talk[Any] | AgentResponse[Any]]:
                                                      from toprompt import to_prompt
                                              
                                                      connections: list[Talk[Any]] = []
                                                      try:
                                                          combined_prompt = "\n".join([await to_prompt(p) for p in prompt])
                                                          all_nodes = list(await self.pick_agents(combined_prompt))
                                                          if self.validator:
                                                              all_nodes.append(self.validator)
                                                          first = all_nodes[0]
                                                          connections = [
                                                              source.connect_to(target, queued=True)
                                                              for source, target in pairwise(all_nodes)
                                                          ]
                                                          for conn in connections:
                                                              self._team_talk.append(conn)
                                              
                                                          # First agent
                                                          start = perf_counter()
                                                          message = await first.run(*prompt, **kwargs)
                                                          timing = perf_counter() - start
                                                          response = AgentResponse[Any](first.name, message=message, timing=timing)
                                                          yield response
                                              
                                                          # Process through chain
                                                          for connection in connections:
                                                              target = connection.targets[0]
                                                              target_name = target.name
                                                              yield connection
                                              
                                                              # Let errors propagate - they break the chain
                                                              start = perf_counter()
                                                              messages = await connection.trigger()
                                              
                                                              # If this is the last node
                                                              if target == all_nodes[-1]:
                                                                  last_talk = Talk[Any](target, [], connection_type="run")
                                                                  if response.message:
                                                                      last_talk.stats.messages.append(response.message)
                                                                  self._team_talk.append(last_talk)
                                              
                                                              timing = perf_counter() - start
                                                              msg = messages[0]
                                                              response = AgentResponse[Any](target_name, message=msg, timing=timing)
                                                              yield response
                                              
                                                      finally:
                                                          # Always clean up connections
                                                          for connection in connections:
                                                              connection.disconnect()
                                              
                                                  @asynccontextmanager
                                                  async def chain_stream(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                      require_all: bool = True,
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[StreamingResponseProtocol]:
                                                      """Stream results through chain of team members."""
                                                      from llmling_agent.agent import Agent, StructuredAgent
                                                      from llmling_agent.delegation import TeamRun
                                                      from llmling_agent_providers.base import StreamingResponseProtocol
                                              
                                                      async with AsyncExitStack() as stack:
                                                          streams: list[StreamingResponseProtocol[str]] = []
                                                          current_message = prompts
                                              
                                                          # Set up all streams
                                                          for agent in self.agents:
                                                              try:
                                                                  assert isinstance(agent, TeamRun | Agent | StructuredAgent), (
                                                                      "Cannot stream teams!"
                                                                  )
                                                                  stream = await stack.enter_async_context(
                                                                      agent.run_stream(*current_message, **kwargs)
                                                                  )
                                                                  streams.append(stream)  # type: ignore
                                                                  # Wait for complete response for next agent
                                                                  async for chunk in stream.stream():
                                                                      current_message = chunk
                                                                      if stream.is_complete:
                                                                          current_message = (stream.formatted_content,)  # type: ignore
                                                                          break
                                                              except Exception as e:
                                                                  if require_all:
                                                                      msg = f"Chain broken at {agent.name}: {e}"
                                                                      raise ValueError(msg) from e
                                                                  logger.warning("Chain handler %s failed: %s", agent.name, e)
                                              
                                                          # Create a stream-like interface for the chain
                                                          class ChainStream(StreamingResponseProtocol[str]):
                                                              def __init__(self):
                                                                  self.streams = streams
                                                                  self.current_stream_idx = 0
                                                                  self.is_complete = False
                                                                  self.model_name = None
                                              
                                                              def usage(self) -> Usage:
                                                                  @dataclass
                                                                  class Usage:
                                                                      total_tokens: int | None
                                                                      request_tokens: int | None
                                                                      response_tokens: int | None
                                              
                                                                  return Usage(0, 0, 0)
                                              
                                                              async def stream(self) -> AsyncGenerator[str, None]:  # type: ignore
                                                                  for idx, stream in enumerate(self.streams):
                                                                      self.current_stream_idx = idx
                                                                      async for chunk in stream.stream():
                                                                          yield chunk
                                                                          if idx == len(self.streams) - 1 and stream.is_complete:
                                                                              self.is_complete = True
                                              
                                                              async def stream_text(
                                                                  self,
                                                                  delta: bool = False,
                                                              ) -> AsyncGenerator[str, None]:
                                                                  for idx, stream in enumerate(self.streams):
                                                                      self.current_stream_idx = idx
                                                                      async for chunk in stream.stream_text(delta=delta):
                                                                          yield chunk
                                                                          if idx == len(self.streams) - 1 and stream.is_complete:
                                                                              self.is_complete = True
                                              
                                                          yield ChainStream()
                                              
                                                  @asynccontextmanager
                                                  async def run_stream(
                                                      self,
                                                      *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                      **kwargs: Any,
                                                  ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                                                      """Stream responses through the chain.
                                              
                                                      Provides same interface as Agent.run_stream.
                                                      """
                                                      async with self.chain_stream(*prompts, **kwargs) as stream:
                                                          yield stream
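
A minimal usage sketch (not part of the source): it assumes the public run() entry point inherited from the message-node base delegates to the _run() shown above, and the Agent constructor arguments, agent names, and model identifier are illustrative only. The Agent and TeamRun import paths follow the imports used inside chain_stream().

    import asyncio

    from llmling_agent.agent import Agent
    from llmling_agent.delegation import TeamRun

    async def main() -> None:
        # Hypothetical agents; constructor arguments are assumptions for illustration.
        analyzer = Agent(name="analyzer", model="openai:gpt-4o-mini")
        summarizer = Agent(name="summarizer", model="openai:gpt-4o-mini")

        # Members run sequentially: analyzer's output feeds summarizer.
        pipeline = TeamRun([analyzer, summarizer], name="analysis-pipeline")

        message = await pipeline.run("Review these release notes for breaking changes.")
        print(message.content)                      # output of the last member ("last" result mode)
        print(message.metadata["execution_order"])  # e.g. ["analyzer", "summarizer"]

    asyncio.run(main())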
                                              

                                              __prompt__

                                              __prompt__() -> str
                                              

                                              Format team info for prompts.

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              def __prompt__(self) -> str:
                                                  """Format team info for prompts."""
                                                  members = " -> ".join(a.name for a in self.agents)
                                                  desc = f" - {self.description}" if self.description else ""
                                                  return f"Sequential Team '{self.name}'{desc}\nPipeline: {members}"
                                              

                                              _run async

                                              _run(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None,
                                                  wait_for_connections: bool | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult]
                                              

                                              Run agents sequentially and return combined message.

                                               This method wraps execute and extracts the ChatMessage in order to fulfill the "message protocol".

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              async def _run(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  wait_for_connections: bool | None = None,
                                                  message_id: str | None = None,
                                                  conversation_id: str | None = None,
                                                  **kwargs: Any,
                                              ) -> ChatMessage[TResult]:
                                                  """Run agents sequentially and return combined message.
                                              
                                                   This method wraps execute and extracts the ChatMessage in order to fulfill
                                                  the "message protocol".
                                                  """
                                                  message_id = message_id or str(uuid4())
                                              
                                                  result = await self.execute(*prompts, **kwargs)
                                                  all_messages = [r.message for r in result if r.message]
                                                  assert all_messages, "Error during execution, returned None for TeamRun"
                                                  # Determine content based on mode
                                                  match self.result_mode:
                                                      case "last":
                                                          content = all_messages[-1].content
                                                      # case "concat":
                                                      #     content = "\n".join(msg.format() for msg in all_messages)
                                                      case _:
                                                          msg = f"Invalid result mode: {self.result_mode}"
                                                          raise ValueError(msg)
                                              
                                                  return ChatMessage(
                                                      content=content,
                                                      role="assistant",
                                                      name=self.name,
                                                      associated_messages=all_messages,
                                                      message_id=message_id,
                                                      conversation_id=conversation_id,
                                                      metadata={
                                                          "execution_order": [r.agent_name for r in result],
                                                          "start_time": result.start_time.isoformat(),
                                                          "errors": {name: str(error) for name, error in result.errors.items()},
                                                      },
                                                  )
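
Continuing the earlier sketch: the message returned via run() carries the per-member messages plus the execution metadata assembled here (ChatMessage attribute names taken from the construction above):

    msg = await pipeline.run("Review these release notes for breaking changes.")

    # "last" result mode: the top-level content is the final member's output.
    assert msg.content == msg.associated_messages[-1].content

    for step in msg.associated_messages:            # one ChatMessage per member
        print(step.name, "->", str(step.content)[:80])

    print(msg.metadata["start_time"])               # ISO timestamp of execution start
    print(msg.metadata["errors"])                   # empty mapping when every member succeeded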
                                              

                                              chain_stream async

                                              chain_stream(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None,
                                                  require_all: bool = True,
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[StreamingResponseProtocol]
                                              

                                              Stream results through chain of team members.

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              @asynccontextmanager
                                              async def chain_stream(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  require_all: bool = True,
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[StreamingResponseProtocol]:
                                                  """Stream results through chain of team members."""
                                                  from llmling_agent.agent import Agent, StructuredAgent
                                                  from llmling_agent.delegation import TeamRun
                                                  from llmling_agent_providers.base import StreamingResponseProtocol
                                              
                                                  async with AsyncExitStack() as stack:
                                                      streams: list[StreamingResponseProtocol[str]] = []
                                                      current_message = prompts
                                              
                                                      # Set up all streams
                                                      for agent in self.agents:
                                                          try:
                                                              assert isinstance(agent, TeamRun | Agent | StructuredAgent), (
                                                                  "Cannot stream teams!"
                                                              )
                                                              stream = await stack.enter_async_context(
                                                                  agent.run_stream(*current_message, **kwargs)
                                                              )
                                                              streams.append(stream)  # type: ignore
                                                              # Wait for complete response for next agent
                                                              async for chunk in stream.stream():
                                                                  current_message = chunk
                                                                  if stream.is_complete:
                                                                      current_message = (stream.formatted_content,)  # type: ignore
                                                                      break
                                                          except Exception as e:
                                                              if require_all:
                                                                  msg = f"Chain broken at {agent.name}: {e}"
                                                                  raise ValueError(msg) from e
                                                              logger.warning("Chain handler %s failed: %s", agent.name, e)
                                              
                                                      # Create a stream-like interface for the chain
                                                      class ChainStream(StreamingResponseProtocol[str]):
                                                          def __init__(self):
                                                              self.streams = streams
                                                              self.current_stream_idx = 0
                                                              self.is_complete = False
                                                              self.model_name = None
                                              
                                                          def usage(self) -> Usage:
                                                              @dataclass
                                                              class Usage:
                                                                  total_tokens: int | None
                                                                  request_tokens: int | None
                                                                  response_tokens: int | None
                                              
                                                              return Usage(0, 0, 0)
                                              
                                                          async def stream(self) -> AsyncGenerator[str, None]:  # type: ignore
                                                              for idx, stream in enumerate(self.streams):
                                                                  self.current_stream_idx = idx
                                                                  async for chunk in stream.stream():
                                                                      yield chunk
                                                                      if idx == len(self.streams) - 1 and stream.is_complete:
                                                                          self.is_complete = True
                                              
                                                          async def stream_text(
                                                              self,
                                                              delta: bool = False,
                                                          ) -> AsyncGenerator[str, None]:
                                                              for idx, stream in enumerate(self.streams):
                                                                  self.current_stream_idx = idx
                                                                  async for chunk in stream.stream_text(delta=delta):
                                                                      yield chunk
                                                                      if idx == len(self.streams) - 1 and stream.is_complete:
                                                                          self.is_complete = True
                                              
                                                      yield ChainStream()
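
A streaming sketch against the ChainStream interface defined above, reusing the pipeline from the earlier sketch; with require_all=False a failing member is logged and the chain continues with the next one:

    async with pipeline.chain_stream(
        "Review these release notes for breaking changes.",
        require_all=False,
    ) as stream:
        async for text in stream.stream_text(delta=True):
            print(text, end="", flush=True)
        print()  # is_complete is set once the final member's stream finishes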
                                              

                                              execute async

                                              execute(
                                                  *prompts: AnyPromptType | Image | PathLike[str] | None, **kwargs: Any
                                              ) -> TeamResponse[TResult]
                                              

                                              Start execution with optional monitoring.

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              async def execute(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | None,
                                                  **kwargs: Any,
                                              ) -> TeamResponse[TResult]:
                                                  """Start execution with optional monitoring."""
                                                  self._team_talk.clear()
                                                  start_time = get_now()
                                                  final_prompt = list(prompts)
                                                  if self.shared_prompt:
                                                      final_prompt.insert(0, self.shared_prompt)
                                              
                                                  responses = [
                                                      i
                                                      async for i in self.execute_iter(*final_prompt)
                                                      if isinstance(i, AgentResponse)
                                                  ]
                                                  return TeamResponse(responses, start_time)
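
Sketch of inspecting the returned TeamResponse, reusing the pipeline from the earlier sketch (attribute names follow how _run() consumes the result):

    result = await pipeline.execute("Review these release notes for breaking changes.")
    print(result.start_time)
    for response in result:   # AgentResponse entries in execution order
        print(response.agent_name, "ok" if response.message else "failed", response.timing)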
                                              

                                              run_iter async

                                              run_iter(
                                                  *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                                              ) -> AsyncIterator[ChatMessage[Any]]
                                              

                                              Yield messages from the execution chain.

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              async def run_iter(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[ChatMessage[Any]]:
                                                  """Yield messages from the execution chain."""
                                                  async for item in self.execute_iter(*prompts, **kwargs):
                                                      match item:
                                                          case AgentResponse():
                                                              if item.message:
                                                                  yield item.message
                                                          case Talk():
                                                              pass
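
Iterating intermediate results as they are produced, reusing the pipeline from the earlier sketch:

    async for message in pipeline.run_iter("Review these release notes for breaking changes."):
        # One ChatMessage per member, yielded in pipeline order.
        print(f"[{message.name}] {message.content}")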
                                              

                                              run_stream async

                                              run_stream(
                                                  *prompts: AnyPromptType | Image | PathLike[str], **kwargs: Any
                                              ) -> AsyncIterator[StreamingResponseProtocol[TResult]]
                                              

                                              Stream responses through the chain.

                                              Provides same interface as Agent.run_stream.

                                              Source code in src/llmling_agent/delegation/teamrun.py
                                              @asynccontextmanager
                                              async def run_stream(
                                                  self,
                                                  *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                                                  **kwargs: Any,
                                              ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                                                  """Stream responses through the chain.
                                              
                                                  Provides same interface as Agent.run_stream.
                                                  """
                                                  async with self.chain_stream(*prompts, **kwargs) as stream:
                                                      yield stream
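
run_stream() is the Agent-compatible entry point wrapping chain_stream(); a sketch reusing the pipeline from the earlier sketch:

    async with pipeline.run_stream("Review these release notes for breaking changes.") as stream:
        async for chunk in stream.stream():
            print(chunk, end="", flush=True)
        print()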
                                              

                                              Tool dataclass

                                              Information about a registered tool.

                                              Source code in src/llmling_agent/tools/base.py
                                              @dataclass
                                              class Tool:
                                                  """Information about a registered tool."""
                                              
                                                  callable: LLMCallableTool
                                                  """The actual tool implementation"""
                                              
                                                  enabled: bool = True
                                                  """Whether the tool is currently enabled"""
                                              
                                                  source: ToolSource = "runtime"
                                                  """Where the tool came from."""
                                              
                                                  priority: int = 100
                                                  """Priority for tool execution (lower = higher priority)"""
                                              
                                                  requires_confirmation: bool = False
                                                  """Whether tool execution needs explicit confirmation"""
                                              
                                                  requires_capability: str | None = None
                                                  """Optional capability required to use this tool"""
                                              
                                                  agent_name: str | None = None
                                                  """The agent name as an identifier for agent-as-a-tool."""
                                              
                                                  metadata: dict[str, str] = field(default_factory=dict)
                                                  """Additional tool metadata"""
                                              
                                                  cache_enabled: bool = False
                                                  """Whether to enable caching for this tool."""
                                              
                                                  @property
                                                  def schema(self) -> py2openai.OpenAIFunctionTool:
                                                      """Get the OpenAI function schema for the tool."""
                                                      return self.callable.get_schema()
                                              
                                                  @property
                                                  def name(self) -> str:
                                                      """Get tool name."""
                                                      return self.callable.name
                                              
                                                  @property
                                                  def description(self) -> str | None:
                                                      """Get tool description."""
                                                      return self.callable.description
                                              
                                                  def matches_filter(self, state: Literal["all", "enabled", "disabled"]) -> bool:
                                                      """Check if tool matches state filter."""
                                                      match state:
                                                          case "all":
                                                              return True
                                                          case "enabled":
                                                              return self.enabled
                                                          case "disabled":
                                                              return not self.enabled
                                              
                                                  @property
                                                  def parameters(self) -> list[ToolParameter]:
                                                      """Get information about tool parameters."""
                                                      schema = self.schema["function"]
                                                      properties: dict[str, Property] = schema.get("properties", {})  # type: ignore
                                                      required: list[str] = schema.get("required", [])  # type: ignore
                                              
                                                      return [
                                                          ToolParameter(
                                                              name=name,
                                                              required=name in required,
                                                              type_info=details.get("type"),
                                                              description=details.get("description"),
                                                          )
                                                          for name, details in properties.items()
                                                      ]
                                              
                                                  def format_info(self, indent: str = "  ") -> str:
                                                      """Format complete tool information."""
                                                      lines = [f"{indent}{self.name}"]
                                                      if self.description:
                                                          lines.append(f"{indent}  {self.description}")
                                                      if self.parameters:
                                                          lines.append(f"{indent}  Parameters:")
                                                          lines.extend(f"{indent}    {param}" for param in self.parameters)
                                                      if self.metadata:
                                                          lines.append(f"{indent}  Metadata:")
                                                          lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                                                      return "\n".join(lines)
                                              
                                                  async def execute(self, *args: Any, **kwargs: Any) -> Any:
                                                      """Execute tool, handling both sync and async cases."""
                                                      fn = track_tool(self.name)(self.callable.callable)
                                                      return await execute(fn, *args, **kwargs, use_thread=True)
                                              
                                                  @classmethod
                                                  def from_code(
                                                      cls,
                                                      code: str,
                                                      name: str | None = None,
                                                      description: str | None = None,
                                                  ) -> Self:
                                                      """Create a tool from a code string."""
                                                      namespace: dict[str, Any] = {}
                                                      exec(code, namespace)
                                                      func = next((v for v in namespace.values() if callable(v)), None)
                                                      if not func:
                                                          msg = "No callable found in provided code"
                                                          raise ValueError(msg)
                                                      return cls.from_callable(
                                                          func, name_override=name, description_override=description
                                                      )
                                              
                                                  @classmethod
                                                  def from_callable(
                                                      cls,
                                                      fn: Callable[..., Any] | str,
                                                      *,
                                                      name_override: str | None = None,
                                                      description_override: str | None = None,
                                                      schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                      **kwargs: Any,
                                                  ) -> Self:
                                                      tool = LLMCallableTool.from_callable(
                                                          fn,
                                                          name_override=name_override,
                                                          description_override=description_override,
                                                          schema_override=schema_override,
                                                      )
                                                      return cls(tool, **kwargs)
                                              
                                                  @classmethod
                                                  def from_crewai_tool(
                                                      cls,
                                                      tool: Any,
                                                      *,
                                                      name_override: str | None = None,
                                                      description_override: str | None = None,
                                                      schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                      **kwargs: Any,
                                                  ) -> Self:
                                                      """Allows importing crewai tools."""
                                                      # vaidate_import("crewai_tools", "crewai")
                                                      try:
                                                          from crewai.tools import BaseTool as CrewAiBaseTool
                                                      except ImportError as e:
                                                          msg = "crewai package not found. Please install it with 'pip install crewai'"
                                                          raise ImportError(msg) from e
                                              
                                                      if not isinstance(tool, CrewAiBaseTool):
                                                          msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                                                          raise TypeError(msg)
                                              
                                                      return cls.from_callable(
                                                          tool._run,
                                                          name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                                                          description_override=description_override or tool.description,
                                                          schema_override=schema_override,
                                                          **kwargs,
                                                      )
                                              
                                                  @classmethod
                                                  def from_langchain_tool(
                                                      cls,
                                                      tool: Any,
                                                      *,
                                                      name_override: str | None = None,
                                                      description_override: str | None = None,
                                                      schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                      **kwargs: Any,
                                                  ) -> Self:
                                                      """Create a tool from a LangChain tool."""
                                                      # vaidate_import("langchain_core", "langchain")
                                                      try:
                                                          from langchain_core.tools import BaseTool as LangChainBaseTool
                                                      except ImportError as e:
                                                          msg = "langchain-core package not found."
                                                          raise ImportError(msg) from e
                                              
                                                      if not isinstance(tool, LangChainBaseTool):
                                                          msg = f"Expected LangChain BaseTool, got {type(tool)}"
                                                          raise TypeError(msg)
                                              
                                                      return cls.from_callable(
                                                          tool.invoke,
                                                          name_override=name_override or tool.name,
                                                          description_override=description_override or tool.description,
                                                          schema_override=schema_override,
                                                          **kwargs,
                                                      )
                                              
                                                  @classmethod
                                                  def from_autogen_tool(
                                                      cls,
                                                      tool: Any,
                                                      *,
                                                      name_override: str | None = None,
                                                      description_override: str | None = None,
                                                      schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                      **kwargs: Any,
                                                   ) -> Self:
                                                       """Create a tool from an AutoGen tool."""
                                                      # vaidate_import("autogen_core", "autogen")
                                                      try:
                                                          from autogen_core import CancellationToken
                                                          from autogen_core.tools import BaseTool
                                                      except ImportError as e:
                                                           msg = "autogen_core package not found."
                                                          raise ImportError(msg) from e
                                              
                                                      if not isinstance(tool, BaseTool):
                                                           msg = f"Expected AutoGen BaseTool, got {type(tool)}"
                                                          raise TypeError(msg)
                                                      token = CancellationToken()
                                              
                                                      input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                                              
                                                      name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                                                      description = (
                                                          description_override
                                                          or tool.description
                                                          or inspect.getdoc(tool.__class__)
                                                          or ""
                                                      )
                                              
                                                      async def wrapper(**kwargs: Any) -> Any:
                                                          # Convert kwargs to the expected input model
                                                          model = input_model(**kwargs)
                                                          return await tool.run(model, cancellation_token=token)
                                              
                                                      return cls.from_callable(
                                                          wrapper,  # type: ignore
                                                          name_override=name,
                                                          description_override=description,
                                                          schema_override=schema_override,
                                                          **kwargs,
                                                      )
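
                                               A minimal usage sketch (not part of the generated reference): wrapping a plain function with Tool.from_callable and inspecting the resulting Tool. The example function, and the expectation that LLMCallableTool derives the name and description from the function and its docstring, are assumptions for illustration.

                                               from llmling_agent.tools.base import Tool

                                               def get_weather(city: str) -> str:
                                                   """Return a short weather description for a city."""
                                                   return f"Sunny in {city}"

                                               # Extra keyword arguments are forwarded to the Tool dataclass fields.
                                               weather_tool = Tool.from_callable(get_weather, requires_confirmation=True)
                                               print(weather_tool.name)         # typically "get_weather"
                                               print(weather_tool.enabled)      # True by default
                                               for param in weather_tool.parameters:
                                                   print(param.name, param.required)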
                                              

                                              agent_name class-attribute instance-attribute

                                              agent_name: str | None = None
                                              

                                              The agent name as an identifier for agent-as-a-tool.

                                              cache_enabled class-attribute instance-attribute

                                              cache_enabled: bool = False
                                              

                                              Whether to enable caching for this tool.

                                              callable instance-attribute

                                              callable: LLMCallableTool
                                              

                                              The actual tool implementation

                                              description property

                                              description: str | None
                                              

                                              Get tool description.

                                              enabled class-attribute instance-attribute

                                              enabled: bool = True
                                              

                                              Whether the tool is currently enabled

                                              metadata class-attribute instance-attribute

                                              metadata: dict[str, str] = field(default_factory=dict)
                                              

                                              Additional tool metadata

                                              name property

                                              name: str
                                              

                                              Get tool name.

                                              parameters property

                                              parameters: list[ToolParameter]
                                              

                                              Get information about tool parameters.

                                              priority class-attribute instance-attribute

                                              priority: int = 100
                                              

                                              Priority for tool execution (lower = higher priority)

                                              requires_capability class-attribute instance-attribute

                                              requires_capability: str | None = None
                                              

                                              Optional capability required to use this tool

                                              requires_confirmation class-attribute instance-attribute

                                              requires_confirmation: bool = False
                                              

                                              Whether tool execution needs explicit confirmation

                                              schema property

                                              schema: OpenAIFunctionTool
                                              

                                              Get the OpenAI function schema for the tool.

                                              source class-attribute instance-attribute

                                              source: ToolSource = 'runtime'
                                              

                                              Where the tool came from.

                                              execute async

                                              execute(*args: Any, **kwargs: Any) -> Any
                                              

                                              Execute tool, handling both sync and async cases.

                                              Source code in src/llmling_agent/tools/base.py
                                              async def execute(self, *args: Any, **kwargs: Any) -> Any:
                                                  """Execute tool, handling both sync and async cases."""
                                                  fn = track_tool(self.name)(self.callable.callable)
                                                  return await execute(fn, *args, **kwargs, use_thread=True)
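
                                               A hedged sketch of calling execute: it is a coroutine even when the wrapped callable is synchronous (sync functions are run in a thread), so it must be awaited. The example function is illustrative only.

                                               import asyncio

                                               from llmling_agent.tools.base import Tool

                                               def get_weather(city: str) -> str:
                                                   """Return a short weather description for a city."""
                                                   return f"Sunny in {city}"

                                               weather_tool = Tool.from_callable(get_weather)
                                               # Positional and keyword arguments are passed through to the underlying callable.
                                               result = asyncio.run(weather_tool.execute(city="Paris"))
                                               print(result)  # "Sunny in Paris"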
                                              

                                              format_info

                                              format_info(indent: str = '  ') -> str
                                              

                                              Format complete tool information.

                                              Source code in src/llmling_agent/tools/base.py
                                              def format_info(self, indent: str = "  ") -> str:
                                                  """Format complete tool information."""
                                                  lines = [f"{indent}{self.name}"]
                                                  if self.description:
                                                      lines.append(f"{indent}  {self.description}")
                                                  if self.parameters:
                                                      lines.append(f"{indent}  Parameters:")
                                                      lines.extend(f"{indent}    {param}" for param in self.parameters)
                                                  if self.metadata:
                                                      lines.append(f"{indent}  Metadata:")
                                                      lines.extend(f"{indent}    {k}: {v}" for k, v in self.metadata.items())
                                                  return "\n".join(lines)
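
                                               A small sketch of the human-readable summary format_info builds; the exact name and description text depend on what LLMCallableTool derives from the wrapped function, so the shown output is indicative only.

                                               from llmling_agent.tools.base import Tool

                                               def add(a: int, b: int) -> int:
                                                   """Add two integers."""
                                                   return a + b

                                               tool = Tool.from_callable(add, metadata={"category": "math"})
                                               print(tool.format_info())
                                               # add
                                               #   Add two integers.
                                               #   Parameters:
                                               #     ...
                                               #   Metadata:
                                               #     category: math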
                                              

                                              from_autogen_tool classmethod

                                              from_autogen_tool(
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                              ) -> Self
                                              

                                               Create a tool from an AutoGen tool.

                                              Source code in src/llmling_agent/tools/base.py
                                              @classmethod
                                              def from_autogen_tool(
                                                  cls,
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                               ) -> Self:
                                                   """Create a tool from an AutoGen tool."""
                                                  # vaidate_import("autogen_core", "autogen")
                                                  try:
                                                      from autogen_core import CancellationToken
                                                      from autogen_core.tools import BaseTool
                                                  except ImportError as e:
                                                       msg = "autogen_core package not found."
                                                      raise ImportError(msg) from e
                                              
                                                  if not isinstance(tool, BaseTool):
                                                       msg = f"Expected AutoGen BaseTool, got {type(tool)}"
                                                      raise TypeError(msg)
                                                  token = CancellationToken()
                                              
                                                  input_model = tool.__class__.__orig_bases__[0].__args__[0]  # type: ignore
                                              
                                                  name = name_override or tool.name or tool.__class__.__name__.removesuffix("Tool")
                                                  description = (
                                                      description_override
                                                      or tool.description
                                                      or inspect.getdoc(tool.__class__)
                                                      or ""
                                                  )
                                              
                                                  async def wrapper(**kwargs: Any) -> Any:
                                                      # Convert kwargs to the expected input model
                                                      model = input_model(**kwargs)
                                                      return await tool.run(model, cancellation_token=token)
                                              
                                                  return cls.from_callable(
                                                      wrapper,  # type: ignore
                                                      name_override=name,
                                                      description_override=description,
                                                      schema_override=schema_override,
                                                      **kwargs,
                                                  )
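
                                               A hedged sketch of the adapter call only: `my_autogen_tool` is a placeholder for an already-constructed autogen_core.tools.BaseTool instance (for example a FunctionTool); AutoGen installation and tool construction are out of scope here.

                                               from llmling_agent.tools.base import Tool

                                               # Placeholder: any existing autogen_core tool instance built elsewhere.
                                               tool = Tool.from_autogen_tool(my_autogen_tool, name_override="search")
                                               print(tool.name)  # "search"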
                                              

                                              from_code classmethod

                                              from_code(code: str, name: str | None = None, description: str | None = None) -> Self
                                              

                                              Create a tool from a code string.

                                              Source code in src/llmling_agent/tools/base.py
                                              @classmethod
                                              def from_code(
                                                  cls,
                                                  code: str,
                                                  name: str | None = None,
                                                  description: str | None = None,
                                              ) -> Self:
                                                  """Create a tool from a code string."""
                                                  namespace: dict[str, Any] = {}
                                                  exec(code, namespace)
                                                  func = next((v for v in namespace.values() if callable(v)), None)
                                                  if not func:
                                                      msg = "No callable found in provided code"
                                                      raise ValueError(msg)
                                                  return cls.from_callable(
                                                      func, name_override=name, description_override=description
                                                  )
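
                                               A sketch of building a tool from source text: from_code executes the string and wraps the first callable it finds, so the snippet should define exactly the function you want exposed.

                                               from llmling_agent.tools.base import Tool

                                               code = '''
                                               def add(a: int, b: int) -> int:
                                                   """Add two integers."""
                                                   return a + b
                                               '''

                                               add_tool = Tool.from_code(code, name="add", description="Add two integers")
                                               print(add_tool.name)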
                                              

                                              from_crewai_tool classmethod

                                              from_crewai_tool(
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                              ) -> Self
                                              

                                              Allows importing crewai tools.

                                              Source code in src/llmling_agent/tools/base.py
                                              @classmethod
                                              def from_crewai_tool(
                                                  cls,
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                              ) -> Self:
                                                  """Allows importing crewai tools."""
                                                  # vaidate_import("crewai_tools", "crewai")
                                                  try:
                                                      from crewai.tools import BaseTool as CrewAiBaseTool
                                                  except ImportError as e:
                                                      msg = "crewai package not found. Please install it with 'pip install crewai'"
                                                      raise ImportError(msg) from e
                                              
                                                  if not isinstance(tool, CrewAiBaseTool):
                                                      msg = f"Expected CrewAI BaseTool, got {type(tool)}"
                                                      raise TypeError(msg)
                                              
                                                  return cls.from_callable(
                                                      tool._run,
                                                      name_override=name_override or tool.__class__.__name__.removesuffix("Tool"),
                                                      description_override=description_override or tool.description,
                                                      schema_override=schema_override,
                                                      **kwargs,
                                                  )
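
                                               A hedged sketch of the adapter call only: `my_crewai_tool` is a placeholder for an already-configured crewai.tools.BaseTool instance; CrewAI setup is not shown. The wrapped name defaults to the tool's class name without the "Tool" suffix.

                                               from llmling_agent.tools.base import Tool

                                               # Placeholder: any existing CrewAI tool instance built elsewhere.
                                               tool = Tool.from_crewai_tool(my_crewai_tool)
                                               print(tool.name, tool.description)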
                                              

                                              from_langchain_tool classmethod

                                              from_langchain_tool(
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                              ) -> Self
                                              

                                              Create a tool from a LangChain tool.

                                              Source code in src/llmling_agent/tools/base.py
                                              @classmethod
                                              def from_langchain_tool(
                                                  cls,
                                                  tool: Any,
                                                  *,
                                                  name_override: str | None = None,
                                                  description_override: str | None = None,
                                                  schema_override: py2openai.OpenAIFunctionDefinition | None = None,
                                                  **kwargs: Any,
                                              ) -> Self:
                                                  """Create a tool from a LangChain tool."""
                                                  # vaidate_import("langchain_core", "langchain")
                                                  try:
                                                      from langchain_core.tools import BaseTool as LangChainBaseTool
                                                  except ImportError as e:
                                                      msg = "langchain-core package not found."
                                                      raise ImportError(msg) from e
                                              
                                                  if not isinstance(tool, LangChainBaseTool):
                                                      msg = f"Expected LangChain BaseTool, got {type(tool)}"
                                                      raise TypeError(msg)
                                              
                                                  return cls.from_callable(
                                                      tool.invoke,
                                                      name_override=name_override or tool.name,
                                                      description_override=description_override or tool.description,
                                                      schema_override=schema_override,
                                                      **kwargs,
                                                  )
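
                                               A sketch assuming langchain-core is installed: the @tool decorator from langchain_core.tools produces a BaseTool subclass, which from_langchain_tool then wraps via its invoke method.

                                               from langchain_core.tools import tool as lc_tool

                                               from llmling_agent.tools.base import Tool

                                               @lc_tool
                                               def multiply(a: int, b: int) -> int:
                                                   """Multiply two integers."""
                                                   return a * b

                                               wrapped = Tool.from_langchain_tool(multiply)
                                               print(wrapped.name)         # "multiply", taken from the LangChain tool
                                               print(wrapped.description)  # derived from the docstring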
                                              

                                              matches_filter

                                              matches_filter(state: Literal['all', 'enabled', 'disabled']) -> bool
                                              

                                              Check if tool matches state filter.

                                              Source code in src/llmling_agent/tools/base.py
                                              def matches_filter(self, state: Literal["all", "enabled", "disabled"]) -> bool:
                                                  """Check if tool matches state filter."""
                                                  match state:
                                                      case "all":
                                                          return True
                                                      case "enabled":
                                                          return self.enabled
                                                      case "disabled":
                                                          return not self.enabled
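
                                               A small sketch of filtering a tool collection by state; `tools` stands for any iterable of Tool instances.

                                               from llmling_agent.tools.base import Tool

                                               def enabled_tools(tools: list[Tool]) -> list[Tool]:
                                                   """Keep only tools that pass the "enabled" state filter."""
                                                   return [t for t in tools if t.matches_filter("enabled")]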
                                              

                                              ToolCallInfo

                                              Bases: BaseModel

                                              Information about an executed tool call.

                                              Source code in src/llmling_agent/tools/tool_call_info.py
                                              class ToolCallInfo(BaseModel):
                                                  """Information about an executed tool call."""
                                              
                                                  tool_name: str
                                                  """Name of the tool that was called."""
                                              
                                                  args: dict[str, Any]
                                                  """Arguments passed to the tool."""
                                              
                                                  result: Any
                                                  """Result returned by the tool."""
                                              
                                                  agent_name: str
                                                  """Name of the calling agent."""
                                              
                                                  tool_call_id: str = Field(default_factory=lambda: str(uuid4()))
                                                  """ID provided by the model (e.g. OpenAI function call ID)."""
                                              
                                                  timestamp: datetime = Field(default_factory=get_now)
                                                  """When the tool was called."""
                                              
                                                  message_id: str | None = None
                                                  """ID of the message that triggered this tool call."""
                                              
                                                  context_data: Any | None = None
                                                  """Optional context data that was passed to the agent's run() method."""
                                              
                                                  error: str | None = None
                                                  """Error message if the tool call failed."""
                                              
                                                  timing: float | None = None
                                                  """Time taken for this specific tool call in seconds."""
                                              
                                                  agent_tool_name: str | None = None
                                                  """If this tool is agent-based, the name of that agent."""
                                              
                                                  model_config = ConfigDict(use_attribute_docstrings=True, extra="forbid")
                                              
                                                  def format(
                                                      self,
                                                      style: FormatStyle = "simple",
                                                      *,
                                                      template: str | None = None,
                                                      variables: dict[str, Any] | None = None,
                                                      show_timing: bool = True,
                                                      show_ids: bool = False,
                                                  ) -> str:
                                                      """Format tool call information with configurable style.
                                              
                                                      Args:
                                                          style: Predefined style to use:
                                                              - simple: Compact single-line format
                                                              - detailed: Multi-line with all details
                                                              - markdown: Formatted markdown with syntax highlighting
                                                          template: Optional custom template (required if style="custom")
                                                          variables: Additional variables for template rendering
                                                          show_timing: Whether to include execution timing
                                                          show_ids: Whether to include tool_call_id and message_id
                                              
                                                      Returns:
                                                          Formatted tool call information
                                              
                                                      Raises:
                                                          ValueError: If style is invalid or custom template is missing
                                                      """
                                                      from jinjarope import Environment
                                              
                                                      # Select template
                                                      if template:
                                                          template_str = template
                                                      elif style in TEMPLATES:
                                                          template_str = TEMPLATES[style]
                                                      else:
                                                          msg = f"Invalid style: {style}"
                                                          raise ValueError(msg)
                                              
                                                      # Prepare template variables
                                                      vars_ = {
                                                          "tool_name": self.tool_name,
                                                          "args": self.args,  # No pre-formatting needed
                                                          "result": self.result,
                                                          "error": self.error,
                                                          "agent_name": self.agent_name,
                                                          "timestamp": self.timestamp,
                                                          "timing": self.timing if show_timing else None,
                                                          "agent_tool_name": self.agent_tool_name,
                                                      }
                                              
                                                      if show_ids:
                                                          vars_.update({
                                                              "tool_call_id": self.tool_call_id,
                                                              "message_id": self.message_id,
                                                          })
                                              
                                                      if variables:
                                                          vars_.update(variables)
                                              
                                                      # Render template
                                                      env = Environment(trim_blocks=True, lstrip_blocks=True)
                                                      env.filters["repr"] = repr  # Add repr filter
                                                      template_obj = env.from_string(template_str)
                                                      return template_obj.render(**vars_)
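
A minimal construction sketch, assuming ToolCallInfo is importable from llmling_agent.tools.tool_call_info (the path shown in the source header); only the fields without defaults need to be supplied, the rest are filled in by their default factories:

from llmling_agent.tools.tool_call_info import ToolCallInfo

info = ToolCallInfo(
    tool_name="open_url",                 # required
    args={"url": "https://example.com"},  # required
    result="page opened",                 # required
    agent_name="web_agent",               # required
    timing=0.42,                          # optional: seconds spent in the call
)

# tool_call_id and timestamp come from their default factories.
print(info.tool_call_id)
print(info.format())  # renders the default "simple" style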
                                              

                                              agent_name instance-attribute

                                              agent_name: str
                                              

                                              Name of the calling agent.

                                              agent_tool_name class-attribute instance-attribute

                                              agent_tool_name: str | None = None
                                              

                                              If this tool is agent-based, the name of that agent.

                                              args instance-attribute

                                              args: dict[str, Any]
                                              

                                              Arguments passed to the tool.

                                              context_data class-attribute instance-attribute

                                              context_data: Any | None = None
                                              

                                              Optional context data that was passed to the agent's run() method.

                                              error class-attribute instance-attribute

                                              error: str | None = None
                                              

                                              Error message if the tool call failed.

                                              message_id class-attribute instance-attribute

                                              message_id: str | None = None
                                              

                                              ID of the message that triggered this tool call.

                                              result instance-attribute

                                              result: Any
                                              

                                              Result returned by the tool.

                                              timestamp class-attribute instance-attribute

                                              timestamp: datetime = Field(default_factory=get_now)
                                              

                                              When the tool was called.

                                              timing class-attribute instance-attribute

                                              timing: float | None = None
                                              

                                              Time taken for this specific tool call in seconds.

                                              tool_call_id class-attribute instance-attribute

                                              tool_call_id: str = Field(default_factory=lambda: str(uuid4()))
                                              

                                              ID provided by the model (e.g. OpenAI function call ID).

                                              tool_name instance-attribute

                                              tool_name: str
                                              

                                              Name of the tool that was called.

                                              format

                                              format(
                                                  style: FormatStyle = "simple",
                                                  *,
                                                  template: str | None = None,
                                                  variables: dict[str, Any] | None = None,
                                                  show_timing: bool = True,
                                                  show_ids: bool = False,
                                              ) -> str
                                              

                                              Format tool call information with configurable style.

Parameters:

style (FormatStyle, default 'simple')
    Predefined style to use: simple (compact single-line format), detailed (multi-line with all details), markdown (formatted markdown with syntax highlighting)

template (str | None, default None)
    Optional custom template (required if style="custom")

variables (dict[str, Any] | None, default None)
    Additional variables for template rendering

show_timing (bool, default True)
    Whether to include execution timing

show_ids (bool, default False)
    Whether to include tool_call_id and message_id

Returns:

str
    Formatted tool call information

Raises:

ValueError
    If style is invalid or custom template is missing
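
A usage sketch of these parameters, reusing the info instance built above (the rendered output depends on the TEMPLATES definitions and is not reproduced here):

print(info.format())                   # "simple": compact single-line format
print(info.format(style="detailed"))   # multi-line with all details
print(info.format(style="markdown"))   # markdown with syntax highlighting

# Expose tool_call_id/message_id and drop timing from the template variables:
print(info.format(style="detailed", show_ids=True, show_timing=False))

# Anything that is neither a known style nor accompanied by a template raises:
try:
    info.format(style="fancy")  # type checkers may flag this literal
except ValueError as exc:
    print(exc)  # Invalid style: fancy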

                                              Source code in src/llmling_agent/tools/tool_call_info.py
                                              def format(
                                                  self,
                                                  style: FormatStyle = "simple",
                                                  *,
                                                  template: str | None = None,
                                                  variables: dict[str, Any] | None = None,
                                                  show_timing: bool = True,
                                                  show_ids: bool = False,
                                              ) -> str:
                                                  """Format tool call information with configurable style.
                                              
                                                  Args:
                                                      style: Predefined style to use:
                                                          - simple: Compact single-line format
                                                          - detailed: Multi-line with all details
                                                          - markdown: Formatted markdown with syntax highlighting
                                                      template: Optional custom template (required if style="custom")
                                                      variables: Additional variables for template rendering
                                                      show_timing: Whether to include execution timing
                                                      show_ids: Whether to include tool_call_id and message_id
                                              
                                                  Returns:
                                                      Formatted tool call information
                                              
                                                  Raises:
                                                      ValueError: If style is invalid or custom template is missing
                                                  """
                                                  from jinjarope import Environment
                                              
                                                  # Select template
                                                  if template:
                                                      template_str = template
                                                  elif style in TEMPLATES:
                                                      template_str = TEMPLATES[style]
                                                  else:
                                                      msg = f"Invalid style: {style}"
                                                      raise ValueError(msg)
                                              
                                                  # Prepare template variables
                                                  vars_ = {
                                                      "tool_name": self.tool_name,
                                                      "args": self.args,  # No pre-formatting needed
                                                      "result": self.result,
                                                      "error": self.error,
                                                      "agent_name": self.agent_name,
                                                      "timestamp": self.timestamp,
                                                      "timing": self.timing if show_timing else None,
                                                      "agent_tool_name": self.agent_tool_name,
                                                  }
                                              
                                                  if show_ids:
                                                      vars_.update({
                                                          "tool_call_id": self.tool_call_id,
                                                          "message_id": self.message_id,
                                                      })
                                              
                                                  if variables:
                                                      vars_.update(variables)
                                              
                                                  # Render template
                                                  env = Environment(trim_blocks=True, lstrip_blocks=True)
                                                  env.filters["repr"] = repr  # Add repr filter
                                                  template_obj = env.from_string(template_str)
                                                  return template_obj.render(**vars_)
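
A custom-template sketch: when template is given it takes precedence over style, and the variables listed above (plus the repr filter registered on the environment) are available to the template. The template string below is illustrative, not part of the library:

CUSTOM = (
    "{{ agent_name }} -> {{ tool_name }}({{ args | repr }})"
    "{% if error %} failed: {{ error }}{% else %} = {{ result | repr }}{% endif %}"
    "{% if timing is not none %} [{{ '%.3f' | format(timing) }}s]{% endif %}"
)

print(info.format(template=CUSTOM, variables={"run_label": "demo"}))

Extra entries passed via variables simply become additional template variables; they are ignored unless the template references them.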
                                              

                                              VideoURLContent

                                              Bases: VideoContent

                                              Video from URL.

                                              Source code in src/llmling_agent/models/content.py
                                              class VideoURLContent(VideoContent):
                                                  """Video from URL."""
                                              
                                                  type: Literal["video_url"] = Field("video_url", init=False)
                                                  """URL-based video."""
                                              
                                                  url: str
                                                  """URL to the video."""
                                              
                                                  def to_openai_format(self) -> dict[str, Any]:
                                                      """Convert to OpenAI API format for video models."""
                                                      content = {"url": self.url, "format": self.format or "auto"}
                                                      return {"type": "video", "video": content}
                                              

                                              type class-attribute instance-attribute

                                              type: Literal['video_url'] = Field('video_url', init=False)
                                              

                                              URL-based video.

                                              url instance-attribute

                                              url: str
                                              

                                              URL to the video.

                                              to_openai_format

                                              to_openai_format() -> dict[str, Any]
                                              

                                              Convert to OpenAI API format for video models.

                                              Source code in src/llmling_agent/models/content.py
                                              def to_openai_format(self) -> dict[str, Any]:
                                                  """Convert to OpenAI API format for video models."""
                                                  content = {"url": self.url, "format": self.format or "auto"}
                                                  return {"type": "video", "video": content}
                                              

                                              YAMLCode

                                              Bases: BaseCode

                                              YAML with syntax validation.

                                              Source code in src/llmling_agent/common_types.py
                                              class YAMLCode(BaseCode):
                                                  """YAML with syntax validation."""
                                              
                                                  @field_validator("code")
                                                  @classmethod
                                                  def validate_syntax(cls, code: str) -> str:
                                                      import yamling
                                              
                                                      try:
                                                          yamling.load(code, mode="yaml")
                                                      except yamling.ParsingError as e:
                                                          msg = f"Invalid YAML syntax: {e}"
                                                          raise ValueError(msg) from e
                                                      else:
                                                          return code
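
A validation sketch, assuming YAMLCode is importable from llmling_agent.common_types and that BaseCode stores the text in a code field (the field targeted by the validator above); invalid YAML surfaces as a pydantic ValidationError:

from pydantic import ValidationError

from llmling_agent.common_types import YAMLCode

ok = YAMLCode(code="name: demo\nitems:\n  - 1\n  - 2\n")
print(ok.code)

try:
    YAMLCode(code="items: [1, 2")  # unclosed flow sequence -> parse error
except ValidationError as exc:
    print(exc)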