agent

Class info

Classes

Name | Module | Description

Agent | llmling_agent.agent.agent | Agent for AI-powered interaction with LLMling resources and tools.
AgentContext | llmling_agent.agent.context | Runtime context for agent execution.
ConversationManager | llmling_agent.agent.conversation | Manages conversation state and system prompts.
Interactions | llmling_agent.agent.interactions | Manages agent communication patterns.
ProcessManager | llmling_agent.agent.process_manager | Manages background processes for an agent pool.
ProcessOutput | llmling_agent.agent.process_manager | Output from a running process.
RunningProcess | llmling_agent.agent.process_manager | Represents a running background process.
StructuredAgent | llmling_agent.agent.structured | Wrapper for Agent that enforces a specific result type.
SystemPrompts | llmling_agent.agent.sys_prompts | Manages system prompts for an agent.


                    Agent

                    Bases: MessageNode[TDeps, str]

                    Agent for AI-powered interaction with LLMling resources and tools.

                    Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]

                     This agent integrates LLMling's resource system with PydanticAI's agent capabilities. It provides:

                     - Access to resources through RuntimeConfig
                     - Tool registration for resource operations
                     - System prompt customization
                     - Signals
                     - Message history management
                     - Database logging
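
                     A minimal construction-and-run sketch. Hedged assumptions: the `llmling_agent` import path, the model identifier, and the `run` coroutine (inherited from MessageNode, not shown in this section); the constructor arguments match the signature below.

                     import asyncio

                     from llmling_agent import Agent  # import path assumed from this package's exports


                     async def main():
                         agent = Agent(
                             name="docs-helper",
                             model="openai:gpt-4o-mini",  # hypothetical model identifier
                             system_prompt="Answer questions about the loaded resources.",
                             session=False,  # disable message history (see the session parameter)
                         )
                         # Entering the async context sets up runtime, MCP servers, and conversation state
                         async with agent:
                             # `run` is an assumption: it comes from MessageNode and is not shown here
                             result = await agent.run("Which tools do you have available?")
                             print(result)


                     asyncio.run(main())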

                     Source code in src/llmling_agent/agent/agent.py (lines 116-1355)
                    class Agent[TDeps = None](MessageNode[TDeps, str]):
                        """Agent for AI-powered interaction with LLMling resources and tools.
                    
                        Generically typed with: LLMLingAgent[Type of Dependencies, Type of Result]
                    
                        This agent integrates LLMling's resource system with PydanticAI's agent capabilities.
                        It provides:
                        - Access to resources through RuntimeConfig
                        - Tool registration for resource operations
                        - System prompt customization
                        - Signals
                        - Message history management
                        - Database logging
                        """
                    
                        @dataclass(frozen=True)
                        class AgentReset:
                            """Emitted when agent is reset."""
                    
                            agent_name: AgentName
                            previous_tools: dict[str, bool]
                            new_tools: dict[str, bool]
                            timestamp: datetime = field(default_factory=get_now)
                    
                         # Explicit annotations here work around a mypy inference issue
                        conversation: ConversationManager
                        talk: Interactions
                        model_changed = Signal(object)  # Model | None
                        chunk_streamed = Signal(str, str)  # (chunk, message_id)
                        run_failed = Signal(str, Exception)
                        agent_reset = Signal(AgentReset)
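                         # Example (sketch, not part of the source): consuming the streaming signal.
                         #     agent.chunk_streamed.connect(lambda chunk, message_id: print(chunk, end=""))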
                    
                        def __init__(  # noqa: PLR0915
                             # We don't use AgentKwargs here so that the constructor keeps explicit parameters
                            self,
                            name: str = "llmling-agent",
                            provider: AgentType = "pydantic_ai",
                            *,
                            model: ModelType = None,
                            runtime: RuntimeConfig | Config | StrPath | None = None,
                            context: AgentContext[TDeps] | None = None,
                            session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                            system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                            description: str | None = None,
                            tools: Sequence[ToolType | Tool] | None = None,
                            capabilities: Capabilities | None = None,
                            mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                            resources: Sequence[Resource | PromptType | str] = (),
                            retries: int = 1,
                            output_retries: int | None = None,
                            end_strategy: EndStrategy = "early",
                            defer_model_check: bool = False,
                            input_provider: InputProvider | None = None,
                            parallel_init: bool = True,
                            debug: bool = False,
                        ):
                            """Initialize agent with runtime configuration.
                    
                            Args:
                                runtime: Runtime configuration providing access to resources/tools
                                context: Agent context with capabilities and configuration
                                 provider: Agent type to use ("pydantic_ai": PydanticAIProvider, "human": HumanProvider)
                                session: Memory configuration.
                                    - None: Default memory config
                                    - False: Disable message history (max_messages=0)
                                    - int: Max tokens for memory
                                    - str/UUID: Session identifier
                                    - SessionQuery: Query to recover conversation
                                    - MemoryConfig: Complete memory configuration
                                model: The default model to use (defaults to GPT-5)
                                system_prompt: Static system prompts to use for this agent
                                name: Name of the agent for logging
                                description: Description of the Agent ("what it can do")
                                tools: List of tools to register with the agent
                                capabilities: Capabilities for the agent
                                mcp_servers: MCP servers to connect to
                                resources: Additional resources to load
                                retries: Default number of retries for failed operations
                                output_retries: Max retries for result validation (defaults to retries)
                                end_strategy: Strategy for handling tool calls that are requested alongside
                                              a final result
                                defer_model_check: Whether to defer model evaluation until first run
                                input_provider: Provider for human input (tool confirmation / HumanProviders)
                                parallel_init: Whether to initialize resources in parallel
                                debug: Whether to enable debug mode
                            """
                            from llmling_agent.agent import AgentContext
                            from llmling_agent.agent.conversation import ConversationManager
                            from llmling_agent.agent.interactions import Interactions
                            from llmling_agent.agent.sys_prompts import SystemPrompts
                            from llmling_agent.resource_providers.capability_provider import (
                                CapabilitiesResourceProvider,
                            )
                            from llmling_agent_providers.base import AgentProvider
                    
                            self.task_manager = TaskManager()
                            self._infinite = False
                             # Save some state for async init
                            self._owns_runtime = False
                            # prepare context
                            ctx = context or AgentContext[TDeps].create_default(
                                name,
                                input_provider=input_provider,
                                capabilities=capabilities,
                            )
                            self._context = ctx
                            memory_cfg = (
                                session
                                if isinstance(session, MemoryConfig)
                                else MemoryConfig.from_value(session)
                            )
                            super().__init__(
                                name=name,
                                context=ctx,
                                description=description,
                                enable_logging=memory_cfg.enable,
                                mcp_servers=mcp_servers,
                            )
                            # Initialize runtime
                            match runtime:
                                case None:
                                    ctx.runtime = RuntimeConfig.from_config(Config())
                                case Config() | str() | PathLike():
                                    ctx.runtime = RuntimeConfig.from_config(runtime)
                                case RuntimeConfig():
                                    ctx.runtime = runtime
                                case _:
                                    msg = f"Invalid runtime type: {type(runtime)}"
                                    raise TypeError(msg)
                    
                            runtime_provider = RuntimePromptProvider(ctx.runtime)
                            ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                            # Initialize tool manager
                            all_tools = list(tools or [])
                            self.tools = ToolManager(all_tools)
                            self.tools.add_provider(self.mcp)
                            if builtin_tools := ctx.config.get_tool_provider():
                                self.tools.add_provider(builtin_tools)
                    
                            # Initialize conversation manager
                            resources = list(resources)
                            if ctx.config.knowledge:
                                resources.extend(ctx.config.knowledge.get_resources())
                            self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                            # Initialize provider
                            match provider:
                                case "pydantic_ai":
                                    validate_import("pydantic_ai", "pydantic_ai")
                                    from llmling_agent_providers.pydanticai import PydanticAIProvider
                    
                                    if model and not isinstance(model, str):
                                        from pydantic_ai import models
                    
                                        assert isinstance(model, models.Model)
                                    self._provider: AgentProvider = PydanticAIProvider(
                                        model=model,
                                        retries=retries,
                                        end_strategy=end_strategy,
                                        output_retries=output_retries,
                                        defer_model_check=defer_model_check,
                                        debug=debug,
                                        context=ctx,
                                    )
                                case "human":
                                    from llmling_agent_providers.human import HumanProvider
                    
                                    self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                                case Callable():
                                    from llmling_agent_providers.callback import CallbackProvider
                    
                                    self._provider = CallbackProvider(
                                        provider, name=name, debug=debug, context=ctx
                                    )
                                case AgentProvider():
                                    self._provider = provider
                                    self._provider.context = ctx
                                case _:
                                    msg = f"Invalid agent type: {type}"
                                    raise ValueError(msg)
                            self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                    
                            if ctx and ctx.definition:
                                from llmling_agent.observability import registry
                    
                                registry.register_providers(ctx.definition.observability)
                    
                            # init variables
                            self._debug = debug
                            self._result_type: type | None = None
                            self.parallel_init = parallel_init
                            self.name = name
                            self._background_task: asyncio.Task[Any] | None = None
                    
                            # Forward provider signals
                             self._provider.chunk_streamed.connect(self.chunk_streamed)
                             self._provider.model_changed.connect(self.model_changed)
                             self._provider.tool_used.connect(self.tool_used)
                    
                            self.talk = Interactions(self)
                    
                            # Set up system prompts
                            config_prompts = ctx.config.system_prompts if ctx else []
                            all_prompts: list[AnyPromptType] = list(config_prompts)
                            if isinstance(system_prompt, list):
                                all_prompts.extend(system_prompt)
                            else:
                                all_prompts.append(system_prompt)
                            self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
                    
                        def __repr__(self) -> str:
                            desc = f", {self.description!r}" if self.description else ""
                            tools = f", tools={len(self.tools)}" if self.tools else ""
                            return f"Agent({self.name!r}, provider={self._provider.NAME!r}{desc}{tools})"
                    
                        def __prompt__(self) -> str:
                            typ = self._provider.__class__.__name__
                            model = self.model_name or "default"
                            parts = [f"Agent: {self.name}", f"Type: {typ}", f"Model: {model}"]
                            if self.description:
                                parts.append(f"Description: {self.description}")
                            parts.extend([self.tools.__prompt__(), self.conversation.__prompt__()])
                    
                            return "\n".join(parts)
                    
                        async def __aenter__(self) -> Self:
                            """Enter async context and set up MCP servers."""
                            try:
                                # Collect all coroutines that need to be run
                                coros: list[Coroutine[Any, Any, Any]] = []
                    
                                # Runtime initialization if needed
                                runtime_ref = self.context.runtime
                                if runtime_ref and not runtime_ref._initialized:
                                    self._owns_runtime = True
                                    coros.append(runtime_ref.__aenter__())
                    
                                # Events initialization
                                coros.append(super().__aenter__())
                    
                                # Get conversation init tasks directly
                                coros.extend(self.conversation.get_initialization_tasks())
                    
                                # Execute coroutines either in parallel or sequentially
                                if self.parallel_init and coros:
                                    await asyncio.gather(*coros)
                                else:
                                    for coro in coros:
                                        await coro
                                if runtime_ref:
                                    self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                                for provider in await self.context.config.get_toolsets():
                                    self.tools.add_provider(provider)
                            except Exception as e:
                                # Clean up in reverse order
                                if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                    await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                                msg = "Failed to initialize agent"
                                raise RuntimeError(msg) from e
                            else:
                                return self
                    
                        async def __aexit__(
                            self,
                            exc_type: type[BaseException] | None,
                            exc_val: BaseException | None,
                            exc_tb: TracebackType | None,
                        ):
                            """Exit async context."""
                            await super().__aexit__(exc_type, exc_val, exc_tb)
                            try:
                                await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                            finally:
                                if self._owns_runtime and self.context.runtime:
                                    self.tools.remove_provider("runtime")
                                    await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
                                # for provider in await self.context.config.get_toolsets():
                                #     self.tools.remove_provider(provider.name)
                    
                        @overload
                        def __and__(
                            self, other: Agent[TDeps] | StructuredAgent[TDeps, Any]
                        ) -> Team[TDeps]: ...
                    
                        @overload
                        def __and__(self, other: Team[TDeps]) -> Team[TDeps]: ...
                    
                        @overload
                        def __and__(self, other: ProcessorCallback[Any]) -> Team[TDeps]: ...
                    
                        def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                            """Create agent group using | operator.
                    
                            Example:
                                group = analyzer & planner & executor  # Create group of 3
                                group = analyzer & existing_group  # Add to existing group
                            """
                            from llmling_agent.agent import StructuredAgent
                            from llmling_agent.delegation.team import Team
                    
                            match other:
                                case Team():
                                    return Team([self, *other.agents])
                                case Callable():
                                     if has_return_type(other, str):
                                         agent_2 = Agent.from_callback(other)
                                     else:
                                         agent_2 = StructuredAgent.from_callback(other)
                                    agent_2.context.pool = self.context.pool
                                    return Team([self, agent_2])
                                case MessageNode():
                                    return Team([self, other])
                                case _:
                                    msg = f"Invalid agent type: {type(other)}"
                                    raise ValueError(msg)
                    
                        @overload
                        def __or__(self, other: MessageNode[TDeps, Any]) -> TeamRun[TDeps, Any]: ...
                    
                        @overload
                        def __or__[TOtherDeps](
                            self,
                            other: MessageNode[TOtherDeps, Any],
                        ) -> TeamRun[Any, Any]: ...
                    
                        @overload
                        def __or__(self, other: ProcessorCallback[Any]) -> TeamRun[Any, Any]: ...
                    
                        def __or__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> TeamRun:
                            # Create new execution with sequential mode (for piping)
                            from llmling_agent import StructuredAgent, TeamRun
                    
                            if callable(other):
                                if has_return_type(other, str):
                                    other = Agent.from_callback(other)
                                else:
                                    other = StructuredAgent.from_callback(other)
                                other.context.pool = self.context.pool
                    
                            return TeamRun([self, other])
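
                         # Example (sketch, not part of the source): `pipeline = analyzer | summarizer`
                         # builds a sequential TeamRun; `summarizer` can be another MessageNode or a
                         # plain str-returning callable, which is wrapped via from_callback (below).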
                    
                        @classmethod
                        def from_callback(
                            cls,
                            callback: ProcessorCallback[str],
                            *,
                            name: str | None = None,
                            debug: bool = False,
                            **kwargs: Any,
                        ) -> Agent[None]:
                            """Create an agent from a processing callback.
                    
                            Args:
                                callback: Function to process messages. Can be:
                                    - sync or async
                                    - with or without context
                                    - must return str for pipeline compatibility
                                name: Optional name for the agent
                                debug: Whether to enable debug mode
                                kwargs: Additional arguments for agent
                            """
                            from llmling_agent_providers.callback import CallbackProvider
                    
                            name = name or getattr(callback, "__name__", "processor")
                            name = name or "processor"
                            provider = CallbackProvider(callback, name=name)
                            return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
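
                         # Example (sketch, not part of the source): wrapping a plain function.
                         #     def shout(text: str) -> str:
                         #         return text.upper()
                         #
                         #     agent = Agent.from_callback(shout, name="shouter")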
                    
                        @property
                        def name(self) -> str:
                            """Get agent name."""
                            return self._name or "llmling-agent"
                    
                        @name.setter
                        def name(self, value: str):
                            self._provider.name = value
                            self._name = value
                    
                        @property
                        def context(self) -> AgentContext[TDeps]:
                            """Get agent context."""
                            return self._context
                    
                        @context.setter
                        def context(self, value: AgentContext[TDeps]):
                            """Set agent context and propagate to provider."""
                            self._provider.context = value
                            self.mcp.context = value
                            self._context = value
                    
                        def set_result_type(
                            self,
                            result_type: type[TResult] | str | StructuredResponseConfig | None,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ):
                            """Set or update the result type for this agent.
                    
                            Args:
                                result_type: New result type, can be:
                                    - A Python type for validation
                                    - Name of a response definition
                                    - Response definition instance
                                    - None to reset to unstructured mode
                                tool_name: Optional override for tool name
                                tool_description: Optional override for tool description
                            """
                            logger.debug("Setting result type to: %s for %r", result_type, self.name)
                            self._result_type = to_type(result_type)
                    
                        @property
                        def provider(self) -> AgentProvider:
                            """Get the underlying provider."""
                            return self._provider
                    
                        @provider.setter
                        def provider(self, value: AgentType, model: ModelType = None):
                            """Set the underlying provider."""
                            from llmling_agent_providers.base import AgentProvider
                    
                            name = self.name
                            debug = self._debug
                            self._provider.chunk_streamed.disconnect(self.chunk_streamed)
                            self._provider.model_changed.disconnect(self.model_changed)
                            self._provider.tool_used.disconnect(self.tool_used)
                            match value:
                                case AgentProvider():
                                    self._provider = value
                                case "pydantic_ai":
                                    validate_import("pydantic_ai", "pydantic_ai")
                                    from llmling_agent_providers.pydanticai import PydanticAIProvider
                    
                                    self._provider = PydanticAIProvider(model=model, name=name, debug=debug)
                                case "human":
                                    from llmling_agent_providers.human import HumanProvider
                    
                                    self._provider = HumanProvider(name=name, debug=debug)
                                case Callable():
                                    from llmling_agent_providers.callback import CallbackProvider
                    
                                    self._provider = CallbackProvider(value, name=name, debug=debug)
                                case _:
                                    msg = f"Invalid agent type: {type}"
                                    raise ValueError(msg)
                            self._provider.chunk_streamed.connect(self.chunk_streamed)
                            self._provider.model_changed.connect(self.model_changed)
                            self._provider.tool_used.connect(self.tool_used)
                            self._provider.context = self._context  # pyright: ignore[reportAttributeAccessIssue]
                    
                        @overload
                        def to_structured(
                            self,
                            result_type: None,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> Self: ...
                    
                        @overload
                        def to_structured[TResult](
                            self,
                            result_type: type[TResult] | str | StructuredResponseConfig,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> StructuredAgent[TDeps, TResult]: ...
                    
                        def to_structured[TResult](
                            self,
                            result_type: type[TResult] | str | StructuredResponseConfig | None,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> StructuredAgent[TDeps, TResult] | Self:
                            """Convert this agent to a structured agent.
                    
                            If result_type is None, returns self unchanged (no wrapping).
                            Otherwise creates a StructuredAgent wrapper.
                    
                            Args:
                                result_type: Type for structured responses. Can be:
                                    - A Python type (Pydantic model)
                                    - Name of response definition from context
                                    - Complete response definition
                                    - None to skip wrapping
                                tool_name: Optional override for result tool name
                                tool_description: Optional override for result tool description
                    
                            Returns:
                                Either StructuredAgent wrapper or self unchanged
                            """
                            if result_type is None:
                                return self
                    
                            from llmling_agent.agent import StructuredAgent
                    
                            return StructuredAgent(
                                self,
                                result_type=result_type,
                                tool_name=tool_name,
                                tool_description=tool_description,
                            )
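                        # Usage sketch (assumes a hypothetical Pydantic model `Analysis`):
                        #
                        #     class Analysis(BaseModel):
                        #         summary: str
                        #         confidence: float
                        #
                        #     structured = agent.to_structured(Analysis)
                        #     message = await structured.run("Analyze this text")
                        #     assert isinstance(message.content, Analysis)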
                    
                        def is_busy(self) -> bool:
                            """Check if agent is currently processing tasks."""
                            return bool(self.task_manager._pending_tasks or self._background_task)
                    
                        @property
                        def model_name(self) -> str | None:
                            """Get the model name in a consistent format."""
                            return self._provider.model_name
                    
                        def to_tool(
                            self,
                            *,
                            name: str | None = None,
                            reset_history_on_run: bool = True,
                            pass_message_history: bool = False,
                            share_context: bool = False,
                            parent: AnyAgent[Any, Any] | None = None,
                        ) -> Tool:
                            """Create a tool from this agent.
                    
                            Args:
                                name: Optional tool name override
                                reset_history_on_run: Clear agent's history before each run
                                pass_message_history: Pass parent's message history to agent
                                share_context: Whether to pass parent's context/deps
                                parent: Optional parent agent for history/context sharing
                            """
                            tool_name = name or f"ask_{self.name}"
                    
                            async def wrapped_tool(prompt: str) -> str:
                                if pass_message_history and not parent:
                                    msg = "Parent agent required for message history sharing"
                                    raise ToolError(msg)
                    
                                if reset_history_on_run:
                                    self.conversation.clear()
                    
                                history = None
                                if pass_message_history and parent:
                                    history = parent.conversation.get_history()
                                    old = self.conversation.get_history()
                                    self.conversation.set_history(history)
                                result = await self.run(prompt, result_type=self._result_type)
                                if history is not None:
                                    self.conversation.set_history(old)
                                return result.data
                    
                            normalized_name = self.name.replace("_", " ").title()
                            docstring = f"Get expert answer from specialized agent: {normalized_name}"
                            if self.description:
                                docstring = f"{docstring}\n\n{self.description}"
                    
                            wrapped_tool.__doc__ = docstring
                            wrapped_tool.__name__ = tool_name
                    
                            return Tool.from_callable(
                                wrapped_tool,
                                name_override=tool_name,
                                description_override=docstring,
                            )
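                        # Usage sketch: expose one agent to another as a tool
                        # (agent names are illustrative):
                        #
                        #     tool = expert.to_tool(parent=coordinator, pass_message_history=True)
                        #     coordinator.tools.register_tool(tool.callable)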
                    
                        @track_action("Calling Agent.run: {prompts}:")
                        async def _run(
                            self,
                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str] | ChatMessage[Any],
                            result_type: type[TResult] | None = None,
                            model: ModelType = None,
                            store_history: bool = True,
                            tool_choice: str | list[str] | None = None,
                            usage_limits: UsageLimits | None = None,
                            message_id: str | None = None,
                            conversation_id: str | None = None,
                            messages: list[ChatMessage[Any]] | None = None,
                            wait_for_connections: bool | None = None,
                        ) -> ChatMessage[TResult]:
                            """Run agent with prompt and get response.
                    
                            Args:
                                prompts: User query or instruction
                                result_type: Optional type for structured responses
                                model: Optional model override
                                store_history: Whether the message exchange should be added to the
                                                context window
                                tool_choice: Filter tool choice by name
                                usage_limits: Optional usage limits for the model
                                message_id: Optional message id for the returned message.
                                            Automatically generated if not provided.
                                conversation_id: Optional conversation id for the returned message.
                                messages: Optional list of messages to replace the conversation history
                                wait_for_connections: Whether to wait for connected agents to complete
                    
                            Returns:
                                Result containing response and run information
                    
                            Raises:
                                UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                            """
                            """Run agent with prompt and get response."""
                            message_id = message_id or str(uuid4())
                            tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                            self.set_result_type(result_type)
                            start_time = time.perf_counter()
                            sys_prompt = await self.sys_prompts.format_system_prompt(self)
                    
                            message_history = (
                                messages if messages is not None else self.conversation.get_history()
                            )
                            try:
                                result = await self._provider.generate_response(
                                    *await convert_prompts(prompts),
                                    message_id=message_id,
                                    message_history=message_history,
                                    tools=tools,
                                    result_type=result_type,
                                    usage_limits=usage_limits,
                                    model=model,
                                    system_prompt=sys_prompt,
                                )
                            except Exception as e:
                                logger.exception("Agent run failed")
                                self.run_failed.emit("Agent run failed", e)
                                raise
                            else:
                                response_msg = ChatMessage[TResult](
                                    content=result.content,
                                    role="assistant",
                                    name=self.name,
                                    model=result.model_name,
                                    message_id=message_id,
                                    conversation_id=conversation_id,
                                    tool_calls=result.tool_calls,
                                    cost_info=result.cost_and_usage,
                                    response_time=time.perf_counter() - start_time,
                                    provider_extra=result.provider_extra or {},
                                )
                                if self._debug:
                                    import devtools
                    
                                    devtools.debug(response_msg)
                                return response_msg
                    
                        @asynccontextmanager
                        async def run_stream(
                            self,
                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                            result_type: type[TResult] | None = None,
                            model: ModelType = None,
                            tool_choice: str | list[str] | None = None,
                            store_history: bool = True,
                            usage_limits: UsageLimits | None = None,
                            message_id: str | None = None,
                            conversation_id: str | None = None,
                            messages: list[ChatMessage[Any]] | None = None,
                            wait_for_connections: bool | None = None,
                        ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                            """Run agent with prompt and get a streaming response.
                    
                            Args:
                                prompt: User query or instruction
                                result_type: Optional type for structured responses
                                model: Optional model override
                                tool_choice: Filter tool choice by name
                                store_history: Whether the message exchange should be added to the
                                               context window
                                usage_limits: Optional usage limits for the model
                                message_id: Optional message id for the returned message.
                                            Automatically generated if not provided.
                                conversation_id: Optional conversation id for the returned message.
                                messages: Optional list of messages to replace the conversation history
                                wait_for_connections: Whether to wait for connected agents to complete
                    
                            Returns:
                                A streaming result to iterate over.
                    
                            Raises:
                                UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                            """
                            message_id = message_id or str(uuid4())
                            user_msg, prompts = await self.pre_run(*prompt)
                            self.set_result_type(result_type)
                            start_time = time.perf_counter()
                            sys_prompt = await self.sys_prompts.format_system_prompt(self)
                            tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                            message_history = (
                                messages if messages is not None else self.conversation.get_history()
                            )
                            try:
                                async with self._provider.stream_response(
                                    *prompts,
                                    message_id=message_id,
                                    message_history=message_history,
                                    result_type=result_type,
                                    model=model,
                                    store_history=store_history,
                                    tools=tools,
                                    usage_limits=usage_limits,
                                    system_prompt=sys_prompt,
                                ) as stream:
                                    yield stream
                                    usage = stream.usage()
                                    cost_info = None
                                    model_name = stream.model_name  # type: ignore
                                    if model_name:
                                        cost_info = await TokenCost.from_usage(usage, model_name)
                                    response_msg = ChatMessage[TResult](
                                        content=cast(TResult, stream.formatted_content),  # type: ignore
                                        role="assistant",
                                        name=self.name,
                                        model=model_name,
                                        message_id=message_id,
                                        conversation_id=user_msg.conversation_id,
                                        cost_info=cost_info,
                                        response_time=time.perf_counter() - start_time,
                                        # provider_extra=stream.provider_extra or {},
                                    )
                                    self.message_sent.emit(response_msg)
                                    if store_history:
                                        self.conversation.add_chat_messages([user_msg, response_msg])
                                    await self.connections.route_message(
                                        response_msg,
                                        wait=wait_for_connections,
                                    )
                    
                            except Exception as e:
                                logger.exception("Agent stream failed")
                                self.run_failed.emit("Agent stream failed", e)
                                raise
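                        # Usage sketch: consume the stream as it arrives. The `.stream()`
                        # iterator below is an assumption about StreamingResponseProtocol,
                        # not a confirmed API:
                        #
                        #     async with agent.run_stream("Tell me a story") as stream:
                        #         async for chunk in stream.stream():
                        #             print(chunk, end="")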
                    
                        async def run_iter(
                            self,
                            *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                            result_type: type[TResult] | None = None,
                            model: ModelType = None,
                            store_history: bool = True,
                            wait_for_connections: bool | None = None,
                        ) -> AsyncIterator[ChatMessage[TResult]]:
                            """Run agent sequentially on multiple prompt groups.
                    
                            Args:
                                prompt_groups: Groups of prompts to process sequentially
                                result_type: Optional type for structured responses
                                model: Optional model override
                                store_history: Whether to store in conversation history
                                wait_for_connections: Whether to wait for connected agents
                    
                            Yields:
                                Response messages in sequence
                    
                            Example:
                                questions = [
                                    ["What is your name?"],
                                    ["How old are you?", image1],
                                    ["Describe this image", image2],
                                ]
                                async for response in agent.run_iter(*questions):
                                    print(response.content)
                            """
                            for prompts in prompt_groups:
                                response = await self.run(
                                    *prompts,
                                    result_type=result_type,
                                    model=model,
                                    store_history=store_history,
                                    wait_for_connections=wait_for_connections,
                                )
                                yield response  # pyright: ignore
                    
                        def run_sync(
                            self,
                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                            result_type: type[TResult] | None = None,
                            deps: TDeps | None = None,
                            model: ModelType = None,
                            store_history: bool = True,
                        ) -> ChatMessage[TResult]:
                            """Run agent synchronously (convenience wrapper).
                    
                            Args:
                                prompt: User query or instruction
                                result_type: Optional type for structured responses
                                deps: Optional dependencies for the agent
                                model: Optional model override
                                store_history: Whether the message exchange should be added to the
                                               context window

                            Returns:
                                Result containing response and run information
                            """
                            coro = self.run(
                                *prompt,
                                model=model,
                                store_history=store_history,
                                result_type=result_type,
                            )
                            return self.task_manager.run_task_sync(coro)  # type: ignore
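                        # Usage sketch: blocking call from synchronous code (illustrative):
                        #
                        #     message = agent.run_sync("What is the capital of France?")
                        #     print(message.content)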
                    
                        async def run_job(
                            self,
                            job: Job[TDeps, str | None],
                            *,
                            store_history: bool = True,
                            include_agent_tools: bool = True,
                        ) -> ChatMessage[str]:
                            """Execute a pre-defined task.
                    
                            Args:
                                job: Job configuration to execute
                                store_history: Whether the message exchange should be added to the
                                               context window
                                include_agent_tools: Whether to include agent tools

                            Returns:
                                Job execution result
                    
                            Raises:
                                JobError: If task execution fails
                                ValueError: If task configuration is invalid
                            """
                            from llmling_agent.tasks import JobError
                    
                            if job.required_dependency is not None:  # noqa: SIM102
                                if not isinstance(self.context.data, job.required_dependency):
                                    msg = (
                                        f"Agent dependencies ({type(self.context.data)}) "
                                        f"don't match job requirement ({job.required_dependency})"
                                    )
                                    raise JobError(msg)
                    
                            # Load task knowledge
                            if job.knowledge:
                                # Add knowledge sources to context
                                resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                    job.knowledge.resources
                                )
                                for source in resources:
                                    await self.conversation.load_context_source(source)
                                for prompt in job.knowledge.prompts:
                                    await self.conversation.load_context_source(prompt)
                            try:
                                # Register task tools temporarily
                                tools = job.get_tools()
                                with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                    # Execute job with job-specific tools
                                    return await self.run(await job.get_prompt(), store_history=store_history)
                    
                            except Exception as e:
                                msg = f"Task execution failed: {e}"
                                logger.exception(msg)
                                raise JobError(msg) from e
                    
                        @asynccontextmanager
                        @track_action("Calling Agent.iterate_run: {prompts}")
                        async def iterate_run(
                            self,
                            *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                            message_id: str | None = None,
                            message_history: list[ChatMessage[Any]] | None = None,
                            tools: list[Tool] | None = None,
                            result_type: type[TResult] | None = None,
                            usage_limits: UsageLimits | None = None,
                            model: ModelType = None,
                            system_prompt: str | None = None,
                            tool_choice: str | list[str] | None = None,
                            conversation_id: str | None = None,
                            store_history: bool = True,
                        ) -> AsyncIterator[AgentRun[TDeps, TResult]]:
                            """Run the agent step-by-step, yielding an object to observe the execution graph.
                    
                            Args:
                                *prompts: User query/instructions (text, images, paths, BasePrompts).
                                message_id: Optional unique ID for this run attempt. Generated if None.
                                message_history: Optional list of messages to replace current history.
                                tools: Optional sequence of tools to use instead of agent's default tools.
                                result_type: Optional type for structured responses.
                                usage_limits: Optional usage limits (provider support may vary).
                                model: Optional model override for this run.
                                system_prompt: Optional system prompt override for this run.
                                tool_choice: Filter agent's tools by name (ignored if `tools` is provided).
                                conversation_id: Optional ID to associate with the conversation context.
                                store_history: Whether to store the conversation in agent's history.
                    
                            Yields:
                                An AgentRun object for iterating over execution nodes.
                    
                            Example:
                                async with agent.iterate_run("Capital of France?") as agent_run:
                                    async for node in agent_run:
                                        print(f"Processing: {type(node).__name__}")
                                    print(f"Final result: {agent_run.result.output}")

                            Note:
                                The conversation is stored in the agent's history only when
                                store_history is True and the run produces a result.
                            """
                            run_message_id = message_id or str(uuid4())
                            start_time = time.perf_counter()
                            logger.info("Starting agent iteration run_id=%s", run_message_id)
                            converted_prompts = await convert_prompts(prompts)
                            if not converted_prompts:
                                msg = "No prompts provided for iteration."
                                logger.error(msg)
                                raise ValueError(msg)
                    
                            # Prepare user message for conversation history
                            user_msg = None
                            if store_history:
                                user_msg, _ = await self.pre_run(*prompts)
                    
                            if tools is None:
                                effective_tools = await self.tools.get_tools(
                                    state="enabled", names=tool_choice
                                )
                            else:
                                effective_tools = tools  # Use the direct override
                    
                            self.set_result_type(result_type)
                            effective_system_prompt = (
                                system_prompt
                                if system_prompt is not None
                                else await self.sys_prompts.format_system_prompt(self)
                            )
                            effective_message_history = (
                                message_history
                                if message_history is not None
                                else self.conversation.get_history()
                            )
                            try:
                                async with self._provider.iterate_run(
                                    *converted_prompts,
                                    # Pass consistent arguments to the provider
                                    message_id=run_message_id,
                                    message_history=effective_message_history,
                                    tools=effective_tools,
                                    result_type=result_type,
                                    usage_limits=usage_limits,
                                    model=model,
                                    system_prompt=effective_system_prompt,
                                ) as agent_run:
                                    yield agent_run
                                    # Store conversation history if requested
                                    if store_history and user_msg and agent_run.result:
                                        response_msg = ChatMessage[TResult](
                                            content=agent_run.result.output,
                                            role="assistant",
                                            name=self.name,
                                            model=agent_run.result.response.model_name,
                                            message_id=run_message_id,
                                            conversation_id=conversation_id or user_msg.conversation_id,
                                            response_time=time.perf_counter() - start_time,
                                        )
                                        self.conversation.add_chat_messages([user_msg, response_msg])
                                        msg = "Stored conversation history for run_id=%s"
                                        logger.debug(msg, run_message_id)
                    
                                logger.info("Agent iteration run_id=%s completed.", run_message_id)
                    
                            except Exception as e:
                                logger.exception("Agent iteration run_id=%s failed.", run_message_id)
                                self.run_failed.emit(f"Agent iteration failed: {e}", e)
                                raise
                    
                        async def run_in_background(
                            self,
                            *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                            max_count: int | None = None,
                            interval: float = 1.0,
                            block: bool = False,
                            **kwargs: Any,
                        ) -> ChatMessage[TResult] | None:
                            """Run agent continuously in background with prompt or dynamic prompt function.
                    
                            Args:
                                prompt: Static prompt or function that generates prompts
                                max_count: Maximum number of runs (None = infinite)
                                interval: Seconds between runs
                                block: Whether to block until completion
                                **kwargs: Arguments passed to run()
                            """
                            self._infinite = max_count is None
                    
                            async def _continuous():
                                count = 0
                                msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                                logger.debug(msg, self.name, max_count, interval, self.name)
                                latest = None
                                while max_count is None or count < max_count:
                                    try:
                                        current_prompts = [
                                            call_with_context(p, self.context, **kwargs) if callable(p) else p
                                            for p in prompt
                                        ]
                                        msg = "%s: Generated prompt #%d: %s"
                                        logger.debug(msg, self.name, count, current_prompts)
                    
                                        latest = await self.run(*current_prompts, **kwargs)
                                        msg = "%s: Continuous run result #%d"
                                        logger.debug(msg, self.name, count)
                    
                                        count += 1
                                        await asyncio.sleep(interval)
                                    except asyncio.CancelledError:
                                        logger.debug("%s: Continuous run cancelled", self.name)
                                        break
                                    except Exception:
                                        logger.exception("%s: Background run failed", self.name)
                                        await asyncio.sleep(interval)
                                msg = "%s: Continuous run completed after %d iterations"
                                logger.debug(msg, self.name, count)
                                return latest
                    
                            # Cancel any existing background task
                            await self.stop()
                            task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                            if block:
                                try:
                                    return await task  # type: ignore
                                finally:
                                    if not task.done():
                                        task.cancel()
                            else:
                                logger.debug("%s: Started background task %s", self.name, task.get_name())
                                self._background_task = task
                                return None
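                        # Usage sketch: run five polls a minute apart, then collect the
                        # final message (prompt and values are illustrative):
                        #
                        #     await agent.run_in_background("Check the queue", max_count=5, interval=60.0)
                        #     final = await agent.wait()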
                    
                        async def stop(self):
                            """Stop continuous execution if running."""
                            if self._background_task and not self._background_task.done():
                                self._background_task.cancel()
                                await self._background_task
                                self._background_task = None
                    
                        async def wait(self) -> ChatMessage[TResult]:
                            """Wait for background execution to complete."""
                            if not self._background_task:
                                msg = "No background task running"
                                raise RuntimeError(msg)
                            if self._infinite:
                                msg = "Cannot wait on infinite execution"
                                raise RuntimeError(msg)
                            try:
                                return await self._background_task
                            finally:
                                self._background_task = None
                    
                        def clear_history(self):
                            """Clear both internal and pydantic-ai history."""
                            self._logger.clear_state()
                            self.conversation.clear()
                            logger.debug("Cleared history and reset tool state")
                    
                        async def share(
                            self,
                            target: AnyAgent[TDeps, Any],
                            *,
                            tools: list[str] | None = None,
                            resources: list[str] | None = None,
                            history: bool | int | None = None,  # bool or number of messages
                            token_limit: int | None = None,
                        ):
                            """Share capabilities and knowledge with another agent.
                    
                            Args:
                                target: Agent to share with
                                tools: List of tool names to share
                                resources: List of resource names to share
                                history: Share conversation history:
                                        - True: Share full history
                                        - int: Number of most recent messages to share
                                        - None: Don't share history
                                token_limit: Optional max tokens for history
                    
                            Raises:
                                ValueError: If requested items don't exist
                                RuntimeError: If runtime not available for resources
                            """
                            # Share tools if requested
                            for name in tools or []:
                                if tool := self.tools.get(name):
                                    meta = {"shared_from": self.name}
                                    target.tools.register_tool(tool.callable, metadata=meta)
                                else:
                                    msg = f"Tool not found: {name}"
                                    raise ValueError(msg)
                    
                            # Share resources if requested
                            if resources:
                                if not self.runtime:
                                    msg = "No runtime available for sharing resources"
                                    raise RuntimeError(msg)
                                for name in resources:
                                    if resource := self.runtime.get_resource(name):
                                        await target.conversation.load_context_source(resource)  # type: ignore
                                    else:
                                        msg = f"Resource not found: {name}"
                                        raise ValueError(msg)
                    
                            # Share history if requested
                            if history:
                                history_text = await self.conversation.format_history(
                                    max_tokens=token_limit,
                                    num_messages=history if isinstance(history, int) else None,
                                )
                                target.conversation.add_context_message(
                                    history_text, source=self.name, metadata={"type": "shared_history"}
                                )
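                        # Usage sketch (tool name and limits are hypothetical; share() raises
                        # ValueError if a requested tool or resource does not exist):
                        #
                        #     await analyst.share(
                        #         writer,
                        #         tools=["search"],
                        #         history=10,          # share the 10 most recent messages
                        #         token_limit=2000,
                        #     )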
                    
                        def register_worker(
                            self,
                            worker: MessageNode[Any, Any],
                            *,
                            name: str | None = None,
                            reset_history_on_run: bool = True,
                            pass_message_history: bool = False,
                            share_context: bool = False,
                        ) -> Tool:
                            """Register another agent as a worker tool."""
                            return self.tools.register_worker(
                                worker,
                                name=name,
                                reset_history_on_run=reset_history_on_run,
                                pass_message_history=pass_message_history,
                                share_context=share_context,
                                parent=self if (pass_message_history or share_context) else None,
                            )
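                        # Usage sketch: let one agent delegate to another via a worker tool
                        # (agent names are illustrative):
                        #
                        #     coordinator.register_worker(researcher, reset_history_on_run=True)
                        #     await coordinator.run("Use your research worker to gather sources.")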
                    
                        def set_model(self, model: ModelType):
                            """Set the model for this agent.
                    
                            Args:
                                model: New model to use (name or instance)
                    
                            Emits:
                                model_changed signal with the new model
                            """
                            self._provider.set_model(model)
                    
                        async def reset(self):
                            """Reset agent state (conversation history and tool states)."""
                            old_tools = await self.tools.list_tools()
                            self.conversation.clear()
                            self.tools.reset_states()
                            new_tools = await self.tools.list_tools()
                    
                            event = self.AgentReset(
                                agent_name=self.name,
                                previous_tools=old_tools,
                                new_tools=new_tools,
                            )
                            self.agent_reset.emit(event)
                    
                        @property
                        def runtime(self) -> RuntimeConfig:
                            """Get runtime configuration from context."""
                            assert self.context.runtime
                            return self.context.runtime
                    
                        @runtime.setter
                        def runtime(self, value: RuntimeConfig):
                            """Set runtime configuration and update context."""
                            self.context.runtime = value
                    
                        @property
                        def stats(self) -> MessageStats:
                            """Get message statistics for this agent."""
                            return MessageStats(messages=self._logger.message_history)
                    
                        @asynccontextmanager
                        async def temporary_state(
                            self,
                            *,
                            system_prompts: list[AnyPromptType] | None = None,
                            replace_prompts: bool = False,
                            tools: list[ToolType] | None = None,
                            replace_tools: bool = False,
                            history: list[AnyPromptType] | SessionQuery | None = None,
                            replace_history: bool = False,
                            pause_routing: bool = False,
                            model: ModelType | None = None,
                            provider: AgentProvider | None = None,
                        ) -> AsyncIterator[Self]:
                            """Temporarily modify agent state.
                    
                            Args:
                                system_prompts: Temporary system prompts to use
                                replace_prompts: Whether to replace existing prompts
                                tools: Temporary tools to make available
                                replace_tools: Whether to replace existing tools
                                history: Conversation history (prompts or query)
                                replace_history: Whether to replace existing history
                                pause_routing: Whether to pause message routing
                                model: Temporary model override
                                provider: Temporary provider override
                            """
                            old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                            old_provider = self._provider
                    
                            async with AsyncExitStack() as stack:
                                # System prompts (async)
                                if system_prompts is not None:
                                    await stack.enter_async_context(
                                        self.sys_prompts.temporary_prompt(
                                            system_prompts, exclusive=replace_prompts
                                        )
                                    )
                    
                                # Tools (sync)
                                if tools is not None:
                                    stack.enter_context(
                                        self.tools.temporary_tools(tools, exclusive=replace_tools)
                                    )
                    
                                # History (async)
                                if history is not None:
                                    await stack.enter_async_context(
                                        self.conversation.temporary_state(
                                            history, replace_history=replace_history
                                        )
                                    )
                    
                                # Routing (async)
                                if pause_routing:
                                    await stack.enter_async_context(self.connections.paused_routing())
                    
                                # Model/Provider
                                if provider is not None:
                                    self._provider = provider
                                elif model is not None:
                                    self._provider.set_model(model)
                    
                                try:
                                    yield self
                                finally:
                                    # Restore model/provider
                                    if provider is not None:
                                        self._provider = old_provider
                                    elif model is not None and old_model:
                                        self._provider.set_model(old_model)
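                        # Usage sketch: temporarily swap the model and pause routing;
                        # both are restored when the context exits (model name illustrative):
                        #
                        #     async with agent.temporary_state(model="openai:gpt-4o-mini", pause_routing=True):
                        #         await agent.run("Draft a quick outline.")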
                    

                    context property writable

                    context: AgentContext[TDeps]
                    

                    Get agent context.

                    model_name property

                    model_name: str | None
                    

                    Get the model name in a consistent format.

                    name property writable

                    name: str
                    

                    Get agent name.

                    provider property writable

                    provider: AgentProvider
                    

                    Get the underlying provider.

                    runtime property writable

                    runtime: RuntimeConfig
                    

                    Get runtime configuration from context.

                    AgentReset dataclass

                    Emitted when agent is reset.

                    Source code in src/llmling_agent/agent/agent.py
                    @dataclass(frozen=True)
                    class AgentReset:
                        """Emitted when agent is reset."""
                    
                        agent_name: AgentName
                        previous_tools: dict[str, bool]
                        new_tools: dict[str, bool]
                        timestamp: datetime = field(default_factory=get_now)
                    

                    __aenter__ async

                    __aenter__() -> Self
                    

                    Enter async context and set up MCP servers.

                    Source code in src/llmling_agent/agent/agent.py
                    async def __aenter__(self) -> Self:
                        """Enter async context and set up MCP servers."""
                        try:
                            # Collect all coroutines that need to be run
                            coros: list[Coroutine[Any, Any, Any]] = []
                    
                            # Runtime initialization if needed
                            runtime_ref = self.context.runtime
                            if runtime_ref and not runtime_ref._initialized:
                                self._owns_runtime = True
                                coros.append(runtime_ref.__aenter__())
                    
                            # Events initialization
                            coros.append(super().__aenter__())
                    
                            # Get conversation init tasks directly
                            coros.extend(self.conversation.get_initialization_tasks())
                    
                            # Execute coroutines either in parallel or sequentially
                            if self.parallel_init and coros:
                                await asyncio.gather(*coros)
                            else:
                                for coro in coros:
                                    await coro
                            if runtime_ref:
                                self.tools.add_provider(RuntimeResourceProvider(runtime_ref))
                            for provider in await self.context.config.get_toolsets():
                                self.tools.add_provider(provider)
                        except Exception as e:
                            # Clean up in reverse order
                            if self._owns_runtime and runtime_ref and self.context.runtime == runtime_ref:
                                await runtime_ref.__aexit__(type(e), e, e.__traceback__)
                            msg = "Failed to initialize agent"
                            raise RuntimeError(msg) from e
                        else:
                            return self
                    

                    __aexit__ async

                    __aexit__(
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    )
                    

                    Exit async context.

                    Source code in src/llmling_agent/agent/agent.py
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ):
                        """Exit async context."""
                        await super().__aexit__(exc_type, exc_val, exc_tb)
                        try:
                            await self.mcp.__aexit__(exc_type, exc_val, exc_tb)
                        finally:
                            if self._owns_runtime and self.context.runtime:
                                self.tools.remove_provider("runtime")
                                await self.context.runtime.__aexit__(exc_type, exc_val, exc_tb)
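
                    A minimal lifecycle sketch: entering the agent as an async context
                    manager runs __aenter__ (runtime, events and conversation setup),
                    and __aexit__ handles teardown. The constructor arguments below are
                    illustrative:

                        async def main():
                            async with Agent(name="assistant", model="openai:gpt-4o-mini") as agent:
                                message = await agent.run("Hello!")
                                print(message.content)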
                    

                    __and__

                    __and__(other: Agent[TDeps] | StructuredAgent[TDeps, Any]) -> Team[TDeps]
                    
                    __and__(other: Team[TDeps]) -> Team[TDeps]
                    
                    __and__(other: ProcessorCallback[Any]) -> Team[TDeps]
                    
                    __and__(other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]
                    

                    Create agent group using & operator.

                    Example

                    group = analyzer & planner & executor  # Create group of 3
                    group = analyzer & existing_group      # Add to existing group

                    Source code in src/llmling_agent/agent/agent.py
                    def __and__(self, other: MessageNode[Any, Any] | ProcessorCallback[Any]) -> Team[Any]:
                        """Create agent group using | operator.
                    
                        Example:
                            group = analyzer & planner & executor  # Create group of 3
                            group = analyzer & existing_group  # Add to existing group
                        """
                        from llmling_agent.agent import StructuredAgent
                        from llmling_agent.delegation.team import Team
                    
                        match other:
                            case Team():
                                return Team([self, *other.agents])
                            case Callable():
                                if has_return_type(other, str):
                                    agent_2 = Agent.from_callback(other)
                                else:
                                    agent_2 = StructuredAgent.from_callback(other)
                                agent_2.context.pool = self.context.pool
                                return Team([self, agent_2])
                            case MessageNode():
                                return Team([self, other])
                            case _:
                                msg = f"Invalid agent type: {type(other)}"
                                raise ValueError(msg)
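
                    As a hedged illustration of the Callable branch above (agent names are
                    placeholders): a plain function with a str return type is wrapped through
                    Agent.from_callback and joins the resulting Team like any other agent.

                        def shout(text: str) -> str:
                            # str return type, so it becomes a plain (unstructured) Agent
                            return text.upper()

                        team = analyzer & shout  # Team of [analyzer, wrapped callback agent]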
                    

                    __init__

                    __init__(
                        name: str = "llmling-agent",
                        provider: AgentType = "pydantic_ai",
                        *,
                        model: ModelType = None,
                        runtime: RuntimeConfig | Config | StrPath | None = None,
                        context: AgentContext[TDeps] | None = None,
                        session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                        system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                        description: str | None = None,
                        tools: Sequence[ToolType | Tool] | None = None,
                        capabilities: Capabilities | None = None,
                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                        resources: Sequence[Resource | PromptType | str] = (),
                        retries: int = 1,
                        output_retries: int | None = None,
                        end_strategy: EndStrategy = "early",
                        defer_model_check: bool = False,
                        input_provider: InputProvider | None = None,
                        parallel_init: bool = True,
                        debug: bool = False,
                    )
                    

                    Initialize agent with runtime configuration.

                    Parameters:

                    Name Type Description Default
                    runtime RuntimeConfig | Config | StrPath | None

                    Runtime configuration providing access to resources/tools

                    None
                    context AgentContext[TDeps] | None

                    Agent context with capabilities and configuration

                    None
                    provider AgentType

                    Agent type to use (ai: PydanticAIProvider, human: HumanProvider)

                    'pydantic_ai'
                    session SessionIdType | SessionQuery | MemoryConfig | bool | int

                    Memory configuration.
                    - None: Default memory config
                    - False: Disable message history (max_messages=0)
                    - int: Max tokens for memory
                    - str/UUID: Session identifier
                    - SessionQuery: Query to recover conversation
                    - MemoryConfig: Complete memory configuration

                    None
                    model ModelType

                    The default model to use (defaults to GPT-5)

                    None
                    system_prompt AnyPromptType | Sequence[AnyPromptType]

                    Static system prompts to use for this agent

                    ()
                    name str

                    Name of the agent for logging

                    'llmling-agent'
                    description str | None

                    Description of the Agent ("what it can do")

                    None
                    tools Sequence[ToolType | Tool] | None

                    List of tools to register with the agent

                    None
                    capabilities Capabilities | None

                    Capabilities for the agent

                    None
                    mcp_servers Sequence[str | MCPServerConfig] | None

                    MCP servers to connect to

                    None
                    resources Sequence[Resource | PromptType | str]

                    Additional resources to load

                    ()
                    retries int

                    Default number of retries for failed operations

                    1
                    output_retries int | None

                    Max retries for result validation (defaults to retries)

                    None
                    end_strategy EndStrategy

                    Strategy for handling tool calls that are requested alongside a final result

                    'early'
                    defer_model_check bool

                    Whether to defer model evaluation until first run

                    False
                    input_provider InputProvider | None

                    Provider for human input (tool confirmation / HumanProviders)

                    None
                    parallel_init bool

                    Whether to initialize resources in parallel

                    True
                    debug bool

                    Whether to enable debug mode

                    False
                    Source code in src/llmling_agent/agent/agent.py
                    def __init__(  # noqa: PLR0915
                        # we don't use AgentKwargs here so that we can work with explicit ones in the ctor
                        self,
                        name: str = "llmling-agent",
                        provider: AgentType = "pydantic_ai",
                        *,
                        model: ModelType = None,
                        runtime: RuntimeConfig | Config | StrPath | None = None,
                        context: AgentContext[TDeps] | None = None,
                        session: SessionIdType | SessionQuery | MemoryConfig | bool | int = None,
                        system_prompt: AnyPromptType | Sequence[AnyPromptType] = (),
                        description: str | None = None,
                        tools: Sequence[ToolType | Tool] | None = None,
                        capabilities: Capabilities | None = None,
                        mcp_servers: Sequence[str | MCPServerConfig] | None = None,
                        resources: Sequence[Resource | PromptType | str] = (),
                        retries: int = 1,
                        output_retries: int | None = None,
                        end_strategy: EndStrategy = "early",
                        defer_model_check: bool = False,
                        input_provider: InputProvider | None = None,
                        parallel_init: bool = True,
                        debug: bool = False,
                    ):
                        """Initialize agent with runtime configuration.
                    
                        Args:
                            runtime: Runtime configuration providing access to resources/tools
                            context: Agent context with capabilities and configuration
                            provider: Agent type to use (ai: PydanticAIProvider, human: HumanProvider)
                            session: Memory configuration.
                                - None: Default memory config
                                - False: Disable message history (max_messages=0)
                                - int: Max tokens for memory
                                - str/UUID: Session identifier
                                - SessionQuery: Query to recover conversation
                                - MemoryConfig: Complete memory configuration
                            model: The default model to use (defaults to GPT-5)
                            system_prompt: Static system prompts to use for this agent
                            name: Name of the agent for logging
                            description: Description of the Agent ("what it can do")
                            tools: List of tools to register with the agent
                            capabilities: Capabilities for the agent
                            mcp_servers: MCP servers to connect to
                            resources: Additional resources to load
                            retries: Default number of retries for failed operations
                            output_retries: Max retries for result validation (defaults to retries)
                            end_strategy: Strategy for handling tool calls that are requested alongside
                                          a final result
                            defer_model_check: Whether to defer model evaluation until first run
                            input_provider: Provider for human input (tool confirmation / HumanProviders)
                            parallel_init: Whether to initialize resources in parallel
                            debug: Whether to enable debug mode
                        """
                        from llmling_agent.agent import AgentContext
                        from llmling_agent.agent.conversation import ConversationManager
                        from llmling_agent.agent.interactions import Interactions
                        from llmling_agent.agent.sys_prompts import SystemPrompts
                        from llmling_agent.resource_providers.capability_provider import (
                            CapabilitiesResourceProvider,
                        )
                        from llmling_agent_providers.base import AgentProvider
                    
                        self.task_manager = TaskManager()
                        self._infinite = False
                        # save some stuff for async init
                        self._owns_runtime = False
                        # prepare context
                        ctx = context or AgentContext[TDeps].create_default(
                            name,
                            input_provider=input_provider,
                            capabilities=capabilities,
                        )
                        self._context = ctx
                        memory_cfg = (
                            session
                            if isinstance(session, MemoryConfig)
                            else MemoryConfig.from_value(session)
                        )
                        super().__init__(
                            name=name,
                            context=ctx,
                            description=description,
                            enable_logging=memory_cfg.enable,
                            mcp_servers=mcp_servers,
                        )
                        # Initialize runtime
                        match runtime:
                            case None:
                                ctx.runtime = RuntimeConfig.from_config(Config())
                            case Config() | str() | PathLike():
                                ctx.runtime = RuntimeConfig.from_config(runtime)
                            case RuntimeConfig():
                                ctx.runtime = runtime
                            case _:
                                msg = f"Invalid runtime type: {type(runtime)}"
                                raise TypeError(msg)
                    
                        runtime_provider = RuntimePromptProvider(ctx.runtime)
                        ctx.definition.prompt_manager.providers["runtime"] = runtime_provider
                        # Initialize tool manager
                        all_tools = list(tools or [])
                        self.tools = ToolManager(all_tools)
                        self.tools.add_provider(self.mcp)
                        if builtin_tools := ctx.config.get_tool_provider():
                            self.tools.add_provider(builtin_tools)
                    
                        # Initialize conversation manager
                        resources = list(resources)
                        if ctx.config.knowledge:
                            resources.extend(ctx.config.knowledge.get_resources())
                        self.conversation = ConversationManager(self, memory_cfg, resources=resources)
                        # Initialize provider
                        match provider:
                            case "pydantic_ai":
                                validate_import("pydantic_ai", "pydantic_ai")
                                from llmling_agent_providers.pydanticai import PydanticAIProvider
                    
                                if model and not isinstance(model, str):
                                    from pydantic_ai import models
                    
                                    assert isinstance(model, models.Model)
                                self._provider: AgentProvider = PydanticAIProvider(
                                    model=model,
                                    retries=retries,
                                    end_strategy=end_strategy,
                                    output_retries=output_retries,
                                    defer_model_check=defer_model_check,
                                    debug=debug,
                                    context=ctx,
                                )
                            case "human":
                                from llmling_agent_providers.human import HumanProvider
                    
                                self._provider = HumanProvider(name=name, debug=debug, context=ctx)
                            case Callable():
                                from llmling_agent_providers.callback import CallbackProvider
                    
                                self._provider = CallbackProvider(
                                    provider, name=name, debug=debug, context=ctx
                                )
                            case AgentProvider():
                                self._provider = provider
                                self._provider.context = ctx
                            case _:
                                msg = f"Invalid agent type: {type}"
                                raise ValueError(msg)
                        self.tools.add_provider(CapabilitiesResourceProvider(ctx.capabilities))
                    
                        if ctx and ctx.definition:
                            from llmling_agent.observability import registry
                    
                            registry.register_providers(ctx.definition.observability)
                    
                        # init variables
                        self._debug = debug
                        self._result_type: type | None = None
                        self.parallel_init = parallel_init
                        self.name = name
                        self._background_task: asyncio.Task[Any] | None = None
                    
                        # Forward provider signals
                        self._provider.chunk_streamed.connect(self.chunk_streamed)
                        self._provider.model_changed.connect(self.model_changed)
                        self._provider.tool_used.connect(self.tool_used)
                    
                        self.talk = Interactions(self)
                    
                        # Set up system prompts
                        config_prompts = ctx.config.system_prompts if ctx else []
                        all_prompts: list[AnyPromptType] = list(config_prompts)
                        if isinstance(system_prompt, list):
                            all_prompts.extend(system_prompt)
                        else:
                            all_prompts.append(system_prompt)
                        self.sys_prompts = SystemPrompts(all_prompts, context=ctx)
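
                    A hedged construction sketch using only the parameters documented above;
                    the model identifier and prompt text are placeholders:

                        agent = Agent(
                            name="summarizer",
                            model="openai:gpt-4o-mini",  # placeholder model identifier
                            system_prompt="You summarize text concisely.",
                            session=False,  # disable message history
                            retries=2,
                            parallel_init=True,
                        )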
                    

                    clear_history

                    clear_history()
                    

                    Clear both internal and pydantic-ai history.

                    Source code in src/llmling_agent/agent/agent.py
                    def clear_history(self):
                        """Clear both internal and pydantic-ai history."""
                        self._logger.clear_state()
                        self.conversation.clear()
                        logger.debug("Cleared history and reset tool state")
                    

                    from_callback classmethod

                    from_callback(
                        callback: ProcessorCallback[str],
                        *,
                        name: str | None = None,
                        debug: bool = False,
                        **kwargs: Any,
                    ) -> Agent[None]
                    

                    Create an agent from a processing callback.

                    Parameters:

                    Name Type Description Default
                    callback ProcessorCallback[str]

                    Function to process messages. Can be:
                    - sync or async
                    - with or without context
                    - must return str for pipeline compatibility

                    required
                    name str | None

                    Optional name for the agent

                    None
                    debug bool

                    Whether to enable debug mode

                    False
                    kwargs Any

                    Additional arguments for agent

                    {}
                    Source code in src/llmling_agent/agent/agent.py
                    @classmethod
                    def from_callback(
                        cls,
                        callback: ProcessorCallback[str],
                        *,
                        name: str | None = None,
                        debug: bool = False,
                        **kwargs: Any,
                    ) -> Agent[None]:
                        """Create an agent from a processing callback.
                    
                        Args:
                            callback: Function to process messages. Can be:
                                - sync or async
                                - with or without context
                                - must return str for pipeline compatibility
                            name: Optional name for the agent
                            debug: Whether to enable debug mode
                            kwargs: Additional arguments for agent
                        """
                        from llmling_agent_providers.callback import CallbackProvider
                    
                        name = name or getattr(callback, "__name__", "processor")
                        name = name or "processor"
                        provider = CallbackProvider(callback, name=name)
                        return Agent[None](provider=provider, name=name, debug=debug, **kwargs)
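
                    For example (a minimal sketch; the callback is a stand-in and may be sync
                    or async, as long as it returns str):

                        async def translate(text: str) -> str:
                            # async callbacks are supported; must return str for pipelines
                            return f"[translated] {text}"

                        translator = Agent.from_callback(translate, name="translator")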
                    

                    is_busy

                    is_busy() -> bool
                    

                    Check if agent is currently processing tasks.

                    Source code in src/llmling_agent/agent/agent.py
                    def is_busy(self) -> bool:
                        """Check if agent is currently processing tasks."""
                        return bool(self.task_manager._pending_tasks or self._background_task)
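
                    A caller can poll this before scheduling more work (a trivial sketch):

                        if agent.is_busy():
                            print("agent still has pending or background tasks")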
                    

                    iterate_run async

                    iterate_run(
                        *prompts: AnyPromptType | Image | PathLike[str],
                        message_id: str | None = None,
                        message_history: list[ChatMessage[Any]] | None = None,
                        tools: list[Tool] | None = None,
                        result_type: type[TResult] | None = None,
                        usage_limits: UsageLimits | None = None,
                        model: ModelType = None,
                        system_prompt: str | None = None,
                        tool_choice: str | list[str] | None = None,
                        conversation_id: str | None = None,
                        store_history: bool = True,
                    ) -> AsyncIterator[AgentRun[TDeps, TResult]]
                    

                    Run the agent step-by-step, yielding an object to observe the execution graph.

                    Parameters:

                    Name Type Description Default
                    *prompts AnyPromptType | Image | PathLike[str]

                    User query/instructions (text, images, paths, BasePrompts).

                    ()
                    message_id str | None

                    Optional unique ID for this run attempt. Generated if None.

                    None
                    message_history list[ChatMessage[Any]] | None

                    Optional list of messages to replace current history.

                    None
                    tools list[Tool] | None

                    Optional sequence of tools to use instead of agent's default tools.

                    None
                    result_type type[TResult] | None

                    Optional type for structured responses.

                    None
                    usage_limits UsageLimits | None

                    Optional usage limits (provider support may vary).

                    None
                    model ModelType

                    Optional model override for this run.

                    None
                    system_prompt str | None

                    Optional system prompt override for this run.

                    None
                    tool_choice str | list[str] | None

                    Filter agent's tools by name (ignored if tools is provided).

                    None
                    conversation_id str | None

                    Optional ID to associate with the conversation context.

                    None
                    store_history bool

                    Whether to store the conversation in agent's history.

                    True

                    Yields:

                    Type Description
                    AsyncIterator[AgentRun[TDeps, TResult]]

                    An AgentRun object for iterating over execution nodes.

                    Example (same as before)

                    async with agent.iterate_run("Capital of France?") as agent_run: async for node in agent_run: print(f"Processing: {type(node).name}") print(f"Final result: {agent_run.result.output}")

                    Note: (Same as before regarding history management)

                    Source code in src/llmling_agent/agent/agent.py
                    @asynccontextmanager
                    @track_action("Calling Agent.iterate_run: {prompts}")
                    async def iterate_run(
                        self,
                        *prompts: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                        message_id: str | None = None,
                        message_history: list[ChatMessage[Any]] | None = None,
                        tools: list[Tool] | None = None,
                        result_type: type[TResult] | None = None,
                        usage_limits: UsageLimits | None = None,
                        model: ModelType = None,
                        system_prompt: str | None = None,
                        tool_choice: str | list[str] | None = None,
                        conversation_id: str | None = None,
                        store_history: bool = True,
                    ) -> AsyncIterator[AgentRun[TDeps, TResult]]:
                        """Run the agent step-by-step, yielding an object to observe the execution graph.
                    
                        Args:
                            *prompts: User query/instructions (text, images, paths, BasePrompts).
                            message_id: Optional unique ID for this run attempt. Generated if None.
                            message_history: Optional list of messages to replace current history.
                            tools: Optional sequence of tools to use instead of agent's default tools.
                            result_type: Optional type for structured responses.
                            usage_limits: Optional usage limits (provider support may vary).
                            model: Optional model override for this run.
                            system_prompt: Optional system prompt override for this run.
                            tool_choice: Filter agent's tools by name (ignored if `tools` is provided).
                            conversation_id: Optional ID to associate with the conversation context.
                            store_history: Whether to store the conversation in agent's history.
                    
                        Yields:
                            An AgentRun object for iterating over execution nodes.
                    
                        Example: (Same as before)
                            async with agent.iterate_run("Capital of France?") as agent_run:
                                async for node in agent_run:
                                    print(f"Processing: {type(node).__name__}")
                                print(f"Final result: {agent_run.result.output}")
                    
                        Note: (Same as before regarding history management)
                        """
                        run_message_id = message_id or str(uuid4())
                        start_time = time.perf_counter()
                        logger.info("Starting agent iteration run_id=%s", run_message_id)
                        converted_prompts = await convert_prompts(prompts)
                        if not converted_prompts:
                            msg = "No prompts provided for iteration."
                            logger.error(msg)
                            raise ValueError(msg)
                    
                        # Prepare user message for conversation history
                        user_msg = None
                        if store_history:
                            user_msg, _ = await self.pre_run(*prompts)
                    
                        if tools is None:
                            effective_tools = await self.tools.get_tools(
                                state="enabled", names=tool_choice
                            )
                        else:
                            effective_tools = tools  # Use the direct override
                    
                        self.set_result_type(result_type)
                        effective_system_prompt = (
                            system_prompt
                            if system_prompt is not None
                            else await self.sys_prompts.format_system_prompt(self)
                        )
                        effective_message_history = (
                            message_history
                            if message_history is not None
                            else self.conversation.get_history()
                        )
                        try:
                            async with self._provider.iterate_run(
                                *converted_prompts,
                                # Pass consistent arguments to the provider
                                message_id=run_message_id,
                                message_history=effective_message_history,
                                tools=effective_tools,
                                result_type=result_type,
                                usage_limits=usage_limits,
                                model=model,
                                system_prompt=effective_system_prompt,
                            ) as agent_run:
                                yield agent_run
                                # Store conversation history if requested
                                if store_history and user_msg and agent_run.result:
                                    response_msg = ChatMessage[TResult](
                                        content=agent_run.result.output,
                                        role="assistant",
                                        name=self.name,
                                        model=agent_run.result.response.model_name,
                                        message_id=run_message_id,
                                        conversation_id=conversation_id or user_msg.conversation_id,
                                        response_time=time.perf_counter() - start_time,
                                    )
                                    self.conversation.add_chat_messages([user_msg, response_msg])
                                    msg = "Stored conversation history for run_id=%s"
                                    logger.debug(msg, run_message_id)
                    
                            logger.info("Agent iteration run_id=%s completed.", run_message_id)
                    
                        except Exception as e:
                            logger.exception("Agent iteration run_id=%s failed.", run_message_id)
                            self.run_failed.emit(f"Agent iteration failed: {e}", e)
                            raise
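
                    Building on the docstring example, a hedged sketch that restricts the run
                    to a named subset of the agent's tools via tool_choice (the tool name is
                    a placeholder):

                        async with agent.iterate_run(
                            "Look up the weather in Paris.",
                            tool_choice=["fetch_weather"],  # placeholder tool name
                            store_history=False,
                        ) as agent_run:
                            async for node in agent_run:
                                print(f"Processing: {type(node).__name__}")
                            print(agent_run.result.output)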
                    

                    register_worker

                    register_worker(
                        worker: MessageNode[Any, Any],
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                        share_context: bool = False,
                    ) -> Tool
                    

                    Register another agent as a worker tool.

                    Source code in src/llmling_agent/agent/agent.py
                    def register_worker(
                        self,
                        worker: MessageNode[Any, Any],
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                        share_context: bool = False,
                    ) -> Tool:
                        """Register another agent as a worker tool."""
                        return self.tools.register_worker(
                            worker,
                            name=name,
                            reset_history_on_run=reset_history_on_run,
                            pass_message_history=pass_message_history,
                            share_context=share_context,
                            parent=self if (pass_message_history or share_context) else None,
                        )
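
                    A hedged usage sketch (agent names are placeholders): register one agent as
                    a tool of another, optionally letting the worker see the parent's history.

                        tool = coordinator.register_worker(
                            researcher,
                            name="research",
                            pass_message_history=True,  # worker receives parent's history
                        )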
                    

                    reset async

                    reset()
                    

                    Reset agent state (conversation history and tool states).

                    Source code in src/llmling_agent/agent/agent.py
                    async def reset(self):
                        """Reset agent state (conversation history and tool states)."""
                        old_tools = await self.tools.list_tools()
                        self.conversation.clear()
                        self.tools.reset_states()
                        new_tools = await self.tools.list_tools()
                    
                        event = self.AgentReset(
                            agent_name=self.name,
                            previous_tools=old_tools,
                            new_tools=new_tools,
                        )
                        self.agent_reset.emit(event)
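
                    A one-line sketch: resetting clears the conversation, restores tool states,
                    and emits the AgentReset event shown above.

                        await agent.reset()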
                    

                    run_in_background async

                    run_in_background(
                        *prompt: AnyPromptType | Image | PathLike[str],
                        max_count: int | None = None,
                        interval: float = 1.0,
                        block: bool = False,
                        **kwargs: Any,
                    ) -> ChatMessage[TResult] | None
                    

                    Run agent continuously in background with prompt or dynamic prompt function.

                    Parameters:

                    Name Type Description Default
                    prompt AnyPromptType | Image | PathLike[str]

                    Static prompt or function that generates prompts

                    ()
                    max_count int | None

                    Maximum number of runs (None = infinite)

                    None
                    interval float

                    Seconds between runs

                    1.0
                    block bool

                    Whether to block until completion

                    False
                    **kwargs Any

                    Arguments passed to run()

                    {}
                    Source code in src/llmling_agent/agent/agent.py
                    async def run_in_background(
                        self,
                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                        max_count: int | None = None,
                        interval: float = 1.0,
                        block: bool = False,
                        **kwargs: Any,
                    ) -> ChatMessage[TResult] | None:
                        """Run agent continuously in background with prompt or dynamic prompt function.
                    
                        Args:
                            prompt: Static prompt or function that generates prompts
                            max_count: Maximum number of runs (None = infinite)
                            interval: Seconds between runs
                            block: Whether to block until completion
                            **kwargs: Arguments passed to run()
                        """
                        self._infinite = max_count is None
                    
                        async def _continuous():
                            count = 0
                            msg = "%s: Starting continuous run (max_count=%s, interval=%s) for %r"
                            logger.debug(msg, self.name, max_count, interval, self.name)
                            latest = None
                            while max_count is None or count < max_count:
                                try:
                                    current_prompts = [
                                        call_with_context(p, self.context, **kwargs) if callable(p) else p
                                        for p in prompt
                                    ]
                                    msg = "%s: Generated prompt #%d: %s"
                                    logger.debug(msg, self.name, count, current_prompts)
                    
                                    latest = await self.run(current_prompts, **kwargs)
                                    msg = "%s: Run continous result #%d"
                                    logger.debug(msg, self.name, count)
                    
                                    count += 1
                                    await asyncio.sleep(interval)
                                except asyncio.CancelledError:
                                    logger.debug("%s: Continuous run cancelled", self.name)
                                    break
                                except Exception:
                                    logger.exception("%s: Background run failed", self.name)
                                    await asyncio.sleep(interval)
                            msg = "%s: Continuous run completed after %d iterations"
                            logger.debug(msg, self.name, count)
                            return latest
                    
                        # Cancel any existing background task
                        await self.stop()
                        task = asyncio.create_task(_continuous(), name=f"background_{self.name}")
                        if block:
                            try:
                                return await task  # type: ignore
                            finally:
                                if not task.done():
                                    task.cancel()
                        else:
                            logger.debug("%s: Started background task %s", self.name, task.get_name())
                            self._background_task = task
                            return None
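
                    A hedged usage sketch (prompt text is a placeholder): poll every 30 seconds
                    for at most ten runs without blocking, then cancel later via stop(), which
                    this method also calls internally to replace an existing background task.

                        await agent.run_in_background(
                            "Check the queue for new items.",
                            max_count=10,
                            interval=30.0,
                            block=False,
                        )
                        # ... later:
                        await agent.stop()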
                    

                    run_iter async

                    run_iter(
                        *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]],
                        result_type: type[TResult] | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                        wait_for_connections: bool | None = None,
                    ) -> AsyncIterator[ChatMessage[TResult]]
                    

                    Run agent sequentially on multiple prompt groups.

                    Parameters:

                    Name Type Description Default
                    prompt_groups Sequence[AnyPromptType | Image | PathLike[str]]

                    Groups of prompts to process sequentially

                    ()
                    result_type type[TResult] | None

                    Optional type for structured responses

                    None
                    model ModelType

                    Optional model override

                    None
                    store_history bool

                    Whether to store in conversation history

                    True
                    wait_for_connections bool | None

                    Whether to wait for connected agents

                    None

                    Yields:

                    Type Description
                    AsyncIterator[ChatMessage[TResult]]

                    Response messages in sequence

                    Example

                    questions = [
                        ["What is your name?"],
                        ["How old are you?", image1],
                        ["Describe this image", image2],
                    ]
                    async for response in agent.run_iter(*questions):
                        print(response.content)

                    Source code in src/llmling_agent/agent/agent.py
                    async def run_iter(
                        self,
                        *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                        result_type: type[TResult] | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                        wait_for_connections: bool | None = None,
                    ) -> AsyncIterator[ChatMessage[TResult]]:
                        """Run agent sequentially on multiple prompt groups.
                    
                        Args:
                            prompt_groups: Groups of prompts to process sequentially
                            result_type: Optional type for structured responses
                            model: Optional model override
                            store_history: Whether to store in conversation history
                            wait_for_connections: Whether to wait for connected agents
                    
                        Yields:
                            Response messages in sequence
                    
                        Example:
                            questions = [
                                ["What is your name?"],
                                ["How old are you?", image1],
                                ["Describe this image", image2],
                            ]
                            async for response in agent.run_iter(*questions):
                                print(response.content)
                        """
                        for prompts in prompt_groups:
                            response = await self.run(
                                *prompts,
                                result_type=result_type,
                                model=model,
                                store_history=store_history,
                                wait_for_connections=wait_for_connections,
                            )
                            yield response  # pyright: ignore
                    

                    run_job async

                    run_job(
                        job: Job[TDeps, str | None],
                        *,
                        store_history: bool = True,
                        include_agent_tools: bool = True,
                    ) -> ChatMessage[str]
                    

                    Execute a pre-defined task.

                    Parameters:

                    Name Type Description Default
                    job Job[TDeps, str | None]

                    Job configuration to execute

                    required
                    store_history bool

                    Whether the message exchange should be added to the context window

                    True
                    include_agent_tools bool

                    Whether to include agent tools

                    True

                    Returns: Job execution result

                    Raises:

                    Type Description
                    JobError

                    If task execution fails

                    ValueError

                    If task configuration is invalid

                    Source code in src/llmling_agent/agent/agent.py
                    async def run_job(
                        self,
                        job: Job[TDeps, str | None],
                        *,
                        store_history: bool = True,
                        include_agent_tools: bool = True,
                    ) -> ChatMessage[str]:
                        """Execute a pre-defined task.
                    
                        Args:
                            job: Job configuration to execute
                            store_history: Whether the message exchange should be added to the
                                           context window
                            include_agent_tools: Whether to include agent tools

                        Returns:
                            Job execution result
                    
                        Raises:
                            JobError: If task execution fails
                            ValueError: If task configuration is invalid
                        """
                        from llmling_agent.tasks import JobError
                    
                        if job.required_dependency is not None:  # noqa: SIM102
                            if not isinstance(self.context.data, job.required_dependency):
                                msg = (
                                    f"Agent dependencies ({type(self.context.data)}) "
                                    f"don't match job requirement ({job.required_dependency})"
                                )
                                raise JobError(msg)
                    
                        # Load task knowledge
                        if job.knowledge:
                            # Add knowledge sources to context
                            resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                job.knowledge.resources
                            )
                            for source in resources:
                                await self.conversation.load_context_source(source)
                            for prompt in job.knowledge.prompts:
                                await self.conversation.load_context_source(prompt)
                        try:
                            # Register task tools temporarily
                            tools = job.get_tools()
                            with self.tools.temporary_tools(tools, exclusive=not include_agent_tools):
                                # Execute job with job-specific tools
                                return await self.run(await job.get_prompt(), store_history=store_history)
                    
                        except Exception as e:
                            msg = f"Task execution failed: {e}"
                            logger.exception(msg)
                            raise JobError(msg) from e
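
                    For orientation, a minimal usage sketch. The Job construction and import path are assumptions for illustration (only JobError's location is shown above); the run_job keyword arguments follow the documented signature:

                    async def execute_job(agent, job):
                        # Run with the job's own tools only: include_agent_tools=False makes
                        # the temporary tools exclusive (see the source above).
                        result = await agent.run_job(
                            job,                  # a pre-configured Job[TDeps, str | None]
                            store_history=False,  # keep the exchange out of the context window
                            include_agent_tools=False,
                        )
                        print(result.content)     # run_job returns a ChatMessage[str]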
                    

                    run_stream async

                    run_stream(
                        *prompt: AnyPromptType | Image | PathLike[str],
                        result_type: type[TResult] | None = None,
                        model: ModelType = None,
                        tool_choice: str | list[str] | None = None,
                        store_history: bool = True,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        messages: list[ChatMessage[Any]] | None = None,
                        wait_for_connections: bool | None = None,
                    ) -> AsyncIterator[StreamingResponseProtocol[TResult]]
                    

                    Run agent with prompt and get a streaming response.

                    Parameters:

                        prompt (AnyPromptType | Image | PathLike[str]): User query or instruction. Default: ().
                        result_type (type[TResult] | None): Optional type for structured responses. Default: None.
                        model (ModelType): Optional model override. Default: None.
                        tool_choice (str | list[str] | None): Filter tool choice by name. Default: None.
                        store_history (bool): Whether the message exchange should be added to the context window. Default: True.
                        usage_limits (UsageLimits | None): Optional usage limits for the model. Default: None.
                        message_id (str | None): Optional message id for the returned message; automatically generated if not provided. Default: None.
                        conversation_id (str | None): Optional conversation id for the returned message. Default: None.
                        messages (list[ChatMessage[Any]] | None): Optional list of messages to replace the conversation history. Default: None.
                        wait_for_connections (bool | None): Whether to wait for connected agents to complete. Default: None.

                    Returns:

                        AsyncIterator[StreamingResponseProtocol[TResult]]: A streaming result to iterate over.

                    Raises:

                        UnexpectedModelBehavior: If the model fails or behaves unexpectedly.

                    Source code in src/llmling_agent/agent/agent.py, lines 765-850
                    @asynccontextmanager
                    async def run_stream(
                        self,
                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                        result_type: type[TResult] | None = None,
                        model: ModelType = None,
                        tool_choice: str | list[str] | None = None,
                        store_history: bool = True,
                        usage_limits: UsageLimits | None = None,
                        message_id: str | None = None,
                        conversation_id: str | None = None,
                        messages: list[ChatMessage[Any]] | None = None,
                        wait_for_connections: bool | None = None,
                    ) -> AsyncIterator[StreamingResponseProtocol[TResult]]:
                        """Run agent with prompt and get a streaming response.
                    
                        Args:
                            prompt: User query or instruction
                            result_type: Optional type for structured responses
                            model: Optional model override
                            tool_choice: Filter tool choice by name
                            store_history: Whether the message exchange should be added to the
                                           context window
                            usage_limits: Optional usage limits for the model
                            message_id: Optional message id for the returned message.
                                        Automatically generated if not provided.
                            conversation_id: Optional conversation id for the returned message.
                            messages: Optional list of messages to replace the conversation history
                            wait_for_connections: Whether to wait for connected agents to complete
                    
                        Returns:
                            A streaming result to iterate over.
                    
                        Raises:
                            UnexpectedModelBehavior: If the model fails or behaves unexpectedly
                        """
                        message_id = message_id or str(uuid4())
                        user_msg, prompts = await self.pre_run(*prompt)
                        self.set_result_type(result_type)
                        start_time = time.perf_counter()
                        sys_prompt = await self.sys_prompts.format_system_prompt(self)
                        tools = await self.tools.get_tools(state="enabled", names=tool_choice)
                        message_history = (
                            messages if messages is not None else self.conversation.get_history()
                        )
                        try:
                            async with self._provider.stream_response(
                                *prompts,
                                message_id=message_id,
                                message_history=message_history,
                                result_type=result_type,
                                model=model,
                                store_history=store_history,
                                tools=tools,
                                usage_limits=usage_limits,
                                system_prompt=sys_prompt,
                            ) as stream:
                                yield stream
                                usage = stream.usage()
                                cost_info = None
                                model_name = stream.model_name  # type: ignore
                                if model_name:
                                    cost_info = await TokenCost.from_usage(usage, model_name)
                                response_msg = ChatMessage[TResult](
                                    content=cast(TResult, stream.formatted_content),  # type: ignore
                                    role="assistant",
                                    name=self.name,
                                    model=model_name,
                                    message_id=message_id,
                                    conversation_id=user_msg.conversation_id,
                                    cost_info=cost_info,
                                    response_time=time.perf_counter() - start_time,
                                    # provider_extra=stream.provider_extra or {},
                                )
                                self.message_sent.emit(response_msg)
                                if store_history:
                                    self.conversation.add_chat_messages([user_msg, response_msg])
                                await self.connections.route_message(
                                    response_msg,
                                    wait=wait_for_connections,
                                )
                    
                        except Exception as e:
                            logger.exception("Agent stream failed")
                            self.run_failed.emit("Agent stream failed", e)
                            raise
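
                    A usage sketch: run_stream is an async context manager (note the @asynccontextmanager decorator above). The chunk-iteration API of StreamingResponseProtocol is not documented on this page, so that part is left abstract:

                    async def stream_answer(agent):
                        async with agent.run_stream("Explain the config format") as stream:
                            ...  # consume chunks via the StreamingResponseProtocol API
                        # On exit, usage and cost are collected and the final message is
                        # emitted, stored (store_history=True), and routed to connections.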
                    

                    run_sync

                    run_sync(
                        *prompt: AnyPromptType | Image | PathLike[str],
                        result_type: type[TResult] | None = None,
                        deps: TDeps | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                    ) -> ChatMessage[TResult]
                    

                    Run agent synchronously (convenience wrapper).

                    Parameters:

                        prompt (AnyPromptType | Image | PathLike[str]): User query or instruction. Default: ().
                        result_type (type[TResult] | None): Optional type for structured responses. Default: None.
                        deps (TDeps | None): Optional dependencies for the agent. Default: None.
                        model (ModelType): Optional model override. Default: None.
                        store_history (bool): Whether the message exchange should be added to the context window. Default: True.

                    Returns:

                        Result containing response and run information.

                    Source code in src/llmling_agent/agent/agent.py, lines 891-917
                    def run_sync(
                        self,
                        *prompt: AnyPromptType | PIL.Image.Image | os.PathLike[str],
                        result_type: type[TResult] | None = None,
                        deps: TDeps | None = None,
                        model: ModelType = None,
                        store_history: bool = True,
                    ) -> ChatMessage[TResult]:
                        """Run agent synchronously (convenience wrapper).
                    
                        Args:
                            prompt: User query or instruction
                            result_type: Optional type for structured responses
                            deps: Optional dependencies for the agent
                            model: Optional model override
                            store_history: Whether the message exchange should be added to the
                                        context window

                        Returns:
                            Result containing response and run information
                        """
                        coro = self.run(
                            *prompt,
                            model=model,
                            store_history=store_history,
                            result_type=result_type,
                        )
                        return self.task_manager.run_task_sync(coro)  # type: ignore
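
                    A minimal sketch: run_sync wraps the async run() and blocks until the result is ready, which is convenient in scripts that don't manage an event loop:

                    # No running event loop needed; task_manager.run_task_sync drives the coroutine.
                    msg = agent.run_sync("Summarize yesterday's log", store_history=False)
                    print(msg.content)    # response payload
                    print(msg.cost_info)  # token/cost details, when the model name was resolvable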
                    

                    set_model

                    set_model(model: ModelType)
                    

                    Set the model for this agent.

                    Parameters:

                        model (ModelType): New model to use (name or instance). Required.

                    Emits:

                        model_changed signal with the new model.

                    Source code in src/llmling_agent/agent/agent.py, lines 1245-1254
                    def set_model(self, model: ModelType):
                        """Set the model for this agent.
                    
                        Args:
                            model: New model to use (name or instance)
                    
                        Emits:
                            model_changed signal with the new model
                        """
                        self._provider.set_model(model)
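
                    A one-line sketch; the model identifier below is a placeholder, since valid names depend on the configured providers:

                    agent.set_model("openai:gpt-4o-mini")  # placeholder name; a model instance also works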
                    

                    set_result_type

                    set_result_type(
                        result_type: type[TResult] | str | StructuredResponseConfig | None,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    )
                    

                    Set or update the result type for this agent.

                    Parameters:

                        result_type (type[TResult] | str | StructuredResponseConfig | None): New result type. Required. Can be:
                            - A Python type for validation
                            - Name of a response definition
                            - Response definition instance
                            - None to reset to unstructured mode
                        tool_name (str | None): Optional override for tool name. Default: None.
                        tool_description (str | None): Optional override for tool description. Default: None.
                    Source code in src/llmling_agent/agent/agent.py, lines 507-526
                    def set_result_type(
                        self,
                        result_type: type[TResult] | str | StructuredResponseConfig | None,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ):
                        """Set or update the result type for this agent.
                    
                        Args:
                            result_type: New result type, can be:
                                - A Python type for validation
                                - Name of a response definition
                                - Response definition instance
                                - None to reset to unstructured mode
                            tool_name: Optional override for tool name
                            tool_description: Optional override for tool description
                        """
                        logger.debug("Setting result type to: %s for %r", result_type, self.name)
                        self._result_type = to_type(result_type)
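
                    A sketch switching the agent into structured mode and back. Using a Pydantic model here is an assumption consistent with the to_structured docs below, which name Pydantic models explicitly; the tool name is hypothetical:

                    from pydantic import BaseModel

                    class Verdict(BaseModel):
                        ok: bool
                        reason: str

                    agent.set_result_type(Verdict, tool_name="report_verdict")
                    # ... structured runs ...
                    agent.set_result_type(None)  # reset to unstructured mode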
                    

                    share async

                    share(
                        target: AnyAgent[TDeps, Any],
                        *,
                        tools: list[str] | None = None,
                        resources: list[str] | None = None,
                        history: bool | int | None = None,
                        token_limit: int | None = None,
                    )
                    

                    Share capabilities and knowledge with another agent.

                    Parameters:

                        target (AnyAgent[TDeps, Any]): Agent to share with. Required.
                        tools (list[str] | None): List of tool names to share. Default: None.
                        resources (list[str] | None): List of resource names to share. Default: None.
                        history (bool | int | None): Share conversation history. Default: None. Can be:
                            - True: share the full history
                            - int: number of most recent messages to share
                            - None: don't share history
                        token_limit (int | None): Optional max tokens for history. Default: None.

                    Raises:

                        ValueError: If requested items don't exist.
                        RuntimeError: If runtime not available for resources.

                    Source code in src/llmling_agent/agent/agent.py, lines 1170-1224
                    async def share(
                        self,
                        target: AnyAgent[TDeps, Any],
                        *,
                        tools: list[str] | None = None,
                        resources: list[str] | None = None,
                        history: bool | int | None = None,  # bool or number of messages
                        token_limit: int | None = None,
                    ):
                        """Share capabilities and knowledge with another agent.
                    
                        Args:
                            target: Agent to share with
                            tools: List of tool names to share
                            resources: List of resource names to share
                            history: Share conversation history:
                                    - True: Share full history
                                    - int: Number of most recent messages to share
                                    - None: Don't share history
                            token_limit: Optional max tokens for history
                    
                        Raises:
                            ValueError: If requested items don't exist
                            RuntimeError: If runtime not available for resources
                        """
                        # Share tools if requested
                        for name in tools or []:
                            if tool := self.tools.get(name):
                                meta = {"shared_from": self.name}
                                target.tools.register_tool(tool.callable, metadata=meta)
                            else:
                                msg = f"Tool not found: {name}"
                                raise ValueError(msg)
                    
                        # Share resources if requested
                        if resources:
                            if not self.runtime:
                                msg = "No runtime available for sharing resources"
                                raise RuntimeError(msg)
                            for name in resources:
                                if resource := self.runtime.get_resource(name):
                                    await target.conversation.load_context_source(resource)  # type: ignore
                                else:
                                    msg = f"Resource not found: {name}"
                                    raise ValueError(msg)
                    
                        # Share history if requested
                        if history:
                            history_text = await self.conversation.format_history(
                                max_tokens=token_limit,
                                num_messages=history if isinstance(history, int) else None,
                            )
                            target.conversation.add_context_message(
                                history_text, source=self.name, metadata={"type": "shared_history"}
                            )
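
                    A sketch built directly from the signature above; the agent, tool, and resource names are placeholders and must exist on the sharing agent, otherwise ValueError is raised:

                    await researcher.share(
                        writer,
                        tools=["search_docs"],      # hypothetical tool name
                        resources=["style_guide"],  # hypothetical resource name; needs a runtime
                        history=10,                 # int: share the 10 most recent messages
                        token_limit=2000,           # cap the formatted history
                    )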
                    

                    stop async

                    stop()
                    

                    Stop continuous execution if running.

                    Source code in src/llmling_agent/agent/agent.py, lines 1144-1149
                    async def stop(self):
                        """Stop continuous execution if running."""
                        if self._background_task and not self._background_task.done():
                            self._background_task.cancel()
                            await self._background_task
                            self._background_task = None
                    

                    temporary_state async

                    temporary_state(
                        *,
                        system_prompts: list[AnyPromptType] | None = None,
                        replace_prompts: bool = False,
                        tools: list[ToolType] | None = None,
                        replace_tools: bool = False,
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        replace_history: bool = False,
                        pause_routing: bool = False,
                        model: ModelType | None = None,
                        provider: AgentProvider | None = None,
                    ) -> AsyncIterator[Self]
                    

                    Temporarily modify agent state.

                    Parameters:

                        system_prompts (list[AnyPromptType] | None): Temporary system prompts to use. Default: None.
                        replace_prompts (bool): Whether to replace existing prompts. Default: False.
                        tools (list[ToolType] | None): Temporary tools to make available. Default: None.
                        replace_tools (bool): Whether to replace existing tools. Default: False.
                        history (list[AnyPromptType] | SessionQuery | None): Conversation history (prompts or query). Default: None.
                        replace_history (bool): Whether to replace existing history. Default: False.
                        pause_routing (bool): Whether to pause message routing. Default: False.
                        model (ModelType | None): Temporary model override. Default: None.
                        provider (AgentProvider | None): Temporary provider override. Default: None.
                    Source code in src/llmling_agent/agent/agent.py, lines 1285-1355
                    @asynccontextmanager
                    async def temporary_state(
                        self,
                        *,
                        system_prompts: list[AnyPromptType] | None = None,
                        replace_prompts: bool = False,
                        tools: list[ToolType] | None = None,
                        replace_tools: bool = False,
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        replace_history: bool = False,
                        pause_routing: bool = False,
                        model: ModelType | None = None,
                        provider: AgentProvider | None = None,
                    ) -> AsyncIterator[Self]:
                        """Temporarily modify agent state.
                    
                        Args:
                            system_prompts: Temporary system prompts to use
                            replace_prompts: Whether to replace existing prompts
                            tools: Temporary tools to make available
                            replace_tools: Whether to replace existing tools
                            history: Conversation history (prompts or query)
                            replace_history: Whether to replace existing history
                            pause_routing: Whether to pause message routing
                            model: Temporary model override
                            provider: Temporary provider override
                        """
                        old_model = self._provider.model if hasattr(self._provider, "model") else None  # pyright: ignore
                        old_provider = self._provider
                    
                        async with AsyncExitStack() as stack:
                            # System prompts (async)
                            if system_prompts is not None:
                                await stack.enter_async_context(
                                    self.sys_prompts.temporary_prompt(
                                        system_prompts, exclusive=replace_prompts
                                    )
                                )
                    
                            # Tools (sync)
                            if tools is not None:
                                stack.enter_context(
                                    self.tools.temporary_tools(tools, exclusive=replace_tools)
                                )
                    
                            # History (async)
                            if history is not None:
                                await stack.enter_async_context(
                                    self.conversation.temporary_state(
                                        history, replace_history=replace_history
                                    )
                                )
                    
                            # Routing (async)
                            if pause_routing:
                                await stack.enter_async_context(self.connections.paused_routing())
                    
                            # Model/Provider
                            if provider is not None:
                                self._provider = provider
                            elif model is not None:
                                self._provider.set_model(model)
                    
                            try:
                                yield self
                            finally:
                                # Restore model/provider
                                if provider is not None:
                                    self._provider = old_provider
                                elif model is not None and old_model:
                                    self._provider.set_model(old_model)
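
                    A sketch: every override below is reverted when the block exits, including the model (restored from the saved old model). The model name is a placeholder:

                    async def strict_review(agent, code: str):
                        async with agent.temporary_state(
                            system_prompts=["You are a strict code reviewer."],
                            replace_prompts=True,   # exclusive: suspend the existing prompts
                            pause_routing=True,     # don't forward results to connected agents
                            model="openai:gpt-4o",  # placeholder model name
                        ) as tmp:
                            return await tmp.run(code)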
                    

                    to_structured

                    to_structured(
                        result_type: None,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ) -> Self
                    
                    to_structured(
                        result_type: type[TResult] | str | StructuredResponseConfig,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ) -> StructuredAgent[TDeps, TResult]
                    
                    to_structured(
                        result_type: type[TResult] | str | StructuredResponseConfig | None,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ) -> StructuredAgent[TDeps, TResult] | Self
                    

                    Convert this agent to a structured agent.

                    If result_type is None, returns self unchanged (no wrapping). Otherwise creates a StructuredAgent wrapper.

                    Parameters:

                        result_type (type[TResult] | str | StructuredResponseConfig | None): Type for structured responses. Required. Can be:
                            - A Python type (Pydantic model)
                            - Name of response definition from context
                            - Complete response definition
                            - None to skip wrapping
                        tool_name (str | None): Optional override for result tool name. Default: None.
                        tool_description (str | None): Optional override for result tool description. Default: None.

                    Returns:

                        StructuredAgent[TDeps, TResult] | Self: Either the StructuredAgent wrapper or self unchanged.

                    Source code in src/llmling_agent/agent/agent.py, lines 587-622
                    def to_structured[TResult](
                        self,
                        result_type: type[TResult] | str | StructuredResponseConfig | None,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ) -> StructuredAgent[TDeps, TResult] | Self:
                        """Convert this agent to a structured agent.
                    
                        If result_type is None, returns self unchanged (no wrapping).
                        Otherwise creates a StructuredAgent wrapper.
                    
                        Args:
                            result_type: Type for structured responses. Can be:
                                - A Python type (Pydantic model)
                                - Name of response definition from context
                                - Complete response definition
                                - None to skip wrapping
                            tool_name: Optional override for result tool name
                            tool_description: Optional override for result tool description
                    
                        Returns:
                            Either StructuredAgent wrapper or self unchanged
                        """
                        if result_type is None:
                            return self
                    
                        from llmling_agent.agent import StructuredAgent
                    
                        return StructuredAgent(
                            self,
                            result_type=result_type,
                            tool_name=tool_name,
                            tool_description=tool_description,
                        )
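
                    A sketch wrapping an agent with a Pydantic result type; it assumes the StructuredAgent wrapper's run() mirrors Agent.run() and yields a typed ChatMessage:

                    from pydantic import BaseModel

                    class Summary(BaseModel):
                        title: str
                        bullet_points: list[str]

                    structured = agent.to_structured(Summary)
                    msg = await structured.run("Summarize the release notes")
                    assert isinstance(msg.content, Summary)  # enforced by the wrapper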
                    

                    to_tool

                    to_tool(
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                        share_context: bool = False,
                        parent: AnyAgent[Any, Any] | None = None,
                    ) -> Tool
                    

                    Create a tool from this agent.

                    Parameters:

                        name (str | None): Optional tool name override. Default: None.
                        reset_history_on_run (bool): Clear agent's history before each run. Default: True.
                        pass_message_history (bool): Pass parent's message history to agent. Default: False.
                        share_context (bool): Whether to pass parent's context/deps. Default: False.
                        parent (AnyAgent[Any, Any] | None): Optional parent agent for history/context sharing. Default: None.
                    Source code in src/llmling_agent/agent/agent.py, lines 633-683
                    def to_tool(
                        self,
                        *,
                        name: str | None = None,
                        reset_history_on_run: bool = True,
                        pass_message_history: bool = False,
                        share_context: bool = False,
                        parent: AnyAgent[Any, Any] | None = None,
                    ) -> Tool:
                        """Create a tool from this agent.
                    
                        Args:
                            name: Optional tool name override
                            reset_history_on_run: Clear agent's history before each run
                            pass_message_history: Pass parent's message history to agent
                            share_context: Whether to pass parent's context/deps
                            parent: Optional parent agent for history/context sharing
                        """
                        tool_name = name or f"ask_{self.name}"
                    
                        async def wrapped_tool(prompt: str) -> str:
                            if pass_message_history and not parent:
                                msg = "Parent agent required for message history sharing"
                                raise ToolError(msg)
                    
                            if reset_history_on_run:
                                self.conversation.clear()
                    
                            history = None
                            if pass_message_history and parent:
                                history = parent.conversation.get_history()
                                old = self.conversation.get_history()
                                self.conversation.set_history(history)
                            result = await self.run(prompt, result_type=self._result_type)
                            if history:
                                self.conversation.set_history(old)
                            return result.data
                    
                        normalized_name = self.name.replace("_", " ").title()
                        docstring = f"Get expert answer from specialized agent: {normalized_name}"
                        if self.description:
                            docstring = f"{docstring}\n\n{self.description}"
                    
                        wrapped_tool.__doc__ = docstring
                        wrapped_tool.__name__ = tool_name
                    
                        return Tool.from_callable(
                            wrapped_tool,
                            name_override=tool_name,
                            description_override=docstring,
                        )
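
                    A sketch exposing one agent as a tool on another. That register_tool accepts a plain callable, and that Tool exposes .callable, are both visible in the share() source above; the agent names are placeholders:

                    expert_tool = expert.to_tool(name="ask_expert")        # default would be f"ask_{self.name}"
                    coordinator.tools.register_tool(expert_tool.callable)  # same pattern as share()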
                    

                    wait async

                    wait() -> ChatMessage[TResult]
                    

                    Wait for background execution to complete.

                    Source code in src/llmling_agent/agent/agent.py, lines 1151-1162
                    async def wait(self) -> ChatMessage[TResult]:
                        """Wait for background execution to complete."""
                        if not self._background_task:
                            msg = "No background task running"
                            raise RuntimeError(msg)
                        if self._infinite:
                            msg = "Cannot wait on infinite execution"
                            raise RuntimeError(msg)
                        try:
                            return await self._background_task
                        finally:
                            self._background_task = None
                    

                    AgentContext dataclass

                    Bases: NodeContext[TDeps]

                    Runtime context for agent execution.

                    Generically typed with AgentContext[Type of Dependencies]

                    Source code in src/llmling_agent/agent/context.py, lines 28-128
                    @dataclass(kw_only=True)
                    class AgentContext[TDeps = Any](NodeContext[TDeps]):
                        """Runtime context for agent execution.
                    
                        Generically typed with AgentContext[Type of Dependencies]
                        """
                    
                        capabilities: Capabilities
                        """Current agent's capabilities."""
                    
                        config: AgentConfig
                        """Current agent's specific configuration."""
                    
                        model_settings: dict[str, Any] = field(default_factory=dict)
                        """Model-specific settings."""
                    
                        data: TDeps | None = None
                        """Custom context data."""
                    
                        runtime: RuntimeConfig | None = None
                        """Reference to the runtime configuration."""
                    
                        @classmethod
                        def create_default(
                            cls,
                            name: str,
                            capabilities: Capabilities | None = None,
                            deps: TDeps | None = None,
                            pool: AgentPool | None = None,
                            input_provider: InputProvider | None = None,
                        ) -> AgentContext[TDeps]:
                            """Create a default agent context with minimal privileges.
                    
                            Args:
                                name: Name of the agent
                                capabilities: Optional custom capabilities (defaults to minimal access)
                                deps: Optional dependencies for the agent
                                pool: Optional pool the agent is part of
                                input_provider: Optional input provider for the agent
                            """
                            from llmling_agent.config.capabilities import Capabilities
                            from llmling_agent.models import AgentConfig, AgentsManifest
                    
                            caps = capabilities or Capabilities()
                            defn = AgentsManifest()
                            cfg = AgentConfig(name=name)
                            return cls(
                                input_provider=input_provider,
                                node_name=name,
                                capabilities=caps,
                                definition=defn,
                                config=cfg,
                                data=deps,
                                pool=pool,
                            )
                    
                        @cached_property
                        def converter(self) -> ConversionManager:
                            """Get conversion manager from global config."""
                            return ConversionManager(self.definition.conversion)
                    
                        # TODO: perhaps add agent directly to context?
                        @property
                        def agent(self) -> AnyAgent[TDeps, Any]:
                            """Get the agent instance from the pool."""
                            assert self.pool, "No agent pool available"
                            assert self.node_name, "No agent name available"
                            return self.pool.agents[self.node_name]
                    
                        @property
                        def process_manager(self):
                            """Get process manager from pool."""
                            assert self.pool, "No agent pool available"
                            return self.pool.process_manager
                    
                        async def handle_confirmation(
                            self,
                            tool: Tool,
                            args: dict[str, Any],
                        ) -> ConfirmationResult:
                            """Handle tool execution confirmation.
                    
                            Returns "allow" if confirmation is not required (mode "never", or
                            mode "per_tool" when the tool doesn't require confirmation);
                            otherwise the configured input provider decides.
                            """
                            provider = self.get_input_provider()
                            mode = self.config.requires_tool_confirmation
                            if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                                return "allow"
                            history = self.agent.conversation.get_history() if self.pool else []
                            return await provider.get_tool_confirmation(self, tool, args, history)
                    
                        async def handle_elicitation(
                            self,
                            params: types.ElicitRequestParams,
                        ) -> types.ElicitResult | types.ErrorData:
                            """Handle elicitation request for additional information."""
                            provider = self.get_input_provider()
                            history = self.agent.conversation.get_history() if self.pool else []
                            return await provider.get_elicitation(self, params, history)
                    

                    agent property

                    agent: AnyAgent[TDeps, Any]
                    

                    Get the agent instance from the pool.

                    capabilities instance-attribute

                    capabilities: Capabilities
                    

                    Current agent's capabilities.

                    config instance-attribute

                    config: AgentConfig
                    

                    Current agent's specific configuration.

                    converter cached property

                    converter: ConversionManager
                    

                    Get conversion manager from global config.

                    data class-attribute instance-attribute

                    data: TDeps | None = None
                    

                    Custom context data.

                    model_settings class-attribute instance-attribute

                    model_settings: dict[str, Any] = field(default_factory=dict)
                    

                    Model-specific settings.

                    process_manager property

                    process_manager
                    

                    Get process manager from pool.

                    runtime class-attribute instance-attribute

                    runtime: RuntimeConfig | None = None
                    

                    Reference to the runtime configuration.

                    create_default classmethod

                    create_default(
                        name: str,
                        capabilities: Capabilities | None = None,
                        deps: TDeps | None = None,
                        pool: AgentPool | None = None,
                        input_provider: InputProvider | None = None,
                    ) -> AgentContext[TDeps]
                    

                    Create a default agent context with minimal privileges.

                    Parameters:

                        name (str): Name of the agent. Required.
                        capabilities (Capabilities | None): Optional custom capabilities (defaults to minimal access). Default: None.
                        deps (TDeps | None): Optional dependencies for the agent. Default: None.
                        pool (AgentPool | None): Optional pool the agent is part of. Default: None.
                        input_provider (InputProvider | None): Optional input provider for the agent. Default: None.
                    Source code in src/llmling_agent/agent/context.py, lines 50-82
                    @classmethod
                    def create_default(
                        cls,
                        name: str,
                        capabilities: Capabilities | None = None,
                        deps: TDeps | None = None,
                        pool: AgentPool | None = None,
                        input_provider: InputProvider | None = None,
                    ) -> AgentContext[TDeps]:
                        """Create a default agent context with minimal privileges.
                    
                        Args:
                            name: Name of the agent
                            capabilities: Optional custom capabilities (defaults to minimal access)
                            deps: Optional dependencies for the agent
                            pool: Optional pool the agent is part of
                            input_provider: Optional input provider for the agent
                        """
                        from llmling_agent.config.capabilities import Capabilities
                        from llmling_agent.models import AgentConfig, AgentsManifest
                    
                        caps = capabilities or Capabilities()
                        defn = AgentsManifest()
                        cfg = AgentConfig(name=name)
                        return cls(
                            input_provider=input_provider,
                            node_name=name,
                            capabilities=caps,
                            definition=defn,
                            config=cfg,
                            data=deps,
                            pool=pool,
                        )
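
                    A minimal usage sketch (the agent name and deps value are illustrative, not part of the API):

                        from llmling_agent.agent.context import AgentContext

                        # Minimal-privilege context, no pool, plain dict as ad-hoc dependencies
                        ctx = AgentContext.create_default("file-helper", deps={"root": "/tmp"})
                        print(ctx.node_name, ctx.capabilities)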
                    

                    handle_confirmation async

                    handle_confirmation(tool: Tool, args: dict[str, Any]) -> ConfirmationResult
                    

                    Handle tool execution confirmation.

                    Returns "allow" without prompting when confirmation is disabled ("never" mode) or the tool does not require it ("per_tool" mode); otherwise defers to the input provider's decision.

                    Source code in src/llmling_agent/agent/context.py
                    async def handle_confirmation(
                        self,
                        tool: Tool,
                        args: dict[str, Any],
                    ) -> ConfirmationResult:
                        """Handle tool execution confirmation.
                    
                        Returns "allow" without prompting when confirmation is disabled
                        ("never" mode) or the tool does not require it ("per_tool" mode);
                        otherwise defers to the input provider's decision.
                        """
                        provider = self.get_input_provider()
                        mode = self.config.requires_tool_confirmation
                        if (mode == "per_tool" and not tool.requires_confirmation) or mode == "never":
                            return "allow"
                        history = self.agent.conversation.get_history() if self.pool else []
                        return await provider.get_tool_confirmation(self, tool, args, history)
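
                    Because the decision is delegated to the input provider, confirmation behavior can be customized by supplying one. A hedged sketch (the get_tool_confirmation signature mirrors the call above; the import path and the tool.name attribute are assumptions):

                        from llmling_agent import InputProvider  # import path is an assumption

                        class LoggingInputProvider(InputProvider):
                            async def get_tool_confirmation(self, ctx, tool, args, history):
                                # Log the request, then allow execution
                                print(f"confirm? tool={tool.name!r} args={args}")
                                return "allow"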
                    

                    handle_elicitation async

                    handle_elicitation(params: ElicitRequestParams) -> ElicitResult | ErrorData
                    

                    Handle elicitation request for additional information.

                    Source code in src/llmling_agent/agent/context.py
                    async def handle_elicitation(
                        self,
                        params: types.ElicitRequestParams,
                    ) -> types.ElicitResult | types.ErrorData:
                        """Handle elicitation request for additional information."""
                        provider = self.get_input_provider()
                        history = self.agent.conversation.get_history() if self.pool else []
                        return await provider.get_elicitation(self, params, history)
                    

                    ConversationManager

                    Manages conversation state and system prompts.

                    Source code in src/llmling_agent/agent/conversation.py
                    class ConversationManager:
                        """Manages conversation state and system prompts."""
                    
                        @dataclass(frozen=True)
                        class HistoryCleared:
                            """Emitted when chat history is cleared."""
                    
                            session_id: str
                            timestamp: datetime = field(default_factory=get_now)
                    
                        history_cleared = Signal(HistoryCleared)
                    
                        def __init__(
                            self,
                            agent: Agent[Any],
                            session_config: MemoryConfig | None = None,
                            *,
                            resources: Sequence[Resource | PromptType | str] = (),
                        ):
                            """Initialize conversation manager.
                    
                            Args:
                                agent: instance to manage
                                session_config: Optional MemoryConfig
                                resources: Optional paths to load as context
                            """
                            self._agent = agent
                            self.chat_messages = ChatMessageContainer()
                            self._last_messages: list[ChatMessage] = []
                            self._pending_messages: deque[ChatMessage] = deque()
                            self._config = session_config
                            self._resources = list(resources)  # Store for async loading
                            # Generate new ID if none provided
                            self.id = str(uuid4())
                    
                            if session_config is not None and session_config.session is not None:
                                storage = self._agent.context.storage
                                self._current_history = storage.filter_messages_sync(session_config.session)
                                if session_config.session.name:
                                    self.id = session_config.session.name
                    
                            # Note: max_messages and max_tokens will be handled in add_message/get_history
                            # to maintain the rolling window during conversation
                    
                        def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                            """Get all initialization coroutines."""
                            sources = self._resources
                            self._resources = []  # Clear so we don't load again on async init
                            return [self.load_context_source(source) for source in sources]
                    
                        async def __aenter__(self) -> Self:
                            """Initialize when used standalone."""
                            if tasks := self.get_initialization_tasks():
                                await asyncio.gather(*tasks)
                            return self
                    
                        async def __aexit__(
                            self,
                            exc_type: type[BaseException] | None,
                            exc_val: BaseException | None,
                            exc_tb: TracebackType | None,
                        ):
                            """Clean up any pending messages."""
                            self._pending_messages.clear()
                    
                        def __bool__(self) -> bool:
                            return bool(self._pending_messages) or bool(self.chat_messages)
                    
                        def __repr__(self) -> str:
                            return f"ConversationManager(id={self.id!r})"
                    
                        def __prompt__(self) -> str:
                            if not self.chat_messages:
                                return "No conversation history"
                    
                            last_msgs = self.chat_messages[-2:]
                            parts = ["Recent conversation:"]
                            parts.extend(msg.format() for msg in last_msgs)
                            return "\n".join(parts)
                    
                        @overload
                        def __getitem__(self, key: int) -> ChatMessage[Any]: ...
                    
                        @overload
                        def __getitem__(self, key: slice | str) -> list[ChatMessage[Any]]: ...
                    
                        def __getitem__(
                            self, key: int | slice | str
                        ) -> ChatMessage[Any] | list[ChatMessage[Any]]:
                            """Access conversation history.
                    
                            Args:
                                key: Either:
                                    - Integer index for single message
                                    - Slice for message range
                                    - Agent name for conversation history with that agent
                            """
                            match key:
                                case int():
                                    return self.chat_messages[key]
                                case slice():
                                    return list(self.chat_messages[key])
                                case str():
                                    query = SessionQuery(name=key)
                                    return self._agent.context.storage.filter_messages_sync(query=query)
                                case _:
                                    msg = f"Invalid key type: {type(key)}"
                                    raise TypeError(msg)
                    
                        def __contains__(self, item: Any) -> bool:
                            """Check if item is in history."""
                            return item in self.chat_messages
                    
                        def __len__(self) -> int:
                            """Get length of history."""
                            return len(self.chat_messages)
                    
                        def get_message_tokens(self, message: ChatMessage) -> int:
                            """Get token count for a single message."""
                            content = message.format()
                            return count_tokens(content, self._agent.model_name)
                    
                        async def format_history(
                            self,
                            *,
                            max_tokens: int | None = None,
                            include_system: bool = False,
                            format_template: str | None = None,
                            num_messages: int | None = None,
                        ) -> str:
                            """Format conversation history as a single context message.
                    
                            Args:
                                max_tokens: Optional limit to include only last N tokens
                                include_system: Whether to include system messages
                                format_template: Optional custom format (defaults to agent/message pairs)
                                num_messages: Optional limit to include only last N messages
                            """
                            template = format_template or "Agent {agent}: {content}\n"
                            messages: list[str] = []
                            token_count = 0
                    
                            # Get messages, optionally limited
                            history: Sequence[ChatMessage[Any]] = self.chat_messages
                            if num_messages:
                                history = history[-num_messages:]
                    
                            if max_tokens:
                                history = list(reversed(history))  # Start from newest when token limited
                    
                            for msg in history:
                                # Check role directly from ChatMessage
                                if not include_system and msg.role == "system":
                                    continue
                                name = msg.name or msg.role.title()
                                formatted = template.format(agent=name, content=str(msg.content))
                    
                                if max_tokens:
                                    # Count tokens in this message
                                    if msg.cost_info:
                                        msg_tokens = msg.cost_info.token_usage["total"]
                                    else:
                                        # Fallback to tiktoken if no cost info
                                        msg_tokens = self.get_message_tokens(msg)
                    
                                    if token_count + msg_tokens > max_tokens:
                                        break
                                    token_count += msg_tokens
                                    # Add to front since we're going backwards
                                    messages.insert(0, formatted)
                                else:
                                    messages.append(formatted)
                    
                            return "\n".join(messages)
                    
                        async def load_context_source(self, source: Resource | PromptType | str):
                            """Load context from a single source."""
                            try:
                                match source:
                                    case str():
                                        await self.add_context_from_path(source)
                                    case BaseResource():
                                        await self.add_context_from_resource(source)
                                    case BasePrompt():
                                        await self.add_context_from_prompt(source)
                            except Exception:
                                msg = "Failed to load context from %s"
                                logger.exception(msg, "file" if isinstance(source, str) else source.type)
                    
                        def load_history_from_database(
                            self,
                            session: SessionIdType | SessionQuery = None,
                            *,
                            since: datetime | None = None,
                            until: datetime | None = None,
                            roles: set[MessageRole] | None = None,
                            limit: int | None = None,
                        ):
                            """Load conversation history from database.
                    
                            Args:
                                session: Session ID or query config
                                since: Only include messages after this time (override)
                                until: Only include messages before this time (override)
                                roles: Only include messages with these roles (override)
                                limit: Maximum number of messages to return (override)
                            """
                            storage = self._agent.context.storage
                            match session:
                                case SessionQuery() as query:
                                    # Override query params only where explicitly provided
                                    if since is not None or until is not None or roles or limit:
                                        update = {
                                            "since": since.isoformat() if since else None,
                                            "until": until.isoformat() if until else None,
                                            "roles": roles,
                                            "limit": limit,
                                        }
                                        # Drop unset values so they don't clobber existing query fields
                                        update = {k: v for k, v in update.items() if v is not None}
                                        query = query.model_copy(update=update)
                                    if query.name:
                                        self.id = query.name
                                case str() | UUID():
                                    self.id = str(session)
                                    query = SessionQuery(
                                        name=self.id,
                                        since=since.isoformat() if since else None,
                                        until=until.isoformat() if until else None,
                                        roles=roles,
                                        limit=limit,
                                    )
                                case None:
                                    # Use current session ID
                                    query = SessionQuery(
                                        name=self.id,
                                        since=since.isoformat() if since else None,
                                        until=until.isoformat() if until else None,
                                        roles=roles,
                                        limit=limit,
                                    )
                                case _:
                                    msg = f"Invalid type for session: {type(session)}"
                                    raise ValueError(msg)
                            self.chat_messages.clear()
                            self.chat_messages.extend(storage.filter_messages_sync(query))
                    
                        def get_history(
                            self,
                            include_pending: bool = True,
                            do_filter: bool = True,
                        ) -> list[ChatMessage]:
                            """Get conversation history.
                    
                            Args:
                                include_pending: Whether to include pending messages
                                do_filter: Whether to apply memory config limits (max_tokens, max_messages)
                    
                            Returns:
                                Filtered list of messages in chronological order
                            """
                            # 1. Merge pending messages into history
                            if include_pending and self._pending_messages:
                                self.chat_messages.extend(self._pending_messages)
                                self._pending_messages.clear()
                    
                            # 2. Start with original history
                            history: Sequence[ChatMessage[Any]] = self.chat_messages
                    
                            # 3. Only filter if needed
                            if do_filter and self._config:
                                # First filter by message count (simple slice)
                                if self._config.max_messages:
                                    history = history[-self._config.max_messages :]
                    
                                # Then filter by tokens if needed
                                if self._config.max_tokens:
                                    token_count = 0
                                    filtered = []
                                    # Collect messages from newest to oldest until we hit the limit
                                    for msg in reversed(history):
                                        msg_tokens = self.get_message_tokens(msg)
                                        if token_count + msg_tokens > self._config.max_tokens:
                                            break
                                        token_count += msg_tokens
                                        filtered.append(msg)
                                    history = list(reversed(filtered))
                    
                            return list(history)
                    
                        def get_pending_messages(self) -> list[ChatMessage]:
                            """Get messages that will be included in next interaction."""
                            return list(self._pending_messages)
                    
                        def clear_pending(self):
                            """Clear pending messages without adding them to history."""
                            self._pending_messages.clear()
                    
                        def set_history(self, history: list[ChatMessage]):
                            """Update conversation history after run."""
                            self.chat_messages.clear()
                            self.chat_messages.extend(history)
                    
                        def clear(self):
                            """Clear conversation history and prompts."""
                            self.chat_messages = ChatMessageContainer()
                            self._last_messages = []
                            event = self.HistoryCleared(session_id=str(self.id))
                            self.history_cleared.emit(event)
                    
                        @asynccontextmanager
                        async def temporary_state(
                            self,
                            history: list[AnyPromptType] | SessionQuery | None = None,
                            *,
                            replace_history: bool = False,
                        ) -> AsyncIterator[Self]:
                            """Temporarily set conversation history.
                    
                            Args:
                                history: Optional list of prompts to use as temporary history.
                                        Can be strings, BasePrompts, or other prompt types.
                                replace_history: If True, only use provided history. If False, append
                                        to existing history.
                            """
                            from toprompt import to_prompt
                    
                            old_history = self.chat_messages.copy()
                    
                            try:
                                messages: Sequence[ChatMessage[Any]] = ChatMessageContainer()
                                if history is not None:
                                    if isinstance(history, SessionQuery):
                                        messages = await self._agent.context.storage.filter_messages(history)
                                    else:
                                        messages = [
                                            ChatMessage(content=await to_prompt(p), role="user")
                                            for p in history
                                        ]
                    
                                if replace_history:
                                    self.chat_messages = ChatMessageContainer(messages)
                                else:
                                    self.chat_messages.extend(messages)
                    
                                yield self
                    
                            finally:
                                self.chat_messages = old_history
                    
                        def add_chat_messages(self, messages: Sequence[ChatMessage]):
                            """Add new messages to history and update last_messages."""
                            self._last_messages = list(messages)
                            self.chat_messages.extend(messages)
                    
                        @property
                        def last_run_messages(self) -> list[ChatMessage]:
                            """Get messages from the last run converted to our format."""
                            return self._last_messages
                    
                        def add_context_message(
                            self,
                            content: str,
                            source: str | None = None,
                            **metadata: Any,
                        ):
                            """Add a context message.
                    
                            Args:
                                content: Text content to add
                                source: Description of content source
                                **metadata: Additional metadata to include with the message
                            """
                            meta_str = ""
                            if metadata:
                                meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                                meta_str = f"\nMetadata:\n{meta_str}\n"
                    
                            header = f"Content from {source}:" if source else "Additional context:"
                            formatted = f"{header}{meta_str}\n{content}\n"
                    
                            chat_message = ChatMessage[str](
                                content=formatted,
                                role="user",
                                name="user",
                                model=self._agent.model_name,
                                metadata=metadata,
                                conversation_id="context",  # TODO: should probably allow DB field to be NULL
                            )
                            self._pending_messages.append(chat_message)
                            # Emit as user message - will trigger logging through existing flow
                            self._agent.message_received.emit(chat_message)
                    
                        async def add_context_from_path(
                            self,
                            path: StrPath,
                            *,
                            convert_to_md: bool = False,
                            **metadata: Any,
                        ):
                            """Add file or URL content as context message.
                    
                            Args:
                                path: Any UPath-supported path
                                convert_to_md: Whether to convert content to markdown
                                **metadata: Additional metadata to include with the message
                    
                            Raises:
                                ValueError: If content cannot be loaded or converted
                            """
                            from upathtools import to_upath
                    
                            path_obj = to_upath(path)
                            if convert_to_md:
                                content = await self._agent.context.converter.convert_file(path)
                                source = f"markdown:{path_obj.name}"
                            else:
                                content = await read_path(path)
                                source = f"{path_obj.protocol}:{path_obj.name}"
                            self.add_context_message(content, source=source, **metadata)
                    
                        async def add_context_from_resource(self, resource: Resource | str):
                            """Add content from a LLMling resource."""
                            if not self._agent.runtime:
                                msg = "No runtime available"
                                raise RuntimeError(msg)
                    
                            if isinstance(resource, str):
                                content = await self._agent.runtime.load_resource(resource)
                                self.add_context_message(
                                    str(content.content),
                                    source=f"Resource {resource}",
                                    mime_type=content.metadata.mime_type,
                                    **content.metadata.extra,
                                )
                            else:
                                loader = self._agent.runtime._loader_registry.get_loader(resource)
                                async for content in loader.load(resource):
                                    self.add_context_message(
                                        str(content.content),
                                        source=f"{resource.type}:{resource.uri}",
                                        mime_type=content.metadata.mime_type,
                                        **content.metadata.extra,
                                    )
                    
                        async def add_context_from_prompt(
                            self,
                            prompt: PromptType,
                            metadata: dict[str, Any] | None = None,
                            **kwargs: Any,
                        ):
                            """Add rendered prompt content as context message.
                    
                            Args:
                                prompt: LLMling prompt (static, dynamic, or file-based)
                                metadata: Additional metadata to include with the message
                                kwargs: Optional kwargs for prompt formatting
                            """
                            try:
                                # Format the prompt using LLMling's prompt system
                                messages = await prompt.format(kwargs)
                                # Extract text content from all messages
                                content = "\n\n".join(msg.get_text_content() for msg in messages)
                    
                                self.add_context_message(
                                    content,
                                    source=f"prompt:{prompt.name or prompt.type}",
                                    prompt_args=kwargs,
                                    **(metadata or {}),
                                )
                            except Exception as e:
                                msg = f"Failed to format prompt: {e}"
                                raise ValueError(msg) from e
                    
                        def get_history_tokens(self) -> int:
                            """Get token count for current history."""
                            # Use cost_info if available
                            return self.chat_messages.get_history_tokens(self._agent.model_name)
                    
                        def get_pending_tokens(self) -> int:
                            """Get token count for pending messages."""
                            text = "\n".join(msg.format() for msg in self._pending_messages)
                            return count_tokens(text, self._agent.model_name)
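
                    Putting it together, a typical interaction with a manager might look like this (agent construction elided; a sketch rather than a canonical recipe):

                        async def inspect(agent) -> None:
                            conv = agent.conversation  # the agent's ConversationManager

                            # Queue out-of-band context for the next interaction
                            conv.add_context_message("Prefer metric units.", source="style-guide")

                            # Run with a throwaway history that is restored on exit
                            async with conv.temporary_state(["Be terse."], replace_history=True):
                                ...

                            print(len(conv), "messages;", conv.get_history_tokens(), "tokens")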
                    

                    last_run_messages property

                    last_run_messages: list[ChatMessage]
                    

                    Get messages from the last run converted to our format.

                    HistoryCleared dataclass

                    Emitted when chat history is cleared.

                    Source code in src/llmling_agent/agent/conversation.py
                    @dataclass(frozen=True)
                    class HistoryCleared:
                        """Emitted when chat history is cleared."""
                    
                        session_id: str
                        timestamp: datetime = field(default_factory=get_now)
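
                    Consumers can subscribe to history_cleared to react to these events. A sketch, with conv a ConversationManager instance, assuming the usual connect-style signal API (only emit appears in the source):

                        def on_cleared(event: ConversationManager.HistoryCleared) -> None:
                            print(f"session {event.session_id} cleared at {event.timestamp}")

                        conv.history_cleared.connect(on_cleared)  # .connect is an assumption
                        conv.clear()  # emits HistoryCleared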
                    

                    __aenter__ async

                    __aenter__() -> Self
                    

                    Initialize when used standalone.

                    Source code in src/llmling_agent/agent/conversation.py
                    async def __aenter__(self) -> Self:
                        """Initialize when used standalone."""
                        if tasks := self.get_initialization_tasks():
                            await asyncio.gather(*tasks)
                        return self
                    

                    __aexit__ async

                    __aexit__(
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    )
                    

                    Clean up any pending messages.

                    Source code in src/llmling_agent/agent/conversation.py
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ):
                        """Clean up any pending messages."""
                        self._pending_messages.clear()
                    

                    __contains__

                    __contains__(item: Any) -> bool
                    

                    Check if item is in history.

                    Source code in src/llmling_agent/agent/conversation.py
                    def __contains__(self, item: Any) -> bool:
                        """Check if item is in history."""
                        return item in self.chat_messages
                    

                    __getitem__

                    __getitem__(key: int) -> ChatMessage[Any]
                    
                    __getitem__(key: slice | str) -> list[ChatMessage[Any]]
                    
                    __getitem__(key: int | slice | str) -> ChatMessage[Any] | list[ChatMessage[Any]]
                    

                    Access conversation history.

                    Parameters:

                        key (int | slice | str): Either an integer index for a single message,
                            a slice for a message range, or an agent name for the conversation
                            history with that agent. Required.
                    Source code in src/llmling_agent/agent/conversation.py
                    def __getitem__(
                        self, key: int | slice | str
                    ) -> ChatMessage[Any] | list[ChatMessage[Any]]:
                        """Access conversation history.
                    
                        Args:
                            key: Either:
                                - Integer index for single message
                                - Slice for message range
                                - Agent name for conversation history with that agent
                        """
                        match key:
                            case int():
                                return self.chat_messages[key]
                            case slice():
                                return list(self.chat_messages[key])
                            case str():
                                query = SessionQuery(name=key)
                                return self._agent.context.storage.filter_messages_sync(query=query)
                            case _:
                                msg = f"Invalid key type: {type(key)}"
                                raise TypeError(msg)
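
                    In practice, with conv a ConversationManager:

                        newest = conv[-1]        # single ChatMessage
                        window = conv[-5:]       # list of the last five messages
                        with_bob = conv["bob"]   # stored history for the session/agent named "bob"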
                    

                    __init__

                    __init__(
                        agent: Agent[Any],
                        session_config: MemoryConfig | None = None,
                        *,
                        resources: Sequence[Resource | PromptType | str] = (),
                    )
                    

                    Initialize conversation manager.

                    Parameters:

                        agent (Agent[Any]): Instance to manage. Required.
                        session_config (MemoryConfig | None): Optional MemoryConfig. Default: None.
                        resources (Sequence[Resource | PromptType | str]): Optional paths to load
                            as context. Default: ().
                    Source code in src/llmling_agent/agent/conversation.py
                    def __init__(
                        self,
                        agent: Agent[Any],
                        session_config: MemoryConfig | None = None,
                        *,
                        resources: Sequence[Resource | PromptType | str] = (),
                    ):
                        """Initialize conversation manager.
                    
                        Args:
                            agent: instance to manage
                            session_config: Optional MemoryConfig
                            resources: Optional paths to load as context
                        """
                        self._agent = agent
                        self.chat_messages = ChatMessageContainer()
                        self._last_messages: list[ChatMessage] = []
                        self._pending_messages: deque[ChatMessage] = deque()
                        self._config = session_config
                        self._resources = list(resources)  # Store for async loading
                        # Generate new ID if none provided
                        self.id = str(uuid4())
                    
                        if session_config is not None and session_config.session is not None:
                            storage = self._agent.context.storage
                            self._current_history = storage.filter_messages_sync(session_config.session)
                            if session_config.session.name:
                                self.id = session_config.session.name
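
                    A construction sketch; the MemoryConfig import path and field names (session, max_messages, max_tokens) are inferred from how the manager reads its config and should be treated as assumptions:

                        from llmling_agent.models import MemoryConfig  # import path is an assumption

                        memory = MemoryConfig(max_messages=50, max_tokens=4_000)
                        conv = ConversationManager(agent, session_config=memory, resources=["notes.md"])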
                    

                    __len__

                    __len__() -> int
                    

                    Get length of history.

                    Source code in src/llmling_agent/agent/conversation.py
                    def __len__(self) -> int:
                        """Get length of history."""
                        return len(self.chat_messages)
                    

                    add_chat_messages

                    add_chat_messages(messages: Sequence[ChatMessage])
                    

                    Add new messages to history and update last_messages.

                    Source code in src/llmling_agent/agent/conversation.py
                    def add_chat_messages(self, messages: Sequence[ChatMessage]):
                        """Add new messages to history and update last_messages."""
                        self._last_messages = list(messages)
                        self.chat_messages.extend(messages)
                    

                    add_context_from_path async

                    add_context_from_path(path: StrPath, *, convert_to_md: bool = False, **metadata: Any)
                    

                    Add file or URL content as context message.

                    Parameters:

                        path (StrPath): Any UPath-supported path. Required.
                        convert_to_md (bool): Whether to convert content to markdown. Default: False.
                        **metadata (Any): Additional metadata to include with the message. Default: {}.

                    Raises:

                        ValueError: If content cannot be loaded or converted.

                    Source code in src/llmling_agent/agent/conversation.py
                    async def add_context_from_path(
                        self,
                        path: StrPath,
                        *,
                        convert_to_md: bool = False,
                        **metadata: Any,
                    ):
                        """Add file or URL content as context message.
                    
                        Args:
                            path: Any UPath-supported path
                            convert_to_md: Whether to convert content to markdown
                            **metadata: Additional metadata to include with the message
                    
                        Raises:
                            ValueError: If content cannot be loaded or converted
                        """
                        from upathtools import to_upath
                    
                        path_obj = to_upath(path)
                        if convert_to_md:
                            content = await self._agent.context.converter.convert_file(path)
                            source = f"markdown:{path_obj.name}"
                        else:
                            content = await read_path(path)
                            source = f"{path_obj.protocol}:{path_obj.name}"
                        self.add_context_message(content, source=source, **metadata)
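
                    Usage sketch, with conv a ConversationManager (the paths are illustrative; remote URLs are handled through UPath):

                        # Local file added verbatim, tagged with custom metadata
                        await conv.add_context_from_path("docs/spec.md", topic="spec")

                        # Remote page converted to markdown before being added
                        await conv.add_context_from_path("https://example.com/page.html", convert_to_md=True)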
                    

                    add_context_from_prompt async

                    add_context_from_prompt(
                        prompt: PromptType, metadata: dict[str, Any] | None = None, **kwargs: Any
                    )
                    

                    Add rendered prompt content as context message.

                    Parameters:

                        prompt (PromptType): LLMling prompt (static, dynamic, or file-based). Required.
                        metadata (dict[str, Any] | None): Additional metadata to include with the
                            message. Default: None.
                        kwargs (Any): Optional kwargs for prompt formatting. Default: {}.
                    Source code in src/llmling_agent/agent/conversation.py
                    async def add_context_from_prompt(
                        self,
                        prompt: PromptType,
                        metadata: dict[str, Any] | None = None,
                        **kwargs: Any,
                    ):
                        """Add rendered prompt content as context message.
                    
                        Args:
                            prompt: LLMling prompt (static, dynamic, or file-based)
                            metadata: Additional metadata to include with the message
                            kwargs: Optional kwargs for prompt formatting
                        """
                        try:
                            # Format the prompt using LLMling's prompt system
                            messages = await prompt.format(kwargs)
                            # Extract text content from all messages
                            content = "\n\n".join(msg.get_text_content() for msg in messages)
                    
                            self.add_context_message(
                                content,
                                source=f"prompt:{prompt.name or prompt.type}",
                                prompt_args=kwargs,
                                **(metadata or {}),
                            )
                        except Exception as e:
                            msg = f"Failed to format prompt: {e}"
                            raise ValueError(msg) from e
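
                    A hedged sketch of typical use (my_prompt stands for any configured LLMling prompt object; the audience formatting kwarg is purely illustrative):

                        # Render the prompt via LLMling's prompt system and queue the text;
                        # extra kwargs are forwarded to prompt.format()
                        await agent.conversation.add_context_from_prompt(
                            my_prompt,
                            metadata={"origin": "style-guide"},  # illustrative metadata
                            audience="developers",  # hypothetical formatting kwarg
                        )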
                    

                    add_context_from_resource async

                    add_context_from_resource(resource: Resource | str)
                    

                    Add content from an LLMling resource.

                    Source code in src/llmling_agent/agent/conversation.py
                    async def add_context_from_resource(self, resource: Resource | str):
                        """Add content from a LLMling resource."""
                        if not self._agent.runtime:
                            msg = "No runtime available"
                            raise RuntimeError(msg)
                    
                        if isinstance(resource, str):
                            content = await self._agent.runtime.load_resource(resource)
                            self.add_context_message(
                                str(content.content),
                                source=f"Resource {resource}",
                                mime_type=content.metadata.mime_type,
                                **content.metadata.extra,
                            )
                        else:
                            loader = self._agent.runtime._loader_registry.get_loader(resource)
                            async for content in loader.load(resource):
                                self.add_context_message(
                                    str(content.content),
                                    source=f"{resource.type}:{resource.uri}",
                                    mime_type=content.metadata.mime_type,
                                    **content.metadata.extra,
                                )
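
                    For illustration (assumes the agent runs with a RuntimeConfig that defines a resource named "guidelines"; the name is an assumption):

                        # By name: the runtime loads the resource and attaches its metadata
                        await agent.conversation.add_context_from_resource("guidelines")

                    Passing a Resource object instead of a name routes it through the registered loader and adds one context message per loaded content chunk.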
                    

                    add_context_message

                    add_context_message(content: str, source: str | None = None, **metadata: Any)
                    

                    Add a context message.

                    Parameters:

                        content (str): Text content to add. Required.
                        source (str | None): Description of the content source. Default: None.
                        **metadata (Any): Additional metadata to include with the message. Default: {}.
                    Source code in src/llmling_agent/agent/conversation.py
                    def add_context_message(
                        self,
                        content: str,
                        source: str | None = None,
                        **metadata: Any,
                    ):
                        """Add a context message.
                    
                        Args:
                            content: Text content to add
                            source: Description of content source
                            **metadata: Additional metadata to include with the message
                        """
                        meta_str = ""
                        if metadata:
                            meta_str = "\n".join(f"{k}: {v}" for k, v in metadata.items())
                            meta_str = f"\nMetadata:\n{meta_str}\n"
                    
                        header = f"Content from {source}:" if source else "Additional context:"
                        formatted = f"{header}{meta_str}\n{content}\n"
                    
                        chat_message = ChatMessage[str](
                            content=formatted,
                            role="user",
                            name="user",
                            model=self._agent.model_name,
                            metadata=metadata,
                            conversation_id="context",  # TODO: should probably allow DB field to be NULL
                        )
                        self._pending_messages.append(chat_message)
                        # Emit as user message - will trigger logging through existing flow
                        self._agent.message_received.emit(chat_message)
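
                    A minimal sketch; the metadata keys are arbitrary and chosen for illustration:

                        agent.conversation.add_context_message(
                            "The customer prefers concise answers.",
                            source="crm-export",  # rendered into the message header
                            priority="high",  # arbitrary metadata, included in the text
                        )

                    The message is queued as pending and merged into the conversation on the next interaction.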
                    

                    clear

                    clear()
                    

                    Clear conversation history and prompts.

                    Source code in src/llmling_agent/agent/conversation.py
                    def clear(self):
                        """Clear conversation history and prompts."""
                        self.chat_messages = ChatMessageContainer()
                        self._last_messages = []
                        event = self.HistoryCleared(session_id=str(self.id))
                        self.history_cleared.emit(event)
                    

                    clear_pending

                    clear_pending()
                    

                    Clear pending messages without adding them to history.

                    Source code in src/llmling_agent/agent/conversation.py
                    def clear_pending(self):
                        """Clear pending messages without adding them to history."""
                        self._pending_messages.clear()
                    

                    format_history async

                    format_history(
                        *,
                        max_tokens: int | None = None,
                        include_system: bool = False,
                        format_template: str | None = None,
                        num_messages: int | None = None,
                    ) -> str
                    

                    Format conversation history as a single context message.

                    Parameters:

                        max_tokens (int | None): Optional limit to include only the last N tokens. Default: None.
                        include_system (bool): Whether to include system messages. Default: False.
                        format_template (str | None): Optional custom format (defaults to agent/message pairs). Default: None.
                        num_messages (int | None): Optional limit to include only the last N messages. Default: None.
                    Source code in src/llmling_agent/agent/conversation.py
                    async def format_history(
                        self,
                        *,
                        max_tokens: int | None = None,
                        include_system: bool = False,
                        format_template: str | None = None,
                        num_messages: int | None = None,
                    ) -> str:
                        """Format conversation history as a single context message.
                    
                        Args:
                            max_tokens: Optional limit to include only last N tokens
                            include_system: Whether to include system messages
                            format_template: Optional custom format (defaults to agent/message pairs)
                            num_messages: Optional limit to include only last N messages
                        """
                        template = format_template or "Agent {agent}: {content}\n"
                        messages: list[str] = []
                        token_count = 0
                    
                        # Get messages, optionally limited
                        history: Sequence[ChatMessage[Any]] = self.chat_messages
                        if num_messages:
                            history = history[-num_messages:]
                    
                        if max_tokens:
                            history = list(reversed(history))  # Start from newest when token limited
                    
                        for msg in history:
                            # Check role directly from ChatMessage
                            if not include_system and msg.role == "system":
                                continue
                            name = msg.name or msg.role.title()
                            formatted = template.format(agent=name, content=str(msg.content))
                    
                            if max_tokens:
                                # Count tokens in this message
                                if msg.cost_info:
                                    msg_tokens = msg.cost_info.token_usage["total"]
                                else:
                                    # Fallback to tiktoken if no cost info
                                    msg_tokens = self.get_message_tokens(msg)
                    
                                if token_count + msg_tokens > max_tokens:
                                    break
                                token_count += msg_tokens
                                # Add to front since we're going backwards
                                messages.insert(0, formatted)
                            else:
                                messages.append(formatted)
                    
                        return "\n".join(messages)
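
                    A usage sketch (agent.conversation as above):

                        # Render the last ten non-system messages, capped at roughly 1000 tokens
                        summary = await agent.conversation.format_history(
                            max_tokens=1000,
                            num_messages=10,
                            format_template="{agent}> {content}\n",
                        )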
                    

                    get_history

                    get_history(include_pending: bool = True, do_filter: bool = True) -> list[ChatMessage]
                    

                    Get conversation history.

                    Parameters:

                        include_pending (bool): Whether to include pending messages. Default: True.
                        do_filter (bool): Whether to apply memory config limits (max_tokens, max_messages). Default: True.

                    Returns:

                        list[ChatMessage]: Filtered list of messages in chronological order.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_history(
                        self,
                        include_pending: bool = True,
                        do_filter: bool = True,
                    ) -> list[ChatMessage]:
                        """Get conversation history.
                    
                        Args:
                            include_pending: Whether to include pending messages
                            do_filter: Whether to apply memory config limits (max_tokens, max_messages)
                    
                        Returns:
                            Filtered list of messages in chronological order
                        """
                        if include_pending and self._pending_messages:
                            self.chat_messages.extend(self._pending_messages)
                            self._pending_messages.clear()
                    
                        # 2. Start with original history
                        history: Sequence[ChatMessage[Any]] = self.chat_messages
                    
                        # 3. Only filter if needed
                        if do_filter and self._config:
                            # First filter by message count (simple slice)
                            if self._config.max_messages:
                                history = history[-self._config.max_messages :]
                    
                            # Then filter by tokens if needed
                            if self._config.max_tokens:
                                token_count = 0
                                filtered = []
                                # Collect messages from newest to oldest until we hit the limit
                                for msg in reversed(history):
                                    msg_tokens = self.get_message_tokens(msg)
                                    if token_count + msg_tokens > self._config.max_tokens:
                                        break
                                    token_count += msg_tokens
                                    filtered.append(msg)
                                history = list(reversed(filtered))
                    
                        return list(history)
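
                    For illustration:

                        # History with memory-config limits applied; pending messages are
                        # flushed into the permanent history as a side effect
                        messages = agent.conversation.get_history()

                        # Raw history, ignoring pending messages and memory limits
                        raw = agent.conversation.get_history(include_pending=False, do_filter=False)

                    Note that include_pending=True moves pending messages into chat_messages rather than merely including them in the returned list.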
                    

                    get_history_tokens

                    get_history_tokens() -> int
                    

                    Get token count for current history.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_history_tokens(self) -> int:
                        """Get token count for current history."""
                        # Use cost_info if available
                        return self.chat_messages.get_history_tokens(self._agent.model_name)
                    

                    get_initialization_tasks

                    get_initialization_tasks() -> list[Coroutine[Any, Any, Any]]
                    

                    Get all initialization coroutines.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_initialization_tasks(self) -> list[Coroutine[Any, Any, Any]]:
                        """Get all initialization coroutines."""
                        sources = self._resources
                        self._resources = []  # Clear so we don't load again on async init
                        return [self.load_context_source(source) for source in sources]
                    

                    get_message_tokens

                    get_message_tokens(message: ChatMessage) -> int
                    

                    Get token count for a single message.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_message_tokens(self, message: ChatMessage) -> int:
                        """Get token count for a single message."""
                        content = "\n".join(message.format())
                        return count_tokens(content, self._agent.model_name)
                    

                    get_pending_messages

                    get_pending_messages() -> list[ChatMessage]
                    

                    Get messages that will be included in next interaction.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_pending_messages(self) -> list[ChatMessage]:
                        """Get messages that will be included in next interaction."""
                        return list(self._pending_messages)
                    

                    get_pending_tokens

                    get_pending_tokens() -> int
                    

                    Get token count for pending messages.

                    Source code in src/llmling_agent/agent/conversation.py
                    def get_pending_tokens(self) -> int:
                        """Get token count for pending messages."""
                        # Reuse per-message counting (msg.format() yields lines, not one string)
                        return sum(self.get_message_tokens(msg) for msg in self._pending_messages)
                    

                    load_context_source async

                    load_context_source(source: Resource | PromptType | str)
                    

                    Load context from a single source.

                    Source code in src/llmling_agent/agent/conversation.py
                    async def load_context_source(self, source: Resource | PromptType | str):
                        """Load context from a single source."""
                        try:
                            match source:
                                case str():
                                    await self.add_context_from_path(source)
                                case BaseResource():
                                    await self.add_context_from_resource(source)
                                case BasePrompt():
                                    await self.add_context_from_prompt(source)
                        except Exception:
                            msg = "Failed to load context from %s"
                            logger.exception(msg, "file" if isinstance(source, str) else source.type)
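
                    A short sketch; plain strings are treated as paths or URLs, while resource and prompt objects dispatch to their specialized loaders, and failures are logged rather than raised:

                        # Equivalent to add_context_from_path for plain strings
                        await agent.conversation.load_context_source("README.md")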
                    

                    load_history_from_database

                    load_history_from_database(
                        session: SessionIdType | SessionQuery = None,
                        *,
                        since: datetime | None = None,
                        until: datetime | None = None,
                        roles: set[MessageRole] | None = None,
                        limit: int | None = None,
                    )
                    

                    Load conversation history from database.

                    Parameters:

                        session (SessionIdType | SessionQuery): Session ID or query config. Default: None.
                        since (datetime | None): Only include messages after this time (override). Default: None.
                        until (datetime | None): Only include messages before this time (override). Default: None.
                        roles (set[MessageRole] | None): Only include messages with these roles (override). Default: None.
                        limit (int | None): Maximum number of messages to return (override). Default: None.
                    Source code in src/llmling_agent/agent/conversation.py
                    def load_history_from_database(
                        self,
                        session: SessionIdType | SessionQuery = None,
                        *,
                        since: datetime | None = None,
                        until: datetime | None = None,
                        roles: set[MessageRole] | None = None,
                        limit: int | None = None,
                    ):
                        """Load conversation history from database.
                    
                        Args:
                            session: Session ID or query config
                            since: Only include messages after this time (override)
                            until: Only include messages before this time (override)
                            roles: Only include messages with these roles (override)
                            limit: Maximum number of messages to return (override)
                        """
                        storage = self._agent.context.storage
                        match session:
                            case SessionQuery() as query:
                                # Override query params if provided
                                if since is not None or until is not None or roles or limit:
                                    update = {
                                        "since": since.isoformat() if since else None,
                                        "until": until.isoformat() if until else None,
                                        "roles": roles,
                                        "limit": limit,
                                    }
                                    query = query.model_copy(update=update)
                                if query.name:
                                    self.id = query.name
                            case str() | UUID():
                                self.id = str(session)
                                query = SessionQuery(
                                    name=self.id,
                                    since=since.isoformat() if since else None,
                                    until=until.isoformat() if until else None,
                                    roles=roles,
                                    limit=limit,
                                )
                            case None:
                                # Use current session ID
                                query = SessionQuery(
                                    name=self.id,
                                    since=since.isoformat() if since else None,
                                    until=until.isoformat() if until else None,
                                    roles=roles,
                                    limit=limit,
                                )
                            case _:
                                msg = f"Invalid type for session: {type(session)}"
                                raise ValueError(msg)
                        self.chat_messages.clear()
                        self.chat_messages.extend(storage.filter_messages_sync(query))
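
                    A hedged example (assumes messages were previously logged under the session name "support-chat"):

                        from datetime import datetime, timedelta

                        # Replace the in-memory history with the last day's stored messages
                        agent.conversation.load_history_from_database(
                            "support-chat",
                            since=datetime.now() - timedelta(days=1),
                            limit=50,
                        )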
                    

                    set_history

                    set_history(history: list[ChatMessage])
                    

                    Update conversation history after run.

                    Source code in src/llmling_agent/agent/conversation.py
                    def set_history(self, history: list[ChatMessage]):
                        """Update conversation history after run."""
                        self.chat_messages.clear()
                        self.chat_messages.extend(history)
                    

                    temporary_state async

                    temporary_state(
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        *,
                        replace_history: bool = False,
                    ) -> AsyncIterator[Self]
                    

                    Temporarily set conversation history.

                    Parameters:

                        history (list[AnyPromptType] | SessionQuery | None): Optional prompts to use as temporary history; can be strings, BasePrompts, or other prompt types. Default: None.
                        replace_history (bool): If True, use only the provided history; if False, append it to the existing history. Default: False.
                    Source code in src/llmling_agent/agent/conversation.py
                    @asynccontextmanager
                    async def temporary_state(
                        self,
                        history: list[AnyPromptType] | SessionQuery | None = None,
                        *,
                        replace_history: bool = False,
                    ) -> AsyncIterator[Self]:
                        """Temporarily set conversation history.
                    
                        Args:
                            history: Optional list of prompts to use as temporary history.
                                    Can be strings, BasePrompts, or other prompt types.
                            replace_history: If True, only use provided history. If False, append
                                    to existing history.
                        """
                        from toprompt import to_prompt
                    
                        old_history = self.chat_messages.copy()
                    
                        try:
                            messages: Sequence[ChatMessage[Any]] = ChatMessageContainer()
                            if history is not None:
                                if isinstance(history, SessionQuery):
                                    messages = await self._agent.context.storage.filter_messages(history)
                                else:
                                    messages = [
                                        ChatMessage(content=await to_prompt(p), role="user")
                                        for p in history
                                    ]
                    
                            if replace_history:
                                self.chat_messages = ChatMessageContainer(messages)
                            else:
                                self.chat_messages.extend(messages)
                    
                            yield self
                    
                        finally:
                            self.chat_messages = old_history
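
                    A usage sketch; the scratch history is discarded when the context manager exits:

                        # Run one question against a temporary history, then restore the original
                        async with agent.conversation.temporary_state(
                            ["You are reviewing a pull request."],
                            replace_history=True,
                        ):
                            answer = await agent.run("Summarize the main risks.")
                        # the previous history is restored here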
                    

                    Interactions

                    Manages agent communication patterns.

                    Source code in src/llmling_agent/agent/interactions.py
                    class Interactions[TDeps, TResult]:
                        """Manages agent communication patterns."""
                    
                        def __init__(self, agent: AnyAgent[TDeps, TResult]):
                            self.agent = agent
                    
                        async def conversation(
                            self,
                            other: MessageNode[Any, Any],
                            initial_message: AnyPromptType,
                            *,
                            max_rounds: int | None = None,
                            end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                            | None = None,
                            store_history: bool = True,
                        ) -> AsyncIterator[ChatMessage[Any]]:
                            """Maintain conversation between two agents.
                    
                            Args:
                                other: Agent to converse with
                                initial_message: Message to start conversation with
                                max_rounds: Optional maximum number of exchanges
                                end_condition: Optional predicate to check for conversation end
                                store_history: Whether to store in conversation history
                    
                            Yields:
                                Messages from both agents in conversation order
                            """
                            rounds = 0
                            messages: list[ChatMessage[Any]] = []
                            current_message = initial_message
                            current_node: MessageNode[Any, Any] = self.agent
                    
                            while True:
                                if max_rounds and rounds >= max_rounds:
                                    logger.debug("Conversation ended: max rounds (%d) reached", max_rounds)
                                    return
                    
                                response = await current_node.run(
                                    current_message, store_history=store_history
                                )
                                messages.append(response)
                                yield response
                    
                                if end_condition and end_condition(messages, response):
                                    logger.debug("Conversation ended: end condition met")
                                    return
                    
                                # Switch agents for next round
                                current_node = other if current_node == self.agent else self.agent
                                current_message = response.content
                                rounds += 1
                    
                        @overload
                        async def pick[T: AnyPromptType](
                            self,
                            selections: Sequence[T],
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[T]: ...
                    
                        @overload
                        async def pick[T: AnyPromptType](
                            self,
                            selections: Mapping[str, T],
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[T]: ...
                    
                        @overload
                        async def pick(
                            self,
                            selections: AgentPool,
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[AnyAgent[Any, Any]]: ...
                    
                        @overload
                        async def pick(
                            self,
                            selections: BaseTeam[TDeps, Any],
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[MessageNode[TDeps, Any]]: ...
                    
                        async def pick[T](
                            self,
                            selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[T]:
                            """Pick from available options with reasoning.
                    
                            Args:
                                selections: What to pick from:
                                    - Sequence of items (auto-labeled)
                                    - Dict mapping labels to items
                                    - AgentPool
                                    - Team
                                task: Task/decision description
                                prompt: Optional custom selection prompt
                    
                            Returns:
                                Decision with selected item and reasoning
                    
                            Raises:
                                ValueError: If no choices available or invalid selection
                            """
                            # Get items and create label mapping
                            from toprompt import to_prompt
                    
                            from llmling_agent import AgentPool
                            from llmling_agent.delegation.base_team import BaseTeam
                    
                            match selections:
                            case Mapping():
                                    label_map = selections
                                    items: list[Any] = list(selections.values())
                                case BaseTeam():
                                    items = list(selections.agents)
                                    label_map = {get_label(item): item for item in items}
                                case AgentPool():
                                    items = list(selections.agents.values())
                                    label_map = {get_label(item): item for item in items}
                                case _:
                                    items = list(selections)
                                    label_map = {get_label(item): item for item in items}
                    
                            if not items:
                                msg = "No choices available"
                                raise ValueError(msg)
                    
                            # Get descriptions for all items
                            descriptions = []
                            for label, item in label_map.items():
                                item_desc = await to_prompt(item)
                                descriptions.append(f"{label}:\n{item_desc}")
                    
                            default_prompt = f"""Task/Decision: {task}
                    
                    Available options:
                    {"-" * 40}
                    {"\n\n".join(descriptions)}
                    {"-" * 40}
                    
                    Select ONE option by its exact label."""
                    
                            # Get LLM's string-based decision
                            result = await self.agent.to_structured(LLMPick).run(prompt or default_prompt)
                    
                            # Convert to type-safe decision
                            if result.content.selection not in label_map:
                                msg = f"Invalid selection: {result.content.selection}"
                                raise ValueError(msg)
                    
                            selected = cast(T, label_map[result.content.selection])
                            return Pick(selection=selected, reason=result.content.reason)
                    
                        @overload
                        async def pick_multiple[T: AnyPromptType](
                            self,
                            selections: Sequence[T],
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[T]: ...
                    
                        @overload
                        async def pick_multiple[T: AnyPromptType](
                            self,
                            selections: Mapping[str, T],
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[T]: ...
                    
                        @overload
                        async def pick_multiple(
                            self,
                            selections: BaseTeam[TDeps, Any],
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[MessageNode[TDeps, Any]]: ...
                    
                        @overload
                        async def pick_multiple(
                            self,
                            selections: AgentPool,
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[AnyAgent[Any, Any]]: ...
                    
                        async def pick_multiple[T](
                            self,
                            selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[T]:
                            """Pick multiple options from available choices.
                    
                            Args:
                                selections: What to pick from
                                task: Task/decision description
                                min_picks: Minimum number of selections required
                                max_picks: Maximum number of selections (None for unlimited)
                                prompt: Optional custom selection prompt
                            """
                            from toprompt import to_prompt
                    
                            from llmling_agent import AgentPool
                            from llmling_agent.delegation.base_team import BaseTeam
                    
                            match selections:
                                case Mapping():
                                    label_map = selections
                                    items: list[Any] = list(selections.values())
                                case BaseTeam():
                                    items = list(selections.agents)
                                    label_map = {get_label(item): item for item in items}
                                case AgentPool():
                                    items = list(selections.agents.values())
                                    label_map = {get_label(item): item for item in items}
                                case _:
                                    items = list(selections)
                                    label_map = {get_label(item): item for item in items}
                    
                            if not items:
                                msg = "No choices available"
                                raise ValueError(msg)
                    
                            if max_picks is not None and max_picks < min_picks:
                                msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                                raise ValueError(msg)
                    
                            descriptions = []
                            for label, item in label_map.items():
                                item_desc = await to_prompt(item)
                                descriptions.append(f"{label}:\n{item_desc}")
                    
                            picks_info = (
                                f"Select between {min_picks} and {max_picks}"
                                if max_picks is not None
                                else f"Select at least {min_picks}"
                            )
                    
                            default_prompt = f"""Task/Decision: {task}
                    
                    Available options:
                    {"-" * 40}
                    {"\n\n".join(descriptions)}
                    {"-" * 40}
                    
                    {picks_info} options by their exact labels.
                    List your selections, one per line, followed by your reasoning."""
                    
                            result = await self.agent.to_structured(LLMMultiPick).run(
                                prompt or default_prompt
                            )
                    
                            # Validate selections
                            invalid = [s for s in result.content.selections if s not in label_map]
                            if invalid:
                                msg = f"Invalid selections: {', '.join(invalid)}"
                                raise ValueError(msg)
                            num_picks = len(result.content.selections)
                            if num_picks < min_picks:
                                msg = f"Too few selections: got {num_picks}, need {min_picks}"
                                raise ValueError(msg)
                    
                             if max_picks is not None and num_picks > max_picks:
                                msg = f"Too many selections: got {num_picks}, max {max_picks}"
                                raise ValueError(msg)
                    
                            selected = [cast(T, label_map[label]) for label in result.content.selections]
                            return MultiPick(selections=selected, reason=result.content.reason)
                    
                        async def extract[T](
                            self,
                            text: str,
                            as_type: type[T],
                            *,
                            mode: ExtractionMode = "structured",
                            prompt: AnyPromptType | None = None,
                            include_tools: bool = False,
                        ) -> T:
                            """Extract single instance of type from text.
                    
                            Args:
                                text: Text to extract from
                                as_type: Type to extract
                                mode: Extraction approach:
                                    - "structured": Use Pydantic models (more robust)
                                    - "tool_calls": Use tool calls (more flexible)
                                prompt: Optional custom prompt
                                include_tools: Whether to include other tools (tool_calls mode only)
                            """
                            from schemez import create_constructor_schema
                    
                            # Create model for single instance
                            item_model = Schema.for_class_ctor(as_type)
                    
                            # Create extraction prompt
                            final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                            schema_obj = create_constructor_schema(as_type)
                            schema = schema_obj.model_dump_openai()["function"]
                    
                            if mode == "structured":
                    
                                class Extraction(Schema):
                                    instance: item_model  # type: ignore
                                    # explanation: str | None = None
                    
                                result = await self.agent.to_structured(Extraction).run(final_prompt)
                    
                                # Convert model instance to actual type
                                return as_type(**result.content.instance.model_dump())  # type: ignore
                    
                            # Legacy tool-calls approach
                    
                            async def construct(**kwargs: Any) -> T:
                                """Construct instance from extracted data."""
                                return as_type(**kwargs)
                    
                            structured = self.agent.to_structured(item_model)
                            tool = Tool.from_callable(
                                construct,
                                name_override=schema["name"],
                                description_override=schema["description"],
                                # schema_override=schema,
                            )
                            with structured.tools.temporary_tools(tool, exclusive=not include_tools):
                                result = await structured.run(final_prompt)  # type: ignore
                            return result.content  # type: ignore
                    
                        async def extract_multiple[T](
                            self,
                            text: str,
                            as_type: type[T],
                            *,
                            mode: ExtractionMode = "structured",
                            min_items: int = 1,
                            max_items: int | None = None,
                            prompt: AnyPromptType | None = None,
                            include_tools: bool = False,
                        ) -> list[T]:
                            """Extract multiple instances of type from text.
                    
                            Args:
                                text: Text to extract from
                                as_type: Type to extract
                                mode: Extraction approach:
                                    - "structured": Use Pydantic models (more robust)
                                    - "tool_calls": Use tool calls (more flexible)
                                min_items: Minimum number of instances to extract
                                max_items: Maximum number of instances (None=unlimited)
                                prompt: Optional custom prompt
                                include_tools: Whether to include other tools (tool_calls mode only)
                            """
                            from schemez import create_constructor_schema
                    
                            item_model = Schema.for_class_ctor(as_type)
                    
                            instances: list[T] = []
                            schema_obj = create_constructor_schema(as_type)
                            final_prompt = prompt or "\n".join([
                                f"Extract {as_type.__name__} instances from text.",
                                # "Requirements:",
                                # f"- Extract at least {min_items} instances",
                                # f"- Extract at most {max_items} instances" if max_items else "",
                                "\nText to analyze:",
                                text,
                            ])
                            if mode == "structured":
                                # Create model for individual instance
                    
                                class Extraction(Schema):
                                    instances: list[item_model]  # type: ignore
                                    # explanation: str | None = None
                    
                                result = await self.agent.to_structured(Extraction).run(final_prompt)
                    
                                # Validate counts
                                num_instances = len(result.content.instances)
                             if num_instances < min_items:
                                    msg = f"Found only {num_instances} instances, need {min_items}"
                                    raise ValueError(msg)
                    
                                if max_items and num_instances > max_items:
                                    msg = f"Found {num_instances} instances, max is {max_items}"
                                    raise ValueError(msg)
                    
                                # Convert model instances to actual type
                                return [
                                    as_type(
                                        **instance.data  # type: ignore
                                        if hasattr(instance, "data")
                                        else instance.model_dump()  # type: ignore
                                    )
                                    for instance in result.content.instances
                                ]
                    
                            # Legacy tool-calls approach
                    
                            async def add_instance(**kwargs: Any) -> str:
                                """Add an extracted instance."""
                                if max_items and len(instances) >= max_items:
                                    msg = f"Maximum number of items ({max_items}) reached"
                                    raise ValueError(msg)
                                instance = as_type(**kwargs)
                                instances.append(instance)
                                return f"Added {instance}"
                    
                            add_instance.__annotations__ = schema_obj.get_annotations()
                            add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                            structured = self.agent.to_structured(item_model)
                            with structured.tools.temporary_tools(add_instance, exclusive=not include_tools):
                             # Run the extraction; the model adds instances via the temporary tool
                                await structured.run(final_prompt)
                    
                            if len(instances) < min_items:
                                msg = f"Found only {len(instances)} instances, need at least {min_items}"
                                raise ValueError(msg)
                    
                            return instances
                    

                    conversation async

                    conversation(
                        other: MessageNode[Any, Any],
                        initial_message: AnyPromptType,
                        *,
                        max_rounds: int | None = None,
                        end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                        | None = None,
                        store_history: bool = True,
                    ) -> AsyncIterator[ChatMessage[Any]]
                    

                    Maintain conversation between two agents.

                    Parameters:

                     | Name | Type | Description | Default |
                     | --- | --- | --- | --- |
                     | other | MessageNode[Any, Any] | Agent to converse with | required |
                     | initial_message | AnyPromptType | Message to start conversation with | required |
                     | max_rounds | int \| None | Optional maximum number of exchanges | None |
                     | end_condition | Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool] \| None | Optional predicate to check for conversation end | None |
                     | store_history | bool | Whether to store in conversation history | True |

                    Yields:

                     | Type | Description |
                     | --- | --- |
                     | AsyncIterator[ChatMessage[Any]] | Messages from both agents in conversation order |

                     Source code in src/llmling_agent/agent/interactions.py, lines 93-138
                    async def conversation(
                        self,
                        other: MessageNode[Any, Any],
                        initial_message: AnyPromptType,
                        *,
                        max_rounds: int | None = None,
                        end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                        | None = None,
                        store_history: bool = True,
                    ) -> AsyncIterator[ChatMessage[Any]]:
                        """Maintain conversation between two agents.
                    
                        Args:
                            other: Agent to converse with
                            initial_message: Message to start conversation with
                            max_rounds: Optional maximum number of exchanges
                            end_condition: Optional predicate to check for conversation end
                            store_history: Whether to store in conversation history
                    
                        Yields:
                            Messages from both agents in conversation order
                        """
                        rounds = 0
                        messages: list[ChatMessage[Any]] = []
                        current_message = initial_message
                        current_node: MessageNode[Any, Any] = self.agent
                    
                        while True:
                            if max_rounds and rounds >= max_rounds:
                                logger.debug("Conversation ended: max rounds (%d) reached", max_rounds)
                                return
                    
                            response = await current_node.run(
                                current_message, store_history=store_history
                            )
                            messages.append(response)
                            yield response
                    
                            if end_condition and end_condition(messages, response):
                                logger.debug("Conversation ended: end condition met")
                                return
                    
                            # Switch agents for next round
                            current_node = other if current_node == self.agent else self.agent
                            current_message = response.content
                            rounds += 1
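
                     Example (a minimal usage sketch, not taken from the source: it assumes the Interactions helpers are exposed on the agent as the talk attribute, that Agent accepts name and model keywords, and that the model name shown is a placeholder):

                     import asyncio

                     from llmling_agent import Agent


                     async def main() -> None:
                         writer = Agent(name="writer", model="openai:gpt-4o-mini")
                         critic = Agent(name="critic", model="openai:gpt-4o-mini")
                         # Messages alternate between the two agents; stop after four messages.
                         async for message in writer.talk.conversation(
                             critic,
                             "Draft a one-sentence slogan for a note-taking app.",
                             max_rounds=4,
                         ):
                             print(message.content)

                     asyncio.run(main())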
                    

                    extract async

                    extract(
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> T
                    

                    Extract single instance of type from text.

                    Parameters:

                     | Name | Type | Description | Default |
                     | --- | --- | --- | --- |
                     | text | str | Text to extract from | required |
                     | as_type | type[T] | Type to extract | required |
                     | mode | ExtractionMode | Extraction approach: "structured" uses Pydantic models (more robust); "tool_calls" uses tool calls (more flexible) | 'structured' |
                     | prompt | AnyPromptType \| None | Optional custom prompt | None |
                     | include_tools | bool | Whether to include other tools (tool_calls mode only) | False |
                     Source code in src/llmling_agent/agent/interactions.py, lines 384-440
                    async def extract[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> T:
                        """Extract single instance of type from text.
                    
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            mode: Extraction approach:
                                - "structured": Use Pydantic models (more robust)
                                - "tool_calls": Use tool calls (more flexible)
                            prompt: Optional custom prompt
                            include_tools: Whether to include other tools (tool_calls mode only)
                        """
                        from schemez import create_constructor_schema
                    
                        # Create model for single instance
                        item_model = Schema.for_class_ctor(as_type)
                    
                        # Create extraction prompt
                        final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                        schema_obj = create_constructor_schema(as_type)
                        schema = schema_obj.model_dump_openai()["function"]
                    
                        if mode == "structured":
                    
                            class Extraction(Schema):
                                instance: item_model  # type: ignore
                                # explanation: str | None = None
                    
                            result = await self.agent.to_structured(Extraction).run(final_prompt)
                    
                            # Convert model instance to actual type
                            return as_type(**result.content.instance.model_dump())  # type: ignore
                    
                        # Legacy tool-calls approach
                    
                        async def construct(**kwargs: Any) -> T:
                            """Construct instance from extracted data."""
                            return as_type(**kwargs)
                    
                        structured = self.agent.to_structured(item_model)
                        tool = Tool.from_callable(
                            construct,
                            name_override=schema["name"],
                            description_override=schema["description"],
                            # schema_override=schema,
                        )
                        with structured.tools.temporary_tools(tool, exclusive=not include_tools):
                            result = await structured.run(final_prompt)  # type: ignore
                        return result.content  # type: ignore
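
                     Example (a minimal sketch under the same assumptions about the talk attribute; the Person dataclass is illustrative):

                     import asyncio
                     from dataclasses import dataclass

                     from llmling_agent import Agent


                     @dataclass
                     class Person:
                         name: str
                         age: int


                     async def main() -> None:
                         agent = Agent(model="openai:gpt-4o-mini")
                         # Default "structured" mode: the model fills a constructor schema
                         # and the result is converted back into a real Person instance.
                         person = await agent.talk.extract(
                             "Alice is a 32-year-old engineer from Berlin.",
                             Person,
                         )
                         print(person)

                     asyncio.run(main())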
                    

                    extract_multiple async

                    extract_multiple(
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        min_items: int = 1,
                        max_items: int | None = None,
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> list[T]
                    

                    Extract multiple instances of type from text.

                    Parameters:

                     | Name | Type | Description | Default |
                     | --- | --- | --- | --- |
                     | text | str | Text to extract from | required |
                     | as_type | type[T] | Type to extract | required |
                     | mode | ExtractionMode | Extraction approach: "structured" uses Pydantic models (more robust); "tool_calls" uses tool calls (more flexible) | 'structured' |
                     | min_items | int | Minimum number of instances to extract | 1 |
                     | max_items | int \| None | Maximum number of instances (None=unlimited) | None |
                     | prompt | AnyPromptType \| None | Optional custom prompt | None |
                     | include_tools | bool | Whether to include other tools (tool_calls mode only) | False |
                     Source code in src/llmling_agent/agent/interactions.py, lines 442-531
                    async def extract_multiple[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        min_items: int = 1,
                        max_items: int | None = None,
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> list[T]:
                        """Extract multiple instances of type from text.
                    
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            mode: Extraction approach:
                                - "structured": Use Pydantic models (more robust)
                                - "tool_calls": Use tool calls (more flexible)
                            min_items: Minimum number of instances to extract
                            max_items: Maximum number of instances (None=unlimited)
                            prompt: Optional custom prompt
                            include_tools: Whether to include other tools (tool_calls mode only)
                        """
                        from schemez import create_constructor_schema
                    
                        item_model = Schema.for_class_ctor(as_type)
                    
                        instances: list[T] = []
                        schema_obj = create_constructor_schema(as_type)
                        final_prompt = prompt or "\n".join([
                            f"Extract {as_type.__name__} instances from text.",
                            # "Requirements:",
                            # f"- Extract at least {min_items} instances",
                            # f"- Extract at most {max_items} instances" if max_items else "",
                            "\nText to analyze:",
                            text,
                        ])
                        if mode == "structured":
                            # Create model for individual instance
                    
                            class Extraction(Schema):
                                instances: list[item_model]  # type: ignore
                                # explanation: str | None = None
                    
                            result = await self.agent.to_structured(Extraction).run(final_prompt)
                    
                            # Validate counts
                            num_instances = len(result.content.instances)
                         if num_instances < min_items:
                                msg = f"Found only {num_instances} instances, need {min_items}"
                                raise ValueError(msg)
                    
                            if max_items and num_instances > max_items:
                                msg = f"Found {num_instances} instances, max is {max_items}"
                                raise ValueError(msg)
                    
                            # Convert model instances to actual type
                            return [
                                as_type(
                                    **instance.data  # type: ignore
                                    if hasattr(instance, "data")
                                    else instance.model_dump()  # type: ignore
                                )
                                for instance in result.content.instances
                            ]
                    
                        # Legacy tool-calls approach
                    
                        async def add_instance(**kwargs: Any) -> str:
                            """Add an extracted instance."""
                            if max_items and len(instances) >= max_items:
                                msg = f"Maximum number of items ({max_items}) reached"
                                raise ValueError(msg)
                            instance = as_type(**kwargs)
                            instances.append(instance)
                            return f"Added {instance}"
                    
                        add_instance.__annotations__ = schema_obj.get_annotations()
                        add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                        structured = self.agent.to_structured(item_model)
                        with structured.tools.temporary_tools(add_instance, exclusive=not include_tools):
                         # Run the extraction; the model adds instances via the temporary tool
                            await structured.run(final_prompt)
                    
                        if len(instances) < min_items:
                            msg = f"Found only {len(instances)} instances, need at least {min_items}"
                            raise ValueError(msg)
                    
                        return instances
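
                     Example (a minimal sketch, same assumptions about the talk attribute; the Task dataclass is illustrative). As the source above shows, counts outside min_items/max_items raise ValueError:

                     import asyncio
                     from dataclasses import dataclass

                     from llmling_agent import Agent


                     @dataclass
                     class Task:
                         title: str
                         done: bool


                     async def main() -> None:
                         agent = Agent(model="openai:gpt-4o-mini")
                         tasks = await agent.talk.extract_multiple(
                             "Write report (done). Review budget (open). Book venue (open).",
                             Task,
                             min_items=2,
                             max_items=5,
                         )
                         for task in tasks:
                             print(task)

                     asyncio.run(main())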
                    

                    pick async

                    pick(
                        selections: Sequence[T], task: str, prompt: AnyPromptType | None = None
                    ) -> Pick[T]
                    
                    pick(
                        selections: Sequence[T], task: str, prompt: AnyPromptType | None = None
                    ) -> Pick[T]
                    
                    pick(
                        selections: Mapping[str, T], task: str, prompt: AnyPromptType | None = None
                    ) -> Pick[T]
                    
                    pick(
                        selections: AgentPool, task: str, prompt: AnyPromptType | None = None
                    ) -> Pick[AnyAgent[Any, Any]]
                    
                    pick(
                        selections: BaseTeam[TDeps, Any], task: str, prompt: AnyPromptType | None = None
                    ) -> Pick[MessageNode[TDeps, Any]]
                    
                    pick(
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]
                    

                    Pick from available options with reasoning.

                    Parameters:

                     | Name | Type | Description | Default |
                     | --- | --- | --- | --- |
                     | selections | Sequence[T] \| Mapping[str, T] \| AgentPool \| BaseTeam[TDeps, Any] | What to pick from: a sequence of items (auto-labeled), a dict mapping labels to items, an AgentPool, or a Team | required |
                     | task | str | Task/decision description | required |
                     | prompt | AnyPromptType \| None | Optional custom selection prompt | None |

                    Returns:

                     | Type | Description |
                     | --- | --- |
                     | Pick[T] | Decision with selected item and reasoning |

                    Raises:

                     | Type | Description |
                     | --- | --- |
                     | ValueError | If no choices available or invalid selection |

                     Source code in src/llmling_agent/agent/interactions.py, lines 180-251
                        async def pick[T](
                            self,
                            selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                            task: str,
                            prompt: AnyPromptType | None = None,
                        ) -> Pick[T]:
                            """Pick from available options with reasoning.
                    
                            Args:
                                selections: What to pick from:
                                    - Sequence of items (auto-labeled)
                                    - Dict mapping labels to items
                                    - AgentPool
                                    - Team
                                task: Task/decision description
                                prompt: Optional custom selection prompt
                    
                            Returns:
                                Decision with selected item and reasoning
                    
                            Raises:
                                ValueError: If no choices available or invalid selection
                            """
                            # Get items and create label mapping
                            from toprompt import to_prompt
                    
                            from llmling_agent import AgentPool
                            from llmling_agent.delegation.base_team import BaseTeam
                    
                            match selections:
                             case Mapping():
                                    label_map = selections
                                    items: list[Any] = list(selections.values())
                                case BaseTeam():
                                    items = list(selections.agents)
                                    label_map = {get_label(item): item for item in items}
                                case AgentPool():
                                    items = list(selections.agents.values())
                                    label_map = {get_label(item): item for item in items}
                                case _:
                                    items = list(selections)
                                    label_map = {get_label(item): item for item in items}
                    
                            if not items:
                                msg = "No choices available"
                                raise ValueError(msg)
                    
                            # Get descriptions for all items
                            descriptions = []
                            for label, item in label_map.items():
                                item_desc = await to_prompt(item)
                                descriptions.append(f"{label}:\n{item_desc}")
                    
                            default_prompt = f"""Task/Decision: {task}
                    
                    Available options:
                    {"-" * 40}
                    {"\n\n".join(descriptions)}
                    {"-" * 40}
                    
                    Select ONE option by its exact label."""
                    
                            # Get LLM's string-based decision
                            result = await self.agent.to_structured(LLMPick).run(prompt or default_prompt)
                    
                            # Convert to type-safe decision
                            if result.content.selection not in label_map:
                                msg = f"Invalid selection: {result.content.selection}"
                                raise ValueError(msg)
                    
                            selected = cast(T, label_map[result.content.selection])
                            return Pick(selection=selected, reason=result.content.reason)
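
                     Example (a minimal sketch, same assumptions about the talk attribute; the labels and task text are illustrative):

                     import asyncio

                     from llmling_agent import Agent


                     async def main() -> None:
                         agent = Agent(model="openai:gpt-4o-mini")
                         # A mapping supplies explicit labels; plain sequences are auto-labeled.
                         decision = await agent.talk.pick(
                             {"hotfix": "ship a patch now", "rollback": "revert to the last release"},
                             task="Production errors spiked right after the deploy.",
                         )
                         print(decision.selection, decision.reason)

                     asyncio.run(main())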
                    

                    pick_multiple async

                    pick_multiple(
                        selections: Sequence[T],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]
                    
                    pick_multiple(
                        selections: Mapping[str, T],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]
                    
                    pick_multiple(
                        selections: BaseTeam[TDeps, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[MessageNode[TDeps, Any]]
                    
                    pick_multiple(
                        selections: AgentPool,
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[AnyAgent[Any, Any]]
                    
                    pick_multiple(
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]
                    

                    Pick multiple options from available choices.

                    Parameters:

                     | Name | Type | Description | Default |
                     | --- | --- | --- | --- |
                     | selections | Sequence[T] \| Mapping[str, T] \| AgentPool \| BaseTeam[TDeps, Any] | What to pick from | required |
                     | task | str | Task/decision description | required |
                     | min_picks | int | Minimum number of selections required | 1 |
                     | max_picks | int \| None | Maximum number of selections (None for unlimited) | None |
                     | prompt | AnyPromptType \| None | Optional custom selection prompt | None |
                     Source code in src/llmling_agent/agent/interactions.py, lines 297-382
                        async def pick_multiple[T](
                            self,
                            selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                            task: str,
                            *,
                            min_picks: int = 1,
                            max_picks: int | None = None,
                            prompt: AnyPromptType | None = None,
                        ) -> MultiPick[T]:
                            """Pick multiple options from available choices.
                    
                            Args:
                                selections: What to pick from
                                task: Task/decision description
                                min_picks: Minimum number of selections required
                                max_picks: Maximum number of selections (None for unlimited)
                                prompt: Optional custom selection prompt
                            """
                            from toprompt import to_prompt
                    
                            from llmling_agent import AgentPool
                            from llmling_agent.delegation.base_team import BaseTeam
                    
                            match selections:
                                case Mapping():
                                    label_map = selections
                                    items: list[Any] = list(selections.values())
                                case BaseTeam():
                                    items = list(selections.agents)
                                    label_map = {get_label(item): item for item in items}
                                case AgentPool():
                                    items = list(selections.agents.values())
                                    label_map = {get_label(item): item for item in items}
                                case _:
                                    items = list(selections)
                                    label_map = {get_label(item): item for item in items}
                    
                            if not items:
                                msg = "No choices available"
                                raise ValueError(msg)
                    
                            if max_picks is not None and max_picks < min_picks:
                                msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                                raise ValueError(msg)
                    
                            descriptions = []
                            for label, item in label_map.items():
                                item_desc = await to_prompt(item)
                                descriptions.append(f"{label}:\n{item_desc}")
                    
                            picks_info = (
                                f"Select between {min_picks} and {max_picks}"
                                if max_picks is not None
                                else f"Select at least {min_picks}"
                            )
                    
                            default_prompt = f"""Task/Decision: {task}
                    
                    Available options:
                    {"-" * 40}
                    {"\n\n".join(descriptions)}
                    {"-" * 40}
                    
                    {picks_info} options by their exact labels.
                    List your selections, one per line, followed by your reasoning."""
                    
                            result = await self.agent.to_structured(LLMMultiPick).run(
                                prompt or default_prompt
                            )
                    
                            # Validate selections
                            invalid = [s for s in result.content.selections if s not in label_map]
                            if invalid:
                                msg = f"Invalid selections: {', '.join(invalid)}"
                                raise ValueError(msg)
                            num_picks = len(result.content.selections)
                            if num_picks < min_picks:
                                msg = f"Too few selections: got {num_picks}, need {min_picks}"
                                raise ValueError(msg)
                    
                             if max_picks is not None and num_picks > max_picks:
                                msg = f"Too many selections: got {num_picks}, max {max_picks}"
                                raise ValueError(msg)
                    
                            selected = [cast(T, label_map[label]) for label in result.content.selections]
                            return MultiPick(selections=selected, reason=result.content.reason)
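
                     Example (a minimal sketch, same assumptions about the talk attribute; the reviewer pool is illustrative):

                     import asyncio

                     from llmling_agent import Agent


                     async def main() -> None:
                         agent = Agent(model="openai:gpt-4o-mini")
                         result = await agent.talk.pick_multiple(
                             {
                                 "security": "reviews auth and input handling",
                                 "performance": "reviews query plans and caching",
                                 "docs": "reviews user-facing documentation",
                             },
                             task="Choose reviewers for a database-migration pull request.",
                             min_picks=1,
                             max_picks=2,
                         )
                         print(result.selections, result.reason)

                     asyncio.run(main())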
                    

                    ProcessManager

                    Manages background processes for an agent pool.
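
                     Example (a minimal sketch; the import path follows the source location shown below, while the command and arguments are illustrative):

                     import asyncio

                     from llmling_agent.agent.process_manager import ProcessManager


                     async def main() -> None:
                         manager = ProcessManager()
                         # Returns an id like "proc_1a2b3c4d" for later status and output lookups.
                         proc_id = await manager.start_process(
                             "python",
                             args=["-c", "print('hello from a background process')"],
                         )
                         print(proc_id)

                     asyncio.run(main())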

                     Source code in src/llmling_agent/agent/process_manager.py, lines 136-439
                    class ProcessManager:
                        """Manages background processes for an agent pool."""
                    
                        def __init__(self):
                            """Initialize process manager."""
                            self._processes: dict[str, RunningProcess] = {}
                            self._output_tasks: dict[str, asyncio.Task[None]] = {}
                    
                        async def start_process(
                            self,
                            command: str,
                            args: list[str] | None = None,
                            cwd: str | Path | None = None,
                            env: dict[str, str] | None = None,
                            output_limit: int | None = None,
                        ) -> str:
                            """Start a background process.
                    
                            Args:
                                command: Command to execute
                                args: Command arguments
                                cwd: Working directory
                                env: Environment variables (added to current env)
                                output_limit: Maximum bytes of output to retain
                    
                            Returns:
                                Process ID for tracking
                    
                            Raises:
                                OSError: If process creation fails
                            """
                            process_id = f"proc_{uuid.uuid4().hex[:8]}"
                            args = args or []
                    
                            # Prepare environment
                            proc_env = dict(os.environ)
                            if env:
                                proc_env.update(env)
                    
                            # Convert cwd to Path if provided
                            work_dir = Path(cwd) if cwd else None
                    
                            try:
                                # Start process
                                process = await asyncio.create_subprocess_exec(
                                    command,
                                    *args,
                                    cwd=work_dir,
                                    env=proc_env,
                                    stdout=asyncio.subprocess.PIPE,
                                    stderr=asyncio.subprocess.PIPE,
                                )
                    
                                # Create tracking object
                                running_proc = RunningProcess(
                                    process_id=process_id,
                                    command=command,
                                    args=args,
                                    cwd=work_dir,
                                    env=env or {},
                                    process=process,
                                    output_limit=output_limit,
                                )
                    
                                self._processes[process_id] = running_proc
                    
                                # Start output collection task
                                self._output_tasks[process_id] = asyncio.create_task(
                                    self._collect_output(running_proc)
                                )
                    
                                logger.info("Started process %s: %s %s", process_id, command, " ".join(args))
                            except Exception as e:
                                msg = f"Failed to start process: {command} {' '.join(args)}"
                                logger.exception(msg, exc_info=e)
                                raise OSError(msg) from e
                            else:
                                return process_id
                    
                        async def _collect_output(self, proc: RunningProcess) -> None:
                            """Collect output from process in background."""
                            try:
                                # Read output streams concurrently
                                stdout_task = asyncio.create_task(self._read_stream(proc.process.stdout))
                                stderr_task = asyncio.create_task(self._read_stream(proc.process.stderr))
                    
                                stdout_chunks = []
                                stderr_chunks = []
                    
                                # Collect output until both streams close
                                stdout_done = False
                                stderr_done = False
                    
                                while not (stdout_done and stderr_done):
                                    done, pending = await asyncio.wait(
                                        [stdout_task, stderr_task],
                                        return_when=asyncio.FIRST_COMPLETED,
                                        timeout=0.1,  # Check every 100ms
                                    )
                    
                                    for task in done:
                                        if task == stdout_task and not stdout_done:
                                            chunk = task.result()
                                            if chunk is None:
                                                stdout_done = True
                                            else:
                                                stdout_chunks.append(chunk)
                                                proc.add_output(stdout=chunk)
                                                # Restart task for next chunk
                                                stdout_task = asyncio.create_task(
                                                    self._read_stream(proc.process.stdout)
                                                )
                    
                                        elif task == stderr_task and not stderr_done:
                                            chunk = task.result()
                                            if chunk is None:
                                                stderr_done = True
                                            else:
                                                stderr_chunks.append(chunk)
                                                proc.add_output(stderr=chunk)
                                                # Restart task for next chunk
                                                stderr_task = asyncio.create_task(
                                                    self._read_stream(proc.process.stderr)
                                                )
                    
                                # Cancel any remaining tasks
                                for task in pending:
                                    task.cancel()
                    
                            except Exception:
                                logger.exception("Error collecting output for %s", proc.process_id)
                    
                        async def _read_stream(self, stream: asyncio.StreamReader | None) -> str | None:
                            """Read a chunk from a stream."""
                            if not stream:
                                return None
                            try:
                                data = await stream.read(8192)  # Read in 8KB chunks
                                return data.decode("utf-8", errors="replace") if data else None
                            except Exception:  # noqa: BLE001
                                return None
                    
                        async def get_output(self, process_id: str) -> ProcessOutput:
                            """Get current output from a process.
                    
                            Args:
                                process_id: Process identifier
                    
                            Returns:
                                Current process output
                    
                            Raises:
                                ValueError: If process not found
                            """
                            if process_id not in self._processes:
                                msg = f"Process {process_id} not found"
                                raise ValueError(msg)
                    
                            proc = self._processes[process_id]
                            return proc.get_output()
                    
                        async def wait_for_exit(self, process_id: str) -> int:
                            """Wait for process to complete.
                    
                            Args:
                                process_id: Process identifier
                    
                            Returns:
                                Exit code
                    
                            Raises:
                                ValueError: If process not found
                            """
                            if process_id not in self._processes:
                                msg = f"Process {process_id} not found"
                                raise ValueError(msg)
                    
                            proc = self._processes[process_id]
                            exit_code = await proc.wait()
                    
                            # Wait for output collection to finish
                            if process_id in self._output_tasks:
                                await self._output_tasks[process_id]
                    
                            return exit_code
                    
                        async def kill_process(self, process_id: str) -> None:
                            """Kill a running process.
                    
                            Args:
                                process_id: Process identifier
                    
                            Raises:
                                ValueError: If process not found
                            """
                            if process_id not in self._processes:
                                msg = f"Process {process_id} not found"
                                raise ValueError(msg)
                    
                            proc = self._processes[process_id]
                            await proc.kill()
                    
                            # Cancel output collection task
                            if process_id in self._output_tasks:
                                self._output_tasks[process_id].cancel()
                                with contextlib.suppress(asyncio.CancelledError):
                                    await self._output_tasks[process_id]
                    
                            logger.info("Killed process %s", process_id)
                    
                        async def release_process(self, process_id: str) -> None:
                            """Release resources for a process.
                    
                            Args:
                                process_id: Process identifier
                    
                            Raises:
                                ValueError: If process not found
                            """
                            if process_id not in self._processes:
                                msg = f"Process {process_id} not found"
                                raise ValueError(msg)
                    
                            # Kill if still running
                            proc = self._processes[process_id]
                            if await proc.is_running():
                                await proc.kill()
                    
                            # Clean up tasks
                            if process_id in self._output_tasks:
                                self._output_tasks[process_id].cancel()
                                with contextlib.suppress(asyncio.CancelledError):
                                    await self._output_tasks[process_id]
                                del self._output_tasks[process_id]
                    
                            # Remove from tracking
                            del self._processes[process_id]
                            logger.info("Released process %s", process_id)
                    
                        def list_processes(self) -> list[str]:
                            """List all tracked process IDs."""
                            return list(self._processes.keys())
                    
                        async def get_process_info(self, process_id: str) -> dict[str, Any]:
                            """Get information about a process.
                    
                            Args:
                                process_id: Process identifier
                    
                            Returns:
                                Process information dict
                    
                            Raises:
                                ValueError: If process not found
                            """
                            if process_id not in self._processes:
                                msg = f"Process {process_id} not found"
                                raise ValueError(msg)
                    
                            proc = self._processes[process_id]
                            return {
                                "process_id": process_id,
                                "command": proc.command,
                                "args": proc.args,
                                "cwd": str(proc.cwd) if proc.cwd else None,
                                "created_at": proc.created_at.isoformat(),
                                "is_running": await proc.is_running(),
                                "exit_code": proc.process.returncode,
                                "output_limit": proc.output_limit,
                            }
                    
                        async def cleanup(self) -> None:
                            """Clean up all processes."""
                            logger.info("Cleaning up %s processes", len(self._processes))
                    
                            # Try graceful termination first
                            termination_tasks = []
                            for proc in self._processes.values():
                                if await proc.is_running():
                                    proc.process.terminate()
                                    termination_tasks.append(proc.wait())
                    
                            if termination_tasks:
                                try:
                                    future = asyncio.gather(*termination_tasks, return_exceptions=True)
                                    await asyncio.wait_for(future, timeout=5.0)  # Wait up to 5 seconds
                                except TimeoutError:
                                    msg = "Some processes didn't terminate gracefully, force killing"
                                    logger.warning(msg)
                                    # Force kill remaining processes
                                    for proc in self._processes.values():
                                        if await proc.is_running():
                                            proc.process.kill()
                    
                            if self._output_tasks:
                                for task in self._output_tasks.values():
                                    task.cancel()
                                await asyncio.gather(*self._output_tasks.values(), return_exceptions=True)
                    
                            # Clear all tracking
                            self._processes.clear()
                            self._output_tasks.clear()
                    
                            logger.info("Process cleanup completed")
                    

                    __init__

                    __init__()
                    

                    Initialize process manager.

                    Source code in src/llmling_agent/agent/process_manager.py
                    def __init__(self):
                        """Initialize process manager."""
                        self._processes: dict[str, RunningProcess] = {}
                        self._output_tasks: dict[str, asyncio.Task[None]] = {}
                    

                    cleanup async

                    cleanup() -> None
                    

                    Clean up all processes.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def cleanup(self) -> None:
                        """Clean up all processes."""
                        logger.info("Cleaning up %s processes", len(self._processes))
                    
                        # Try graceful termination first
                        termination_tasks = []
                        for proc in self._processes.values():
                            if await proc.is_running():
                                proc.process.terminate()
                                termination_tasks.append(proc.wait())
                    
                        if termination_tasks:
                            try:
                                future = asyncio.gather(*termination_tasks, return_exceptions=True)
                                await asyncio.wait_for(future, timeout=5.0)  # Wait up to 5 seconds
                            except TimeoutError:
                                msg = "Some processes didn't terminate gracefully, force killing"
                                logger.warning(msg)
                                # Force kill remaining processes
                                for proc in self._processes.values():
                                    if await proc.is_running():
                                        proc.process.kill()
                    
                        if self._output_tasks:
                            for task in self._output_tasks.values():
                                task.cancel()
                            await asyncio.gather(*self._output_tasks.values(), return_exceptions=True)
                    
                        # Clear all tracking
                        self._processes.clear()
                        self._output_tasks.clear()
                    
                        logger.info("Process cleanup completed")
                    

                    get_output async

                    get_output(process_id: str) -> ProcessOutput
                    

                    Get current output from a process.

Parameters:

    process_id (str): Process identifier. Required.

Returns:

    ProcessOutput: Current process output.

Raises:

    ValueError: If process not found.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def get_output(self, process_id: str) -> ProcessOutput:
                        """Get current output from a process.
                    
                        Args:
                            process_id: Process identifier
                    
                        Returns:
                            Current process output
                    
                        Raises:
                            ValueError: If process not found
                        """
                        if process_id not in self._processes:
                            msg = f"Process {process_id} not found"
                            raise ValueError(msg)
                    
                        proc = self._processes[process_id]
                        return proc.get_output()
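
get_output returns a snapshot of whatever has been collected so far, so it can be polled while the process is still running. The tail-style helper below is purely illustrative, not part of the API:

import asyncio

async def stream_output(manager: ProcessManager, process_id: str) -> None:
    seen = 0
    while True:
        snapshot = await manager.get_output(process_id)
        print(snapshot.combined[seen:], end="")  # emit only the new portion
        seen = len(snapshot.combined)
        if snapshot.exit_code is not None:       # process has exited
            break
        await asyncio.sleep(0.2)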
                    

                    get_process_info async

                    get_process_info(process_id: str) -> dict[str, Any]
                    

                    Get information about a process.

Parameters:

    process_id (str): Process identifier. Required.

Returns:

    dict[str, Any]: Process information dict.

Raises:

    ValueError: If process not found.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def get_process_info(self, process_id: str) -> dict[str, Any]:
                        """Get information about a process.
                    
                        Args:
                            process_id: Process identifier
                    
                        Returns:
                            Process information dict
                    
                        Raises:
                            ValueError: If process not found
                        """
                        if process_id not in self._processes:
                            msg = f"Process {process_id} not found"
                            raise ValueError(msg)
                    
                        proc = self._processes[process_id]
                        return {
                            "process_id": process_id,
                            "command": proc.command,
                            "args": proc.args,
                            "cwd": str(proc.cwd) if proc.cwd else None,
                            "created_at": proc.created_at.isoformat(),
                            "is_running": await proc.is_running(),
                            "exit_code": proc.process.returncode,
                            "output_limit": proc.output_limit,
                        }
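
Combined with list_processes, this is enough for a simple status report (illustrative helper, not part of the API):

async def report(manager: ProcessManager) -> None:
    for process_id in manager.list_processes():
        info = await manager.get_process_info(process_id)
        state = "running" if info["is_running"] else f"exited ({info['exit_code']})"
        print(f"{process_id}: {info['command']} -> {state}")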
                    

                    kill_process async

                    kill_process(process_id: str) -> None
                    

                    Kill a running process.

Parameters:

    process_id (str): Process identifier. Required.

Raises:

    ValueError: If process not found.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def kill_process(self, process_id: str) -> None:
                        """Kill a running process.
                    
                        Args:
                            process_id: Process identifier
                    
                        Raises:
                            ValueError: If process not found
                        """
                        if process_id not in self._processes:
                            msg = f"Process {process_id} not found"
                            raise ValueError(msg)
                    
                        proc = self._processes[process_id]
                        await proc.kill()
                    
                        # Cancel output collection task
                        if process_id in self._output_tasks:
                            self._output_tasks[process_id].cancel()
                            with contextlib.suppress(asyncio.CancelledError):
                                await self._output_tasks[process_id]
                    
                        logger.info("Killed process %s", process_id)
                    

                    list_processes

                    list_processes() -> list[str]
                    

                    List all tracked process IDs.

                    Source code in src/llmling_agent/agent/process_manager.py
                    def list_processes(self) -> list[str]:
                        """List all tracked process IDs."""
                        return list(self._processes.keys())
                    

                    release_process async

                    release_process(process_id: str) -> None
                    

                    Release resources for a process.

Parameters:

    process_id (str): Process identifier. Required.

Raises:

    ValueError: If process not found.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def release_process(self, process_id: str) -> None:
                        """Release resources for a process.
                    
                        Args:
                            process_id: Process identifier
                    
                        Raises:
                            ValueError: If process not found
                        """
                        if process_id not in self._processes:
                            msg = f"Process {process_id} not found"
                            raise ValueError(msg)
                    
                        # Kill if still running
                        proc = self._processes[process_id]
                        if await proc.is_running():
                            await proc.kill()
                    
                        # Clean up tasks
                        if process_id in self._output_tasks:
                            self._output_tasks[process_id].cancel()
                            with contextlib.suppress(asyncio.CancelledError):
                                await self._output_tasks[process_id]
                            del self._output_tasks[process_id]
                    
                        # Remove from tracking
                        del self._processes[process_id]
                        logger.info("Released process %s", process_id)
                    

                    start_process async

                    start_process(
                        command: str,
                        args: list[str] | None = None,
                        cwd: str | Path | None = None,
                        env: dict[str, str] | None = None,
                        output_limit: int | None = None,
                    ) -> str
                    

                    Start a background process.

Parameters:

    command (str): Command to execute. Required.
    args (list[str] | None): Command arguments. Default: None.
    cwd (str | Path | None): Working directory. Default: None.
    env (dict[str, str] | None): Environment variables (added to current env). Default: None.
    output_limit (int | None): Maximum bytes of output to retain. Default: None.

Returns:

    str: Process ID for tracking.

Raises:

    OSError: If process creation fails.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def start_process(
                        self,
                        command: str,
                        args: list[str] | None = None,
                        cwd: str | Path | None = None,
                        env: dict[str, str] | None = None,
                        output_limit: int | None = None,
                    ) -> str:
                        """Start a background process.
                    
                        Args:
                            command: Command to execute
                            args: Command arguments
                            cwd: Working directory
                            env: Environment variables (added to current env)
                            output_limit: Maximum bytes of output to retain
                    
                        Returns:
                            Process ID for tracking
                    
                        Raises:
                            OSError: If process creation fails
                        """
                        process_id = f"proc_{uuid.uuid4().hex[:8]}"
                        args = args or []
                    
                        # Prepare environment
                        proc_env = dict(os.environ)
                        if env:
                            proc_env.update(env)
                    
                        # Convert cwd to Path if provided
                        work_dir = Path(cwd) if cwd else None
                    
                        try:
                            # Start process
                            process = await asyncio.create_subprocess_exec(
                                command,
                                *args,
                                cwd=work_dir,
                                env=proc_env,
                                stdout=asyncio.subprocess.PIPE,
                                stderr=asyncio.subprocess.PIPE,
                            )
                    
                            # Create tracking object
                            running_proc = RunningProcess(
                                process_id=process_id,
                                command=command,
                                args=args,
                                cwd=work_dir,
                                env=env or {},
                                process=process,
                                output_limit=output_limit,
                            )
                    
                            self._processes[process_id] = running_proc
                    
                            # Start output collection task
                            self._output_tasks[process_id] = asyncio.create_task(
                                self._collect_output(running_proc)
                            )
                    
                            logger.info("Started process %s: %s %s", process_id, command, " ".join(args))
                        except Exception as e:
                            msg = f"Failed to start process: {command} {' '.join(args)}"
                            logger.exception(msg, exc_info=e)
                            raise OSError(msg) from e
                        else:
                            return process_id
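
Note that env entries are merged on top of the inherited environment rather than replacing it, and output_limit caps the retained output, not the process itself. For example (worker.py is a hypothetical script):

async def start_worker(manager: ProcessManager) -> str:
    return await manager.start_process(
        "python",
        ["-u", "worker.py"],         # hypothetical script
        cwd="/tmp",
        env={"LOG_LEVEL": "debug"},  # merged on top of os.environ
        output_limit=64 * 1024,      # retain at most ~64 KiB of output
    )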
                    

                    wait_for_exit async

                    wait_for_exit(process_id: str) -> int
                    

                    Wait for process to complete.

Parameters:

    process_id (str): Process identifier. Required.

Returns:

    int: Exit code.

Raises:

    ValueError: If process not found.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def wait_for_exit(self, process_id: str) -> int:
                        """Wait for process to complete.
                    
                        Args:
                            process_id: Process identifier
                    
                        Returns:
                            Exit code
                    
                        Raises:
                            ValueError: If process not found
                        """
                        if process_id not in self._processes:
                            msg = f"Process {process_id} not found"
                            raise ValueError(msg)
                    
                        proc = self._processes[process_id]
                        exit_code = await proc.wait()
                    
                        # Wait for output collection to finish
                        if process_id in self._output_tasks:
                            await self._output_tasks[process_id]
                    
                        return exit_code
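
Since wait_for_exit also joins the output-collection task, a subsequent get_output sees the complete output. Waiting on several processes then reduces to a plain gather (illustrative sketch):

import asyncio

async def wait_all(manager: ProcessManager, process_ids: list[str]) -> dict[str, int]:
    codes = await asyncio.gather(*(manager.wait_for_exit(p) for p in process_ids))
    return dict(zip(process_ids, codes))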
                    

                    ProcessOutput dataclass

                    Output from a running process.

                    Source code in src/llmling_agent/agent/process_manager.py
                    @dataclass
                    class ProcessOutput:
                        """Output from a running process."""
                    
                        stdout: str
                        stderr: str
                        combined: str
                        truncated: bool = False
                        exit_code: int | None = None
                        signal: str | None = None
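
The fields map directly onto what callers usually branch on, for example:

out = ProcessOutput(stdout="ok\n", stderr="", combined="ok\n", exit_code=0)
if out.truncated:
    print("(older output was dropped to honor the limit)")
if out.exit_code not in (None, 0):
    print(f"process failed: {out.stderr}")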
                    

                    RunningProcess dataclass

                    Represents a running background process.

                    Source code in src/llmling_agent/agent/process_manager.py
                    @dataclass
                    class RunningProcess:
                        """Represents a running background process."""
                    
                        process_id: str
                        command: str
                        args: list[str]
                        cwd: Path | None
                        env: dict[str, str]
                        process: asyncio.subprocess.Process
                        created_at: datetime = field(default_factory=datetime.now)
                        output_limit: int | None = None
                        _stdout_buffer: list[str] = field(default_factory=list)
                        _stderr_buffer: list[str] = field(default_factory=list)
                        _output_size: int = 0
                        _truncated: bool = False
                    
                        def add_output(self, stdout: str = "", stderr: str = "") -> None:
                            """Add output to buffers, applying size limits."""
                            if stdout:
                                self._stdout_buffer.append(stdout)
                                self._output_size += len(stdout.encode())
                            if stderr:
                                self._stderr_buffer.append(stderr)
                                self._output_size += len(stderr.encode())
                    
                            # Apply truncation if limit exceeded
                            if self.output_limit and self._output_size > self.output_limit:
                                self._truncate_output()
                                self._truncated = True
                    
                        def _truncate_output(self) -> None:
                            """Truncate output from beginning to stay within limit."""
                            if not self.output_limit:
                                return
                    
                            # Combine all output to measure total size
                            all_stdout = "".join(self._stdout_buffer)
                            all_stderr = "".join(self._stderr_buffer)
                    
                            # Calculate how much to keep
                            target_size = int(self.output_limit * 0.9)  # Keep 90% of limit
                    
                            # Truncate stdout first, then stderr if needed
                            if len(all_stdout.encode()) > target_size:
                                # Find character boundary for truncation
                                truncated_stdout = all_stdout[-target_size:].lstrip()
                                self._stdout_buffer = [truncated_stdout]
                                self._stderr_buffer = [all_stderr]
                            else:
                                remaining = target_size - len(all_stdout.encode())
                                truncated_stderr = all_stderr[-remaining:].lstrip()
                                self._stdout_buffer = [all_stdout]
                                self._stderr_buffer = [truncated_stderr]
                    
                            # Update size counter
                            self._output_size = sum(
                                len(chunk.encode()) for chunk in self._stdout_buffer + self._stderr_buffer
                            )
                    
                        def get_output(self) -> ProcessOutput:
                            """Get current process output."""
                            stdout = "".join(self._stdout_buffer)
                            stderr = "".join(self._stderr_buffer)
                            combined = stdout + stderr
                    
                            # Check if process has exited
                            exit_code = self.process.returncode
                            signal = None  # TODO: Extract signal info if available
                    
                            return ProcessOutput(
                                stdout=stdout,
                                stderr=stderr,
                                combined=combined,
                                truncated=self._truncated,
                                exit_code=exit_code,
                                signal=signal,
                            )
                    
                        async def is_running(self) -> bool:
                            """Check if process is still running."""
                            return self.process.returncode is None
                    
                        async def wait(self) -> int:
                            """Wait for process to complete and return exit code."""
                            return await self.process.wait()
                    
                        async def kill(self) -> None:
                            """Terminate the process."""
                            if await self.is_running():
                                try:
                                    self.process.terminate()
                                    # Give it a moment to terminate gracefully
                                    try:
                                        await asyncio.wait_for(self.process.wait(), timeout=5.0)
                                    except TimeoutError:
                                        # Force kill if it doesn't terminate
                                        self.process.kill()
                                        await self.process.wait()
                                except ProcessLookupError:
                                    # Process already dead
                                    pass
                    

                    add_output

                    add_output(stdout: str = '', stderr: str = '') -> None
                    

                    Add output to buffers, applying size limits.

                    Source code in src/llmling_agent/agent/process_manager.py
                    def add_output(self, stdout: str = "", stderr: str = "") -> None:
                        """Add output to buffers, applying size limits."""
                        if stdout:
                            self._stdout_buffer.append(stdout)
                            self._output_size += len(stdout.encode())
                        if stderr:
                            self._stderr_buffer.append(stderr)
                            self._output_size += len(stderr.encode())
                    
                        # Apply truncation if limit exceeded
                        if self.output_limit and self._output_size > self.output_limit:
                            self._truncate_output()
                            self._truncated = True
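
The truncation policy keeps roughly 90% of output_limit, dropping the oldest stdout first: with output_limit=100, 200 bytes of stdout shrink to the most recent 90. A quick check of that arithmetic (it pokes at private buffers purely for illustration, and passes a placeholder for the process field):

proc = RunningProcess(
    process_id="demo", command="true", args=[], cwd=None, env={},
    process=None,  # type: ignore[arg-type]  # placeholder, never awaited here
    output_limit=100,
)
proc.add_output(stdout="x" * 200)
assert sum(len(c.encode()) for c in proc._stdout_buffer) == 90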
                    

                    get_output

                    get_output() -> ProcessOutput
                    

                    Get current process output.

                    Source code in src/llmling_agent/agent/process_manager.py
                    def get_output(self) -> ProcessOutput:
                        """Get current process output."""
                        stdout = "".join(self._stdout_buffer)
                        stderr = "".join(self._stderr_buffer)
                        combined = stdout + stderr
                    
                        # Check if process has exited
                        exit_code = self.process.returncode
                        signal = None  # TODO: Extract signal info if available
                    
                        return ProcessOutput(
                            stdout=stdout,
                            stderr=stderr,
                            combined=combined,
                            truncated=self._truncated,
                            exit_code=exit_code,
                            signal=signal,
                        )
                    

                    is_running async

                    is_running() -> bool
                    

                    Check if process is still running.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def is_running(self) -> bool:
                        """Check if process is still running."""
                        return self.process.returncode is None
                    

                    kill async

                    kill() -> None
                    

                    Terminate the process.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def kill(self) -> None:
                        """Terminate the process."""
                        if await self.is_running():
                            try:
                                self.process.terminate()
                                # Give it a moment to terminate gracefully
                                try:
                                    await asyncio.wait_for(self.process.wait(), timeout=5.0)
                                except TimeoutError:
                                    # Force kill if it doesn't terminate
                                    self.process.kill()
                                    await self.process.wait()
                            except ProcessLookupError:
                                # Process already dead
                                pass
                    

                    wait async

                    wait() -> int
                    

                    Wait for process to complete and return exit code.

                    Source code in src/llmling_agent/agent/process_manager.py
                    async def wait(self) -> int:
                        """Wait for process to complete and return exit code."""
                        return await self.process.wait()
                    

                    StructuredAgent

                    Bases: MessageNode[TDeps, TResult]

                    Wrapper for Agent that enforces a specific result type.

This wrapper ensures the agent always returns results of the specified type. The type can be provided as:

- A Python type for validation
- A response definition name from the manifest
- A complete response definition instance
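
A hedged sketch of wrapping an agent this way; the keyword name result_type and the exact run() return shape are assumptions here, so check the signature in the source below:

from pydantic import BaseModel

class Verdict(BaseModel):  # hypothetical result model
    approved: bool
    reason: str

async def review(agent: Agent):
    structured = StructuredAgent(agent, result_type=Verdict)  # assumed keyword name
    return await structured.run("Review this change")         # result validated as Verdict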

                    Source code in src/llmling_agent/agent/structured.py
                    362
                    363
                    364
                    365
                    366
                    367
                    368
                    369
                    370
                    371
                    372
                    373
                    374
                    375
                    376
                    377
                    378
                    379
                    380
                    381
                    382
                    383
                    384
                    385
                    386
                    387
                    388
                    389
                    390
                    391
                    392
                    393
                    394
                    395
                    396
                    397
                    class StructuredAgent[TDeps, TResult](MessageNode[TDeps, TResult]):
                        """Wrapper for Agent that enforces a specific result type.
                    
                        This wrapper ensures the agent always returns results of the specified type.
                        The type can be provided as:
                        - A Python type for validation
                        - A response definition name from the manifest
                        - A complete response definition instance
                        """
                    
                        def __init__(
                            self,
                            agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                            result_type: type[TResult] | str | StructuredResponseConfig,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ):
                            """Initialize structured agent wrapper.
                    
                            Args:
                                agent: Base agent to wrap
                                result_type: Expected result type:
                                    - BaseModel / dataclasses
                                    - Name of response definition in manifest
                                    - Complete response definition instance
                                tool_name: Optional override for tool name
                                tool_description: Optional override for tool description
                    
                            Raises:
                                ValueError: If named response type not found in manifest
                            """
                            from llmling_agent.agent.agent import Agent
                    
                            logger.debug("StructuredAgent.run result_type = %s", result_type)
                            match agent:
                                case StructuredAgent():
                                    self._agent: Agent[TDeps] = agent._agent
                                case Callable():
                                    self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                                case Agent():
                                    self._agent = agent
                                case _:
                                    msg = "Invalid agent type"
                                    raise ValueError(msg)
                    
                            super().__init__(name=self._agent.name)
                    
                            self._result_type = to_type(result_type)
                    
                            match result_type:
                                case type() | str():
                                    # For types and named definitions, use overrides if provided
                                    self._agent.set_result_type(
                                        result_type,
                                        tool_name=tool_name,
                                        tool_description=tool_description,
                                    )
                                case StructuredResponseConfig():
                                    # For response definitions, use as-is
                                    # (overrides don't apply to complete definitions)
                                    self._agent.set_result_type(result_type)
                    
                        async def __aenter__(self) -> Self:
                            """Enter async context and set up MCP servers.
                    
                            Called when agent enters its async context. Sets up any configured
                            MCP servers and their tools.
                            """
                            await self._agent.__aenter__()
                            return self
                    
                        async def __aexit__(
                            self,
                            exc_type: type[BaseException] | None,
                            exc_val: BaseException | None,
                            exc_tb: TracebackType | None,
                        ):
                            """Exit async context."""
                            await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                    
                        def __and__(
                            self, other: AnyAgent[Any, Any] | Team[Any] | ProcessorCallback[TResult]
                        ) -> Team[TDeps]:
                            return self._agent.__and__(other)
                    
                        def __or__(self, other: AnyAgent[Any, Any] | ProcessorCallback | BaseTeam) -> TeamRun:
                            return self._agent.__or__(other)
                    
                        async def _run(
                            self,
                            *prompt: AnyPromptType | TResult,
                            result_type: type[TResult] | None = None,
                            model: ModelType = None,
                            tool_choice: str | list[str] | None = None,
                            store_history: bool = True,
                            message_id: str | None = None,
                            conversation_id: str | None = None,
                            wait_for_connections: bool | None = None,
                        ) -> ChatMessage[TResult]:
                            """Run with fixed result type.
                    
                            Args:
                                prompt: Any prompt-compatible object or structured objects of type TResult
                                result_type: Expected result type:
                                    - BaseModel / dataclasses
                                    - Name of response definition in manifest
                                    - Complete response definition instance
                                model: Optional model override
                                tool_choice: Filter available tools by name
                                store_history: Whether the message exchange should be added to the
                                               context window
                                message_id: Optional message id for the returned message.
                                            Automatically generated if not provided.
                                conversation_id: Optional conversation id for the returned message.
                                wait_for_connections: Whether to wait for all connections to complete
                            """
                            typ = result_type or self._result_type
                            return await self._agent._run(
                                *prompt,
                                result_type=typ,  # type: ignore
                                model=model,
                                store_history=store_history,
                                tool_choice=tool_choice,
                                message_id=message_id,
                                conversation_id=conversation_id,
                                wait_for_connections=wait_for_connections,
                            )
                    
                        async def validate_against(
                            self,
                            prompt: str,
                            criteria: type[TResult],
                            **kwargs: Any,
                        ) -> bool:
                            """Check if agent's response satisfies stricter criteria."""
                            result = await self.run(prompt, **kwargs)
                            try:
                                criteria.model_validate(result.content.model_dump())  # type: ignore
                            except ValidationError:
                                return False
                            else:
                                return True
                    
                        def __repr__(self) -> str:
                            type_name = getattr(self._result_type, "__name__", str(self._result_type))
                            return f"StructuredAgent({self._agent!r}, result_type={type_name})"
                    
                        def __prompt__(self) -> str:
                            type_name = getattr(self._result_type, "__name__", str(self._result_type))
                            base_info = self._agent.__prompt__()
                            return f"{base_info}\nStructured output type: {type_name}"
                    
                        def __getattr__(self, name: str) -> Any:
                            return getattr(self._agent, name)
                    
                        @property
                        def context(self) -> AgentContext[TDeps]:
                            return self._agent.context
                    
                        @context.setter
                        def context(self, value: Any):
                            self._agent.context = value
                    
                        @property
                        def name(self) -> str:
                            return self._agent.name
                    
                        @name.setter
                        def name(self, value: str):
                            self._agent.name = value
                    
                        @property
                        def tools(self) -> ToolManager:
                            return self._agent.tools
                    
                        @property
                        def conversation(self) -> ConversationManager:
                            return self._agent.conversation
                    
                        @overload
                        def to_structured(
                            self,
                            result_type: None,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> Agent[TDeps]: ...
                    
                        @overload
                        def to_structured[TNewResult](
                            self,
                            result_type: type[TNewResult] | str | StructuredResponseConfig,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> StructuredAgent[TDeps, TNewResult]: ...
                    
                        def to_structured[TNewResult](
                            self,
                            result_type: type[TNewResult] | str | StructuredResponseConfig | None,
                            *,
                            tool_name: str | None = None,
                            tool_description: str | None = None,
                        ) -> Agent[TDeps] | StructuredAgent[TDeps, TNewResult]:
                            if result_type is None:
                                return self._agent
                    
                            return StructuredAgent(
                                self._agent,
                                result_type=result_type,
                                tool_name=tool_name,
                                tool_description=tool_description,
                            )
                    
                        @property
                        def stats(self) -> MessageStats:
                            return self._agent.stats
                    
                        async def run_iter(
                            self,
                            *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                            **kwargs: Any,
                        ) -> AsyncIterator[ChatMessage[Any]]:
                            """Forward run_iter to wrapped agent."""
                            async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                                yield message
                    
                        async def run_job(
                            self,
                            job: Job[TDeps, TResult],
                            *,
                            store_history: bool = True,
                            include_agent_tools: bool = True,
                        ) -> ChatMessage[TResult]:
                            """Execute a pre-defined job ensuring type compatibility.
                    
                            Args:
                                job: Job configuration to execute
                                store_history: Whether to add job execution to conversation history
                                include_agent_tools: Whether to include agent's tools alongside job tools
                    
                            Returns:
                                Task execution result
                    
                            Raises:
                                JobError: If job execution fails or types don't match
                                ValueError: If job configuration is invalid
                            """
                            from llmling_agent.tasks import JobError
                    
                            # Validate dependency requirement
                            if job.required_dependency is not None:  # noqa: SIM102
                                if not isinstance(self.context.data, job.required_dependency):
                                    msg = (
                                        f"Agent dependencies ({type(self.context.data)}) "
                                        f"don't match job requirement ({job.required_dependency})"
                                    )
                                    raise JobError(msg)
                    
                            # Validate return type requirement
                            if job.required_return_type != self._result_type:
                                msg = (
                                    f"Agent result type ({self._result_type}) "
                                    f"doesn't match job requirement ({job.required_return_type})"
                                )
                                raise JobError(msg)
                    
                            # Load task knowledge if provided
                            if job.knowledge:
                                # Add knowledge sources to context
                                resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                    job.knowledge.resources
                                )
                                for source in resources:
                                    await self.conversation.load_context_source(source)
                                for prompt in job.knowledge.prompts:
                                    await self.conversation.load_context_source(prompt)
                    
                            try:
                                # Register task tools temporarily
                                tools = job.get_tools()
                    
                                # Use temporary tools
                                with self._agent.tools.temporary_tools(
                                    tools, exclusive=not include_agent_tools
                                ):
                                    # Execute job using StructuredAgent's run to maintain type safety
                                    return await self.run(await job.get_prompt(), store_history=store_history)
                    
                            except Exception as e:
                                msg = f"Task execution failed: {e}"
                                logger.exception(msg)
                                raise JobError(msg) from e
                    
                        @classmethod
                        def from_callback(
                            cls,
                            callback: ProcessorCallback[TResult],
                            *,
                            name: str | None = None,
                            **kwargs: Any,
                        ) -> StructuredAgent[None, TResult]:
                            """Create a structured agent from a processing callback.
                    
                            Args:
                                callback: Function to process messages. Can be:
                                    - sync or async
                                    - with or without context
                                    - with explicit return type
                                name: Optional name for the agent
                                **kwargs: Additional arguments for agent
                    
                            Example:
                                ```python
                                class AnalysisResult(BaseModel):
                                    sentiment: float
                                    topics: list[str]
                    
                                def analyze(msg: str) -> AnalysisResult:
                                    return AnalysisResult(sentiment=0.8, topics=["tech"])
                    
                                analyzer = StructuredAgent.from_callback(analyze)
                                ```
                            """
                            from llmling_agent.agent.agent import Agent
                            from llmling_agent_providers.callback import CallbackProvider
                    
                            name = name or callback.__name__ or "processor"
                            provider = CallbackProvider(callback, name=name)
                            agent = Agent[None](provider=provider, name=name, **kwargs)
                            # Get return type from signature for validation
                            hints = get_type_hints(callback)
                            return_type = hints.get("return")
                    
                            # If async, unwrap from Awaitable
                            if (
                                return_type
                                and hasattr(return_type, "__origin__")
                                and return_type.__origin__ is Awaitable
                            ):
                                return_type = return_type.__args__[0]
                            return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
                    
                        def is_busy(self) -> bool:
                            """Check if agent is currently processing tasks."""
                            return bool(self._pending_tasks or self._background_task)
                    
                        def run_sync(self, *args, **kwargs):
                            """Run agent synchronously."""
                            return self._agent.run_sync(*args, result_type=self._result_type, **kwargs)
                    

                    __aenter__ async

                    __aenter__() -> Self
                    

                    Enter async context and set up MCP servers.

                    Called when agent enters its async context. Sets up any configured MCP servers and their tools.

                    Source code in src/llmling_agent/agent/structured.py
                    async def __aenter__(self) -> Self:
                        """Enter async context and set up MCP servers.
                    
                        Called when agent enters its async context. Sets up any configured
                        MCP servers and their tools.
                        """
                        await self._agent.__aenter__()
                        return self
                    

                    __aexit__ async

                    __aexit__(
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    )
                    

                    Exit async context.

                    Source code in src/llmling_agent/agent/structured.py
                    async def __aexit__(
                        self,
                        exc_type: type[BaseException] | None,
                        exc_val: BaseException | None,
                        exc_tb: TracebackType | None,
                    ):
                        """Exit async context."""
                        await self._agent.__aexit__(exc_type, exc_val, exc_tb)
                    

                    __init__

                    __init__(
                        agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                        result_type: type[TResult] | str | StructuredResponseConfig,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    )
                    

                    Initialize structured agent wrapper.

Parameters:

    agent (Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult]):
        Base agent to wrap. Required.

    result_type (type[TResult] | str | StructuredResponseConfig):
        Expected result type: a BaseModel/dataclass, the name of a response
        definition in the manifest, or a complete response definition
        instance. Required.

    tool_name (str | None):
        Optional override for tool name. Default: None.

    tool_description (str | None):
        Optional override for tool description. Default: None.

Raises:

    ValueError: If the named response type is not found in the manifest.

                    Source code in src/llmling_agent/agent/structured.py
                    def __init__(
                        self,
                        agent: Agent[TDeps] | StructuredAgent[TDeps, TResult] | Callable[..., TResult],
                        result_type: type[TResult] | str | StructuredResponseConfig,
                        *,
                        tool_name: str | None = None,
                        tool_description: str | None = None,
                    ):
                        """Initialize structured agent wrapper.
                    
                        Args:
                            agent: Base agent to wrap
                            result_type: Expected result type:
                                - BaseModel / dataclasses
                                - Name of response definition in manifest
                                - Complete response definition instance
                            tool_name: Optional override for tool name
                            tool_description: Optional override for tool description
                    
                        Raises:
                            ValueError: If named response type not found in manifest
                        """
                        from llmling_agent.agent.agent import Agent
                    
                        logger.debug("StructuredAgent.run result_type = %s", result_type)
                        match agent:
                            case StructuredAgent():
                                self._agent: Agent[TDeps] = agent._agent
                            case Callable():
                                self._agent = Agent[TDeps](provider=agent, name=agent.__name__)
                            case Agent():
                                self._agent = agent
                            case _:
                                msg = "Invalid agent type"
                                raise ValueError(msg)
                    
                        super().__init__(name=self._agent.name)
                    
                        self._result_type = to_type(result_type)
                    
                        match result_type:
                            case type() | str():
                                # For types and named definitions, use overrides if provided
                                self._agent.set_result_type(
                                    result_type,
                                    tool_name=tool_name,
                                    tool_description=tool_description,
                                )
                            case StructuredResponseConfig():
                                # For response definitions, use as-is
                                # (overrides don't apply to complete definitions)
                                self._agent.set_result_type(result_type)
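
The match arms above correspond to the accepted result_type forms. A hedged sketch (base_agent, Summary, and response_config are assumed to exist; the manifest name is illustrative):

```python
# 1) A Python type (BaseModel / dataclass), validated directly:
agent_typed = StructuredAgent(base_agent, result_type=Summary)

# 2) The name of a response definition from the manifest
#    (raises ValueError if the name is not found):
agent_named = StructuredAgent(base_agent, result_type="summary_response")

# 3) A complete StructuredResponseConfig instance, used as-is;
#    tool_name / tool_description overrides do not apply to this form:
agent_config = StructuredAgent(base_agent, result_type=response_config)
```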
                    

                    from_callback classmethod

                    from_callback(
                        callback: ProcessorCallback[TResult], *, name: str | None = None, **kwargs: Any
                    ) -> StructuredAgent[None, TResult]
                    

                    Create a structured agent from a processing callback.

Parameters:

    callback (ProcessorCallback[TResult]):
        Function to process messages. Can be sync or async, with or without
        context, and with an explicit return type. Required.

    name (str | None):
        Optional name for the agent. Default: None.

    **kwargs (Any):
        Additional arguments for the agent. Default: {}.
                    Example
                    class AnalysisResult(BaseModel):
                        sentiment: float
                        topics: list[str]
                    
                    def analyze(msg: str) -> AnalysisResult:
                        return AnalysisResult(sentiment=0.8, topics=["tech"])
                    
                    analyzer = StructuredAgent.from_callback(analyze)
                    
                    Source code in src/llmling_agent/agent/structured.py
                    @classmethod
                    def from_callback(
                        cls,
                        callback: ProcessorCallback[TResult],
                        *,
                        name: str | None = None,
                        **kwargs: Any,
                    ) -> StructuredAgent[None, TResult]:
                        """Create a structured agent from a processing callback.
                    
                        Args:
                            callback: Function to process messages. Can be:
                                - sync or async
                                - with or without context
                                - with explicit return type
                            name: Optional name for the agent
                            **kwargs: Additional arguments for agent
                    
                        Example:
                            ```python
                            class AnalysisResult(BaseModel):
                                sentiment: float
                                topics: list[str]
                    
                            def analyze(msg: str) -> AnalysisResult:
                                return AnalysisResult(sentiment=0.8, topics=["tech"])
                    
                            analyzer = StructuredAgent.from_callback(analyze)
                            ```
                        """
                        from llmling_agent.agent.agent import Agent
                        from llmling_agent_providers.callback import CallbackProvider
                    
                        name = name or callback.__name__ or "processor"
                        provider = CallbackProvider(callback, name=name)
                        agent = Agent[None](provider=provider, name=name, **kwargs)
                        # Get return type from signature for validation
                        hints = get_type_hints(callback)
                        return_type = hints.get("return")
                    
                        # If async, unwrap from Awaitable
                        if (
                            return_type
                            and hasattr(return_type, "__origin__")
                            and return_type.__origin__ is Awaitable
                        ):
                            return_type = return_type.__args__[0]
                        return StructuredAgent[None, TResult](agent, return_type or str)  # type: ignore
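
Async callbacks work as well. Note that for an `async def`, get_type_hints already reports the plain return type; the Awaitable-unwrapping branch above covers callables explicitly annotated as returning Awaitable[...]. A sketch (AnalysisResult is the model from the docstring example):

```python
import asyncio


async def analyze_async(msg: str) -> AnalysisResult:
    await asyncio.sleep(0)  # stand-in for real async work
    return AnalysisResult(sentiment=0.5, topics=["demo"])


# The result type is still inferred as AnalysisResult.
analyzer = StructuredAgent.from_callback(analyze_async)
```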
                    

                    is_busy

                    is_busy() -> bool
                    

                    Check if agent is currently processing tasks.

                    Source code in src/llmling_agent/agent/structured.py
                    def is_busy(self) -> bool:
                        """Check if agent is currently processing tasks."""
                        return bool(self._pending_tasks or self._background_task)
                    

                    run_iter async

                    run_iter(
                        *prompt_groups: Sequence[AnyPromptType | Image | PathLike[str]], **kwargs: Any
                    ) -> AsyncIterator[ChatMessage[Any]]
                    

                    Forward run_iter to wrapped agent.

                    Source code in src/llmling_agent/agent/structured.py
                    async def run_iter(
                        self,
                        *prompt_groups: Sequence[AnyPromptType | PIL.Image.Image | os.PathLike[str]],
                        **kwargs: Any,
                    ) -> AsyncIterator[ChatMessage[Any]]:
                        """Forward run_iter to wrapped agent."""
                        async for message in self._agent.run_iter(*prompt_groups, **kwargs):
                            yield message
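
At the call site, each positional argument is one prompt group, per the forwarded signature; a sketch (the prompts are illustrative):

```python
async def iterate(agent: StructuredAgent[None, str]) -> None:
    async for message in agent.run_iter(
        ["Summarize document A"],  # first prompt group
        ["Summarize document B"],  # second prompt group
    ):
        print(message.content)
```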
                    

                    run_job async

                    run_job(
                        job: Job[TDeps, TResult],
                        *,
                        store_history: bool = True,
                        include_agent_tools: bool = True,
                    ) -> ChatMessage[TResult]
                    

                    Execute a pre-defined job ensuring type compatibility.

Parameters:

    job (Job[TDeps, TResult]):
        Job configuration to execute. Required.

    store_history (bool):
        Whether to add job execution to conversation history. Default: True.

    include_agent_tools (bool):
        Whether to include the agent's tools alongside job tools. Default: True.

Returns:

    ChatMessage[TResult]: Task execution result.

Raises:

    JobError: If job execution fails or types don't match.
    ValueError: If job configuration is invalid.

                    Source code in src/llmling_agent/agent/structured.py
                    async def run_job(
                        self,
                        job: Job[TDeps, TResult],
                        *,
                        store_history: bool = True,
                        include_agent_tools: bool = True,
                    ) -> ChatMessage[TResult]:
                        """Execute a pre-defined job ensuring type compatibility.
                    
                        Args:
                            job: Job configuration to execute
                            store_history: Whether to add job execution to conversation history
                            include_agent_tools: Whether to include agent's tools alongside job tools
                    
                        Returns:
                            Task execution result
                    
                        Raises:
                            JobError: If job execution fails or types don't match
                            ValueError: If job configuration is invalid
                        """
                        from llmling_agent.tasks import JobError
                    
                        # Validate dependency requirement
                        if job.required_dependency is not None:  # noqa: SIM102
                            if not isinstance(self.context.data, job.required_dependency):
                                msg = (
                                    f"Agent dependencies ({type(self.context.data)}) "
                                    f"don't match job requirement ({job.required_dependency})"
                                )
                                raise JobError(msg)
                    
                        # Validate return type requirement
                        if job.required_return_type != self._result_type:
                            msg = (
                                f"Agent result type ({self._result_type}) "
                                f"doesn't match job requirement ({job.required_return_type})"
                            )
                            raise JobError(msg)
                    
                        # Load task knowledge if provided
                        if job.knowledge:
                            # Add knowledge sources to context
                            resources: list[Resource | str] = list(job.knowledge.paths) + list(
                                job.knowledge.resources
                            )
                            for source in resources:
                                await self.conversation.load_context_source(source)
                            for prompt in job.knowledge.prompts:
                                await self.conversation.load_context_source(prompt)
                    
                        try:
                            # Register task tools temporarily
                            tools = job.get_tools()
                    
                            # Use temporary tools
                            with self._agent.tools.temporary_tools(
                                tools, exclusive=not include_agent_tools
                            ):
                                # Execute job using StructuredAgent's run to maintain type safety
                                return await self.run(await job.get_prompt(), store_history=store_history)
                    
                        except Exception as e:
                            msg = f"Task execution failed: {e}"
                            logger.exception(msg)
                            raise JobError(msg) from e
                    

                    run_sync

                    run_sync(*args, **kwargs)
                    

                    Run agent synchronously.

                    Source code in src/llmling_agent/agent/structured.py
                    def run_sync(self, *args, **kwargs):
                        """Run agent synchronously."""
                        return self._agent.run_sync(*args, result_type=self._result_type, **kwargs)
                    

                    validate_against async

                    validate_against(prompt: str, criteria: type[TResult], **kwargs: Any) -> bool
                    

                    Check if agent's response satisfies stricter criteria.

                    Source code in src/llmling_agent/agent/structured.py
                    async def validate_against(
                        self,
                        prompt: str,
                        criteria: type[TResult],
                        **kwargs: Any,
                    ) -> bool:
                        """Check if agent's response satisfies stricter criteria."""
                        result = await self.run(prompt, **kwargs)
                        try:
                            criteria.model_validate(result.content.model_dump())  # type: ignore
                        except ValidationError:
                            return False
                        else:
                            return True
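
In practice this lets a subclass with tighter constraints serve as the stricter criteria; a sketch (the models are illustrative):

```python
from pydantic import BaseModel, Field


class Score(BaseModel):
    value: float


class StrictScore(Score):
    value: float = Field(ge=0.0, le=1.0)  # tighter bounds than Score


async def check(agent: StructuredAgent[None, Score]) -> bool:
    # True only if the agent's Score response also validates as StrictScore.
    return await agent.validate_against("Rate this text", StrictScore)
```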
                    

                    SystemPrompts

                    Manages system prompts for an agent.

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    class SystemPrompts:
                        """Manages system prompts for an agent."""
                    
                        def __init__(
                            self,
                            prompts: AnyPromptType | list[AnyPromptType] | None = None,
                            template: str | None = None,
                            dynamic: bool = True,
                            context: AgentContext | None = None,
                            inject_agent_info: bool = True,
                            inject_tools: ToolInjectionMode = "off",
                            tool_usage_style: ToolUsageStyle = "suggestive",
                        ):
                            """Initialize prompt manager."""
                            from jinjarope import Environment
                            from toprompt import to_prompt
                    
                            match prompts:
                                case list():
                                    self.prompts = prompts
                                case None:
                                    self.prompts = []
                                case _:
                                    self.prompts = [prompts]
                            self.context = context
                            self.template = template
                            self.dynamic = dynamic
                            self.inject_agent_info = inject_agent_info
                            self.inject_tools = inject_tools
                            self.tool_usage_style = tool_usage_style
                            self._cached = False
                            self._env = Environment(enable_async=True)
                            self._env.filters["to_prompt"] = to_prompt
                    
                        def __repr__(self) -> str:
                            return (
                                f"SystemPrompts(prompts={len(self.prompts)}, "
                                f"dynamic={self.dynamic}, inject_agent_info={self.inject_agent_info}, "
                                f"inject_tools={self.inject_tools!r})"
                            )
                    
                        def __len__(self) -> int:
                            return len(self.prompts)
                    
                        def __getitem__(self, idx: int | slice) -> AnyPromptType | list[AnyPromptType]:
                            return self.prompts[idx]
                    
                        async def add_by_reference(self, reference: str):
                            """Add a system prompt using reference syntax.
                    
                            Args:
                                reference: [provider:]identifier[@version][?var1=val1,...]
                    
                            Examples:
                                await sys_prompts.add_by_reference("code_review?language=python")
                                await sys_prompts.add_by_reference("langfuse:expert@v2")
                            """
                            if not self.context:
                                msg = "No context available to resolve prompts"
                                raise RuntimeError(msg)
                    
                            try:
                                content = await self.context.prompt_manager.get(reference)
                                self.prompts.append(content)
                            except Exception as e:
                                msg = f"Failed to add prompt {reference!r}"
                                raise RuntimeError(msg) from e
                    
                        async def add(
                            self,
                            identifier: str,
                            *,
                            provider: str | None = None,
                            version: str | None = None,
                            variables: dict[str, Any] | None = None,
                        ):
                            """Add a system prompt using explicit parameters.
                    
                            Args:
                                identifier: Prompt identifier/name
                                provider: Provider name (None = builtin)
                                version: Optional version string
                                variables: Optional template variables
                    
                            Examples:
                                await sys_prompts.add("code_review", variables={"language": "python"})
                                await sys_prompts.add("expert", provider="langfuse", version="v2")
                            """
                            if not self.context:
                                msg = "No context available to resolve prompts"
                                raise RuntimeError(msg)
                    
                            try:
                                content = await self.context.prompt_manager.get_from(
                                    identifier,
                                    provider=provider,
                                    version=version,
                                    variables=variables,
                                )
                                self.prompts.append(content)
                            except Exception as e:
                                ref = f"{provider + ':' if provider else ''}{identifier}"
                                msg = f"Failed to add prompt {ref!r}"
                                raise RuntimeError(msg) from e
                    
                        def clear(self):
                            """Clear all system prompts."""
                            self.prompts = []
                    
                        async def refresh_cache(self):
                            """Force re-evaluation of prompts."""
                            from toprompt import to_prompt
                    
                            evaluated = []
                            for prompt in self.prompts:
                                result = await to_prompt(prompt)
                                evaluated.append(result)
                            self.prompts = evaluated
                            self._cached = True
                    
                        @asynccontextmanager
                        async def temporary_prompt(
                            self, prompt: AnyPromptType, exclusive: bool = False
                        ) -> AsyncIterator[None]:
                            """Temporarily override system prompts.
                    
                            Args:
                                prompt: Single prompt or sequence of prompts to use temporarily
            exclusive: Whether to use only the given prompt. If False, the
                       prompt is appended to the agent's prompts temporarily.
                            """
                            from toprompt import to_prompt
                    
                            original_prompts = self.prompts.copy()
                            new_prompt = await to_prompt(prompt)
        self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                            try:
                                yield
                            finally:
                                self.prompts = original_prompts
                    
                        async def format_system_prompt(self, agent: AnyAgent[Any, Any]) -> str:
                            """Format complete system prompt."""
                            if not self.dynamic and not self._cached:
                                await self.refresh_cache()
                    
                            template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                            result = await template.render_async(
                                agent=agent,
                                prompts=self.prompts,
                                dynamic=self.dynamic,
                                inject_agent_info=self.inject_agent_info,
                                inject_tools=self.inject_tools,
                                tool_usage_style=self.tool_usage_style,
                            )
                            return result.strip()
                    

                    __init__

                    __init__(
                        prompts: AnyPromptType | list[AnyPromptType] | None = None,
                        template: str | None = None,
                        dynamic: bool = True,
                        context: AgentContext | None = None,
                        inject_agent_info: bool = True,
                        inject_tools: ToolInjectionMode = "off",
                        tool_usage_style: ToolUsageStyle = "suggestive",
                    )
                    

                    Initialize prompt manager.

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    def __init__(
                        self,
                        prompts: AnyPromptType | list[AnyPromptType] | None = None,
                        template: str | None = None,
                        dynamic: bool = True,
                        context: AgentContext | None = None,
                        inject_agent_info: bool = True,
                        inject_tools: ToolInjectionMode = "off",
                        tool_usage_style: ToolUsageStyle = "suggestive",
                    ):
                        """Initialize prompt manager."""
                        from jinjarope import Environment
                        from toprompt import to_prompt
                    
                        match prompts:
                            case list():
                                self.prompts = prompts
                            case None:
                                self.prompts = []
                            case _:
                                self.prompts = [prompts]
                        self.context = context
                        self.template = template
                        self.dynamic = dynamic
                        self.inject_agent_info = inject_agent_info
                        self.inject_tools = inject_tools
                        self.tool_usage_style = tool_usage_style
                        self._cached = False
                        self._env = Environment(enable_async=True)
                        self._env.filters["to_prompt"] = to_prompt
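A minimal construction sketch under the signature above; the prompt strings are illustrative, and no provider context is attached, so only locally supplied prompts are available:

    from llmling_agent.agent.sys_prompts import SystemPrompts

    # A single prompt is normalized to a one-element list by the match statement.
    sys_prompts = SystemPrompts("You are a concise assistant.")
    assert sys_prompts.prompts == ["You are a concise assistant."]

    # A list is stored as-is; dynamic=False means the prompts are frozen once
    # via refresh_cache() on the first format_system_prompt() call.
    static = SystemPrompts(
        prompts=["You are a reviewer.", "Answer in bullet points."],
        dynamic=False,
    )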
                    

                    add async

                    add(
                        identifier: str,
                        *,
                        provider: str | None = None,
                        version: str | None = None,
                        variables: dict[str, Any] | None = None,
                    )
                    

                    Add a system prompt using explicit parameters.

Parameters:

Name        Type                   Description                      Default
identifier  str                    Prompt identifier/name           required
provider    str | None             Provider name (None = builtin)   None
version     str | None             Optional version string          None
variables   dict[str, Any] | None  Optional template variables      None

                    Examples:

                    await sys_prompts.add("code_review", variables={"language": "python"}) await sys_prompts.add("expert", provider="langfuse", version="v2")

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    async def add(
                        self,
                        identifier: str,
                        *,
                        provider: str | None = None,
                        version: str | None = None,
                        variables: dict[str, Any] | None = None,
                    ):
                        """Add a system prompt using explicit parameters.
                    
                        Args:
                            identifier: Prompt identifier/name
                            provider: Provider name (None = builtin)
                            version: Optional version string
                            variables: Optional template variables
                    
                        Examples:
                            await sys_prompts.add("code_review", variables={"language": "python"})
                            await sys_prompts.add("expert", provider="langfuse", version="v2")
                        """
                        if not self.context:
                            msg = "No context available to resolve prompts"
                            raise RuntimeError(msg)
                    
                        try:
                            content = await self.context.prompt_manager.get_from(
                                identifier,
                                provider=provider,
                                version=version,
                                variables=variables,
                            )
                            self.prompts.append(content)
                        except Exception as e:
                            ref = f"{provider + ':' if provider else ''}{identifier}"
                            msg = f"Failed to add prompt {ref!r}"
                            raise RuntimeError(msg) from e
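A hedged usage sketch: it assumes the manager was built with a context whose prompt_manager can resolve these identifiers (both names are illustrative), and shows the RuntimeError wrapping performed above:

    async def extend_prompts(sys_prompts) -> None:
        try:
            await sys_prompts.add("code_review", variables={"language": "python"})
            await sys_prompts.add("expert", provider="langfuse", version="v2")
        except RuntimeError as exc:
            # add() raises RuntimeError both when no context is set and when the
            # provider lookup fails; the original error is chained as __cause__.
            print(f"prompt lookup failed: {exc} (cause: {exc.__cause__!r})")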
                    

                    add_by_reference async

                    add_by_reference(reference: str)
                    

                    Add a system prompt using reference syntax.

Parameters:

Name       Type  Description                                       Default
reference  str   [provider:]identifier[@version][?var1=val1,...]   required

                    Examples:

                    await sys_prompts.add_by_reference("code_review?language=python") await sys_prompts.add_by_reference("langfuse:expert@v2")

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    async def add_by_reference(self, reference: str):
                        """Add a system prompt using reference syntax.
                    
                        Args:
                            reference: [provider:]identifier[@version][?var1=val1,...]
                    
                        Examples:
                            await sys_prompts.add_by_reference("code_review?language=python")
                            await sys_prompts.add_by_reference("langfuse:expert@v2")
                        """
                        if not self.context:
                            msg = "No context available to resolve prompts"
                            raise RuntimeError(msg)
                    
                        try:
                            content = await self.context.prompt_manager.get(reference)
                            self.prompts.append(content)
                        except Exception as e:
                            msg = f"Failed to add prompt {reference!r}"
                            raise RuntimeError(msg) from e
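To make the reference grammar concrete, here is an illustrative regex-based parse of [provider:]identifier[@version][?var1=val1,...]. This is a standalone sketch of the syntax only, not the library's actual parsing code (which lives in the prompt manager):

    import re

    _REF = re.compile(
        r"^(?:(?P<provider>[^:@?]+):)?"   # optional "provider:"
        r"(?P<identifier>[^@?]+)"          # required identifier
        r"(?:@(?P<version>[^?]+))?"        # optional "@version"
        r"(?:\?(?P<vars>.+))?$"            # optional "?var1=val1,..."
    )

    def parse_reference(reference: str) -> dict:
        m = _REF.match(reference)
        if m is None:
            raise ValueError(f"Bad reference: {reference!r}")
        variables = dict(
            pair.split("=", 1) for pair in (m["vars"] or "").split(",") if pair
        )
        return {
            "provider": m["provider"],
            "identifier": m["identifier"],
            "version": m["version"],
            "variables": variables,
        }

    assert parse_reference("langfuse:expert@v2")["provider"] == "langfuse"
    assert parse_reference("code_review?language=python")["variables"] == {
        "language": "python"
    }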
                    

                    clear

                    clear()
                    

                    Clear all system prompts.

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    def clear(self):
                        """Clear all system prompts."""
                        self.prompts = []
                    

                    format_system_prompt async

                    format_system_prompt(agent: AnyAgent[Any, Any]) -> str
                    

                    Format complete system prompt.

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    async def format_system_prompt(self, agent: AnyAgent[Any, Any]) -> str:
                        """Format complete system prompt."""
                        if not self.dynamic and not self._cached:
                            await self.refresh_cache()
                    
                        template = self._env.from_string(self.template or DEFAULT_TEMPLATE)
                        result = await template.render_async(
                            agent=agent,
                            prompts=self.prompts,
                            dynamic=self.dynamic,
                            inject_agent_info=self.inject_agent_info,
                            inject_tools=self.inject_tools,
                            tool_usage_style=self.tool_usage_style,
                        )
                        return result.strip()
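A small render sketch; `agent` stands in for any already-constructed Agent, and the body only checks that a string comes back:

    async def show_prompt(agent, sys_prompts) -> str:
        # With dynamic=False and no cache yet, this call first runs
        # refresh_cache() once, freezing prompts to strings before templating.
        rendered = await sys_prompts.format_system_prompt(agent)
        assert isinstance(rendered, str)
        return rendered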
                    

                    refresh_cache async

                    refresh_cache()
                    

                    Force re-evaluation of prompts.

                    Source code in src/llmling_agent/agent/sys_prompts.py
                    async def refresh_cache(self):
                        """Force re-evaluation of prompts."""
                        from toprompt import to_prompt
                    
                        evaluated = []
                        for prompt in self.prompts:
                            result = await to_prompt(prompt)
                            evaluated.append(result)
                        self.prompts = evaluated
                        self._cached = True
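A minimal caching sketch, assuming toprompt.to_prompt passes plain strings through unchanged (its role here is to evaluate richer AnyPromptType values into strings):

    import asyncio

    from llmling_agent.agent.sys_prompts import SystemPrompts

    async def main() -> None:
        sp = SystemPrompts(["Always cite sources."], dynamic=False)
        await sp.refresh_cache()
        print(sp.prompts)  # evaluated prompt strings
        # _cached is internal state; once set, format_system_prompt()
        # skips re-evaluation for non-dynamic managers.

    asyncio.run(main())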
                    

                    temporary_prompt async

                    temporary_prompt(prompt: AnyPromptType, exclusive: bool = False) -> AsyncIterator[None]
                    

                    Temporarily override system prompts.

Parameters:

Name       Type           Description                                               Default
prompt     AnyPromptType  Single prompt or sequence of prompts to use temporarily   required
exclusive  bool           Whether to use only the given prompt. If False, the
                          prompt is appended to the agent's prompts temporarily.    False
                    Source code in src/llmling_agent/agent/sys_prompts.py
                    @asynccontextmanager
                    async def temporary_prompt(
                        self, prompt: AnyPromptType, exclusive: bool = False
                    ) -> AsyncIterator[None]:
                        """Temporarily override system prompts.
                    
                        Args:
                            prompt: Single prompt or sequence of prompts to use temporarily
        exclusive: Whether to use only the given prompt. If False, the
                   prompt is appended to the agent's prompts temporarily.
                        """
                        from toprompt import to_prompt
                    
                        original_prompts = self.prompts.copy()
                        new_prompt = await to_prompt(prompt)
    self.prompts = [new_prompt] if exclusive else [*self.prompts, new_prompt]
                        try:
                            yield
                        finally:
                            self.prompts = original_prompts
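A usage sketch for the context manager; the override text is illustrative. With exclusive=True only the temporary prompt is active inside the block, and the original list is restored on exit even if the body raises:

    async def with_override(agent, sys_prompts) -> str:
        async with sys_prompts.temporary_prompt(
            "Answer only in JSON.", exclusive=True
        ):
            rendered = await sys_prompts.format_system_prompt(agent)
        # Back to the original prompt list here.
        return rendered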