interactions

Class info

Classes

Name          Module                             Description
ChatMessage   llmling_agent.messaging.messages   Common message format for all UI types.
Interactions  llmling_agent.agent.interactions   Manages agent communication patterns.
LLMMultiPick  llmling_agent.agent.interactions   Multiple selection format for LLM response.
LLMPick       llmling_agent.agent.interactions   Decision format for LLM response.
MultiPick     llmling_agent.agent.interactions   Type-safe multiple selection with original objects.
Pick          llmling_agent.agent.interactions   Type-safe decision with original object.
ToolInfo      llmling_agent.tools.base           Information about a registered tool.

                🛈 DocStrings

                Agent interaction patterns.

                Interactions

                Manages agent communication patterns.
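
                Example (a minimal sketch; `agent` is assumed to be an already-configured
                AnyAgent instance, and its setup is omitted):

                    from llmling_agent.agent.interactions import Interactions

                    async def choose_reviewer(agent, candidates: list[str]) -> None:
                        # Wrap an existing agent in the interaction helper
                        interactions = Interactions(agent)
                        # Ask the agent to pick one candidate and explain why
                        pick = await interactions.pick(candidates, task="Choose the best reviewer")
                        print(pick.selection, pick.reason)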

                Source code in src/llmling_agent/agent/interactions.py
                class Interactions[TDeps, TResult]:
                    """Manages agent communication patterns."""
                
                    def __init__(self, agent: AnyAgent[TDeps, TResult]):
                        self.agent = agent
                
                    async def conversation(
                        self,
                        other: MessageNode[Any, Any],
                        initial_message: AnyPromptType,
                        *,
                        max_rounds: int | None = None,
                        end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                        | None = None,
                        store_history: bool = True,
                    ) -> AsyncIterator[ChatMessage[Any]]:
                        """Maintain conversation between two agents.
                
                        Args:
                            other: Agent to converse with
                            initial_message: Message to start conversation with
                            max_rounds: Optional maximum number of exchanges
                            end_condition: Optional predicate to check for conversation end
                            store_history: Whether to store in conversation history
                
                        Yields:
                            Messages from both agents in conversation order
                        """
                        rounds = 0
                        messages: list[ChatMessage[Any]] = []
                        current_message = initial_message
                        current_node: MessageNode[Any, Any] = self.agent
                
                        while True:
                            if max_rounds and rounds >= max_rounds:
                                logger.debug("Conversation ended: max rounds (%d) reached", max_rounds)
                                return
                
                            response = await current_node.run(
                                current_message, store_history=store_history
                            )
                            messages.append(response)
                            yield response
                
                            if end_condition and end_condition(messages, response):
                                logger.debug("Conversation ended: end condition met")
                                return
                
                            # Switch agents for next round
                            current_node = other if current_node == self.agent else self.agent
                            current_message = response.content
                            rounds += 1
                
                    @overload
                    async def pick[T: AnyPromptType](
                        self,
                        selections: Sequence[T],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]: ...
                
                    @overload
                    async def pick[T: AnyPromptType](
                        self,
                        selections: Sequence[T],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]: ...
                
                    @overload
                    async def pick[T: AnyPromptType](
                        self,
                        selections: Mapping[str, T],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]: ...
                
                    @overload
                    async def pick(
                        self,
                        selections: AgentPool,
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[AnyAgent[Any, Any]]: ...
                
                    @overload
                    async def pick(
                        self,
                        selections: BaseTeam[TDeps, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[MessageNode[TDeps, Any]]: ...
                
                    async def pick[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]:
                        """Pick from available options with reasoning.
                
                        Args:
                            selections: What to pick from:
                                - Sequence of items (auto-labeled)
                                - Dict mapping labels to items
                                - AgentPool
                                - Team
                            task: Task/decision description
                            prompt: Optional custom selection prompt
                
                        Returns:
                            Decision with selected item and reasoning
                
                        Raises:
                            ValueError: If no choices available or invalid selection
                        """
                        # Get items and create label mapping
                        from toprompt import to_prompt
                
                        from llmling_agent.delegation.base_team import BaseTeam
                        from llmling_agent.delegation.pool import AgentPool
                
                        match selections:
                            case dict():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.agents)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        # Get descriptions for all items
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                Select ONE option by its exact label."""
                
                        # Get LLM's string-based decision
                        result = await self.agent.to_structured(LLMPick).run(prompt or default_prompt)
                
                        # Convert to type-safe decision
                        if result.content.selection not in label_map:
                            msg = f"Invalid selection: {result.content.selection}"
                            raise ValueError(msg)
                
                        selected = cast(T, label_map[result.content.selection])
                        return Pick(selection=selected, reason=result.content.reason)
                
                    @overload
                    async def pick_multiple[T: AnyPromptType](
                        self,
                        selections: Sequence[T],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]: ...
                
                    @overload
                    async def pick_multiple[T: AnyPromptType](
                        self,
                        selections: Mapping[str, T],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]: ...
                
                    @overload
                    async def pick_multiple(
                        self,
                        selections: BaseTeam[TDeps, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[MessageNode[TDeps, Any]]: ...
                
                    @overload
                    async def pick_multiple(
                        self,
                        selections: AgentPool,
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[AnyAgent[Any, Any]]: ...
                
                    async def pick_multiple[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]:
                        """Pick multiple options from available choices.
                
                        Args:
                            selections: What to pick from
                            task: Task/decision description
                            min_picks: Minimum number of selections required
                            max_picks: Maximum number of selections (None for unlimited)
                            prompt: Optional custom selection prompt
                        """
                        from toprompt import to_prompt
                
                        from llmling_agent.delegation.base_team import BaseTeam
                        from llmling_agent.delegation.pool import AgentPool
                
                        match selections:
                            case Mapping():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.agents)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        if max_picks is not None and max_picks < min_picks:
                            msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                            raise ValueError(msg)
                
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        picks_info = (
                            f"Select between {min_picks} and {max_picks}"
                            if max_picks is not None
                            else f"Select at least {min_picks}"
                        )
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                {picks_info} options by their exact labels.
                List your selections, one per line, followed by your reasoning."""
                
                        result = await self.agent.to_structured(LLMMultiPick).run(
                            prompt or default_prompt
                        )
                
                        # Validate selections
                        invalid = [s for s in result.content.selections if s not in label_map]
                        if invalid:
                            msg = f"Invalid selections: {', '.join(invalid)}"
                            raise ValueError(msg)
                        num_picks = len(result.content.selections)
                        if num_picks < min_picks:
                            msg = f"Too few selections: got {num_picks}, need {min_picks}"
                            raise ValueError(msg)
                
                        if max_picks and num_picks > max_picks:
                            msg = f"Too many selections: got {num_picks}, max {max_picks}"
                            raise ValueError(msg)
                
                        selected = [cast(T, label_map[label]) for label in result.content.selections]
                        return MultiPick(selections=selected, reason=result.content.reason)
                
                    async def extract[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> T:
                        """Extract single instance of type from text.
                
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            mode: Extraction approach:
                                - "structured": Use Pydantic models (more robust)
                                - "tool_calls": Use tool calls (more flexible)
                            prompt: Optional custom prompt
                            include_tools: Whether to include other tools (tool_calls mode only)
                        """
                        from py2openai import create_constructor_schema
                
                        # Create model for single instance
                        item_model = get_ctor_basemodel(as_type)
                
                        # Create extraction prompt
                        final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                        schema_obj = create_constructor_schema(as_type)
                        schema = schema_obj.model_dump_openai()["function"]
                
                        if mode == "structured":
                
                            class Extraction(BaseModel):
                                instance: item_model  # type: ignore
                                # explanation: str | None = None
                
                            result = await self.agent.to_structured(Extraction).run(final_prompt)
                
                            # Convert model instance to actual type
                            return as_type(**result.content.instance.model_dump())  # type: ignore
                
                        # Legacy tool-calls approach
                
                        async def construct(**kwargs: Any) -> T:
                            """Construct instance from extracted data."""
                            return as_type(**kwargs)
                
                        structured = self.agent.to_structured(item_model)
                        tool = ToolInfo.from_callable(
                            construct,
                            name_override=schema["name"],
                            description_override=schema["description"],
                            # schema_override=schema,
                        )
                        with structured.tools.temporary_tools(tool, exclusive=not include_tools):
                            result = await structured.run(final_prompt)  # type: ignore
                        return result.content  # type: ignore
                
                    async def extract_multiple[T](
                        self,
                        text: str,
                        as_type: type[T],
                        *,
                        mode: ExtractionMode = "structured",
                        min_items: int = 1,
                        max_items: int | None = None,
                        prompt: AnyPromptType | None = None,
                        include_tools: bool = False,
                    ) -> list[T]:
                        """Extract multiple instances of type from text.
                
                        Args:
                            text: Text to extract from
                            as_type: Type to extract
                            mode: Extraction approach:
                                - "structured": Use Pydantic models (more robust)
                                - "tool_calls": Use tool calls (more flexible)
                            min_items: Minimum number of instances to extract
                            max_items: Maximum number of instances (None=unlimited)
                            prompt: Optional custom prompt
                            include_tools: Whether to include other tools (tool_calls mode only)
                        """
                        from py2openai import create_constructor_schema
                
                        item_model = get_ctor_basemodel(as_type)
                
                        instances: list[T] = []
                        schema_obj = create_constructor_schema(as_type)
                        final_prompt = prompt or "\n".join([
                            f"Extract {as_type.__name__} instances from text.",
                            # "Requirements:",
                            # f"- Extract at least {min_items} instances",
                            # f"- Extract at most {max_items} instances" if max_items else "",
                            "\nText to analyze:",
                            text,
                        ])
                        if mode == "structured":
                            # Create model for individual instance
                
                            class Extraction(BaseModel):
                                instances: list[item_model]  # type: ignore
                                # explanation: str | None = None
                
                            result = await self.agent.to_structured(Extraction).run(final_prompt)
                
                            # Validate counts
                            num_instances = len(result.content.instances)
                            if len(result.content.instances) < min_items:
                                msg = f"Found only {num_instances} instances, need {min_items}"
                                raise ValueError(msg)
                
                            if max_items and num_instances > max_items:
                                msg = f"Found {num_instances} instances, max is {max_items}"
                                raise ValueError(msg)
                
                            # Convert model instances to actual type
                            return [
                                as_type(
                                    **instance.data  # type: ignore
                                    if hasattr(instance, "data")
                                    else instance.model_dump()  # type: ignore
                                )
                                for instance in result.content.instances
                            ]
                
                        # Legacy tool-calls approach
                
                        async def add_instance(**kwargs: Any) -> str:
                            """Add an extracted instance."""
                            if max_items and len(instances) >= max_items:
                                msg = f"Maximum number of items ({max_items}) reached"
                                raise ValueError(msg)
                            instance = as_type(**kwargs)
                            instances.append(instance)
                            return f"Added {instance}"
                
                        add_instance.__annotations__ = schema_obj.get_annotations()
                        add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                        structured = self.agent.to_structured(item_model)
                        with structured.tools.temporary_tools(add_instance, exclusive=not include_tools):
                            # Create extraction prompt
                            await structured.run(final_prompt)
                
                        if len(instances) < min_items:
                            msg = f"Found only {len(instances)} instances, need at least {min_items}"
                            raise ValueError(msg)
                
                        return instances
                

                conversation async

                conversation(
                    other: MessageNode[Any, Any],
                    initial_message: AnyPromptType,
                    *,
                    max_rounds: int | None = None,
                    end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                    | None = None,
                    store_history: bool = True,
                ) -> AsyncIterator[ChatMessage[Any]]
                

                Maintain conversation between two agents.

                Parameters:

                other (MessageNode[Any, Any], required):
                    Agent to converse with
                initial_message (AnyPromptType, required):
                    Message to start conversation with
                max_rounds (int | None, default None):
                    Optional maximum number of exchanges
                end_condition (Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool] | None, default None):
                    Optional predicate to check for conversation end
                store_history (bool, default True):
                    Whether to store in conversation history

                Yields:

                AsyncIterator[ChatMessage[Any]]:
                    Messages from both agents in conversation order
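
                Example (hedged sketch: `alice` and `bob` are assumed to be two
                pre-built agents; their construction is not shown, and the "AGREED"
                stop marker is an arbitrary convention for this illustration):

                    from llmling_agent.agent.interactions import Interactions

                    async def debate(alice, bob) -> None:
                        interactions = Interactions(alice)

                        def done(history, last) -> bool:
                            # Stop once either side signals agreement
                            return "AGREED" in str(last.content)

                        async for message in interactions.conversation(
                            bob,
                            "Propose a release date and defend it.",
                            max_rounds=6,
                            end_condition=done,
                        ):
                            print(message.content)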

                Source code in src/llmling_agent/agent/interactions.py
                async def conversation(
                    self,
                    other: MessageNode[Any, Any],
                    initial_message: AnyPromptType,
                    *,
                    max_rounds: int | None = None,
                    end_condition: Callable[[list[ChatMessage[Any]], ChatMessage[Any]], bool]
                    | None = None,
                    store_history: bool = True,
                ) -> AsyncIterator[ChatMessage[Any]]:
                    """Maintain conversation between two agents.
                
                    Args:
                        other: Agent to converse with
                        initial_message: Message to start conversation with
                        max_rounds: Optional maximum number of exchanges
                        end_condition: Optional predicate to check for conversation end
                        store_history: Whether to store in conversation history
                
                    Yields:
                        Messages from both agents in conversation order
                    """
                    rounds = 0
                    messages: list[ChatMessage[Any]] = []
                    current_message = initial_message
                    current_node: MessageNode[Any, Any] = self.agent
                
                    while True:
                        if max_rounds and rounds >= max_rounds:
                            logger.debug("Conversation ended: max rounds (%d) reached", max_rounds)
                            return
                
                        response = await current_node.run(
                            current_message, store_history=store_history
                        )
                        messages.append(response)
                        yield response
                
                        if end_condition and end_condition(messages, response):
                            logger.debug("Conversation ended: end condition met")
                            return
                
                        # Switch agents for next round
                        current_node = other if current_node == self.agent else self.agent
                        current_message = response.content
                        rounds += 1
                

                extract async

                extract(
                    text: str,
                    as_type: type[T],
                    *,
                    mode: ExtractionMode = "structured",
                    prompt: AnyPromptType | None = None,
                    include_tools: bool = False,
                ) -> T
                

                Extract single instance of type from text.

                Parameters:

                text (str, required):
                    Text to extract from
                as_type (type[T], required):
                    Type to extract
                mode (ExtractionMode, default "structured"):
                    Extraction approach: "structured" uses Pydantic models (more robust),
                    "tool_calls" uses tool calls (more flexible)
                prompt (AnyPromptType | None, default None):
                    Optional custom prompt
                include_tools (bool, default False):
                    Whether to include other tools (tool_calls mode only)
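
                Example (a minimal sketch assuming a plain dataclass as the target type;
                `Person` is a hypothetical illustration, not part of the library):

                    from dataclasses import dataclass

                    from llmling_agent.agent.interactions import Interactions

                    @dataclass
                    class Person:
                        name: str
                        age: int

                    async def parse_person(agent, text: str) -> Person:
                        interactions = Interactions(agent)
                        # Builds a constructor-based model for Person and fills it from the text
                        return await interactions.extract(text, Person, mode="structured")
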
                Source code in src/llmling_agent/agent/interactions.py
                async def extract[T](
                    self,
                    text: str,
                    as_type: type[T],
                    *,
                    mode: ExtractionMode = "structured",
                    prompt: AnyPromptType | None = None,
                    include_tools: bool = False,
                ) -> T:
                    """Extract single instance of type from text.
                
                    Args:
                        text: Text to extract from
                        as_type: Type to extract
                        mode: Extraction approach:
                            - "structured": Use Pydantic models (more robust)
                            - "tool_calls": Use tool calls (more flexible)
                        prompt: Optional custom prompt
                        include_tools: Whether to include other tools (tool_calls mode only)
                    """
                    from py2openai import create_constructor_schema
                
                    # Create model for single instance
                    item_model = get_ctor_basemodel(as_type)
                
                    # Create extraction prompt
                    final_prompt = prompt or f"Extract {as_type.__name__} from: {text}"
                    schema_obj = create_constructor_schema(as_type)
                    schema = schema_obj.model_dump_openai()["function"]
                
                    if mode == "structured":
                
                        class Extraction(BaseModel):
                            instance: item_model  # type: ignore
                            # explanation: str | None = None
                
                        result = await self.agent.to_structured(Extraction).run(final_prompt)
                
                        # Convert model instance to actual type
                        return as_type(**result.content.instance.model_dump())  # type: ignore
                
                    # Legacy tool-calls approach
                
                    async def construct(**kwargs: Any) -> T:
                        """Construct instance from extracted data."""
                        return as_type(**kwargs)
                
                    structured = self.agent.to_structured(item_model)
                    tool = ToolInfo.from_callable(
                        construct,
                        name_override=schema["name"],
                        description_override=schema["description"],
                        # schema_override=schema,
                    )
                    with structured.tools.temporary_tools(tool, exclusive=not include_tools):
                        result = await structured.run(final_prompt)  # type: ignore
                    return result.content  # type: ignore
                

                extract_multiple async

                extract_multiple(
                    text: str,
                    as_type: type[T],
                    *,
                    mode: ExtractionMode = "structured",
                    min_items: int = 1,
                    max_items: int | None = None,
                    prompt: AnyPromptType | None = None,
                    include_tools: bool = False,
                ) -> list[T]
                

                Extract multiple instances of type from text.

                Parameters:

                text (str, required):
                    Text to extract from
                as_type (type[T], required):
                    Type to extract
                mode (ExtractionMode, default "structured"):
                    Extraction approach: "structured" uses Pydantic models (more robust),
                    "tool_calls" uses tool calls (more flexible)
                min_items (int, default 1):
                    Minimum number of instances to extract
                max_items (int | None, default None):
                    Maximum number of instances (None = unlimited)
                prompt (AnyPromptType | None, default None):
                    Optional custom prompt
                include_tools (bool, default False):
                    Whether to include other tools (tool_calls mode only)
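
                Example (sketch reusing the hypothetical `Person` dataclass from the
                extract example above; a count outside min_items/max_items raises ValueError):

                    from llmling_agent.agent.interactions import Interactions

                    async def parse_people(agent, text: str) -> list[Person]:
                        interactions = Interactions(agent)
                        # Extract between one and five Person instances from the text
                        return await interactions.extract_multiple(
                            text,
                            Person,
                            min_items=1,
                            max_items=5,
                        )
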
                Source code in src/llmling_agent/agent/interactions.py
                async def extract_multiple[T](
                    self,
                    text: str,
                    as_type: type[T],
                    *,
                    mode: ExtractionMode = "structured",
                    min_items: int = 1,
                    max_items: int | None = None,
                    prompt: AnyPromptType | None = None,
                    include_tools: bool = False,
                ) -> list[T]:
                    """Extract multiple instances of type from text.
                
                    Args:
                        text: Text to extract from
                        as_type: Type to extract
                        mode: Extraction approach:
                            - "structured": Use Pydantic models (more robust)
                            - "tool_calls": Use tool calls (more flexible)
                        min_items: Minimum number of instances to extract
                        max_items: Maximum number of instances (None=unlimited)
                        prompt: Optional custom prompt
                        include_tools: Whether to include other tools (tool_calls mode only)
                    """
                    from py2openai import create_constructor_schema
                
                    item_model = get_ctor_basemodel(as_type)
                
                    instances: list[T] = []
                    schema_obj = create_constructor_schema(as_type)
                    final_prompt = prompt or "\n".join([
                        f"Extract {as_type.__name__} instances from text.",
                        # "Requirements:",
                        # f"- Extract at least {min_items} instances",
                        # f"- Extract at most {max_items} instances" if max_items else "",
                        "\nText to analyze:",
                        text,
                    ])
                    if mode == "structured":
                        # Create model for individual instance
                
                        class Extraction(BaseModel):
                            instances: list[item_model]  # type: ignore
                            # explanation: str | None = None
                
                        result = await self.agent.to_structured(Extraction).run(final_prompt)
                
                        # Validate counts
                        num_instances = len(result.content.instances)
                        if len(result.content.instances) < min_items:
                            msg = f"Found only {num_instances} instances, need {min_items}"
                            raise ValueError(msg)
                
                        if max_items and num_instances > max_items:
                            msg = f"Found {num_instances} instances, max is {max_items}"
                            raise ValueError(msg)
                
                        # Convert model instances to actual type
                        return [
                            as_type(
                                **instance.data  # type: ignore
                                if hasattr(instance, "data")
                                else instance.model_dump()  # type: ignore
                            )
                            for instance in result.content.instances
                        ]
                
                    # Legacy tool-calls approach
                
                    async def add_instance(**kwargs: Any) -> str:
                        """Add an extracted instance."""
                        if max_items and len(instances) >= max_items:
                            msg = f"Maximum number of items ({max_items}) reached"
                            raise ValueError(msg)
                        instance = as_type(**kwargs)
                        instances.append(instance)
                        return f"Added {instance}"
                
                    add_instance.__annotations__ = schema_obj.get_annotations()
                    add_instance.__signature__ = schema_obj.to_python_signature()  # type: ignore
                    structured = self.agent.to_structured(item_model)
                    with structured.tools.temporary_tools(add_instance, exclusive=not include_tools):
                        # Create extraction prompt
                        await structured.run(final_prompt)
                
                    if len(instances) < min_items:
                        msg = f"Found only {len(instances)} instances, need at least {min_items}"
                        raise ValueError(msg)
                
                    return instances
                

                pick async

                pick(
                    selections: Sequence[T], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[T]
                
                pick(
                    selections: Sequence[T], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[T]
                
                pick(
                    selections: Mapping[str, T], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[T]
                
                pick(
                    selections: AgentPool, task: str, prompt: AnyPromptType | None = None
                ) -> Pick[AnyAgent[Any, Any]]
                
                pick(
                    selections: BaseTeam[TDeps, Any], task: str, prompt: AnyPromptType | None = None
                ) -> Pick[MessageNode[TDeps, Any]]
                
                pick(
                    selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                    task: str,
                    prompt: AnyPromptType | None = None,
                ) -> Pick[T]
                

                Pick from available options with reasoning.

                Parameters:

                selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any], required):
                    What to pick from: a sequence of items (auto-labeled), a dict mapping
                    labels to items, an AgentPool, or a Team
                task (str, required):
                    Task/decision description
                prompt (AnyPromptType | None, default None):
                    Optional custom selection prompt

                Returns:

                Pick[T]:
                    Decision with selected item and reasoning

                Raises:

                ValueError:
                    If no choices available or invalid selection
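
                Example (minimal sketch using a label-to-item mapping; the option
                strings are illustrative placeholders):

                    from llmling_agent.agent.interactions import Interactions

                    strategies = {
                        "fast": "Ship a hotfix straight to production",
                        "safe": "Run the full regression suite first",
                    }

                    async def decide(agent) -> None:
                        interactions = Interactions(agent)
                        pick = await interactions.pick(strategies, task="How should we release?")
                        print(pick.selection)  # the original value, not just its label
                        print(pick.reason)     # the model's stated reasoning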

                Source code in src/llmling_agent/agent/interactions.py
                    async def pick[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        prompt: AnyPromptType | None = None,
                    ) -> Pick[T]:
                        """Pick from available options with reasoning.
                
                        Args:
                            selections: What to pick from:
                                - Sequence of items (auto-labeled)
                                - Dict mapping labels to items
                                - AgentPool
                                - Team
                            task: Task/decision description
                            prompt: Optional custom selection prompt
                
                        Returns:
                            Decision with selected item and reasoning
                
                        Raises:
                            ValueError: If no choices available or invalid selection
                        """
                        # Get items and create label mapping
                        from toprompt import to_prompt
                
                        from llmling_agent.delegation.base_team import BaseTeam
                        from llmling_agent.delegation.pool import AgentPool
                
                        match selections:
                            case dict():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.agents)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        # Get descriptions for all items
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                Select ONE option by its exact label."""
                
                        # Get LLM's string-based decision
                        result = await self.agent.to_structured(LLMPick).run(prompt or default_prompt)
                
                        # Convert to type-safe decision
                        if result.content.selection not in label_map:
                            msg = f"Invalid selection: {result.content.selection}"
                            raise ValueError(msg)
                
                        selected = cast(T, label_map[result.content.selection])
                        return Pick(selection=selected, reason=result.content.reason)
                
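For orientation, a minimal usage sketch of pick follows. The option labels, the task text, and the way the Interactions instance (called interactions here) is obtained are illustrative assumptions, not part of the documented API surface:

# Hypothetical sketch: `interactions` is an already-constructed Interactions
# instance bound to an agent; the labels and task are made up for illustration.
async def choose_backend(interactions) -> None:
    options = {
        "sqlite": "Lightweight embedded database",
        "postgres": "Full-featured client/server database",
    }
    decision = await interactions.pick(
        options, task="Pick a storage backend for a small CLI tool"
    )
    # decision is a Pick[str]: the original object plus the model's reasoning
    print(decision.selection, "-", decision.reason)
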

                pick_multiple async

                pick_multiple(
                    selections: Sequence[T],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> MultiPick[T]
                
                pick_multiple(
                    selections: Mapping[str, T],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> MultiPick[T]
                
                pick_multiple(
                    selections: BaseTeam[TDeps, Any],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> MultiPick[MessageNode[TDeps, Any]]
                
                pick_multiple(
                    selections: AgentPool,
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> MultiPick[AnyAgent[Any, Any]]
                
                pick_multiple(
                    selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                    task: str,
                    *,
                    min_picks: int = 1,
                    max_picks: int | None = None,
                    prompt: AnyPromptType | None = None,
                ) -> MultiPick[T]
                

                Pick multiple options from available choices.

Parameters:

selections (Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any], required): What to pick from
task (str, required): Task/decision description
min_picks (int, default 1): Minimum number of selections required
max_picks (int | None, default None): Maximum number of selections (None for unlimited)
prompt (AnyPromptType | None, default None): Optional custom selection prompt
                Source code in src/llmling_agent/agent/interactions.py
                    async def pick_multiple[T](
                        self,
                        selections: Sequence[T] | Mapping[str, T] | AgentPool | BaseTeam[TDeps, Any],
                        task: str,
                        *,
                        min_picks: int = 1,
                        max_picks: int | None = None,
                        prompt: AnyPromptType | None = None,
                    ) -> MultiPick[T]:
                        """Pick multiple options from available choices.
                
                        Args:
                            selections: What to pick from
                            task: Task/decision description
                            min_picks: Minimum number of selections required
                            max_picks: Maximum number of selections (None for unlimited)
                            prompt: Optional custom selection prompt
                        """
                        from toprompt import to_prompt
                
                        from llmling_agent.delegation.base_team import BaseTeam
                        from llmling_agent.delegation.pool import AgentPool
                
                        match selections:
                            case Mapping():
                                label_map = selections
                                items: list[Any] = list(selections.values())
                            case BaseTeam():
                                items = list(selections.agents)
                                label_map = {get_label(item): item for item in items}
                            case AgentPool():
                                items = list(selections.agents.values())
                                label_map = {get_label(item): item for item in items}
                            case _:
                                items = list(selections)
                                label_map = {get_label(item): item for item in items}
                
                        if not items:
                            msg = "No choices available"
                            raise ValueError(msg)
                
                        if max_picks is not None and max_picks < min_picks:
                            msg = f"max_picks ({max_picks}) cannot be less than min_picks ({min_picks})"
                            raise ValueError(msg)
                
                        descriptions = []
                        for label, item in label_map.items():
                            item_desc = await to_prompt(item)
                            descriptions.append(f"{label}:\n{item_desc}")
                
                        picks_info = (
                            f"Select between {min_picks} and {max_picks}"
                            if max_picks is not None
                            else f"Select at least {min_picks}"
                        )
                
                        default_prompt = f"""Task/Decision: {task}
                
                Available options:
                {"-" * 40}
                {"\n\n".join(descriptions)}
                {"-" * 40}
                
                {picks_info} options by their exact labels.
                List your selections, one per line, followed by your reasoning."""
                
                        result = await self.agent.to_structured(LLMMultiPick).run(
                            prompt or default_prompt
                        )
                
                        # Validate selections
                        invalid = [s for s in result.content.selections if s not in label_map]
                        if invalid:
                            msg = f"Invalid selections: {', '.join(invalid)}"
                            raise ValueError(msg)
                        num_picks = len(result.content.selections)
                        if num_picks < min_picks:
                            msg = f"Too few selections: got {num_picks}, need {min_picks}"
                            raise ValueError(msg)
                
                        if max_picks and num_picks > max_picks:
                            msg = f"Too many selections: got {num_picks}, max {max_picks}"
                            raise ValueError(msg)
                
                        selected = [cast(T, label_map[label]) for label in result.content.selections]
                        return MultiPick(selections=selected, reason=result.content.reason)
                
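The same pattern applies to bounded multi-selection. A hedged sketch, again with made-up labels and an assumed Interactions instance:

# Hypothetical sketch: select one or two reviewers from a label -> description map.
async def choose_reviewers(interactions) -> None:
    candidates = {
        "alice": "Backend specialist",
        "bob": "Frontend specialist",
        "carol": "Security reviewer",
    }
    result = await interactions.pick_multiple(
        candidates,
        task="Pick reviewers for a change touching the auth backend",
        min_picks=1,
        max_picks=2,
    )
    # result is a MultiPick[str]: the original objects plus one shared reason
    for picked in result.selections:
        print(picked)
    print("Reason:", result.reason)
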

                LLMMultiPick

                Bases: BaseModel

                Multiple selection format for LLM response.

                Source code in src/llmling_agent/agent/interactions.py
                class LLMMultiPick(BaseModel):
                    """Multiple selection format for LLM response."""
                
                    selections: list[str]  # Labels of selected options
                    reason: str
                

                LLMPick

                Bases: BaseModel

                Decision format for LLM response.

                Source code in src/llmling_agent/agent/interactions.py
                class LLMPick(BaseModel):
                    """Decision format for LLM response."""
                
                    selection: str  # The label/name of the selected option
                    reason: str
                

                MultiPick

                Bases: BaseModel

                Type-safe multiple selection with original objects.

                Source code in src/llmling_agent/agent/interactions.py
                class MultiPick[T](BaseModel):
                    """Type-safe multiple selection with original objects."""
                
                    selections: list[T]
                    reason: str
                

                Pick

                Bases: BaseModel

                Type-safe decision with original object.

                Source code in src/llmling_agent/agent/interactions.py
                class Pick[T](BaseModel):
                    """Type-safe decision with original object."""
                
                    selection: T
                    reason: str
                

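The LLM-facing models (LLMPick, LLMMultiPick) carry plain string labels, while Pick and MultiPick carry the resolved objects. A small illustrative sketch of that conversion, using a made-up label map:

from llmling_agent.agent.interactions import LLMPick, Pick

# Illustrative only: the string label chosen by the model is mapped back to the
# original object before being returned as a type-safe Pick.
label_map = {"sqlite": "SQLite backend", "postgres": "PostgreSQL backend"}
llm_answer = LLMPick(selection="sqlite", reason="Smallest operational footprint")
typed = Pick[str](selection=label_map[llm_answer.selection], reason=llm_answer.reason)
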
                get_label

                get_label(item: Any) -> str
                

                Get label for an item to use in selection.

Parameters:

item (Any, required): Item to get label for

Returns:

str: Label to use for selection

Strategy:
• strings stay as-is
• types use __name__
• others use __repr__ for a unique, identifiable string
                Source code in src/llmling_agent/agent/interactions.py
                def get_label(item: Any) -> str:
                    """Get label for an item to use in selection.
                
                    Args:
                        item: Item to get label for
                
                    Returns:
                        Label to use for selection
                
                    Strategy:
                        - strings stay as-is
                        - types use __name__
                        - others use __repr__ for unique identifiable string
                    """
                    from llmling_agent.messaging.messagenode import MessageNode
                
                    match item:
                        case str():
                            return item
                        case type():
                            return item.__name__
                        case MessageNode():
                            return item.name or "unnamed_agent"
                        case _:
                            return repr(item)
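
A quick illustration of the labeling strategy; the expected values follow directly from the source above rather than from separate documentation:

from llmling_agent.agent.interactions import get_label

# strings stay as-is
assert get_label("review") == "review"
# types use their __name__
assert get_label(dict) == "dict"
# anything else falls back to repr()
assert get_label(42) == "42"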