providers

Class info

Classes

Name (module): Description

• BaseModelConfig (llmling_agent_models.base): Base for model configurations.
  Children: AISuiteModelConfig, AugmentedModelConfig, CostOptimizedModelConfig,
  DelegationModelConfig, FallbackModelConfig, ImportModelConfig, InputModelConfig,
  LLMAdapterConfig, RemoteInputConfig, RemoteProxyConfig, TokenOptimizedModelConfig, ...
• BaseProviderConfig (llmling_agent.models.providers): Base configuration for agent providers.
• CallbackProviderConfig (llmling_agent.models.providers): Configuration for callback-based provider.
• HumanProviderConfig (llmling_agent.models.providers): Configuration for human-in-the-loop provider.
• LiteLLMProviderConfig (llmling_agent.models.providers): Configuration for LiteLLM-based provider.
• ModelProtocol (llmling_agent.common_types): Protocol for model objects.
• ModelSettings (llmling_agent.models.providers): Settings to configure an LLM.
• PydanticAIProviderConfig (llmling_agent.models.providers): Configuration for PydanticAI-based provider.

              🛈 DocStrings

              Provider configuration models.

              BaseProviderConfig

              Bases: BaseModel

              Base configuration for agent providers.

              Common settings that apply to all provider types, regardless of their specific implementation. Provides basic identification and configuration options that every provider should have.

              Source code in src/llmling_agent/models/providers.py
              class BaseProviderConfig(BaseModel):
                  """Base configuration for agent providers.
              
                  Common settings that apply to all provider types, regardless of their
                  specific implementation. Provides basic identification and configuration
                  options that every provider should have.
                  """
              
                  type: str = Field(init=False)
                  """Type discriminator for provider configs."""
              
                  name: str | None = None
                  """Optional name for the provider instance."""
              
                  model_settings: ModelSettings | None = None
                  """Optional settings to configure the LLM behavior."""
              
                  model_config = ConfigDict(frozen=True, use_attribute_docstrings=True, extra="forbid")
              

              model_settings class-attribute instance-attribute

              model_settings: ModelSettings | None = None
              

              Optional settings to configure the LLM behavior.

              name class-attribute instance-attribute

              name: str | None = None
              

              Optional name for the provider instance.

              type class-attribute instance-attribute

              type: str = Field(init=False)
              

              Type discriminator for provider configs.

              CallbackProviderConfig

              Bases: BaseProviderConfig

              Configuration for callback-based provider.

              Allows defining processor functions through:
              • Import path to callback function
              • Generic type for result validation

              Source code in src/llmling_agent/models/providers.py
              class CallbackProviderConfig[TResult](BaseProviderConfig):
                  """Configuration for callback-based provider.
              
                  Allows defining processor functions through:
                  - Import path to callback function
                  - Generic type for result validation
                  """
              
                  type: Literal["callback"] = Field("callback", init=False)
                  """Import-path based Callback provider."""
              
                  callback: ImportString[ProcessorCallback[TResult]]
                  """Import path to processor callback."""
              
                  def get_provider(self) -> CallbackProvider:
                      """Create callback provider instance."""
                      from llmling_agent_providers.callback import CallbackProvider
              
                      name = self.name or self.callback.__name__
                      return CallbackProvider(self.callback, name=name)
              

              callback instance-attribute

              callback: ImportString[ProcessorCallback[TResult]]
              

              Import path to processor callback.

              type class-attribute instance-attribute

              type: Literal['callback'] = Field('callback', init=False)
              

              Import-path based Callback provider.

              get_provider

              get_provider() -> CallbackProvider
              

              Create callback provider instance.

              Source code in src/llmling_agent/models/providers.py
              def get_provider(self) -> CallbackProvider:
                  """Create callback provider instance."""
                  from llmling_agent_providers.callback import CallbackProvider
              
                  name = self.name or self.callback.__name__
                  return CallbackProvider(self.callback, name=name)
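
The `callback` field relies on pydantic's `ImportString` type to turn a dotted path into the object it names at validation time. A minimal sketch, using `math.sqrt` as a stand-in for a real processor callback:

```python
from pydantic import BaseModel, ImportString

class Demo(BaseModel):
    """Hypothetical model with an ImportString field, like CallbackProviderConfig."""

    callback: ImportString

# The string is validated and resolved to the actual callable at model creation.
d = Demo(callback="math.sqrt")
print(d.callback(9))        # 3.0
print(d.callback.__name__)  # sqrt -- the name get_provider falls back to
```

This is also why the `name` fallback in `get_provider` works: by the time the config is built, `self.callback` is the resolved function, so `__name__` is available.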
              

              HumanProviderConfig

              Bases: BaseProviderConfig

              Configuration for human-in-the-loop provider.

              This provider enables direct human interaction for responses and decisions. Useful for testing, training, and oversight of agent operations.

              Source code in src/llmling_agent/models/providers.py
              class HumanProviderConfig(BaseProviderConfig):
                  """Configuration for human-in-the-loop provider.
              
                  This provider enables direct human interaction for responses and decisions.
                  Useful for testing, training, and oversight of agent operations.
                  """
              
                  type: Literal["human"] = Field("human", init=False)
                  """Human-input provider."""
              
                  timeout: int | None = None
                  """Timeout in seconds for human response. None means wait indefinitely."""
              
                  show_context: bool = True
                  """Whether to show conversation context to human."""
              
                  def get_provider(self) -> AgentProvider:
                      """Create human provider instance."""
                      from llmling_agent_providers.human import HumanProvider
              
                      return HumanProvider(
                          name=self.name or "human-agent",
                          timeout=self.timeout,
                          show_context=self.show_context,
                      )
              

              show_context class-attribute instance-attribute

              show_context: bool = True
              

              Whether to show conversation context to human.

              timeout class-attribute instance-attribute

              timeout: int | None = None
              

              Timeout in seconds for human response. None means wait indefinitely.

              type class-attribute instance-attribute

              type: Literal['human'] = Field('human', init=False)
              

              Human-input provider.

              get_provider

              get_provider() -> AgentProvider
              

              Create human provider instance.

              Source code in src/llmling_agent/models/providers.py
              def get_provider(self) -> AgentProvider:
                  """Create human provider instance."""
                  from llmling_agent_providers.human import HumanProvider
              
                  return HumanProvider(
                      name=self.name or "human-agent",
                      timeout=self.timeout,
                      show_context=self.show_context,
                  )
              

              LiteLLMProviderConfig

              Bases: BaseProviderConfig

              Configuration for LiteLLM-based provider.

              Source code in src/llmling_agent/models/providers.py
              class LiteLLMProviderConfig(BaseProviderConfig):
                  """Configuration for LiteLLM-based provider."""
              
                  type: Literal["litellm"] = Field("litellm", init=False)
                  """LiteLLM provider."""
              
                  retries: int = 1
                  """Maximum retries for model calls."""
              
                  model: str | None = None
                  """Optional model name to use. If not specified, uses default model."""
              
                  def get_provider(self) -> AgentProvider:
                      """Create PydanticAI provider instance."""
                      from llmling_agent_providers.litellm_provider import LiteLLMProvider
              
                      settings = {}
                      if self.model_settings:
                          settings = {
                              "max_tokens": self.model_settings.max_tokens,
                              "temperature": self.model_settings.temperature,
                              "top_p": self.model_settings.top_p,
                              "request_timeout": self.model_settings.timeout,  # different name!
                              "presence_penalty": self.model_settings.presence_penalty,
                              "frequency_penalty": self.model_settings.frequency_penalty,
                              "seed": self.model_settings.seed,
                          }
                          # Remove None values
                          settings = {k: v for k, v in settings.items() if v is not None}
              
                      name = self.name or "ai-agent"
                      return LiteLLMProvider(
                          name=name,
                          model=self.model,
                          retries=self.retries,
                          model_settings=settings,
                      )
              

              model class-attribute instance-attribute

              model: str | None = None
              

              Optional model name to use. If not specified, uses default model.

              retries class-attribute instance-attribute

              retries: int = 1
              

              Maximum retries for model calls.

              type class-attribute instance-attribute

              type: Literal['litellm'] = Field('litellm', init=False)
              

              LiteLLM provider.

              get_provider

              get_provider() -> AgentProvider
              

              Create LiteLLM provider instance.

              Source code in src/llmling_agent/models/providers.py
              def get_provider(self) -> AgentProvider:
                  """Create PydanticAI provider instance."""
                  from llmling_agent_providers.litellm_provider import LiteLLMProvider
              
                  settings = {}
                  if self.model_settings:
                      settings = {
                          "max_tokens": self.model_settings.max_tokens,
                          "temperature": self.model_settings.temperature,
                          "top_p": self.model_settings.top_p,
                          "request_timeout": self.model_settings.timeout,  # different name!
                          "presence_penalty": self.model_settings.presence_penalty,
                          "frequency_penalty": self.model_settings.frequency_penalty,
                          "seed": self.model_settings.seed,
                      }
                      # Remove None values
                      settings = {k: v for k, v in settings.items() if v is not None}
              
                  name = self.name or "ai-agent"
                  return LiteLLMProvider(
                      name=name,
                      model=self.model,
                      retries=self.retries,
                      model_settings=settings,
                  )
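
Note the key rename flagged in the mapping above: `ModelSettings.timeout` becomes LiteLLM's `request_timeout`. The translation plus None-filtering can be sketched standalone (hypothetical helper, not part of the library):

```python
def to_litellm_settings(model_settings: dict) -> dict:
    """Sketch of the translation get_provider performs (hypothetical helper)."""
    rename = {"timeout": "request_timeout"}  # LiteLLM uses a different key name
    renamed = {rename.get(k, k): v for k, v in model_settings.items()}
    return {k: v for k, v in renamed.items() if v is not None}  # drop unset values

print(to_litellm_settings({"timeout": 30.0, "temperature": None, "top_p": 0.9}))
# {'request_timeout': 30.0, 'top_p': 0.9}
```

Dropping `None` values matters: passing `temperature=None` through would override LiteLLM's own defaults rather than leaving them in place.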
              

              ModelSettings

              Bases: BaseModel

              Settings to configure an LLM.

              Source code in src/llmling_agent/models/providers.py
              class ModelSettings(BaseModel):
                  """Settings to configure an LLM."""
              
                  max_tokens: int | None = None
                  """The maximum number of tokens to generate."""
              
                  temperature: float | None = Field(None, ge=0.0, le=2.0)
                  """Amount of randomness in the response (0.0 - 2.0)."""
              
                  top_p: float | None = Field(None, ge=0.0, le=1.0)
                  """An alternative to sampling with temperature, called nucleus sampling."""
              
                  timeout: float | None = None
                  """Override the client-level default timeout for a request, in seconds."""
              
                  parallel_tool_calls: bool | None = None
                  """Whether to allow parallel tool calls."""
              
                  seed: int | None = None
                  """The random seed to use for the model."""
              
                  presence_penalty: float | None = Field(None, ge=-2.0, le=2.0)
                  """Penalize new tokens based on whether they have appeared in the text so far."""
              
                  frequency_penalty: float | None = Field(None, ge=-2.0, le=2.0)
                  """Penalize new tokens based on their existing frequency in the text so far."""
              
                  logit_bias: dict[str, int] | None = None
                  """Modify the likelihood of specified tokens appearing in the completion."""
              
                  model_config = ConfigDict(frozen=True, extra="forbid", use_attribute_docstrings=True)
              
                  def to_dict(self) -> dict[str, Any]:
                      """Convert to TypedDict format for pydantic-ai."""
                      return {k: v for k, v in self.model_dump().items() if v is not None}
              

              frequency_penalty class-attribute instance-attribute

              frequency_penalty: float | None = Field(None, ge=-2.0, le=2.0)
              

              Penalize new tokens based on their existing frequency in the text so far.

              logit_bias class-attribute instance-attribute

              logit_bias: dict[str, int] | None = None
              

              Modify the likelihood of specified tokens appearing in the completion.

              max_tokens class-attribute instance-attribute

              max_tokens: int | None = None
              

              The maximum number of tokens to generate.

              parallel_tool_calls class-attribute instance-attribute

              parallel_tool_calls: bool | None = None
              

              Whether to allow parallel tool calls.

              presence_penalty class-attribute instance-attribute

              presence_penalty: float | None = Field(None, ge=-2.0, le=2.0)
              

              Penalize new tokens based on whether they have appeared in the text so far.

              seed class-attribute instance-attribute

              seed: int | None = None
              

              The random seed to use for the model.

              temperature class-attribute instance-attribute

              temperature: float | None = Field(None, ge=0.0, le=2.0)
              

              Amount of randomness in the response (0.0 - 2.0).

              timeout class-attribute instance-attribute

              timeout: float | None = None
              

              Override the client-level default timeout for a request, in seconds.

              top_p class-attribute instance-attribute

              top_p: float | None = Field(None, ge=0.0, le=1.0)
              

              An alternative to sampling with temperature, called nucleus sampling.

              to_dict

              to_dict() -> dict[str, Any]
              

              Convert to TypedDict format for pydantic-ai.

              Source code in src/llmling_agent/models/providers.py
              def to_dict(self) -> dict[str, Any]:
                  """Convert to TypedDict format for pydantic-ai."""
                  return {k: v for k, v in self.model_dump().items() if v is not None}
              

              PydanticAIProviderConfig

              Bases: BaseProviderConfig

              Configuration for PydanticAI-based provider.

              This provider uses PydanticAI for handling model interactions, tool calls, and structured outputs. It provides fine-grained control over model behavior and validation.

              Source code in src/llmling_agent/models/providers.py
              class PydanticAIProviderConfig(BaseProviderConfig):
                  """Configuration for PydanticAI-based provider.
              
                  This provider uses PydanticAI for handling model interactions, tool calls,
                  and structured outputs. It provides fine-grained control over model behavior
                  and validation.
                  """
              
                  type: Literal["pydantic_ai"] = Field("pydantic_ai", init=False)
                  """Pydantic-AI provider."""
              
                  end_strategy: EndStrategy = "early"
                  """How to handle tool calls when final result found:
                  - early: Stop when valid result found
                  - complete: Run all requested tools
                  - confirm: Ask user what to do
                  """
              
                  model: str | AnyModelConfig | None = None
                  """Optional model name to use. If not specified, uses default model."""
              
                  result_retries: int | None = None
                  """Maximum retries for result validation.
                  None means use the global retry setting.
                  """
              
                  defer_model_check: bool = False
                  """Whether to defer model evaluation until first run.
                  True can speed up initialization but might fail later.
                  """
              
                  validation_enabled: bool = True
                  """Whether to validate model outputs against schemas."""
              
                  allow_text_fallback: bool = True
                  """Whether to accept plain text when structured output fails."""
              
                  def get_provider(self) -> AgentProvider:
                      """Create PydanticAI provider instance."""
                      from llmling_agent_providers.pydanticai import PydanticAIProvider
              
                      settings = (
                          self.model_settings.model_dump(exclude_none=True)
                          if self.model_settings
                          else {}
                      )
                      match self.model:
                          case str():
                              model: str | ModelProtocol | None = self.model
                          case BaseModelConfig():
                              model = self.model.get_model()
                          case _:
                              model = None
                      return PydanticAIProvider(
                          model=model,
                          name=self.name or "ai-agent",
                          end_strategy=self.end_strategy,
                          result_retries=self.result_retries,
                          defer_model_check=self.defer_model_check,
                          model_settings=settings,
                      )
              

              allow_text_fallback class-attribute instance-attribute

              allow_text_fallback: bool = True
              

              Whether to accept plain text when structured output fails.

              defer_model_check class-attribute instance-attribute

              defer_model_check: bool = False
              

              Whether to defer model evaluation until first run. True can speed up initialization but might fail later.

              end_strategy class-attribute instance-attribute

              end_strategy: EndStrategy = 'early'
              

              How to handle tool calls when final result found:
              • early: Stop when valid result found
              • complete: Run all requested tools
              • confirm: Ask user what to do

              model class-attribute instance-attribute

              model: str | AnyModelConfig | None = None
              

              Optional model name to use. If not specified, uses default model.

              result_retries class-attribute instance-attribute

              result_retries: int | None = None
              

              Maximum retries for result validation. None means use the global retry setting.

              type class-attribute instance-attribute

              type: Literal['pydantic_ai'] = Field('pydantic_ai', init=False)
              

              Pydantic-AI provider.

              validation_enabled class-attribute instance-attribute

              validation_enabled: bool = True
              

              Whether to validate model outputs against schemas.

              get_provider

              get_provider() -> AgentProvider
              

              Create PydanticAI provider instance.

              Source code in src/llmling_agent/models/providers.py
              def get_provider(self) -> AgentProvider:
                  """Create PydanticAI provider instance."""
                  from llmling_agent_providers.pydanticai import PydanticAIProvider
              
                  settings = (
                      self.model_settings.model_dump(exclude_none=True)
                      if self.model_settings
                      else {}
                  )
                  match self.model:
                      case str():
                          model: str | ModelProtocol | None = self.model
                      case BaseModelConfig():
                          model = self.model.get_model()
                      case _:
                          model = None
                  return PydanticAIProvider(
                      model=model,
                      name=self.name or "ai-agent",
                      end_strategy=self.end_strategy,
                      result_retries=self.result_retries,
                      defer_model_check=self.defer_model_check,
                      model_settings=settings,
                  )