
Model

The Model class configures which LLM provider and model to use. It wraps LangChain's init_chat_model, creating a fresh chat model instance on each call to get_chat_model().

Quick reference

from smelt import Model

# Minimal
model = Model(provider="openai", name="gpt-4.1-mini")

# With API key
model = Model(provider="openai", name="gpt-4.1-mini", api_key="sk-...")

# With parameters
model = Model(
    provider="openai",
    name="gpt-4.1-mini",
    params={"temperature": 0, "max_tokens": 4096},
)

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| provider | str | required | LangChain provider identifier (e.g. "openai", "anthropic", "google_genai") |
| name | str | required | Model name (e.g. "gpt-4.1-mini", "claude-sonnet-4-6") |
| api_key | str \| None | None | API key. Falls back to the provider's environment variable if not set |
| params | dict[str, Any] | {} | Additional kwargs forwarded to the LangChain model constructor |

Examples

OpenAI

model = Model(provider="openai", name="gpt-4.1-mini")
model = Model(provider="openai", name="gpt-5.2", params={"temperature": 0})

Anthropic

model = Model(provider="anthropic", name="claude-sonnet-4-6")
model = Model(provider="anthropic", name="claude-opus-4-6", params={"max_tokens": 4096})

Google Gemini

model = Model(provider="google_genai", name="gemini-3-flash-preview")

Azure OpenAI

model = Model(
    provider="azure_openai",
    name="my-deployment",
    params={
        "azure_endpoint": "https://my-resource.openai.azure.com/",
        "api_version": "2024-02-15-preview",
    },
)

Methods

get_chat_model()

Returns a LangChain BaseChatModel. A fresh instance is created on each call; nothing is cached, which keeps HTTP client and event-loop state from going stale when the same Model is reused across multiple asyncio.run() calls.

model = Model(provider="openai", name="gpt-4.1-mini")
chat_model = model.get_chat_model()  # Creates a new instance
chat_model = model.get_chat_model()  # Creates another new instance

Raises: SmeltConfigError if the provider or model cannot be initialized (e.g. missing package, invalid credentials).

Source

Model(provider, name, api_key=None, params=dict()) dataclass

Configuration for an LLM provider used by smelt.

Creates a fresh LangChain chat model on each call to get_chat_model(). This avoids stale event-loop state when multiple asyncio.run() calls reuse the same Model instance (e.g. job.test() followed by job.run()).
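Why fresh-per-call matters can be shown with a small self-contained sketch (FakeChatClient is a hypothetical stand-in, not smelt or LangChain code): each asyncio.run() call creates and then closes its own event loop, so a client object built inside one run must not be reused in the next. A factory that returns a new object per call sidesteps this entirely.

```python
import asyncio


class FakeChatClient:
    """Stand-in for a chat model that binds to the running event loop."""

    def __init__(self) -> None:
        self.loop = None  # bound lazily on first use

    async def invoke(self, prompt: str) -> str:
        # A real client would create HTTP transports tied to this loop.
        self.loop = asyncio.get_running_loop()
        return f"echo: {prompt}"


def get_chat_model() -> FakeChatClient:
    # Fresh instance per call, mirroring Model.get_chat_model():
    # nothing from a previous (now-closed) event loop is carried over.
    return FakeChatClient()


# Two separate asyncio.run() calls, each with its own loop and client
first = asyncio.run(get_chat_model().invoke("test"))
second = asyncio.run(get_chat_model().invoke("run"))
print(first, second)  # echo: test echo: run
```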

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| provider | str | The LangChain model provider identifier (e.g. "openai", "anthropic"). |
| name | str | The model name (e.g. "gpt-4o", "claude-sonnet-4-20250514"). |
| api_key | str \| None | Optional API key. If not provided, the provider's default environment variable will be used. |
| params | dict[str, Any] | Additional keyword arguments forwarded to the chat model constructor (e.g. {"temperature": 0}). |

Examples:

>>> model = Model(provider="openai", name="gpt-4o", params={"temperature": 0})
>>> chat_model = model.get_chat_model()

get_chat_model()

Return a fresh LangChain chat model instance.

A new instance is created on each call to avoid stale HTTP client state across event loops (e.g. when calling job.test() then job.run() in the same script).

Returns:

| Type | Description |
| --- | --- |
| BaseChatModel | The initialized BaseChatModel instance. |

Raises:

| Type | Description |
| --- | --- |
| SmeltConfigError | If the model provider cannot be initialized (e.g. missing provider package, invalid credentials). |

Source code in src/smelt/model.py
def get_chat_model(self) -> BaseChatModel:
    """Return a fresh LangChain chat model instance.

    A new instance is created on each call to avoid stale HTTP client
    state across event loops (e.g. when calling ``job.test()`` then
    ``job.run()`` in the same script).

    Returns:
        The initialized ``BaseChatModel`` instance.

    Raises:
        SmeltConfigError: If the model provider cannot be initialized
            (e.g. missing provider package, invalid credentials).
    """
    try:
        kwargs: dict[str, Any] = {**self.params}
        if self.api_key is not None:
            kwargs["api_key"] = self.api_key

        return init_chat_model(
            model=self.name,
            model_provider=self.provider,
            **kwargs,
        )
    except Exception as exc:
        raise SmeltConfigError(
            f"Failed to initialize model '{self.name}' with provider '{self.provider}': {exc}"
        ) from exc
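The except clause above uses exception chaining (raise ... from exc), so the original provider failure stays attached as __cause__ while callers see one actionable SmeltConfigError. A self-contained sketch of the same pattern, with SimulatedConfigError standing in for SmeltConfigError and a simulated import failure standing in for a real provider error:

```python
class SimulatedConfigError(Exception):
    """Stand-in for SmeltConfigError in this sketch."""


def init_model(name: str, provider: str) -> None:
    try:
        # Simulate init_chat_model failing because the provider
        # integration package is not installed.
        raise ImportError(f"No module named 'langchain_{provider}'")
    except Exception as exc:
        # Wrap with a config-level message; chaining keeps the root
        # cause reachable via __cause__ for debugging.
        raise SimulatedConfigError(
            f"Failed to initialize model '{name}' with provider '{provider}': {exc}"
        ) from exc


try:
    init_model("gpt-4.1-mini", "openai")
except SimulatedConfigError as err:
    print(err)                  # high-level, actionable message
    print(type(err.__cause__))  # original ImportError, preserved by chaining
```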