Model¶
The Model class configures which LLM provider and model to use. It wraps LangChain's init_chat_model and creates a fresh chat model on each call to get_chat_model(), which avoids stale event-loop state when the same Model instance is reused across multiple asyncio.run() calls.
Quick reference¶
from smelt import Model
# Minimal
model = Model(provider="openai", name="gpt-4.1-mini")
# With API key
model = Model(provider="openai", name="gpt-4.1-mini", api_key="sk-...")
# With parameters
model = Model(
provider="openai",
name="gpt-4.1-mini",
params={"temperature": 0, "max_tokens": 4096},
)
Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | str | required | LangChain provider identifier (e.g. "openai", "anthropic", "google_genai") |
| name | str | required | Model name (e.g. "gpt-4.1-mini", "claude-sonnet-4-6") |
| api_key | str \| None | None | API key. Falls back to the provider's environment variable if not set, as shown below |
| params | dict[str, Any] | {} | Additional kwargs forwarded to the LangChain model constructor |
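For example, when api_key is omitted, the underlying LangChain integration falls back to the provider's standard environment variable (OPENAI_API_KEY for OpenAI):
import os
from smelt import Model
# No api_key argument: the OpenAI integration reads OPENAI_API_KEY instead
os.environ["OPENAI_API_KEY"] = "sk-..."
model = Model(provider="openai", name="gpt-4.1-mini")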
Examples¶
OpenAI¶
model = Model(provider="openai", name="gpt-4.1-mini")
model = Model(provider="openai", name="gpt-5.2", params={"temperature": 0})
Anthropic¶
model = Model(provider="anthropic", name="claude-sonnet-4-6")
model = Model(provider="anthropic", name="claude-opus-4-6", params={"max_tokens": 4096})
Google Gemini¶
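A minimal sketch using the google_genai provider identifier from the table above; the model names here are illustrative:
model = Model(provider="google_genai", name="gemini-2.5-flash")
model = Model(provider="google_genai", name="gemini-2.5-pro", params={"temperature": 0})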
Azure OpenAI¶
model = Model(
provider="azure_openai",
name="my-deployment",
params={
"azure_endpoint": "https://my-resource.openai.azure.com/",
"api_version": "2024-02-15-preview",
},
)
Methods¶
get_chat_model()¶
Returns a LangChain BaseChatModel. A fresh instance is created on each call, so the same Model can be reused across event loops without stale HTTP client state.
model = Model(provider="openai", name="gpt-4.1-mini")
chat_model = model.get_chat_model()  # Creates a fresh chat model
chat_model = model.get_chat_model()  # Creates another, independent instance
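Because each call returns a fresh instance, one Model can safely drive multiple asyncio.run() calls. A minimal sketch:
import asyncio
from smelt import Model

model = Model(provider="openai", name="gpt-4.1-mini")

async def ask(prompt: str) -> str:
    # get_chat_model() returns a chat model bound to the current event loop
    response = await model.get_chat_model().ainvoke(prompt)
    return response.content

asyncio.run(ask("Hello"))        # first event loop
asyncio.run(ask("Hello again"))  # second event loop; no stale HTTP client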
Raises: SmeltConfigError if the provider or model cannot be initialized (e.g. missing package, invalid credentials).
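A sketch of handling the failure explicitly, assuming SmeltConfigError is importable from the top-level smelt package:
from smelt import Model, SmeltConfigError  # import path for the error is assumed

model = Model(provider="anthropic", name="claude-sonnet-4-6")
try:
    chat_model = model.get_chat_model()
except SmeltConfigError as exc:
    # Raised when the provider package is missing or credentials are invalid
    print(f"Model initialization failed: {exc}")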
Source¶
Model(provider, name, api_key=None, params=dict()) dataclass¶
Configuration for an LLM provider used by smelt.
Creates a fresh LangChain chat model on each call to get_chat_model().
This avoids stale event-loop state when multiple asyncio.run() calls
reuse the same Model instance (e.g. job.test() followed by job.run()).
Attributes:
| Name | Type | Description |
|---|---|---|
| provider | str | The LangChain model provider identifier (e.g. "openai", "anthropic"). |
| name | str | The model name (e.g. "gpt-4o", "claude-sonnet-4-20250514"). |
| api_key | str \| None | Optional API key. If not provided, the provider's default environment variable will be used. |
| params | dict[str, Any] | Additional keyword arguments forwarded to the chat model constructor (e.g. temperature, max_tokens). |
Examples:
>>> model = Model(provider="openai", name="gpt-4o", params={"temperature": 0})
>>> chat_model = model.get_chat_model()
get_chat_model()¶
Return a fresh LangChain chat model instance.
A new instance is created on each call to avoid stale HTTP client
state across event loops (e.g. when calling job.test() then
job.run() in the same script).
Returns:
| Type | Description |
|---|---|
| BaseChatModel | The initialized chat model instance. |
Raises:
| Type | Description |
|---|---|
| SmeltConfigError | If the model provider cannot be initialized (e.g. missing provider package, invalid credentials). |