
Errors

All smelt exceptions inherit from SmeltError. You can catch any smelt error with:

from smelt.errors import SmeltError

try:
    result = job.run(model, data=data)
except SmeltError as e:
    print(f"Smelt error: {e}")

Hierarchy

SmeltError
├── SmeltConfigError         # Bad configuration
├── SmeltValidationError     # LLM output validation failure
├── SmeltAPIError            # Non-retriable API error
└── SmeltExhaustionError     # Retries exhausted
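
Because every subclass inherits from SmeltError, you can handle specific failures first and fall back to the base class. A minimal sketch, assuming job, model, and data are set up as in the examples on this page:

from smelt.errors import SmeltConfigError, SmeltError, SmeltExhaustionError

try:
    result = job.run(model, data=data)
except SmeltConfigError as e:
    # Raised before any LLM calls are made
    print(f"Fix the job or model configuration: {e}")
except SmeltExhaustionError as e:
    # A batch ran out of retries; partial results are still attached
    print(f"Run halted: {e}")
except SmeltError as e:
    # Catch-all for any other smelt error
    print(f"Smelt error: {e}")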

SmeltError

Base exception class. All smelt exceptions inherit from this.

from smelt.errors import SmeltError

SmeltError

Bases: Exception

Base exception for all smelt errors.


SmeltConfigError

Raised when configuration is invalid. This happens at job creation time or model initialization — before any LLM calls are made.

Common causes

Cause                   Example
Empty prompt            Job(prompt="", output_model=MyModel)
Invalid output_model    Job(prompt="ok", output_model=dict)
Bad batch_size          Job(prompt="ok", output_model=MyModel, batch_size=0)
Reserved field name     Output model has a row_id field
Bad provider            Model(provider="nonexistent", name="fake")
Missing package         Provider package not installed
Empty data              job.test(model, data=[])

Example

from smelt.errors import SmeltConfigError

try:
    job = Job(prompt="", output_model=MyModel)
except SmeltConfigError as e:
    print(e)  # "Job prompt must be a non-empty string."

SmeltConfigError

Bases: SmeltError

Raised when smelt configuration is invalid.

Examples include an unresolvable model provider, invalid batch size, or a missing API key.


SmeltValidationError

Raised internally when LLM output fails Pydantic schema validation or row ID checks. This exception is caught by the batch engine's retry loop — you typically won't see it directly.

Attributes

Attribute      Type    Description
raw_response   Any     The raw LLM response that failed validation

When it triggers

  • LLM returned wrong number of rows
  • Missing, duplicate, or unexpected row IDs
  • Pydantic field validation failure (including custom validators)
  • JSON parsing failure

Example

from smelt.errors import SmeltValidationError

# You typically won't catch this directly — it's internal to the retry loop
# But it's available if you need it:
try:
    # ... direct validation call ...
    pass
except SmeltValidationError as e:
    print(e)
    print(f"Raw response: {e.raw_response}")

SmeltValidationError(message, raw_response=None)

Bases: SmeltError

Raised when LLM output fails Pydantic validation.

Attributes:

Name           Type    Description
raw_response   Any     The raw LLM response that could not be validated.

Source code in src/smelt/errors.py
def __init__(self, message: str, raw_response: Any = None) -> None:
    super().__init__(message)
    self.raw_response: Any = raw_response

SmeltAPIError

Raised internally for non-retriable API errors (400, 401, 403). Like SmeltValidationError, this is caught by the batch engine.

Attributes

Attribute     Type         Description
status_code   int | None   HTTP status code, if available
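
Like SmeltValidationError, you normally won't catch this yourself, but the status code is there if you ever work with the lower-level pieces directly. A hedged sketch:

from smelt.errors import SmeltAPIError

try:
    # ... direct call that can surface a non-retriable API error ...
    pass
except SmeltAPIError as e:
    print(e)
    if e.status_code == 401:
        print("Check your API key")
    elif e.status_code is not None:
        print(f"API rejected the request with status {e.status_code}")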

SmeltAPIError(message, status_code=None)

Bases: SmeltError

Raised when the LLM API returns a non-retriable error.

Attributes:

Name          Type         Description
status_code   int | None   HTTP status code returned by the API, if available.

Source code in src/smelt/errors.py
def __init__(self, message: str, status_code: int | None = None) -> None:
    super().__init__(message)
    self.status_code: int | None = status_code

SmeltExhaustionError

Raised when a batch exhausts all retries and stop_on_exhaustion=True (the default). This is the primary user-facing error for batch failures.

Attributes

Attribute        Type          Description
partial_result   SmeltResult   Results accumulated before the failure, including successful batches

Example

from smelt.errors import SmeltExhaustionError

try:
    result = job.run(model, data=data)
except SmeltExhaustionError as e:
    print(f"Error: {e}")

    # Access partial results
    partial = e.partial_result
    print(f"Succeeded: {len(partial.data)} rows")
    print(f"Failed: {len(partial.errors)} batches")
    print(f"Tokens used: {partial.metrics.input_tokens + partial.metrics.output_tokens}")

    # Use successful rows
    for row in partial.data:
        process(row)

    # Inspect failures
    for err in partial.errors:
        print(f"  Batch {err.batch_index}: {err.error_type}{err.message}")

Avoiding this exception

Set stop_on_exhaustion=False to collect errors instead of raising:

job = Job(prompt="...", output_model=MyModel, stop_on_exhaustion=False)
result = job.run(model, data=data)
# No exception raised — check result.success and result.errors instead
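
With that setting, failures show up on the result object instead. A short follow-up sketch, assuming the same error-object fields used in the example above:

if not result.success:
    for err in result.errors:
        print(f"Batch {err.batch_index} ({err.error_type}): {err.message}")

# Successful rows are still available either way
for row in result.data:
    process(row)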

SmeltExhaustionError(message, partial_result)

Bases: SmeltError

Raised when a batch exhausts all retries and stop_on_exhaustion is enabled.

Attributes:

Name             Type               Description
partial_result   SmeltResult[Any]   The smelt.types.SmeltResult accumulated before the run was halted, including any successfully processed batches.

Source code in src/smelt/errors.py
def __init__(self, message: str, partial_result: SmeltResult[Any]) -> None:
    super().__init__(message)
    self.partial_result: SmeltResult[Any] = partial_result