Table of Contents
- spoon_ai.llm
- spoon_ai.llm.vlm_provider.gemini
- spoon_ai.llm.vlm_provider.base
- spoon_ai.llm.factory
- spoon_ai.llm.monitoring
- spoon_ai.llm.response_normalizer
- spoon_ai.llm.cache
- spoon_ai.llm.interface
- spoon_ai.llm.providers.deepseek_provider
- spoon_ai.llm.providers.gemini_provider
- spoon_ai.llm.providers.openai_compatible_provider
- spoon_ai.llm.providers.ollama_provider
- spoon_ai.llm.providers.openai_provider
- spoon_ai.llm.providers.anthropic_provider
- spoon_ai.llm.providers.openrouter_provider
- spoon_ai.llm.providers
- spoon_ai.llm.manager
- spoon_ai.llm.registry
- spoon_ai.llm.config
- spoon_ai.llm.errors
- spoon_ai.llm.base
Module spoon_ai.llm
Unified LLM infrastructure package.
This package provides a unified interface for working with different LLM providers, including comprehensive configuration management, monitoring, and error handling.
Module spoon_ai.llm.vlm_provider.gemini
GeminiConfig Objects​
class GeminiConfig(LLMConfig)
Gemini Configuration
validate_api_key​
@model_validator(mode='after')
def validate_api_key()
Validate that API key is provided
GeminiProvider Objects​
@LLMFactory.register("gemini")
class GeminiProvider(LLMBase)
Gemini Provider Implementation
__init__​
def __init__(config_path: str = "config/config.toml",
config_name: str = "chitchat")
Initialize Gemini Provider
Arguments:
- config_path - Configuration file path
- config_name - Configuration name
Raises:
ValueError- If GEMINI_API_KEY environment variable is not set
chat​
async def chat(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
response_modalities: Optional[List[str]] = None,
**kwargs) -> LLMResponse
Send chat request to Gemini and get response
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- response_modalities - List of response modalities (optional, e.g. ['Text', 'Image'])
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
completion​
async def completion(prompt: str, **kwargs) -> LLMResponse
Send text completion request to Gemini and get response
Arguments:
- prompt - Prompt text
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
chat_with_tools​
async def chat_with_tools(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
tools: Optional[List[Dict]] = None,
tool_choice: Literal["none", "auto",
"required"] = "auto",
**kwargs) -> LLMResponse
Send chat request to Gemini that may contain tool calls and get response
Note: This Gemini provider does not currently support tool calls, so this method falls back to the regular chat method.
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- tools - List of tools (not supported by Gemini)
- tool_choice - Tool choice mode (not supported by Gemini)
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
generate_content​
async def generate_content(model: Optional[str] = None,
contents: Union[str, List, types.Content,
types.Part] = None,
config: Optional[
types.GenerateContentConfig] = None,
**kwargs) -> LLMResponse
Directly call Gemini's generate_content interface
Arguments:
- model - Model name (optional, will override model in configuration)
- contents - Request content, can be a string, a list, or a types.Content/types.Part object
- config - Generation configuration
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
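A minimal usage sketch for this provider, assuming GEMINI_API_KEY is set and a chitchat section exists in config/config.toml; the Message import path and its role/content fields are assumptions, not part of this reference.
```python
import asyncio

from spoon_ai.llm.vlm_provider.gemini import GeminiProvider
from spoon_ai.schema import Message  # assumed import path for the Message model

async def main() -> None:
    # Raises ValueError if GEMINI_API_KEY is not set in the environment.
    provider = GeminiProvider(config_path="config/config.toml", config_name="chitchat")

    response = await provider.chat(
        messages=[Message(role="user", content="Describe this package in one sentence.")],
        response_modalities=["Text"],  # optional, e.g. ['Text', 'Image']
    )
    print(response.text)

asyncio.run(main())
```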
Module spoon_ai.llm.vlm_provider.base
LLMConfig Objects​
class LLMConfig(BaseModel)
Base class for LLM configuration
LLMResponse Objects​
class LLMResponse(BaseModel)
Base class for LLM response
text​
Original text response
LLMBase Objects​
class LLMBase(ABC)
Abstract base class for LLMs, defining the interface that all LLM providers must implement
__init__​
def __init__(config_path: str = "config/config.toml",
config_name: str = "llm")
Initialize LLM interface
Arguments:
- config_path - Configuration file path
- config_name - Configuration name
chat​
@abstractmethod
async def chat(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
**kwargs) -> LLMResponse
Send chat request to LLM and get response
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
completion​
@abstractmethod
async def completion(prompt: str, **kwargs) -> LLMResponse
Send text completion request to LLM and get response
Arguments:
- prompt - Prompt text
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
chat_with_tools​
@abstractmethod
async def chat_with_tools(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
tools: Optional[List[Dict]] = None,
tool_choice: Literal["none", "auto",
"required"] = "auto",
**kwargs) -> LLMResponse
Send chat request that may contain tool calls to LLM and get response
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- tools - List of tools
- tool_choice - Tool selection mode
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
generate_image​
async def generate_image(prompt: str, **kwargs) -> Union[str, List[str]]
Generate image (optional implementation)
Arguments:
- prompt - Prompt text
- **kwargs - Other parameters
Returns:
Union[str, List[str]]: Image URL or list of URLs
reset_output_handler​
def reset_output_handler()
Reset output handler
Module spoon_ai.llm.factory
LLMFactory Objects​
class LLMFactory()
Factory class for creating instances of different LLM providers
register​
@classmethod
def register(cls, provider_name: str)
Register LLM provider
Arguments:
- provider_name - Provider name
Returns:
Decorator function
create​
@classmethod
def create(cls,
provider: Optional[str] = None,
config_path: str = "config/config.toml",
config_name: str = "llm") -> LLMBase
Create LLM instance
Arguments:
- provider - Provider name; if None, read from configuration file
- config_path - Configuration file path
- config_name - Configuration name
Returns:
LLMBase- LLM instance
Raises:
ValueError- If provider does not exist
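A short sketch of creating an instance through the factory, assuming a gemini provider section is configured in config/config.toml; the prompt text is purely illustrative.
```python
import asyncio

from spoon_ai.llm.factory import LLMFactory

async def main() -> None:
    # With provider=None the provider name would be read from the configuration file;
    # here it is pinned explicitly. Raises ValueError for unknown providers.
    llm = LLMFactory.create(
        provider="gemini",
        config_path="config/config.toml",
        config_name="llm",
    )

    response = await llm.completion("Summarize what an LLM factory does.")
    print(response.text)

asyncio.run(main())
```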
Module spoon_ai.llm.monitoring
Comprehensive monitoring, debugging, and metrics collection for LLM operations.
RequestMetrics Objects​
@dataclass
class RequestMetrics()
Metrics for a single LLM request.
ProviderStats Objects​
@dataclass
class ProviderStats()
Aggregated statistics for a provider.
get​
def get(key: str, default=None)
Get attribute value with default fallback for dictionary-like access.
Arguments:
- key - Attribute name
- default - Default value if attribute doesn't exist
Returns:
Attribute value or default
success_rate​
@property
def success_rate() -> float
Calculate success rate as a percentage.
avg_response_time​
@property
def avg_response_time() -> float
Get average response time.
DebugLogger Objects​
class DebugLogger()
Comprehensive logging and debugging system for LLM operations.
__init__​
def __init__(max_history: int = 1000, enable_detailed_logging: bool = True)
Initialize debug logger.
Arguments:
- max_history - Maximum number of requests to keep in history
- enable_detailed_logging - Whether to enable detailed request/response logging
log_request​
def log_request(provider: str, method: str, params: Dict[str, Any]) -> str
Log request with unique ID.
Arguments:
- provider - Provider name
- method - Method being called (chat, completion, etc.)
- params - Request parameters
Returns:
str- Unique request ID
log_response​
def log_response(request_id: str, response: LLMResponse,
duration: float) -> None
Log response with timing information.
Arguments:
- request_id - Request ID from log_request
- response - LLM response object
- duration - Request duration in seconds
log_error​
def log_error(request_id: str, error: Exception, context: Dict[str,
Any]) -> None
Log error with context.
Arguments:
- request_id - Request ID from log_request
- error - Exception that occurred
- context - Additional error context
log_fallback​
def log_fallback(from_provider: str, to_provider: str, reason: str) -> None
Log provider fallback event.
Arguments:
- from_provider - Provider that failed
- to_provider - Provider being used as fallback
- reason - Reason for fallback
get_request_history​
def get_request_history(provider: Optional[str] = None,
limit: Optional[int] = None) -> List[RequestMetrics]
Get request history.
Arguments:
- provider - Filter by provider (optional)
- limit - Maximum number of requests to return (optional)
Returns:
List[RequestMetrics]- List of request metrics
get_active_requests​
def get_active_requests() -> List[RequestMetrics]
Get currently active requests.
Returns:
List[RequestMetrics]- List of active request metrics
clear_history​
def clear_history() -> None
Clear request history.
MetricsCollector Objects​
class MetricsCollector()
Collects and aggregates performance metrics for LLM providers.
__init__​
def __init__(window_size: int = 3600)
Initialize metrics collector.
Arguments:
window_size- Time window in seconds for rolling metrics
record_request​
def record_request(provider: str,
method: str,
duration: float,
success: bool,
tokens: int = 0,
model: str = '',
error: Optional[str] = None) -> None
Record request metrics.
Arguments:
- provider - Provider name
- method - Method called
- duration - Request duration in seconds
- success - Whether request was successful
- tokens - Number of tokens used
- model - Model name
- error - Error message if failed
get_provider_stats​
def get_provider_stats(provider: str) -> Optional[ProviderStats]
Get statistics for a specific provider.
Arguments:
provider- Provider name
Returns:
Optional[ProviderStats]- Provider statistics or None if not found
get_all_stats​
def get_all_stats() -> Dict[str, ProviderStats]
Get statistics for all providers.
Returns:
Dict[str, ProviderStats]: Dictionary of provider statistics
get_rolling_metrics​
def get_rolling_metrics(provider: Optional[str] = None,
method: Optional[str] = None) -> List[Dict[str, Any]]
Get rolling metrics with optional filtering.
Arguments:
- provider - Filter by provider (optional)
- method - Filter by method (optional)
Returns:
List[Dict[str, Any]]: List of metrics
get_summary​
def get_summary() -> Dict[str, Any]
Get overall summary statistics.
Returns:
Dict[str, Any]: Summary statistics
reset_stats​
def reset_stats(provider: Optional[str] = None) -> None
Reset statistics.
Arguments:
provider- Reset specific provider only (optional)
get_debug_logger​
def get_debug_logger() -> DebugLogger
Get global debug logger instance.
Returns:
DebugLogger- Global debug logger
get_metrics_collector​
def get_metrics_collector() -> MetricsCollector
Get global metrics collector instance.
Returns:
MetricsCollector- Global metrics collector
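A small sketch of reading metrics from the global collector and logger; the provider name, model name, and timing values are illustrative only, since in normal use the LLM manager records these automatically.
```python
from spoon_ai.llm.monitoring import get_debug_logger, get_metrics_collector

metrics = get_metrics_collector()
logger = get_debug_logger()

# Record one hand-timed request for illustration.
metrics.record_request(
    provider="openai",
    method="chat",
    duration=0.42,    # seconds
    success=True,
    tokens=128,
    model="gpt-4o",   # illustrative model name
)

stats = metrics.get_provider_stats("openai")
if stats is not None:
    print(f"success rate: {stats.success_rate:.1f}%")
    print(f"avg response time: {stats.avg_response_time:.2f}s")

print(metrics.get_summary())

# The debug logger keeps a rolling history of the requests it has seen.
for entry in logger.get_request_history(limit=5):
    print(entry)
```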
Module spoon_ai.llm.response_normalizer
Response normalizer for ensuring consistent response formats across providers.
ResponseNormalizer Objects​
class ResponseNormalizer()
Normalizes responses from different providers to ensure consistency.
normalize_response​
def normalize_response(response: LLMResponse) -> LLMResponse
Normalize a response from any provider.
Arguments:
response- Raw LLM response
Returns:
LLMResponse- Normalized response
Raises:
ValidationError- If response cannot be normalized
validate_response​
def validate_response(response: LLMResponse) -> bool
Validate that a response meets minimum requirements.
Arguments:
response- Response to validate
Returns:
bool- True if response is valid
Raises:
ValidationError- If response is invalid
add_provider_mapping​
def add_provider_mapping(provider_name: str, normalizer_func) -> None
Add a custom normalizer for a new provider.
Arguments:
- provider_name - Name of the provider
- normalizer_func - Function that takes and returns LLMResponse
get_supported_providers​
def get_supported_providers() -> List[str]
Get list of providers with custom normalizers.
Returns:
List[str]- List of provider names
get_response_normalizer​
def get_response_normalizer() -> ResponseNormalizer
Get global response normalizer instance.
Returns:
ResponseNormalizer- Global normalizer instance
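A hedged sketch of registering a custom normalizer for a hypothetical provider name; the response attribute names probed here (content, text) are assumptions about the LLMResponse shape, which is why the example only touches fields that actually exist.
```python
from spoon_ai.llm.response_normalizer import get_response_normalizer

normalizer = get_response_normalizer()

# A custom normalizer is any callable that takes an LLMResponse and returns an LLMResponse.
def trim_text(response):
    # Hypothetical tweak: strip surrounding whitespace from text-like fields, if present.
    for field in ("content", "text"):
        value = getattr(response, field, None)
        if isinstance(value, str):
            setattr(response, field, value.strip())
    return response

normalizer.add_provider_mapping("my_provider", trim_text)
print(normalizer.get_supported_providers())
```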
Module spoon_ai.llm.cache
LLM Response Caching - Cache LLM responses to avoid redundant API calls.
LLMResponseCache Objects​
class LLMResponseCache()
Cache for LLM responses to avoid redundant API calls.
__init__​
def __init__(default_ttl: int = 3600, max_size: int = 1000)
Initialize the cache.
Arguments:
- default_ttl - Default time-to-live in seconds (default: 1 hour)
- max_size - Maximum number of cached entries (default: 1000)
get​
def get(messages: List[Message],
provider: Optional[str] = None,
**kwargs) -> Optional[LLMResponse]
Get cached response if available.
Arguments:
- messages - List of conversation messages
- provider - Provider name (optional)
- **kwargs - Additional parameters
Returns:
Optional[LLMResponse]- Cached response if found and not expired, None otherwise
set​
def set(messages: List[Message],
response: LLMResponse,
provider: Optional[str] = None,
ttl: Optional[int] = None,
**kwargs) -> None
Store response in cache.
Arguments:
- messages - List of conversation messages
- response - LLM response to cache
- provider - Provider name (optional)
- ttl - Time-to-live in seconds (optional, uses default if not provided)
- **kwargs - Additional parameters
clear​
def clear() -> None
Clear all cached entries.
get_stats​
def get_stats() -> Dict[str, Any]
Get cache statistics.
Returns:
Dict[str, Any]: Cache statistics including size, max_size, etc.
CachedLLMManager Objects​
class CachedLLMManager()
Wrapper around LLMManager that adds response caching.
__init__​
def __init__(llm_manager: LLMManager,
cache: Optional[LLMResponseCache] = None)
Initialize cached LLM manager.
Arguments:
- llm_manager - The underlying LLMManager instance
- cache - Optional cache instance (creates new one if not provided)
chat​
async def chat(messages: List[Message],
provider: Optional[str] = None,
use_cache: bool = True,
cache_ttl: Optional[int] = None,
**kwargs) -> LLMResponse
Send chat request with caching support.
Arguments:
- messages - List of conversation messages
- provider - Specific provider to use (optional)
- use_cache - Whether to use cache (default: True)
- cache_ttl - Custom TTL for this request (optional)
- **kwargs - Additional parameters
Returns:
LLMResponse- LLM response (from cache or API)
chat_stream​
async def chat_stream(messages: List[Message],
provider: Optional[str] = None,
callbacks: Optional[List] = None,
**kwargs)
Send streaming chat request (caching not supported for streaming).
Arguments:
- messages - List of conversation messages
- provider - Specific provider to use (optional)
- callbacks - Optional callback handlers
- **kwargs - Additional parameters
Yields:
LLMResponseChunk- Streaming response chunks
clear_cache​
def clear_cache() -> None
Clear the response cache.
get_cache_stats​
def get_cache_stats() -> Dict[str, Any]
Get cache statistics.
Returns:
Dict[str, Any]: Cache statistics
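A minimal sketch of wrapping the global manager with a response cache; the TTL and size values are illustrative, and the Message import path and fields are assumptions.
```python
import asyncio

from spoon_ai.llm.cache import CachedLLMManager, LLMResponseCache
from spoon_ai.llm.manager import get_llm_manager
from spoon_ai.schema import Message  # assumed import path for the Message model

async def main() -> None:
    # Wrap the global manager with a 15-minute cache capped at 500 entries.
    cached = CachedLLMManager(
        llm_manager=get_llm_manager(),
        cache=LLMResponseCache(default_ttl=900, max_size=500),
    )

    messages = [Message(role="user", content="What is a fallback chain?")]

    first = await cached.chat(messages)   # goes to the provider
    second = await cached.chat(messages)  # served from cache if not expired
    print(second)
    print(cached.get_cache_stats())

asyncio.run(main())
```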
Module spoon_ai.llm.interface
LLM Provider Interface - Abstract base class defining the unified interface for all LLM providers.
ProviderCapability Objects​
class ProviderCapability(Enum)
Enumeration of capabilities that LLM providers can support.
ProviderMetadata Objects​
@dataclass
class ProviderMetadata()
Metadata describing a provider's capabilities and limits.
LLMResponse Objects​
@dataclass
class LLMResponse()
Enhanced LLM response with comprehensive metadata and debugging information.
LLMProviderInterface Objects​
class LLMProviderInterface(ABC)
Abstract base class defining the unified interface for all LLM providers.
initialize​
@abstractmethod
async def initialize(config: Dict[str, Any]) -> None
Initialize the provider with configuration.
Arguments:
config- Provider-specific configuration dictionary
Raises:
ConfigurationError- If configuration is invalid
chat​
@abstractmethod
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to the provider.
Arguments:
- messages - List of conversation messages
- **kwargs - Additional provider-specific parameters
Returns:
LLMResponse- Standardized response object
Raises:
ProviderError- If the request fails
chat_stream​
@abstractmethod
async def chat_stream(messages: List[Message],
callbacks: Optional[List[BaseCallbackHandler]] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request to the provider with callback support.
Arguments:
- messages - List of conversation messages
- callbacks - Optional list of callback handlers for real-time events
- **kwargs - Additional provider-specific parameters
Yields:
LLMResponseChunk- Structured streaming response chunks
Raises:
ProviderError- If the request fails
completion​
@abstractmethod
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to the provider.
Arguments:
- prompt - Text prompt for completion
- **kwargs - Additional provider-specific parameters
Returns:
LLMResponse- Standardized response object
Raises:
ProviderError- If the request fails
chat_with_tools​
@abstractmethod
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tool support.
Arguments:
- messages - List of conversation messages
- tools - List of available tools
- **kwargs - Additional provider-specific parameters
Returns:
LLMResponse- Standardized response object with potential tool calls
Raises:
ProviderError- If the request fails
get_metadata​
@abstractmethod
def get_metadata() -> ProviderMetadata
Get provider metadata and capabilities.
Returns:
ProviderMetadata- Provider information and capabilities
health_check​
@abstractmethod
async def health_check() -> bool
Check if provider is healthy and available.
Returns:
bool- True if provider is healthy, False otherwise
cleanup​
@abstractmethod
async def cleanup() -> None
Cleanup resources and connections.
This method should be called when the provider is no longer needed.
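A skeleton sketch of a custom provider implementing this interface and registering itself via the decorator from the registry module; the "echo" name is hypothetical, and all method bodies are placeholders rather than a working backend.
```python
from typing import Any, Dict

from spoon_ai.llm.interface import (
    LLMProviderInterface,
    LLMResponse,
    ProviderCapability,
    ProviderMetadata,
)
from spoon_ai.llm.registry import register_provider

@register_provider("echo", [ProviderCapability.CHAT, ProviderCapability.COMPLETION])
class EchoProvider(LLMProviderInterface):
    """Hypothetical provider skeleton showing the required interface surface."""

    async def initialize(self, config: Dict[str, Any]) -> None:
        # Validate and store provider-specific configuration here.
        self._config = config

    async def chat(self, messages, **kwargs) -> LLMResponse:
        raise NotImplementedError("Wire this up to a real backend.")

    async def chat_stream(self, messages, callbacks=None, **kwargs):
        # The unreachable yield marks this as an async generator, matching the interface.
        raise NotImplementedError("Streaming is not implemented in this skeleton.")
        yield

    async def completion(self, prompt: str, **kwargs) -> LLMResponse:
        raise NotImplementedError

    async def chat_with_tools(self, messages, tools, **kwargs) -> LLMResponse:
        raise NotImplementedError

    def get_metadata(self) -> ProviderMetadata:
        raise NotImplementedError

    async def health_check(self) -> bool:
        return True

    async def cleanup(self) -> None:
        pass
```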
Module spoon_ai.llm.providers.deepseek_provider
DeepSeek Provider implementation for the unified LLM interface. DeepSeek exposes its models through an OpenAI-compatible API.
DeepSeekProvider Objects​
@register_provider("deepseek", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class DeepSeekProvider(OpenAICompatibleProvider)
DeepSeek provider implementation using OpenAI-compatible API.
get_metadata​
def get_metadata() -> ProviderMetadata
Get DeepSeek provider metadata.
Module spoon_ai.llm.providers.gemini_provider
Gemini Provider implementation for the unified LLM interface.
GeminiProvider Objects​
@register_provider("gemini", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.STREAMING,
ProviderCapability.TOOLS,
ProviderCapability.IMAGE_GENERATION,
ProviderCapability.VISION
])
class GeminiProvider(LLMProviderInterface)
Gemini provider implementation.
initialize​
async def initialize(config: Dict[str, Any]) -> None
Initialize the Gemini provider with configuration.
chat​
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to Gemini.
chat_stream​
async def chat_stream(messages: List[Message],
callbacks: Optional[List] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request to Gemini with callback support.
Yields:
LLMResponseChunk- Structured streaming response chunks
completion​
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to Gemini.
chat_with_tools​
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to Gemini using native function calling.
get_metadata​
def get_metadata() -> ProviderMetadata
Get Gemini provider metadata.
health_check​
async def health_check() -> bool
Check if Gemini provider is healthy.
cleanup​
async def cleanup() -> None
Cleanup Gemini provider resources.
Module spoon_ai.llm.providers.openai_compatible_provider
OpenAI Compatible Provider base class for providers that use OpenAI-compatible APIs. This includes OpenAI, OpenRouter, DeepSeek, and other providers with similar interfaces.
OpenAICompatibleProvider Objects​
class OpenAICompatibleProvider(LLMProviderInterface)
Base class for OpenAI-compatible providers.
get_provider_name​
def get_provider_name() -> str
Get the provider name. Should be overridden by subclasses.
get_default_base_url​
def get_default_base_url() -> str
Get the default base URL. Should be overridden by subclasses.
get_default_model​
def get_default_model() -> str
Get the default model. Should be overridden by subclasses.
get_additional_headers​
def get_additional_headers(config: Dict[str, Any]) -> Dict[str, str]
Get additional headers for the provider. Can be overridden by subclasses.
initialize​
async def initialize(config: Dict[str, Any]) -> None
Initialize the provider with configuration.
chat​
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to the provider.
chat_stream​
async def chat_stream(messages: List[Message],
callbacks: Optional[List[BaseCallbackHandler]] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request with full callback support.
Yields:
LLMResponseChunk- Structured streaming response chunks
completion​
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to the provider.
chat_with_tools​
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to the provider.
get_metadata​
def get_metadata() -> ProviderMetadata
Get provider metadata. Should be overridden by subclasses.
health_check​
async def health_check() -> bool
Check if provider is healthy.
cleanup​
async def cleanup() -> None
Cleanup provider resources.
Module spoon_ai.llm.providers.ollama_provider
Ollama Provider implementation for the unified LLM interface.
Ollama runs locally and exposes an HTTP API (default: http://localhost:11434). This provider supports chat, completion, and streaming.
Notes:
- Ollama does not require an API key; the configuration layer may still provide a placeholder api_key value for consistency.
- Tool calling is not implemented here.
OllamaProvider Objects​
@register_provider(
"ollama",
[
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.STREAMING,
],
)
class OllamaProvider(LLMProviderInterface)
Local Ollama provider via HTTP.
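A hedged sketch of routing a request to a local Ollama daemon through the LLM manager, assuming Ollama is running on its default port and an ollama provider is configured; the Message import path and fields are assumptions.
```python
import asyncio

from spoon_ai.llm.manager import get_llm_manager
from spoon_ai.schema import Message  # assumed import path for the Message model

async def main() -> None:
    manager = get_llm_manager()

    # Assumes a local Ollama daemon on http://localhost:11434 and an
    # "ollama" entry in the environment-driven provider configuration.
    response = await manager.chat(
        messages=[Message(role="user", content="Say hello from a local model.")],
        provider="ollama",
    )
    print(response)

asyncio.run(main())
```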
Module spoon_ai.llm.providers.openai_provider
OpenAI Provider implementation for the unified LLM interface.
OpenAIProvider Objects​
@register_provider("openai", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class OpenAIProvider(OpenAICompatibleProvider)
OpenAI provider implementation.
get_metadata​
def get_metadata() -> ProviderMetadata
Get OpenAI provider metadata.
Module spoon_ai.llm.providers.anthropic_provider
Anthropic Provider implementation for the unified LLM interface.
AnthropicProvider Objects​
@register_provider("anthropic", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class AnthropicProvider(LLMProviderInterface)
Anthropic provider implementation.
initialize​
async def initialize(config: Dict[str, Any]) -> None
Initialize the Anthropic provider with configuration.
get_cache_metrics​
def get_cache_metrics() -> Dict[str, int]
Get current cache performance metrics.
chat​
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to Anthropic.
chat_stream​
async def chat_stream(messages: List[Message],
callbacks: Optional[List] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request to Anthropic with callback support.
Yields:
LLMResponseChunk- Structured streaming response chunks
completion​
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to Anthropic.
chat_with_tools​
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to Anthropic.
get_metadata​
def get_metadata() -> ProviderMetadata
Get Anthropic provider metadata.
health_check​
async def health_check() -> bool
Check if Anthropic provider is healthy.
cleanup​
async def cleanup() -> None
Cleanup Anthropic provider resources.
Module spoon_ai.llm.providers.openrouter_provider
OpenRouter Provider implementation for the unified LLM interface. OpenRouter provides access to many models through a unified, OpenAI-compatible API.
OpenRouterProvider Objects​
@register_provider("openrouter", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class OpenRouterProvider(OpenAICompatibleProvider)
OpenRouter provider implementation using OpenAI-compatible API.
get_additional_headers​
def get_additional_headers(config: Dict[str, Any]) -> Dict[str, str]
Get OpenRouter-specific headers.
get_metadata​
def get_metadata() -> ProviderMetadata
Get OpenRouter provider metadata.
Module spoon_ai.llm.providers
LLM Provider implementations.
Module spoon_ai.llm.manager
LLM Manager - Central orchestrator for managing providers, fallback, and load balancing.
ProviderState Objects​
@dataclass
class ProviderState()
Track provider initialization and health state.
can_retry_initialization​
def can_retry_initialization() -> bool
Check if provider initialization can be retried.
record_initialization_failure​
def record_initialization_failure(error: Exception) -> None
Record initialization failure with exponential backoff.
record_initialization_success​
def record_initialization_success() -> None
Record successful initialization.
FallbackStrategy Objects​
class FallbackStrategy()
Handles fallback logic between providers.
execute_with_fallback​
async def execute_with_fallback(providers: List[str], operation, *args,
**kwargs) -> LLMResponse
Execute operation with fallback chain.
Arguments:
- providers - List of provider names in fallback order
- operation - Async operation to execute
- *args, **kwargs - Arguments for the operation
Returns:
LLMResponse- Response from successful provider
Raises:
ProviderError- If all providers fail
LoadBalancer Objects​
class LoadBalancer()
Handles load balancing between multiple provider instances.
select_provider​
def select_provider(providers: List[str],
strategy: str = "round_robin") -> str
Select a provider based on load balancing strategy.
Arguments:
- providers - List of available providers
- strategy - Load balancing strategy ('round_robin', 'weighted', 'random')
Returns:
str- Selected provider name
update_provider_health​
def update_provider_health(provider: str, is_healthy: bool) -> None
Update provider health status.
set_provider_weight​
def set_provider_weight(provider: str, weight: float) -> None
Set provider weight for weighted load balancing.
LLMManager Objects​
class LLMManager()
Central orchestrator for LLM providers with fallback and load balancing.
__init__​
def __init__(config_manager: Optional[ConfigurationManager] = None,
debug_logger: Optional[DebugLogger] = None,
metrics_collector: Optional[MetricsCollector] = None,
response_normalizer: Optional[ResponseNormalizer] = None,
registry: Optional[LLMProviderRegistry] = None)
Initialize LLM Manager with enhanced provider state tracking.
cleanup​
async def cleanup() -> None
Enhanced cleanup with proper resource management.
get_provider_status​
def get_provider_status() -> Dict[str, Dict[str, Any]]
Get detailed status of all providers.
reset_provider​
async def reset_provider(provider_name: str) -> bool
Reset a provider's state and force reinitialization.
Arguments:
provider_name- Name of provider to reset
Returns:
bool- True if reset successful
chat​
async def chat(messages: List[Message],
provider: Optional[str] = None,
**kwargs) -> LLMResponse
Send chat request with automatic provider selection and fallback.
Arguments:
- messages - List of conversation messages
- provider - Specific provider to use (optional)
- **kwargs - Additional parameters
Returns:
LLMResponse- Normalized response
chat_stream​
async def chat_stream(messages: List[Message],
provider: Optional[str] = None,
callbacks: Optional[List[BaseCallbackHandler]] = None,
**kwargs) -> AsyncGenerator[LLMResponseChunk, None]
Send streaming chat request with callback support.
Arguments:
- messages - List of conversation messages
- provider - Specific provider to use (optional)
- callbacks - Optional callback handlers for monitoring
- **kwargs - Additional parameters
Yields:
LLMResponseChunk- Structured streaming response chunks
completion​
async def completion(prompt: str,
provider: Optional[str] = None,
**kwargs) -> LLMResponse
Send completion request.
Arguments:
- prompt - Text prompt
- provider - Specific provider to use (optional)
- **kwargs - Additional parameters
Returns:
LLMResponse- Normalized response
chat_with_tools​
async def chat_with_tools(messages: List[Message],
tools: List[Dict],
provider: Optional[str] = None,
**kwargs) -> LLMResponse
Send tool-enabled chat request.
Arguments:
- messages - List of conversation messages
- tools - List of available tools
- provider - Specific provider to use (optional)
- **kwargs - Additional parameters
Returns:
LLMResponse- Normalized response
set_fallback_chain​
def set_fallback_chain(providers: List[str]) -> None
Set fallback provider chain.
Arguments:
providers- List of provider names in fallback order
enable_load_balancing​
def enable_load_balancing(strategy: str = "round_robin") -> None
Enable load balancing with specified strategy.
Arguments:
strategy- Load balancing strategy ('round_robin', 'weighted', 'random')
disable_load_balancing​
def disable_load_balancing() -> None
Disable load balancing.
health_check_all​
async def health_check_all() -> Dict[str, bool]
Check health of all registered providers.
Returns:
Dict[str, bool]: Provider health status
get_stats​
def get_stats() -> Dict[str, Any]
Get comprehensive statistics.
Returns:
Dict[str, Any]: Manager and provider statistics
get_llm_manager​
def get_llm_manager() -> LLMManager
Get global LLM manager instance.
Returns:
LLMManager- Global manager instance
set_llm_manager​
def set_llm_manager(manager: LLMManager) -> None
Set global LLM manager instance.
Arguments:
manager- Manager instance to set as global
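A sketch of driving the global manager with a fallback chain and load balancing; the provider names in the chain are examples, and the Message import path and fields are assumptions.
```python
import asyncio

from spoon_ai.llm.manager import get_llm_manager
from spoon_ai.schema import Message  # assumed import path for the Message model

async def main() -> None:
    manager = get_llm_manager()

    # Prefer openai, then fall back to anthropic and deepseek if a request fails.
    manager.set_fallback_chain(["openai", "anthropic", "deepseek"])
    manager.enable_load_balancing(strategy="round_robin")

    response = await manager.chat(
        messages=[Message(role="user", content="Give one sentence about fallback chains.")],
    )
    print(response)

    print(await manager.health_check_all())
    print(manager.get_provider_status())

asyncio.run(main())
```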
Module spoon_ai.llm.registry
LLM Provider Registry for dynamic provider registration and discovery.
LLMProviderRegistry Objects​
class LLMProviderRegistry()
Registry for managing LLM provider classes and instances.
register​
def register(name: str, provider_class: Type[LLMProviderInterface]) -> None
Register a provider class.
Arguments:
- name - Unique provider name
- provider_class - Provider class implementing LLMProviderInterface
Raises:
ConfigurationError- If provider name already exists or class is invalid
get_provider​
def get_provider(
name: str,
config: Optional[Dict[str, Any]] = None) -> LLMProviderInterface
Get or create provider instance.
Arguments:
- name - Provider name
- config - Provider configuration (optional if already configured)
Returns:
LLMProviderInterface- Provider instance
Raises:
ConfigurationError- If provider not found or configuration invalid
list_providers​
def list_providers() -> List[str]
List all registered provider names.
Returns:
List[str]- List of provider names
get_capabilities​
def get_capabilities(name: str) -> List[ProviderCapability]
Get provider capabilities.
Arguments:
name- Provider name
Returns:
List[ProviderCapability]- List of supported capabilities
Raises:
ConfigurationError- If provider not found
is_registered​
def is_registered(name: str) -> bool
Check if a provider is registered.
Arguments:
name- Provider name
Returns:
bool- True if provider is registered
unregister​
def unregister(name: str) -> None
Unregister a provider.
Arguments:
name- Provider name
clear​
def clear() -> None
Clear all registered providers and instances.
register_provider​
def register_provider(name: str,
capabilities: Optional[List[ProviderCapability]] = None)
Decorator for automatic provider registration.
Arguments:
- name - Provider name
- capabilities - List of supported capabilities (optional)
Returns:
Decorator function
get_global_registry​
def get_global_registry() -> LLMProviderRegistry
Get the global provider registry instance.
Returns:
LLMProviderRegistry- Global registry instance
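A short sketch of runtime discovery through the global registry; "openai" is used only as an example of a provider name that may or may not be registered in a given deployment.
```python
from spoon_ai.llm.registry import get_global_registry

registry = get_global_registry()

# Providers registered via @register_provider are discoverable at runtime.
print(registry.list_providers())

if registry.is_registered("openai"):
    print(registry.get_capabilities("openai"))
```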
Module spoon_ai.llm.config
Configuration management for LLM providers using environment variables.
ProviderConfig Objects​
@dataclass
class ProviderConfig()
Configuration for a specific LLM provider.
__post_init__​
def __post_init__()
Validate configuration after initialization.
model_dump​
def model_dump() -> Dict[str, Any]
Convert the configuration to a dictionary.
Returns:
Dict[str, Any]: Configuration as dictionary
ConfigurationManager Objects​
class ConfigurationManager()
Manages environment-driven configuration for LLM providers.
__init__​
def __init__() -> None
Initialize configuration manager and load environment variables.
load_provider_config​
def load_provider_config(provider_name: str) -> ProviderConfig
Load and validate provider configuration.
Arguments:
provider_name- Name of the provider
Returns:
ProviderConfig- Validated provider configuration
Raises:
ConfigurationError- If configuration is invalid or missing
validate_config​
def validate_config(config: ProviderConfig) -> bool
Validate provider configuration.
Arguments:
config- Provider configuration to validate
Returns:
bool- True if configuration is valid
Raises:
ConfigurationError- If configuration is invalid
get_default_provider​
def get_default_provider() -> str
Get default provider from configuration with intelligent selection.
Returns:
str- Default provider name
get_fallback_chain​
def get_fallback_chain() -> List[str]
Get fallback chain from configuration.
Returns:
List[str]- List of provider names in fallback order
list_configured_providers​
def list_configured_providers() -> List[str]
List all configured providers.
Returns:
List[str]- List of provider names that have configuration
get_available_providers_by_priority​
def get_available_providers_by_priority() -> List[str]
Get available providers ordered by priority and quality.
Returns:
List[str]- List of available provider names in priority order
get_provider_info​
def get_provider_info() -> Dict[str, Dict[str, Any]]
Get information about all providers and their availability.
Returns:
Dict[str, Dict[str, Any]]: Provider information including availability
reload_config​
def reload_config() -> None
Reload configuration from file.
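A sketch of inspecting environment-driven configuration; "anthropic" is just an example provider name, and the except branch shows how a missing or invalid configuration surfaces as ConfigurationError.
```python
from spoon_ai.llm.config import ConfigurationManager
from spoon_ai.llm.errors import ConfigurationError

config_manager = ConfigurationManager()

print(config_manager.get_default_provider())
print(config_manager.get_fallback_chain())
print(config_manager.list_configured_providers())

try:
    provider_config = config_manager.load_provider_config("anthropic")
    print(provider_config.model_dump())
except ConfigurationError as exc:
    # Raised when the provider's configuration is invalid or missing.
    print(f"anthropic is not configured: {exc}")
```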
Module spoon_ai.llm.errors
Standardized error hierarchy for LLM operations.
LLMError Objects​
class LLMError(Exception)
Base exception for all LLM-related errors.
ProviderError Objects​
class ProviderError(LLMError)
Provider-specific error with detailed context.
ConfigurationError Objects​
class ConfigurationError(LLMError)
Configuration validation or loading error.
RateLimitError Objects​
class RateLimitError(ProviderError)
Rate limit exceeded error.
AuthenticationError Objects​
class AuthenticationError(ProviderError)
Authentication failed error.
ModelNotFoundError Objects​
class ModelNotFoundError(ProviderError)
Model not found or not available error.
TokenLimitError Objects​
class TokenLimitError(ProviderError)
Token limit exceeded error.
NetworkError Objects​
class NetworkError(ProviderError)
Network connectivity or timeout error.
ProviderUnavailableError Objects​
class ProviderUnavailableError(ProviderError)
Provider service unavailable error.
ValidationError Objects​
class ValidationError(LLMError)
Input validation error.
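A sketch of handling this hierarchy around a manager call, catching the most specific errors first; the Message import path and fields are assumptions.
```python
import asyncio

from spoon_ai.llm.errors import (
    AuthenticationError,
    LLMError,
    ProviderError,
    RateLimitError,
)
from spoon_ai.llm.manager import get_llm_manager
from spoon_ai.schema import Message  # assumed import path for the Message model

async def main() -> None:
    manager = get_llm_manager()
    try:
        print(await manager.chat([Message(role="user", content="ping")]))
    except RateLimitError:
        print("Rate limited; retry later or switch providers.")
    except AuthenticationError:
        print("Check the provider API key configuration.")
    except ProviderError as exc:
        print(f"Provider-level failure: {exc}")
    except LLMError as exc:
        print(f"Other LLM error (configuration, validation, ...): {exc}")

asyncio.run(main())
```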
Module spoon_ai.llm.base
LLMBase Objects​
class LLMBase(ABC)
Abstract base class for LLMs, defining the interface that all LLM providers must implement
__init__​
def __init__(config_path: str = "config/config.toml",
config_name: str = "llm")
Initialize LLM interface
Arguments:
- config_path - Configuration file path
- config_name - Configuration name
chat​
@abstractmethod
async def chat(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
**kwargs) -> LLMResponse
Send chat request to LLM and get response
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
completion​
@abstractmethod
async def completion(prompt: str, **kwargs) -> LLMResponse
Send text completion request to LLM and get response
Arguments:
- prompt - Prompt text
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
chat_with_tools​
@abstractmethod
async def chat_with_tools(messages: List[Message],
system_msgs: Optional[List[Message]] = None,
tools: Optional[List[Dict]] = None,
tool_choice: Literal["none", "auto",
"required"] = "auto",
**kwargs) -> LLMResponse
Send chat request that may contain tool calls to LLM and get response
Arguments:
- messages - List of messages
- system_msgs - List of system messages
- tools - List of tools
- tool_choice - Tool selection mode
- **kwargs - Other parameters
Returns:
LLMResponse- LLM response
generate_image​
async def generate_image(prompt: str, **kwargs) -> Union[str, List[str]]
Generate image (optional implementation)
Arguments:
- prompt - Prompt text
- **kwargs - Other parameters
Returns:
Union[str, List[str]]: Image URL or list of URLs
reset_output_handler​
def reset_output_handler()
Reset output handler