Module spoon_ai.llm.manager

LLM Manager - Central orchestrator for managing providers, fallback, and load balancing.

ProviderState Objects

@dataclass
class ProviderState()

Track provider initialization and health state.

can_retry_initialization

def can_retry_initialization() -> bool

Check if provider initialization can be retried.

record_initialization_failure

def record_initialization_failure(error: Exception) -> None

Record initialization failure with exponential backoff.

record_initialization_success

def record_initialization_success() -> None

Record successful initialization.
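
For illustration, a minimal sketch of how such a backoff-tracking record could look; the field names and the base-2 delay are assumptions for the example, not the module's actual attributes.

import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderStateSketch:
    # Hypothetical fields; the real ProviderState may use different names.
    consecutive_failures: int = 0
    next_retry_time: float = 0.0
    last_error: Optional[Exception] = None

    def can_retry_initialization(self) -> bool:
        # Retry only once the current backoff window has elapsed.
        return time.monotonic() >= self.next_retry_time

    def record_initialization_failure(self, error: Exception) -> None:
        # Double the wait after each consecutive failure: 2s, 4s, 8s, ...
        self.consecutive_failures += 1
        self.next_retry_time = time.monotonic() + 2.0 ** self.consecutive_failures
        self.last_error = error

    def record_initialization_success(self) -> None:
        # A success clears the failure count and the backoff window.
        self.consecutive_failures = 0
        self.next_retry_time = 0.0
        self.last_error = None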

FallbackStrategy Objects

class FallbackStrategy()

Handles fallback logic between providers.

execute_with_fallback

async def execute_with_fallback(providers: List[str], operation, *args, **kwargs) -> LLMResponse

Execute operation with fallback chain.

Arguments:

  • providers - List of provider names in fallback order
  • operation - Async operation to execute
  • *args, **kwargs - Arguments for the operation

Returns:

  • LLMResponse - Response from successful provider

Raises:

  • ProviderError - If all providers fail
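
A hedged usage sketch: the provider names and the chat_operation callable are illustrative, and the exact operation signature is an assumption.

from spoon_ai.llm.manager import FallbackStrategy

# Inside an async function; chat_operation is a hypothetical async callable.
strategy = FallbackStrategy()
response = await strategy.execute_with_fallback(
    ["openai", "anthropic"],  # tried in this order until one succeeds
    chat_operation,
    messages,                 # forwarded to the operation via *args
)
# ProviderError is raised only if every provider in the chain fails.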

LoadBalancer Objects

class LoadBalancer()

Handles load balancing between multiple provider instances.

select_provider

def select_provider(providers: List[str], strategy: str = "round_robin") -> str

Select a provider based on load balancing strategy.

Arguments:

  • providers - List of available providers
  • strategy - Load balancing strategy ('round_robin', 'weighted', 'random')

Returns:

  • str - Selected provider name

update_provider_health

def update_provider_health(provider: str, is_healthy: bool) -> None

Update provider health status.

set_provider_weight

def set_provider_weight(provider: str, weight: float) -> None

Set provider weight for weighted load balancing.
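
A short usage sketch, assuming the balancer is used standalone; the provider names and weights are examples.

from spoon_ai.llm.manager import LoadBalancer

balancer = LoadBalancer()
# Route roughly twice as much traffic to "openai" as to "anthropic".
balancer.set_provider_weight("openai", 2.0)
balancer.set_provider_weight("anthropic", 1.0)
balancer.update_provider_health("anthropic", True)  # mark as available
chosen = balancer.select_provider(["openai", "anthropic"], strategy="weighted")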

LLMManager Objects

class LLMManager()

Central orchestrator for LLM providers with fallback and load balancing.

__init__

def __init__(config_manager: Optional[ConfigurationManager] = None,
             debug_logger: Optional[DebugLogger] = None,
             metrics_collector: Optional[MetricsCollector] = None,
             response_normalizer: Optional[ResponseNormalizer] = None,
             registry: Optional[LLMProviderRegistry] = None)

Initialize the LLM Manager with provider state tracking.

cleanup

async def cleanup() -> None

Clean up all providers and release their resources.

get_provider_status

def get_provider_status() -> Dict[str, Dict[str, Any]]

Get detailed status of all providers.

reset_provider

async def reset_provider(provider_name: str) -> bool

Reset a provider's state and force reinitialization.

Arguments:

  • provider_name - Name of provider to reset

Returns:

  • bool - True if reset successful
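
For example (inside an async function; the provider name is illustrative):

manager = get_llm_manager()
if await manager.reset_provider("openai"):  # clears state, forces reinitialization
    print("provider reset")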

chat

async def chat(messages: List[Message],
               provider: Optional[str] = None,
               **kwargs) -> LLMResponse

Send chat request with automatic provider selection and fallback.

Arguments:

  • messages - List of conversation messages
  • provider - Specific provider to use (optional)
  • **kwargs - Additional parameters

Returns:

  • LLMResponse - Normalized response
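
A minimal end-to-end sketch. The Message import path and its role/content fields are assumptions here; adjust them to the project's actual schema module.

import asyncio

from spoon_ai.llm.manager import get_llm_manager
from spoon_ai.schema import Message  # assumed import path

async def main():
    manager = get_llm_manager()
    messages = [Message(role="user", content="Hello!")]
    reply = await manager.chat(messages)                      # automatic selection + fallback
    pinned = await manager.chat(messages, provider="openai")  # pin a provider (example name)
    print(reply.content, pinned.content)                      # `content` field assumed

asyncio.run(main())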

chat_stream

async def chat_stream(messages: List[Message],
                      provider: Optional[str] = None,
                      callbacks: Optional[List[BaseCallbackHandler]] = None,
                      **kwargs) -> AsyncGenerator[LLMResponseChunk, None]

Send streaming chat request with callback support.

Arguments:

  • messages - List of conversation messages
  • provider - Specific provider to use (optional)
  • callbacks - Optional callback handlers for monitoring
  • **kwargs - Additional parameters

Yields:

  • LLMResponseChunk - Structured streaming response chunks
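
Continuing the sketch above, streaming consumption looks like this; the chunk's content field is an assumption.

async for chunk in manager.chat_stream(messages):
    # Chunks arrive as LLMResponseChunk objects; `content` is an assumed field.
    print(chunk.content, end="", flush=True)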

completion

async def completion(prompt: str,
                     provider: Optional[str] = None,
                     **kwargs) -> LLMResponse

Send completion request.

Arguments:

  • prompt - Text prompt
  • provider - Specific provider to use (optional)
  • **kwargs - Additional parameters

Returns:

  • LLMResponse - Normalized response
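
For example (inside an async function; the `content` field is assumed):

response = await manager.completion("Explain the fallback chain in one sentence.")
print(response.content)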

chat_with_tools

async def chat_with_tools(messages: List[Message],
                          tools: List[Dict],
                          provider: Optional[str] = None,
                          **kwargs) -> LLMResponse

Send tool-enabled chat request.

Arguments:

  • messages - List of conversation messages
  • tools - List of available tools
  • provider - Specific provider to use (optional)
  • **kwargs - Additional parameters

Returns:

  • LLMResponse - Normalized response
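
A hedged sketch of a tool-enabled call. The tool dict follows the common OpenAI-style function schema; whether this method expects exactly that shape is an assumption.

tools = [{
    "type": "function",
    "function": {
        "name": "get_token_price",  # hypothetical tool
        "description": "Look up the current price of a token.",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string"}},
            "required": ["symbol"],
        },
    },
}]
response = await manager.chat_with_tools(messages, tools)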

set_fallback_chain

def set_fallback_chain(providers: List[str]) -> None

Set fallback provider chain.

Arguments:

  • providers - List of provider names in fallback order

enable_load_balancing

def enable_load_balancing(strategy: str = "round_robin") -> None

Enable load balancing with specified strategy.

Arguments:

  • strategy - Load balancing strategy ('round_robin', 'weighted', 'random')
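
For example, configuring both at startup (the provider names are illustrative):

manager = get_llm_manager()
manager.set_fallback_chain(["openai", "anthropic", "gemini"])  # tried in this order
manager.enable_load_balancing(strategy="weighted")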

disable_load_balancing

def disable_load_balancing() -> None

Disable load balancing.

health_check_all

async def health_check_all() -> Dict[str, bool]

Check health of all registered providers.

Returns:

  • Dict[str, bool] - Provider health status

get_stats

def get_stats() -> Dict[str, Any]

Get comprehensive statistics.

Returns:

  • Dict[str, Any] - Manager and provider statistics
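
For example (inside an async function):

health = await manager.health_check_all()
unhealthy = [name for name, ok in health.items() if not ok]
stats = manager.get_stats()  # manager- and provider-level statistics
print("unhealthy providers:", unhealthy)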

get_llm_manager

def get_llm_manager() -> LLMManager

Get global LLM manager instance.

Returns:

  • LLMManager - Global manager instance

set_llm_manager

def set_llm_manager(manager: LLMManager) -> None

Set global LLM manager instance.

Arguments:

  • manager - Manager instance to set as global
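
A short sketch of swapping in a custom global instance:

from spoon_ai.llm.manager import LLMManager, get_llm_manager, set_llm_manager

custom = LLMManager()    # every constructor argument is optional
set_llm_manager(custom)  # replace the process-wide instance
assert get_llm_manager() is custom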