Module spoon_ai.llm.interface
LLM Provider Interface - Abstract base class defining the unified interface for all LLM providers.
ProviderCapability Objects
class ProviderCapability(Enum)
Enumeration of capabilities that LLM providers can support.
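The enum's members are not reproduced on this page. As a rough sketch, the operations exposed by the interface below suggest members along these lines (the member names and values here are illustrative assumptions, not the actual definition):

```python
from enum import Enum

class ProviderCapability(Enum):
    # Illustrative members only; the real enum may differ.
    CHAT = "chat"
    COMPLETION = "completion"
    STREAMING = "streaming"
    TOOLS = "tools"
```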
ProviderMetadata Objects
@dataclass
class ProviderMetadata()
Metadata describing a provider's capabilities and limits.
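The dataclass fields are not listed on this page. A minimal sketch, assuming typical fields such as a provider name, a capability set, and a token limit (all field names are assumptions):

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class ProviderMetadata:
    # Field names are illustrative assumptions, not the actual schema.
    name: str
    capabilities: Set[ProviderCapability] = field(default_factory=set)
    max_context_tokens: int = 0
```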
LLMResponse Objects
@dataclass
class LLMResponse()
Enhanced LLM response with comprehensive metadata and debugging information.
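The fields are likewise not enumerated here. A hedged sketch of what "comprehensive metadata" plausibly covers (every field name below is an assumption; consult the source for the real definition):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class LLMResponse:
    # Illustrative fields only.
    content: str
    model: Optional[str] = None
    finish_reason: Optional[str] = None
    usage: Dict[str, int] = field(default_factory=dict)        # token accounting
    tool_calls: List[Dict[str, Any]] = field(default_factory=list)
```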
LLMProviderInterface Objects
class LLMProviderInterface(ABC)
Abstract base class defining the unified interface for all LLM providers.
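The method names and signatures below come straight from this page. A minimal concrete implementation might look like the following toy sketch; everything inside the method bodies is illustrative (in particular, the constructor arguments for LLMResponse, LLMResponseChunk, and ProviderMetadata, the Message.content attribute, and the import path of LLMResponseChunk are assumptions):

```python
from typing import Any, Dict

# LLMProviderInterface, LLMResponse, and ProviderMetadata are documented in
# this module; LLMResponseChunk's import path is assumed to match.
from spoon_ai.llm.interface import (
    LLMProviderInterface,
    LLMResponse,
    LLMResponseChunk,
    ProviderMetadata,
)

class EchoProvider(LLMProviderInterface):
    """Toy provider that echoes input back; useful only for wiring tests."""

    async def initialize(self, config: Dict[str, Any]) -> None:
        # A real provider would validate config and build an API client here.
        self._config = config

    async def chat(self, messages, **kwargs) -> LLMResponse:
        # Echo the last message; LLMResponse(content=...) is an assumed signature.
        return LLMResponse(content=messages[-1].content)

    async def chat_stream(self, messages, callbacks=None, **kwargs):
        # Async generator, matching the AsyncIterator[LLMResponseChunk] contract.
        for word in messages[-1].content.split():
            yield LLMResponseChunk(content=word + " ")  # chunk fields assumed

    async def completion(self, prompt: str, **kwargs) -> LLMResponse:
        return LLMResponse(content=prompt)

    async def chat_with_tools(self, messages, tools, **kwargs) -> LLMResponse:
        return await self.chat(messages, **kwargs)  # toy: tools are ignored

    def get_metadata(self) -> ProviderMetadata:
        return ProviderMetadata(name="echo")  # constructor args assumed

    async def health_check(self) -> bool:
        return True  # a real provider would ping its API endpoint

    async def cleanup(self) -> None:
        self._config = None  # a real provider would close sessions here
```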
initialize
@abstractmethod
async def initialize(config: Dict[str, Any]) -> None
Initialize the provider with configuration.
Arguments:
- `config` - Provider-specific configuration dictionary
Raises:
- `ConfigurationError` - If configuration is invalid
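The key names below are purely illustrative; each provider defines its own configuration schema. (The usage snippets in this section assume an async context and a provider instance.)

```python
await provider.initialize({
    "api_key": "...",                        # illustrative keys only
    "base_url": "https://api.example.com",
})
```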
chat
@abstractmethod
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send a chat request to the provider.
Arguments:
- `messages` - List of conversation messages
- `**kwargs` - Additional provider-specific parameters
Returns:
- `LLMResponse` - Standardized response object
Raises:
- `ProviderError` - If the request fails
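A typical call site, with the documented failure mode handled (temperature is shown as an example of a provider-specific kwarg; the response attribute name and the import path of ProviderError are assumptions):

```python
import logging

try:
    response = await provider.chat(messages, temperature=0.2)
    print(response.content)                   # attribute name assumed
except ProviderError as exc:                  # raised on request failure
    logging.error("chat request failed: %s", exc)
```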
chat_stream
@abstractmethod
async def chat_stream(messages: List[Message],
callbacks: Optional[List[BaseCallbackHandler]] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send a streaming chat request to the provider with callback support.
Arguments:
- `messages` - List of conversation messages
- `callbacks` - Optional list of callback handlers for real-time events
- `**kwargs` - Additional provider-specific parameters
Yields:
- `LLMResponseChunk` - Structured streaming response chunks
Raises:
- `ProviderError` - If the request fails
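Because the method is declared as an async iterator, callers consume it directly with `async for` (here `my_handler` is a hypothetical BaseCallbackHandler instance, and the chunk attribute name is an assumption):

```python
async for chunk in provider.chat_stream(messages, callbacks=[my_handler]):
    print(chunk.content, end="", flush=True)  # attribute name assumed
```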
completion
@abstractmethod
async def completion(prompt: str, **kwargs) -> LLMResponse
Send a completion request to the provider.
Arguments:
- `prompt` - Text prompt for completion
- `**kwargs` - Additional provider-specific parameters
Returns:
- `LLMResponse` - Standardized response object
Raises:
- `ProviderError` - If the request fails
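Usage mirrors `chat` but takes a raw string rather than a message list (max_tokens is shown as an example pass-through parameter):

```python
response = await provider.completion(
    "Write a haiku about interfaces.",
    max_tokens=64,   # example of a provider-specific parameter
)
```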
chat_with_tools
@abstractmethod
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send a chat request with tool support.
Arguments:
- `messages` - List of conversation messages
- `tools` - List of available tools
- `**kwargs` - Additional provider-specific parameters
Returns:
- `LLMResponse` - Standardized response object with potential tool calls
Raises:
- `ProviderError` - If the request fails
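The expected layout of each tool dict is not specified on this page; the OpenAI-style function schema below is an assumption about what providers commonly accept, and `get_weather` is a hypothetical tool:

```python
tools = [{
    # OpenAI-style function schema, shown as an assumption; the expected
    # dict layout is provider-specific.
    "type": "function",
    "function": {
        "name": "get_weather",   # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
response = await provider.chat_with_tools(messages, tools)
if response.tool_calls:          # field name assumed; see the LLMResponse sketch
    ...                          # run the requested tool, append the result, loop
```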
get_metadata
@abstractmethod
def get_metadata() -> ProviderMetadata
Get provider metadata and capabilities.
Returns:
- `ProviderMetadata` - Provider information and capabilities
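Metadata is the natural place to gate optional features; the capability member and field names below carry over the assumptions from the sketches above:

```python
meta = provider.get_metadata()
if ProviderCapability.STREAMING in meta.capabilities:  # names assumed
    stream = provider.chat_stream(messages)
```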
health_check
@abstractmethod
async def health_check() -> bool
Check whether the provider is healthy and available.
Returns:
- `bool` - True if provider is healthy, False otherwise
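A caller can gate traffic on the result, for example:

```python
if not await provider.health_check():
    raise RuntimeError("LLM provider is unavailable")  # or fail over
```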
cleanup
@abstractmethod
async def cleanup() -> None
Clean up resources and connections.
This method should be called when the provider is no longer needed.
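A try/finally block keeps cleanup deterministic even when a request fails; this snippet reuses the toy EchoProvider sketch from above with illustrative config keys:

```python
provider = EchoProvider()                       # toy provider from the sketch above
await provider.initialize({"api_key": "..."})   # illustrative config
try:
    response = await provider.chat(messages)
finally:
    await provider.cleanup()                    # release connections even on error
```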