Table of Contents
- spoon_ai.llm.providers
- spoon_ai.llm.providers.deepseek_provider
- spoon_ai.llm.providers.gemini_provider
- spoon_ai.llm.providers.openai_compatible_provider
- spoon_ai.llm.providers.ollama_provider
- spoon_ai.llm.providers.openai_provider
- spoon_ai.llm.providers.anthropic_provider
- spoon_ai.llm.providers.openrouter_provider
Module spoon_ai.llm.providers
LLM Provider implementations.
Module spoon_ai.llm.providers.deepseek_provider
DeepSeek Provider implementation for the unified LLM interface. DeepSeek provides access to its models through an OpenAI-compatible API.
DeepSeekProvider Objects
@register_provider("deepseek", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class DeepSeekProvider(OpenAICompatibleProvider)
DeepSeek provider implementation using OpenAI-compatible API.
get_metadata
def get_metadata() -> ProviderMetadata
Get DeepSeek provider metadata.
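A minimal usage sketch, assuming the Message type is importable from the package's interface module and that initialize() accepts api_key / model keys (the import path, config keys, and Message constructor are assumptions, not documented here):
```python
import asyncio

from spoon_ai.llm.providers.deepseek_provider import DeepSeekProvider
from spoon_ai.llm.interface import Message  # assumed import path


async def main() -> None:
    provider = DeepSeekProvider()
    # Assumed config keys; the reference only documents initialize(config: Dict[str, Any]).
    await provider.initialize({"api_key": "sk-...", "model": "deepseek-chat"})
    try:
        # Assumed Message constructor; adjust to the actual Message model.
        response = await provider.chat([Message(role="user", content="Hello!")])
        print(response)  # LLMResponse
    finally:
        await provider.cleanup()


asyncio.run(main())
```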
Module spoon_ai.llm.providers.gemini_provider
Gemini Provider implementation for the unified LLM interface.
GeminiProvider Objects
@register_provider("gemini", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.STREAMING,
ProviderCapability.TOOLS,
ProviderCapability.IMAGE_GENERATION,
ProviderCapability.VISION
])
class GeminiProvider(LLMProviderInterface)
Gemini provider implementation.
initialize
async def initialize(config: Dict[str, Any]) -> None
Initialize the Gemini provider with configuration.
chat
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to Gemini.
chat_stream
async def chat_stream(messages: List[Message],
callbacks: Optional[List] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request to Gemini with callback support.
Yields:
LLMResponseChunk - Structured streaming response chunks
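A consumption sketch; it assumes each LLMResponseChunk exposes an incremental content attribute (the chunk's field names are not documented here):
```python
from typing import List


async def stream_reply(provider, messages: List) -> str:
    """Collect a streamed Gemini reply; provider is an initialized GeminiProvider."""
    parts = []
    # chat_stream is an async iterator, so consume it with `async for`.
    async for chunk in provider.chat_stream(messages):
        # Assumption: chunks carry incremental text in a `content` attribute.
        text = getattr(chunk, "content", "") or ""
        print(text, end="", flush=True)
        parts.append(text)
    return "".join(parts)
```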
completion
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to Gemini.
chat_with_tools
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to Gemini using native function calling.
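The tools argument is a list of plain dicts. A hedged sketch using an OpenAI-style function schema; whether the Gemini adapter expects this exact shape or translates it into native function declarations internally is an assumption:
```python
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}


async def ask_with_tools(provider, messages):
    # Returns an LLMResponse; inspect it for any tool-call payload the model produced.
    return await provider.chat_with_tools(messages, tools=[WEATHER_TOOL])
```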
get_metadata
def get_metadata() -> ProviderMetadata
Get Gemini provider metadata.
health_check
async def health_check() -> bool
Check if Gemini provider is healthy.
cleanup
async def cleanup() -> None
Cleanup Gemini provider resources.
Module spoon_ai.llm.providers.openai_compatible_provider
OpenAI Compatible Provider base class for providers that use OpenAI-compatible APIs. This includes OpenAI, OpenRouter, DeepSeek, and other providers with similar interfaces.
OpenAICompatibleProvider Objects
class OpenAICompatibleProvider(LLMProviderInterface)
Base class for OpenAI-compatible providers.
get_provider_name
def get_provider_name() -> str
Get the provider name. Should be overridden by subclasses.
get_default_base_url
def get_default_base_url() -> str
Get the default base URL. Should be overridden by subclasses.
get_default_model
def get_default_model() -> str
Get the default model. Should be overridden by subclasses.
get_additional_headers
def get_additional_headers(config: Dict[str, Any]) -> Dict[str, str]
Get additional headers for the provider. Can be overridden by subclasses.
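A sketch of a custom OpenAI-compatible subclass wiring these four hooks; the provider name, URL, model id, and header below are illustrative, and concrete providers in this package additionally register themselves with @register_provider as shown in the class listings:
```python
from typing import Any, Dict

from spoon_ai.llm.providers.openai_compatible_provider import OpenAICompatibleProvider


class MyGatewayProvider(OpenAICompatibleProvider):
    """Hypothetical OpenAI-compatible gateway provider."""

    def get_provider_name(self) -> str:
        return "my_gateway"

    def get_default_base_url(self) -> str:
        return "https://llm.example.com/v1"  # illustrative endpoint

    def get_default_model(self) -> str:
        return "example-model"  # illustrative default model id

    def get_additional_headers(self, config: Dict[str, Any]) -> Dict[str, str]:
        # Extra headers for each request; the exact merge behavior is provider-internal.
        return {"X-Gateway-Tenant": str(config.get("tenant", "default"))}
```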
initialize
async def initialize(config: Dict[str, Any]) -> None
Initialize the provider with configuration.
chat
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to the provider.
chat_stream
async def chat_stream(messages: List[Message],
callbacks: Optional[List[BaseCallbackHandler]] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request with full callback support.
Yields:
LLMResponseChunk - Structured streaming response chunks
completion
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to the provider.
chat_with_tools
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to the provider.
get_metadata
def get_metadata() -> ProviderMetadata
Get provider metadata. Should be overridden by subclasses.
health_check
async def health_check() -> bool
Check if provider is healthy.
cleanup
async def cleanup() -> None
Cleanup provider resources.
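A lifecycle sketch that applies to any provider implementing this interface; only the method names and signatures come from this reference, and the config contents are caller-specific:
```python
from typing import Any, Dict


async def checked_session(provider, config: Dict[str, Any]) -> bool:
    """Initialize a provider, probe it, and always release its resources."""
    await provider.initialize(config)
    try:
        return await provider.health_check()
    finally:
        await provider.cleanup()
```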
Module spoon_ai.llm.providers.ollama_provider
Ollama Provider implementation for the unified LLM interface.
Ollama runs locally and exposes an HTTP API (default: http://localhost:11434). This provider supports chat, completion, and streaming.
Notes:
- Ollama does not require an API key; the configuration layer may still provide a placeholder api_key value for consistency.
- Tool calling is not implemented here.
OllamaProvider Objects
@register_provider(
"ollama",
[
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.STREAMING,
],
)
class OllamaProvider(LLMProviderInterface)
Local Ollama provider via HTTP.
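A minimal local sketch; no API key is needed, and the base_url / model config keys and the model name are assumptions (the default endpoint is the one noted above):
```python
import asyncio

from spoon_ai.llm.providers.ollama_provider import OllamaProvider


async def main() -> None:
    provider = OllamaProvider()
    # Assumed config keys; Ollama itself listens on http://localhost:11434 by default.
    await provider.initialize({"base_url": "http://localhost:11434", "model": "llama3"})
    try:
        response = await provider.completion("Say hello in one short sentence.")
        print(response)  # LLMResponse
    finally:
        await provider.cleanup()  # cleanup() comes from the shared provider interface


asyncio.run(main())
```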
Module spoon_ai.llm.providers.openai_provider
OpenAI Provider implementation for the unified LLM interface.
OpenAIProvider Objects
@register_provider("openai", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class OpenAIProvider(OpenAICompatibleProvider)
OpenAI provider implementation.
get_metadata
def get_metadata() -> ProviderMetadata
Get OpenAI provider metadata.
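Because OpenAIProvider, DeepSeekProvider, and OpenRouterProvider all extend OpenAICompatibleProvider, switching providers is largely a matter of picking a class and supplying its config. A sketch (the config keys are caller-supplied assumptions):
```python
from spoon_ai.llm.providers.deepseek_provider import DeepSeekProvider
from spoon_ai.llm.providers.openai_provider import OpenAIProvider
from spoon_ai.llm.providers.openrouter_provider import OpenRouterProvider

PROVIDER_CLASSES = {
    "openai": OpenAIProvider,
    "deepseek": DeepSeekProvider,
    "openrouter": OpenRouterProvider,
}


async def make_provider(name: str, config: dict):
    # All three share the OpenAICompatibleProvider chat/stream/tools surface.
    provider = PROVIDER_CLASSES[name]()
    await provider.initialize(config)
    return provider
```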
Module spoon_ai.llm.providers.anthropic_provider
Anthropic Provider implementation for the unified LLM interface.
AnthropicProvider Objects
@register_provider("anthropic", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class AnthropicProvider(LLMProviderInterface)
Anthropic provider implementation.
initialize
async def initialize(config: Dict[str, Any]) -> None
Initialize the Anthropic provider with configuration.
get_cache_metrics
def get_cache_metrics() -> Dict[str, int]
Get current cache performance metrics.
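A reporting sketch; the reference only states that get_cache_metrics() returns Dict[str, int], so no particular counter names are assumed:
```python
async def report_cache_usage(provider, messages) -> None:
    """Print whatever cache counters an initialized AnthropicProvider exposes."""
    await provider.chat(messages)
    metrics = provider.get_cache_metrics()  # synchronous, Dict[str, int]
    for name, count in metrics.items():
        print(f"{name}: {count}")
```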
chat
async def chat(messages: List[Message], **kwargs) -> LLMResponse
Send chat request to Anthropic.
chat_stream
async def chat_stream(messages: List[Message],
callbacks: Optional[List] = None,
**kwargs) -> AsyncIterator[LLMResponseChunk]
Send streaming chat request to Anthropic with callback support.
Yields:
LLMResponseChunk - Structured streaming response chunks
completion
async def completion(prompt: str, **kwargs) -> LLMResponse
Send completion request to Anthropic.
chat_with_tools
async def chat_with_tools(messages: List[Message], tools: List[Dict],
**kwargs) -> LLMResponse
Send chat request with tools to Anthropic.
get_metadata
def get_metadata() -> ProviderMetadata
Get Anthropic provider metadata.
health_check
async def health_check() -> bool
Check if Anthropic provider is healthy.
cleanup
async def cleanup() -> None
Cleanup Anthropic provider resources.
Module spoon_ai.llm.providers.openrouter_provider
OpenRouter Provider implementation for the unified LLM interface. OpenRouter provides access to multiple models through a unified, OpenAI-compatible API.
OpenRouterProvider Objects
@register_provider("openrouter", [
ProviderCapability.CHAT,
ProviderCapability.COMPLETION,
ProviderCapability.TOOLS,
ProviderCapability.STREAMING
])
class OpenRouterProvider(OpenAICompatibleProvider)
OpenRouter provider implementation using OpenAI-compatible API.
get_additional_headers
def get_additional_headers(config: Dict[str, Any]) -> Dict[str, str]
Get OpenRouter-specific headers.
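For reference, OpenRouter's API accepts optional HTTP-Referer and X-Title attribution headers; the sketch below shows how such headers could be built from config and is not the package's actual implementation (the app_url / app_name keys are assumptions):
```python
from typing import Any, Dict


def example_attribution_headers(config: Dict[str, Any]) -> Dict[str, str]:
    """Illustrative only; the real get_additional_headers() may use different keys."""
    headers: Dict[str, str] = {}
    if config.get("app_url"):
        headers["HTTP-Referer"] = str(config["app_url"])  # app attribution URL
    if config.get("app_name"):
        headers["X-Title"] = str(config["app_name"])  # app display name
    return headers
```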
get_metadata
def get_metadata() -> ProviderMetadata
Get OpenRouter provider metadata.