Table of Contents
- spoon_ai.callbacks
- spoon_ai.callbacks.streaming_stdout
- spoon_ai.callbacks.statistics
- spoon_ai.callbacks.stream_event
- spoon_ai.callbacks.manager
- spoon_ai.callbacks.base
Module spoon_ai.callbacks
Callback system for streaming and event handling in Spoon AI.
This module provides a comprehensive callback system similar to LangChain's callbacks, enabling real-time monitoring and event handling for LLM calls, agent execution, tool invocation, and graph workflows.
Module spoon_ai.callbacks.streaming_stdout
StreamingStdOutCallbackHandler Objects
class StreamingStdOutCallbackHandler(BaseCallbackHandler)
Callback handler that streams tokens to standard output.
on_llm_new_token
def on_llm_new_token(token: str, **kwargs: Any) -> None
Print the token to stdout immediately.
Arguments:
- `token` - The new token to print
- `**kwargs` - Additional context (ignored)
on_llm_end
def on_llm_end(response: Any, **kwargs: Any) -> None
Print a newline after the LLM completes.
Arguments:
- `response` - The complete LLM response (ignored)
- `**kwargs` - Additional context (ignored)
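The documented behavior can be sketched with a self-contained stand-in. This is not the library's implementation: `spoon_ai` is not imported, so the `BaseCallbackHandler` base is omitted and only the two documented methods are shown.

```python
import io
import sys
from typing import Any

class StreamingStdOutCallbackHandler:
    """Stand-in mirroring the documented interface: print each token as it arrives."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Write the token without a trailing newline and flush so it appears immediately.
        sys.stdout.write(token)
        sys.stdout.flush()

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        # Terminate the streamed line once the LLM finishes.
        sys.stdout.write("\n")

# Drive the handler with a fake token stream, capturing stdout to show the effect.
handler = StreamingStdOutCallbackHandler()
buf = io.StringIO()
sys.stdout = buf
for tok in ["Hello", ", ", "world"]:
    handler.on_llm_new_token(tok)
handler.on_llm_end(response=None)
sys.stdout = sys.__stdout__
print(repr(buf.getvalue()))  # → 'Hello, world\n'
```

Flushing after every token is what makes the output feel "live" when the interpreter's stdout is line- or block-buffered.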
Module spoon_ai.callbacks.statistics
StreamingStatisticsCallback Objects
class StreamingStatisticsCallback(BaseCallbackHandler, LLMManagerMixin)
Collect simple throughput statistics during streaming runs.
By default, the callback prints summary metrics when the LLM finishes.
Consumers can provide a custom print_fn to redirect output, or disable
printing entirely and read the public attributes after execution.
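The throughput-statistics pattern can be sketched as follows. This is a stand-in, not the real class: `token_count`, `start_time`, and `elapsed` are illustrative attribute names; only the `print_fn` idea comes from the text above.

```python
import time
from typing import Any, Callable, Optional

class StreamingStatisticsCallback:
    """Stand-in sketch: count streamed tokens and report throughput on completion."""

    def __init__(self, print_fn: Optional[Callable[[str], None]] = None) -> None:
        self.token_count = 0
        self.start_time: Optional[float] = None
        self.elapsed: float = 0.0
        self._print_fn = print_fn  # None disables printing; attributes stay readable

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Start the clock on the first token, then just count.
        if self.start_time is None:
            self.start_time = time.monotonic()
        self.token_count += 1

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        if self.start_time is not None:
            self.elapsed = time.monotonic() - self.start_time
        if self._print_fn is not None:
            rate = self.token_count / self.elapsed if self.elapsed else float("inf")
            self._print_fn(f"{self.token_count} tokens in {self.elapsed:.2f}s ({rate:.1f} tok/s)")

# Redirect the summary into a list instead of printing it.
lines: list[str] = []
stats = StreamingStatisticsCallback(print_fn=lines.append)
for tok in ["a", "b", "c"]:
    stats.on_llm_new_token(tok)
stats.on_llm_end(response=None)
```

Passing `print_fn=lines.append` shows the redirect path; passing `None` would suppress output while leaving `token_count` and `elapsed` readable afterwards.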
Module spoon_ai.callbacks.stream_event
StreamEventCallbackHandler Objects
class StreamEventCallbackHandler(BaseCallbackHandler)
Translate callback invocations into standardized stream events.
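The translation step can be sketched like this. Note that this is a stand-in: the event names (`"llm.token"`, `"llm.end"`) and the dict shape are assumptions for illustration, not the library's actual stream-event schema.

```python
from typing import Any

class StreamEventCallbackHandler:
    """Stand-in sketch: turn each callback invocation into a uniform event record."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def _emit(self, event_type: str, data: Any) -> None:
        # Every hook funnels through one place, so consumers see a single shape.
        self.events.append({"event": event_type, "data": data})

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self._emit("llm.token", token)

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        self._emit("llm.end", response)

h = StreamEventCallbackHandler()
h.on_llm_new_token("hi")
h.on_llm_end(response="done")
```

The point of the translation is that downstream consumers (SSE endpoints, UIs) can iterate one homogeneous event stream instead of implementing every hook themselves.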
Module spoon_ai.callbacks.manager
CallbackManager Objects
class CallbackManager()
Lightweight dispatcher for callback handlers.
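A dispatcher of this kind can be sketched as follows. This is a stand-in that assumes handlers expose hook methods by name; the real CallbackManager API may differ.

```python
from typing import Any, List

class CallbackManager:
    """Stand-in sketch: fan each event out to every registered handler."""

    def __init__(self, handlers: List[Any]) -> None:
        self.handlers = list(handlers)

    def dispatch(self, event: str, *args: Any, **kwargs: Any) -> None:
        # Look the hook up by name on each handler; handlers that don't
        # implement a given hook are simply skipped.
        for handler in self.handlers:
            hook = getattr(handler, event, None)
            if hook is not None:
                hook(*args, **kwargs)

class Recorder:
    """Toy handler that records every call it receives."""
    def __init__(self) -> None:
        self.events: list[tuple] = []
    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self.events.append(("token", token))
    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        self.events.append(("end", response))

rec = Recorder()
manager = CallbackManager([rec])
manager.dispatch("on_llm_new_token", "hi")
manager.dispatch("on_llm_end", response="done")
```

Keeping the dispatcher this thin means adding a new hook only requires defining it on a mixin; the manager never has to change.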
Module spoon_ai.callbacks.base
RetrieverManagerMixin Objects
class RetrieverManagerMixin()
Mixin providing retriever callback hooks.
on_retriever_start
def on_retriever_start(run_id: UUID, query: Any, **kwargs: Any) -> Any
Run when a retriever begins execution.
on_retriever_end
def on_retriever_end(run_id: UUID, documents: Any, **kwargs: Any) -> Any
Run when a retriever finishes successfully.
on_retriever_error
def on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) -> Any
Run when a retriever raises an error.
LLMManagerMixin Objects
class LLMManagerMixin()
Mixin providing large language model callback hooks.
on_llm_start
def on_llm_start(run_id: UUID, messages: List[Message], **kwargs: Any) -> Any
Run when an LLM or chat model begins execution.
on_llm_new_token
def on_llm_new_token(token: str, *, chunk: Optional[LLMResponseChunk] = None, run_id: Optional[UUID] = None, **kwargs: Any) -> Any
Run for each streamed token emitted by an LLM.
on_llm_end
def on_llm_end(response: LLMResponse, *, run_id: UUID, **kwargs: Any) -> Any
Run when an LLM finishes successfully.
on_llm_error
def on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) -> Any
Run when an LLM raises an error.
ChainManagerMixin Objects
class ChainManagerMixin()
Mixin providing chain-level callback hooks.
on_chain_start
def on_chain_start(run_id: UUID, inputs: Any, **kwargs: Any) -> Any
Run when a chain (Runnable) starts executing.
on_chain_end
def on_chain_end(run_id: UUID, outputs: Any, **kwargs: Any) -> Any
Run when a chain finishes successfully.
on_chain_error
def on_chain_error(error: BaseException, *, run_id: UUID, **kwargs: Any) -> Any
Run when a chain raises an error.
ToolManagerMixin Objects
class ToolManagerMixin()
Mixin providing tool callback hooks.
on_tool_start
def on_tool_start(tool_name: str, tool_input: Any, *, run_id: UUID, **kwargs: Any) -> Any
Run when a tool invocation begins.
on_tool_end
def on_tool_end(tool_name: str, tool_output: Any, *, run_id: UUID, **kwargs: Any) -> Any
Run when a tool invocation succeeds.
on_tool_error
def on_tool_error(error: BaseException, *, run_id: UUID, tool_name: Optional[str] = None, **kwargs: Any) -> Any
Run when a tool invocation raises an error.
PromptManagerMixin Objects
class PromptManagerMixin()
Mixin providing prompt template callback hooks.
on_prompt_start
def on_prompt_start(run_id: UUID, inputs: Any, **kwargs: Any) -> Any
Run when a prompt template begins formatting.
on_prompt_end
def on_prompt_end(run_id: UUID, output: Any, **kwargs: Any) -> Any
Run when a prompt template finishes formatting.
on_prompt_error
def on_prompt_error(error: BaseException, *, run_id: UUID, **kwargs: Any) -> Any
Run when prompt formatting raises an error.
BaseCallbackHandler Objects
class BaseCallbackHandler(LLMManagerMixin, ChainManagerMixin, ToolManagerMixin, RetrieverManagerMixin, PromptManagerMixin, ABC)
Base class for SpoonAI callback handlers.
raise_error
Whether to re-raise exceptions originating from callbacks.
run_inline
Whether the callback prefers to run on the caller's event loop.
ignore_llm
@property
def ignore_llm() -> bool
Return True to skip LLM callbacks.
ignore_chain
@property
def ignore_chain() -> bool
Return True to skip chain callbacks.
ignore_tool
@property
def ignore_tool() -> bool
Return True to skip tool callbacks.
ignore_retriever
@property
def ignore_retriever() -> bool
Return True to skip retriever callbacks.
ignore_prompt
@property
def ignore_prompt() -> bool
Return True to skip prompt callbacks.
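Putting the base-class pieces together, a handler that opts out of LLM events but observes tool events might look like the sketch below. It is a stand-in that stubs the base class so it runs without `spoon_ai` installed; `ToolOnlyHandler` and its bookkeeping are illustrative.

```python
from typing import Any
from uuid import UUID, uuid4

class ToolOnlyHandler:
    """Stand-in for a BaseCallbackHandler subclass that only observes tool events."""

    raise_error = False   # don't re-raise exceptions originating in this callback
    run_inline = True     # prefer running on the caller's event loop

    @property
    def ignore_llm(self) -> bool:
        return True       # tells the dispatcher to skip LLM hooks for this handler

    @property
    def ignore_tool(self) -> bool:
        return False

    def __init__(self) -> None:
        self.calls: list[str] = []

    def on_tool_start(self, tool_name: str, tool_input: Any, *, run_id: UUID, **kwargs: Any) -> Any:
        self.calls.append(f"start:{tool_name}")

    def on_tool_end(self, tool_name: str, tool_output: Any, *, run_id: UUID, **kwargs: Any) -> Any:
        self.calls.append(f"end:{tool_name}")

# A dispatcher would consult ignore_tool before invoking the hooks.
handler = ToolOnlyHandler()
rid = uuid4()
if not handler.ignore_tool:
    handler.on_tool_start("search", {"q": "rust"}, run_id=rid)
    handler.on_tool_end("search", ["doc1"], run_id=rid)
```

The `ignore_*` properties let a single manager carry many handlers while each handler pays only for the event families it cares about.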
AsyncCallbackHandler Objects
class AsyncCallbackHandler(BaseCallbackHandler)
Async version of the callback handler base class.
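In the async variant the hooks are coroutines that the dispatcher awaits. A standalone sketch, where the `AsyncCollector` class and the driver coroutine are illustrative rather than part of the library:

```python
import asyncio
from typing import Any

class AsyncCollector:
    """Stand-in for an AsyncCallbackHandler subclass: hooks are coroutines."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # A real handler could await I/O here (e.g. push to a websocket).
        self.tokens.append(token)

    async def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        self.tokens.append("<end>")

async def stream(handler: AsyncCollector) -> None:
    # The dispatcher awaits each hook, so slow handlers don't require threads.
    for tok in ["a", "b"]:
        await handler.on_llm_new_token(tok)
    await handler.on_llm_end(response=None)

collector = AsyncCollector()
asyncio.run(stream(collector))
```

Because the hooks run on the event loop, this is the natural base class when the surrounding agent or graph workflow is itself async.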