LLM client factory. Creates backend-specific clients for text generation via llama.cpp or OpenAI-compatible APIs. Exposes the chat template engine and tool registry as submodules.
| Module | Description |
|---|---|
| llm.discovery | Provider model discovery framework. |
| llm.llamacpp | llama.cpp completion API client. |
| llm.oaic | OpenAI-compatible API client. |
| llm.templates | Chat template engine for prompt formatting. |
| llm.tool_calls | Shared helpers that normalize LLM tool-call payloads across provider wire formats (`tool_calls`, `function`, and the legacy `function_call`) while preserving backward compatibility. |
| llm.tools | Tool registry and execution engine for LLM function calling. |
| llm.utils | Shared helpers for LLM backend clients. |
| Name | Signature |
|---|---|
new | new(backend, api_url, api_key) -> client, err |
new(backend, api_url, api_key) -> client, err
Creates a new LLM client for the specified backend. Returns a client
instance with `complete`, `stream`, `chat_complete`, and `chat_stream`
methods. Backend defaults to `"llamacpp"`; `"oaic"` selects an
OpenAI-compatible API. If the API URL or key is omitted, it falls back
to the `LLM_API_URL` / `LLM_API_KEY` environment variable.
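Based on the signature above, a typical call might look like the sketch below. The `require` path (`llm`), the error-handling idiom, and the message-table shape passed to `chat_complete` are assumptions for illustration; only the `new(backend, api_url, api_key) -> client, err` signature and the method names come from this reference.

```lua
-- Hypothetical usage sketch; the module path "llm" is an assumption.
local llm = require("llm")

-- Backend defaults to "llamacpp"; passing nil for the URL and key lets
-- them fall back to the LLM_API_URL / LLM_API_KEY environment variables.
local client, err = llm.new("oaic", "https://api.example.com/v1", os.getenv("LLM_API_KEY"))
if not client then
  error("failed to create LLM client: " .. tostring(err))
end

-- The returned client exposes complete, stream, chat_complete, and
-- chat_stream. The message format here is assumed, not documented above.
local reply = client:chat_complete({
  { role = "user", content = "Hello!" },
})
```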