Module request

Inference request management for AI model providers.

Provides a minimal blocking interface for sending prompts to configured Providers and returning a unified InferenceResponse. Supports OpenAI-compatible and Ollama chat APIs via the Prompt abstraction.
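The unifying idea can be sketched as follows. This is a hypothetical illustration: the struct mirrors this module's InferenceResponse in spirit, but the field names and the `total_tokens` helper are assumptions, not the crate's real definitions.

```rust
// Hypothetical sketch of a unified response type; field names are
// assumptions, not the crate's actual definitions.

/// Unified response shape: generated text plus token usage,
/// regardless of which provider produced it.
#[derive(Debug)]
pub struct InferenceResponse {
    pub text: String,
    pub prompt_tokens: u32,
    pub completion_tokens: u32,
}

/// Total tokens consumed by a single request (illustrative helper).
pub fn total_tokens(r: &InferenceResponse) -> u32 {
    r.prompt_tokens + r.completion_tokens
}

fn main() {
    // A provider-specific payload (OpenAI-compatible or Ollama) would be
    // normalized into this one shape before being returned to the caller.
    let resp = InferenceResponse {
        text: "Hello".to_string(),
        prompt_tokens: 12,
        completion_tokens: 3,
    };
    println!("{} tokens total", total_tokens(&resp));
}
```

Normalizing both providers into one struct lets callers stay agnostic about which backend served the request.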

Structs§

HttpResponse
Internal, provider-agnostic HTTP response used by the inference client abstraction.
InferenceResponse
Unified response for a single provider request, including text and token usage.
OllamaChatResponse
Response payload returned by the Ollama chat API.
OllamaMessage
Chat message in Ollama's request format.
OllamaMessageResponse
Message portion of an Ollama chat response.
OllamaOptions
Generation options sent with an Ollama chat request.
OpenAiChoice
Single completion choice in an OpenAI-compatible response.
OpenAiMessage
Chat message in the OpenAI-compatible format.
OpenAiRequest
Request body for the OpenAI-compatible chat API.
OpenAiResponse
Response body from the OpenAI-compatible chat API.
OpenAiUsage
Token usage accounting in an OpenAI-compatible response.
ReqwestInferenceClient
Default blocking HTTP client implementation backed by reqwest.

Traits§

InferenceHttpClient
Minimal HTTP client abstraction for inference requests.

Functions§

send_request
Public entry point: send a request using the default blocking HTTP client.
send_request_with_client
Internal helper: core logic parameterized over an InferenceHttpClient.
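The split between send_request and send_request_with_client creates a testing seam: production code uses the reqwest-backed client, while tests inject a canned-response mock. A minimal sketch of that pattern is below; the trait and struct mirror InferenceHttpClient and HttpResponse in spirit, but their exact signatures, and the `send_with_client` helper, are assumptions rather than the crate's real API.

```rust
// Hypothetical sketch of the mock-client testing seam; all signatures
// here are assumptions, not the crate's actual definitions.

pub struct HttpResponse {
    pub status: u16,
    pub body: String,
}

/// Minimal HTTP client abstraction, analogous to InferenceHttpClient.
pub trait InferenceHttpClient {
    fn post_json(&self, url: &str, body: &str) -> Result<HttpResponse, String>;
}

/// Canned-response client: no network, deterministic output for tests.
pub struct MockClient {
    pub canned_body: String,
}

impl InferenceHttpClient for MockClient {
    fn post_json(&self, _url: &str, _body: &str) -> Result<HttpResponse, String> {
        Ok(HttpResponse { status: 200, body: self.canned_body.clone() })
    }
}

/// Core logic parameterized over the client, mirroring the idea behind
/// send_request_with_client (the URL here is an arbitrary placeholder).
pub fn send_with_client<C: InferenceHttpClient>(
    client: &C,
    prompt: &str,
) -> Result<String, String> {
    let resp = client.post_json("http://localhost:11434/api/chat", prompt)?;
    if resp.status == 200 {
        Ok(resp.body)
    } else {
        Err(format!("HTTP {}", resp.status))
    }
}

fn main() {
    let mock = MockClient { canned_body: "hi there".into() };
    let out = send_with_client(&mock, "hello").unwrap();
    println!("{out}");
}
```

Because the core logic never names a concrete HTTP client, the blocking reqwest implementation and the mock are interchangeable at the call site.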