Core LLM Interface#
Module contents#
Unified interface for using LLMs from OpenAI, Google, Anthropic, and XAI.
- pydantic model AnthropicChoice#
Bases: BaseModel

An LLM from Anthropic.
- field model: Union[Literal['claude-3-7-sonnet-latest','claude-3-7-sonnet-20250219','claude-3-5-haiku-latest','claude-3-5-haiku-20241022','claude-sonnet-4-20250514','claude-sonnet-4-0','claude-4-sonnet-20250514','claude-3-5-sonnet-latest','claude-3-5-sonnet-20241022','claude-3-5-sonnet-20240620','claude-opus-4-0','claude-opus-4-20250514','claude-4-opus-20250514','claude-opus-4-1-20250805','claude-3-opus-latest','claude-3-opus-20240229','claude-3-haiku-20240307'],str] [Required]#
- class AnthropicChoiceDict#
Bases: TypedDict

An LLM from Anthropic.
- model: Union[Literal['claude-3-7-sonnet-latest','claude-3-7-sonnet-20250219','claude-3-5-haiku-latest','claude-3-5-haiku-20241022','claude-sonnet-4-20250514','claude-sonnet-4-0','claude-4-sonnet-20250514','claude-3-5-sonnet-latest','claude-3-5-sonnet-20241022','claude-3-5-sonnet-20240620','claude-opus-4-0','claude-opus-4-20250514','claude-4-opus-20250514','claude-opus-4-1-20250805','claude-3-opus-latest','claude-3-opus-20240229','claude-3-haiku-20240307'],str]#
- pydantic model GoogleChoice#
Bases: BaseModel

An LLM from Google.
- class GoogleChoiceDict#
Bases: TypedDict

An LLM from Google.
- class LLM#
Bases: object

A singleton class that provides a unified interface for the LLMs.
- async respond(query, llm_priority, response_format, instructions=NOT_GIVEN, temperature=NOT_GIVEN, **kwargs)#
Respond to a query using the LLM asynchronously.
- Parameters:
query (str) – The query to respond to.
llm_priority (List[Union[AnthropicChoice,GoogleChoice,OpenAIChoice,XAIChoice,AnthropicChoiceDict,GoogleChoiceDict,OpenAIChoiceDict,XAIChoiceDict]]) – LLMs to use in order of priority.
response_format (Type[TypeVar(T, bound=BaseModel | str, covariant=True)]) – The response format to use.
instructions (str | NotGiven | None) – Optional instructions to use.
temperature (float | NotGiven | None) – Optional temperature to use.
**kwargs – Additional arguments to pass to the LLM. For:
- OpenAI: openai.OpenAI.responses.parse or openai.OpenAI.responses.create.
- Google: google.genai.types.GenerateContentConfig.
- XAI: xai_sdk.Client.chat.create.
- Anthropic: anthropic.Client.messages.create.
Note

Provided kwargs override the function arguments.
- Returns:
The response from the LLM.
- Raises:
AssertionError – If llm_priority is an empty list.
ValueError – If none of the LLMs worked.
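The priority-fallback contract documented above (try each entry of llm_priority in order, raise AssertionError on an empty list, raise ValueError when every provider fails) can be sketched in plain Python. This is a hypothetical illustration of the documented semantics, not the library's actual implementation; the function name and callable-based interface are our own.

```python
# Hypothetical sketch of the documented fallback semantics of LLM.respond:
# try each LLM in priority order, return the first success, and mirror the
# documented AssertionError / ValueError behavior.
from typing import Callable, List


def respond_with_fallback(query: str, llm_priority: List[Callable[[str], str]]) -> str:
    # Documented: AssertionError if llm_priority is an empty list.
    assert llm_priority, "llm_priority must not be empty"
    errors: List[Exception] = []
    for call_llm in llm_priority:
        try:
            return call_llm(query)  # first provider that works wins
        except Exception as exc:  # a real client would catch provider-specific errors
            errors.append(exc)
    # Documented: ValueError if none of the LLMs worked.
    raise ValueError(f"None of the LLMs worked: {errors}")
```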
- respond_sync(llm_priority, query='', response_format=<class 'str'>, instructions=NOT_GIVEN, temperature=NOT_GIVEN, **kwargs)#
Respond to a query using the LLM synchronously.
- Parameters:
llm_priority (List[Union[AnthropicChoice,GoogleChoice,OpenAIChoice,XAIChoice,AnthropicChoiceDict,GoogleChoiceDict,OpenAIChoiceDict,XAIChoiceDict]]) – LLMs to use in order of priority.
query (str) – The query to respond to.
response_format (Type[TypeVar(T, bound=BaseModel | str, covariant=True)]) – The response format to use.
instructions (str | NotGiven | None) – Optional instructions to use.
temperature (float | NotGiven | None) – Optional temperature to use.
**kwargs – Additional arguments to pass to the LLM. For:
- OpenAI: openai.OpenAI.responses.parse or openai.OpenAI.responses.create.
- Google: google.genai.types.GenerateContentConfig.
- XAI: xai_sdk.Client.chat.create.
- Anthropic: anthropic.Client.messages.create.
Note

Provided kwargs override the function arguments.
- Returns:
The response from the LLM.
- Raises:
AssertionError – If llm_priority is an empty list.
ValueError – If none of the LLMs worked.
- pydantic model LLMResponse#
A response from an LLM.
- field logprobs: List[Tuple[str, float | None]] [Required]#
The log probabilities of the response.
- field provider_model: LLMChoiceModel [Required]#
The provider and model used to generate the response.
- field response: T | None = None#
The response from the LLM.
- field total_tokens: int | None = None#
The total number of input and output tokens used to generate the response.
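The logprobs field is a list of (token, logprob-or-None) pairs, so per-token probabilities may be missing for providers that do not report them. A small helper, written here purely for illustration (the function name is not part of the library), shows how to work with that shape:

```python
# Hypothetical helper for the documented shape of LLMResponse.logprobs:
# List[Tuple[str, float | None]]. Computes the mean log probability over
# tokens that actually carry a logprob.
from typing import List, Optional, Tuple


def mean_logprob(logprobs: List[Tuple[str, Optional[float]]]) -> Optional[float]:
    values = [lp for _, lp in logprobs if lp is not None]
    if not values:
        return None  # e.g. the provider returned no logprobs at all
    return sum(values) / len(values)
```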
- class NotGiven#
Bases: object

A sentinel singleton class used to distinguish omitted keyword arguments.

Examples

    def get(timeout: int | NotGiven | None = NotGiven()) -> Response: ...

    get(timeout=1)     # 1s timeout
    get(timeout=None)  # No timeout
    get()              # Default timeout behavior; may not be statically
                       # known at the method definition.
- ANTHROPIC_NOT_GIVEN: NotGiven = NOT_GIVEN#
- OPENAI_NOT_GIVEN: NotGiven = NOT_GIVEN#
- pydantic model OpenAIChoice#
Bases: BaseModel

An LLM from OpenAI.
- field model: Union[str,Literal['gpt-5','gpt-5-mini','gpt-5-nano','gpt-5-2025-08-07','gpt-5-mini-2025-08-07','gpt-5-nano-2025-08-07','gpt-5-chat-latest','gpt-4.1','gpt-4.1-mini','gpt-4.1-nano','gpt-4.1-2025-04-14','gpt-4.1-mini-2025-04-14','gpt-4.1-nano-2025-04-14','o4-mini','o4-mini-2025-04-16','o3','o3-2025-04-16','o3-mini','o3-mini-2025-01-31','o1','o1-2024-12-17','o1-preview','o1-preview-2024-09-12','o1-mini','o1-mini-2024-09-12','gpt-4o','gpt-4o-2024-11-20','gpt-4o-2024-08-06','gpt-4o-2024-05-13','gpt-4o-audio-preview','gpt-4o-audio-preview-2024-10-01','gpt-4o-audio-preview-2024-12-17','gpt-4o-audio-preview-2025-06-03','gpt-4o-mini-audio-preview','gpt-4o-mini-audio-preview-2024-12-17','gpt-4o-search-preview','gpt-4o-mini-search-preview','gpt-4o-search-preview-2025-03-11','gpt-4o-mini-search-preview-2025-03-11','chatgpt-4o-latest','codex-mini-latest','gpt-4o-mini','gpt-4o-mini-2024-07-18','gpt-4-turbo','gpt-4-turbo-2024-04-09','gpt-4-0125-preview','gpt-4-turbo-preview','gpt-4-1106-preview','gpt-4-vision-preview','gpt-4','gpt-4-0314','gpt-4-0613','gpt-4-32k','gpt-4-32k-0314','gpt-4-32k-0613','gpt-3.5-turbo','gpt-3.5-turbo-16k','gpt-3.5-turbo-0301','gpt-3.5-turbo-0613','gpt-3.5-turbo-1106','gpt-3.5-turbo-0125','gpt-3.5-turbo-16k-0613']] [Required]#
- class OpenAIChoiceDict#
Bases: TypedDict

An LLM from OpenAI.
- model: Union[str,Literal['gpt-5','gpt-5-mini','gpt-5-nano','gpt-5-2025-08-07','gpt-5-mini-2025-08-07','gpt-5-nano-2025-08-07','gpt-5-chat-latest','gpt-4.1','gpt-4.1-mini','gpt-4.1-nano','gpt-4.1-2025-04-14','gpt-4.1-mini-2025-04-14','gpt-4.1-nano-2025-04-14','o4-mini','o4-mini-2025-04-16','o3','o3-2025-04-16','o3-mini','o3-mini-2025-01-31','o1','o1-2024-12-17','o1-preview','o1-preview-2024-09-12','o1-mini','o1-mini-2024-09-12','gpt-4o','gpt-4o-2024-11-20','gpt-4o-2024-08-06','gpt-4o-2024-05-13','gpt-4o-audio-preview','gpt-4o-audio-preview-2024-10-01','gpt-4o-audio-preview-2024-12-17','gpt-4o-audio-preview-2025-06-03','gpt-4o-mini-audio-preview','gpt-4o-mini-audio-preview-2024-12-17','gpt-4o-search-preview','gpt-4o-mini-search-preview','gpt-4o-search-preview-2025-03-11','gpt-4o-mini-search-preview-2025-03-11','chatgpt-4o-latest','codex-mini-latest','gpt-4o-mini','gpt-4o-mini-2024-07-18','gpt-4-turbo','gpt-4-turbo-2024-04-09','gpt-4-0125-preview','gpt-4-turbo-preview','gpt-4-1106-preview','gpt-4-vision-preview','gpt-4','gpt-4-0314','gpt-4-0613','gpt-4-32k','gpt-4-32k-0314','gpt-4-32k-0613','gpt-3.5-turbo','gpt-3.5-turbo-16k','gpt-3.5-turbo-0301','gpt-3.5-turbo-0613','gpt-3.5-turbo-1106','gpt-3.5-turbo-0125','gpt-3.5-turbo-16k-0613']]#
- class TokenCount(provider, model, number_of_calls=0, value=0, is_min_estimate=False, callers=<factory>)#
Bases: object

A token count from an LLM.
- Parameters:
provider (Literal['anthropic', 'google', 'openai', 'xai'])
model (Literal['claude-3-7-sonnet-latest', 'claude-3-7-sonnet-20250219', 'claude-3-5-haiku-latest', 'claude-3-5-haiku-20241022', 'claude-sonnet-4-20250514', 'claude-sonnet-4-0', 'claude-4-sonnet-20250514', 'claude-3-5-sonnet-latest', 'claude-3-5-sonnet-20241022', 'claude-3-5-sonnet-20240620', 'claude-opus-4-0', 'claude-opus-4-20250514', 'claude-4-opus-20250514', 'claude-opus-4-1-20250805', 'claude-3-opus-latest', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'] | str | Literal['gemini-2.5-flash', 'gemini-2.5-pro', 'gemini-2.5-flash-lite', 'gemini-2.0-flash-lite', 'gemini-2.0-flash'] | Literal['gpt-5', 'gpt-5-mini', 'gpt-5-nano', 'gpt-5-2025-08-07', 'gpt-5-mini-2025-08-07', 'gpt-5-nano-2025-08-07', 'gpt-5-chat-latest', 'gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'gpt-4.1-2025-04-14', 'gpt-4.1-mini-2025-04-14', 'gpt-4.1-nano-2025-04-14', 'o4-mini', 'o4-mini-2025-04-16', 'o3', 'o3-2025-04-16', 'o3-mini', 'o3-mini-2025-01-31', 'o1', 'o1-2024-12-17', 'o1-preview', 'o1-preview-2024-09-12', 'o1-mini', 'o1-mini-2024-09-12', 'gpt-4o', 'gpt-4o-2024-11-20', 'gpt-4o-2024-08-06', 'gpt-4o-2024-05-13', 'gpt-4o-audio-preview', 'gpt-4o-audio-preview-2024-10-01', 'gpt-4o-audio-preview-2024-12-17', 'gpt-4o-audio-preview-2025-06-03', 'gpt-4o-mini-audio-preview', 'gpt-4o-mini-audio-preview-2024-12-17', 'gpt-4o-search-preview', 'gpt-4o-mini-search-preview', 'gpt-4o-search-preview-2025-03-11', 'gpt-4o-mini-search-preview-2025-03-11', 'chatgpt-4o-latest', 'codex-mini-latest', 'gpt-4o-mini', 'gpt-4o-mini-2024-07-18', 'gpt-4-turbo', 'gpt-4-turbo-2024-04-09', 'gpt-4-0125-preview', 'gpt-4-turbo-preview', 'gpt-4-1106-preview', 'gpt-4-vision-preview', 'gpt-4', 'gpt-4-0314', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-32k-0314', 'gpt-4-32k-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0301', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-1106', 'gpt-3.5-turbo-0125', 'gpt-3.5-turbo-16k-0613'] | Literal['grok-3', 'grok-3-mini', 'grok-code-fast-1', 'grok-4'])
number_of_calls (int)
value (int)
is_min_estimate (bool)
- classmethod from_dict(d)#
- model: Union[Literal['claude-3-7-sonnet-latest','claude-3-7-sonnet-20250219','claude-3-5-haiku-latest','claude-3-5-haiku-20241022','claude-sonnet-4-20250514','claude-sonnet-4-0','claude-4-sonnet-20250514','claude-3-5-sonnet-latest','claude-3-5-sonnet-20241022','claude-3-5-sonnet-20240620','claude-opus-4-0','claude-opus-4-20250514','claude-4-opus-20250514','claude-opus-4-1-20250805','claude-3-opus-latest','claude-3-opus-20240229','claude-3-haiku-20240307'],str,Literal['gemini-2.5-flash','gemini-2.5-pro','gemini-2.5-flash-lite','gemini-2.0-flash-lite','gemini-2.0-flash'],Literal['gpt-5','gpt-5-mini','gpt-5-nano','gpt-5-2025-08-07','gpt-5-mini-2025-08-07','gpt-5-nano-2025-08-07','gpt-5-chat-latest','gpt-4.1','gpt-4.1-mini','gpt-4.1-nano','gpt-4.1-2025-04-14','gpt-4.1-mini-2025-04-14','gpt-4.1-nano-2025-04-14','o4-mini','o4-mini-2025-04-16','o3','o3-2025-04-16','o3-mini','o3-mini-2025-01-31','o1','o1-2024-12-17','o1-preview','o1-preview-2024-09-12','o1-mini','o1-mini-2024-09-12','gpt-4o','gpt-4o-2024-11-20','gpt-4o-2024-08-06','gpt-4o-2024-05-13','gpt-4o-audio-preview','gpt-4o-audio-preview-2024-10-01','gpt-4o-audio-preview-2024-12-17','gpt-4o-audio-preview-2025-06-03','gpt-4o-mini-audio-preview','gpt-4o-mini-audio-preview-2024-12-17','gpt-4o-search-preview','gpt-4o-mini-search-preview','gpt-4o-search-preview-2025-03-11','gpt-4o-mini-search-preview-2025-03-11','chatgpt-4o-latest','codex-mini-latest','gpt-4o-mini','gpt-4o-mini-2024-07-18','gpt-4-turbo','gpt-4-turbo-2024-04-09','gpt-4-0125-preview','gpt-4-turbo-preview','gpt-4-1106-preview','gpt-4-vision-preview','gpt-4','gpt-4-0314','gpt-4-0613','gpt-4-32k','gpt-4-32k-0314','gpt-4-32k-0613','gpt-3.5-turbo','gpt-3.5-turbo-16k','gpt-3.5-turbo-0301','gpt-3.5-turbo-0613','gpt-3.5-turbo-1106','gpt-3.5-turbo-0125','gpt-3.5-turbo-16k-0613'],Literal['grok-3','grok-3-mini','grok-code-fast-1','grok-4']]#
- class TokenCounter(token_counts=<factory>, _lock=<factory>)#
Bases: object

Compactly represents token usage across multiple calls to LLMs.
- Parameters:
token_counts (Dict[str, TokenCount])
_lock (Lock)
- classmethod from_dict(d)#
- async append(model, provider, value, caller)#
Append a token count to the counter.
- Parameters:
model (Union[Literal['claude-3-7-sonnet-latest','claude-3-7-sonnet-20250219','claude-3-5-haiku-latest','claude-3-5-haiku-20241022','claude-sonnet-4-20250514','claude-sonnet-4-0','claude-4-sonnet-20250514','claude-3-5-sonnet-latest','claude-3-5-sonnet-20241022','claude-3-5-sonnet-20240620','claude-opus-4-0','claude-opus-4-20250514','claude-4-opus-20250514','claude-opus-4-1-20250805','claude-3-opus-latest','claude-3-opus-20240229','claude-3-haiku-20240307'],str,Literal['gemini-2.5-flash','gemini-2.5-pro','gemini-2.5-flash-lite','gemini-2.0-flash-lite','gemini-2.0-flash'],Literal['gpt-5','gpt-5-mini','gpt-5-nano','gpt-5-2025-08-07','gpt-5-mini-2025-08-07','gpt-5-nano-2025-08-07','gpt-5-chat-latest','gpt-4.1','gpt-4.1-mini','gpt-4.1-nano','gpt-4.1-2025-04-14','gpt-4.1-mini-2025-04-14','gpt-4.1-nano-2025-04-14','o4-mini','o4-mini-2025-04-16','o3','o3-2025-04-16','o3-mini','o3-mini-2025-01-31','o1','o1-2024-12-17','o1-preview','o1-preview-2024-09-12','o1-mini','o1-mini-2024-09-12','gpt-4o','gpt-4o-2024-11-20','gpt-4o-2024-08-06','gpt-4o-2024-05-13','gpt-4o-audio-preview','gpt-4o-audio-preview-2024-10-01','gpt-4o-audio-preview-2024-12-17','gpt-4o-audio-preview-2025-06-03','gpt-4o-mini-audio-preview','gpt-4o-mini-audio-preview-2024-12-17','gpt-4o-search-preview','gpt-4o-mini-search-preview','gpt-4o-search-preview-2025-03-11','gpt-4o-mini-search-preview-2025-03-11','chatgpt-4o-latest','codex-mini-latest','gpt-4o-mini','gpt-4o-mini-2024-07-18','gpt-4-turbo','gpt-4-turbo-2024-04-09','gpt-4-0125-preview','gpt-4-turbo-preview','gpt-4-1106-preview','gpt-4-vision-preview','gpt-4','gpt-4-0314','gpt-4-0613','gpt-4-32k','gpt-4-32k-0314','gpt-4-32k-0613','gpt-3.5-turbo','gpt-3.5-turbo-16k','gpt-3.5-turbo-0301','gpt-3.5-turbo-0613','gpt-3.5-turbo-1106','gpt-3.5-turbo-0125','gpt-3.5-turbo-16k-0613'],Literal['grok-3','grok-3-mini','grok-code-fast-1','grok-4']]) – The model used.
provider (Literal['anthropic','google','openai','xai']) – The provider of the model.
value (int) – The token count to add.
caller (str) – The caller (function/module) of the LLM. Preferably the function name.
- Returns:
The token count object.
- token_counts: Dict[str, TokenCount]#
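The "compact" representation above aggregates one TokenCount per provider/model key rather than storing a record per call. The following is a hypothetical, simplified sketch of that bookkeeping (class names, the key format, and the synchronous append are our own simplifications; the real TokenCounter.append is async and lock-protected):

```python
# Simplified sketch of per-(provider, model) token aggregation, loosely
# mirroring the TokenCount / TokenCounter fields documented above.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class SimpleTokenCount:
    provider: str
    model: str
    number_of_calls: int = 0
    value: int = 0
    callers: Set[str] = field(default_factory=set)


class SimpleTokenCounter:
    def __init__(self) -> None:
        # One aggregate entry per provider/model pair.
        self.token_counts: Dict[str, SimpleTokenCount] = {}

    def append(self, model: str, provider: str, value: int, caller: str) -> SimpleTokenCount:
        key = f"{provider}/{model}"
        count = self.token_counts.setdefault(key, SimpleTokenCount(provider, model))
        count.number_of_calls += 1
        count.value += value
        count.callers.add(caller)
        return count
```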
- llm = <think_reason_learn.core.llms._ask.LLM object>#
An instance of the singleton LLM class.
Type aliases#
- LLMChoiceModel#
alias of AnthropicChoice | GoogleChoice | OpenAIChoice | XAIChoice
- LLMChoiceDict#
alias of AnthropicChoiceDict | GoogleChoiceDict | OpenAIChoiceDict | XAIChoiceDict
- LLMChoice#
alias of AnthropicChoice | GoogleChoice | OpenAIChoice | XAIChoice | AnthropicChoiceDict | GoogleChoiceDict | OpenAIChoiceDict | XAIChoiceDict
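Since the *ChoiceDict types are TypedDicts, at runtime they are plain dicts with a "model" key, so an llm_priority list can be written as literals. A hypothetical sketch (the variable name and model names are examples only, and dicts mixed with pydantic choice objects also satisfy the LLMChoice alias):

```python
# Hypothetical llm_priority list built from choice dicts: try an OpenAI
# model first, then fall back to an Anthropic model. TypedDict instances
# are ordinary dicts at runtime.
openai_first = [
    {"model": "gpt-4o"},                   # OpenAIChoiceDict
    {"model": "claude-3-5-haiku-latest"},  # AnthropicChoiceDict fallback
]
```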