Package: tidyllm 0.2.0

Eduard Brüll

tidyllm: Tidy Integration of Large Language Models

A tidy interface for integrating large language model (LLM) APIs such as 'Claude', 'OpenAI', 'Groq', 'Mistral', and local models via 'Ollama' into R workflows. The package supports text- and media-based interactions, interactive message history, batch request APIs, and a tidy, pipeline-oriented interface for streamlined integration into data workflows. Web services are available at <https://www.anthropic.com>, <https://openai.com>, <https://groq.com>, <https://mistral.ai/> and <https://ollama.com>.
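A minimal sketch of the pipeline-oriented interface described above (assumes the package is installed and an `ANTHROPIC_API_KEY` environment variable is set; the prompts are illustrative):

```r
library(tidyllm)

# Build a message history and send it to Claude via the Anthropic API
conversation <- llm_message("Which R packages are good for working with JSON?") |>
  claude()

# Extract the assistant's most recent reply as plain text
conversation |> last_reply()

# Continue the same conversation: the message history travels along the pipe
conversation |>
  llm_message("Summarise your previous answer in one sentence.") |>
  claude()
```

Swapping `claude()` for `openai()`, `groq()`, `mistral()`, or `ollama()` targets a different provider with the same message history.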

Authors: Eduard Brüll [aut, cre]

tidyllm_0.2.0.tar.gz
tidyllm_0.2.0.tar.gz (r-4.5-noble) | tidyllm_0.2.0.tar.gz (r-4.4-noble)
tidyllm_0.2.0.tgz (r-4.4-emscripten) | tidyllm_0.2.0.tgz (r-4.3-emscripten)
tidyllm.pdf | tidyllm.html
tidyllm/json (API)
NEWS

# Install 'tidyllm' in R:
install.packages('tidyllm', repos = c('https://cran.r-universe.dev', 'https://cloud.r-project.org'))

Bug tracker: https://github.com/edubruell/tidyllm/issues

3.00 score | 7 scripts | 262 downloads | 34 exports | 32 dependencies

Last updated 15 days ago from: 7f50019bbf. Checks: OK: 2. Indexed: no.

Target | Result | Date
Doc / Vignettes | OK | Nov 08 2024
R-4.5-linux | OK | Nov 08 2024

Exports: azure_openai, chatgpt, check_claude_batch, check_openai_batch, claude, df_llm_message, fetch_claude_batch, fetch_openai_batch, get_reply, get_reply_data, get_user_message, groq, groq_transcribe, initialize_api_env, last_reply, last_reply_data, last_user_message, list_claude_batches, list_openai_batches, llm_message, mistral, mistral_embedding, ollama, ollama_download_model, ollama_embedding, ollama_list_models, openai, openai_embedding, pdf_page_batch, rate_limit_info, send_claude_batch, send_openai_batch, tidyllm_schema, update_rate_limit

Dependencies: askpass, base64enc, cli, cpp11, curl, fansi, generics, glue, httr2, jsonlite, lifecycle, lubridate, magrittr, openssl, pdftools, pillar, pkgconfig, png, purrr, qpdf, R6, rappdirs, Rcpp, rlang, stringi, stringr, sys, tibble, timechange, utf8, vctrs, withr

Get Started

Rendered from tidyllm.Rmd using knitr::rmarkdown on Nov 08 2024.

Last update: 2024-11-07
Started: 2024-11-07

Readme and manuals

Help Manual

Help page | Topics
Send LLM Messages to an OpenAI Chat Completions endpoint on Azure | azure_openai
ChatGPT Wrapper (Deprecated) | chatgpt
Check Batch Processing Status for Claude API | check_claude_batch
Check Batch Processing Status for OpenAI Batch API | check_openai_batch
Interact with Claude AI models via the Anthropic API | claude
Convert a Data Frame to an LLMMessage Object | df_llm_message
Fetch Results for a Claude Batch | fetch_claude_batch
Fetch Results for an OpenAI Batch | fetch_openai_batch
Generate API-Specific Callback Function for Streaming Responses | generate_callback_function
Get Assistant Reply as Text | get_reply
Get Data from an Assistant Reply by Parsing Structured JSON Responses | get_reply_data
Retrieve a User Message by Index | get_user_message
Send LLM Messages to the Groq Chat API | groq
Transcribe an Audio File Using the Groq Transcription API | groq_transcribe
Initialize or Retrieve API-Specific Environment | initialize_api_env
Get the Last Assistant Reply as Text | last_reply
Get the Last Assistant Reply as Structured Data | last_reply_data
Retrieve the Last User Message | last_user_message
List Claude Batch Requests | list_claude_batches
List OpenAI Batch Requests | list_openai_batches
Create or Update Large Language Model Message Object | llm_message
Large Language Model Message Class | LLMMessage
Send LLMMessage to Mistral API | mistral
Generate Embeddings Using Mistral API | mistral_embedding
Interact with local AI models via the Ollama API | ollama
Download a Model from the Ollama API | ollama_download_model
Generate Embeddings Using Ollama API | ollama_embedding
Retrieve and Return Model Information from the Ollama API | ollama_list_models
Send LLM Messages to the OpenAI Chat Completions API | openai
Generate Embeddings Using OpenAI API | openai_embedding
Parse Duration Strings Returned by the OpenAI API (internal) | parse_duration_to_seconds
Batch Process PDF into LLM Messages | pdf_page_batch
Perform an API Request to Interact with Language Models | perform_api_request
Get the Current Rate Limit Information for All or a Specific API | rate_limit_info
Extract Rate Limit Information from API Response Headers | ratelimit_from_header
Send a Batch of Messages to Claude API | send_claude_batch
Send a Batch of Messages to OpenAI Batch API | send_openai_batch
Create a JSON Schema for Structured Outputs | tidyllm_schema
Update the Standard API Rate Limit Info in the Hidden .tidyllm_rate_limit_env Environment | update_rate_limit
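The batch functions listed above (`send_claude_batch`, `check_claude_batch`, `fetch_claude_batch`) combine into an asynchronous workflow, roughly as sketched below. This assumes an `ANTHROPIC_API_KEY`; since batches may take a while to complete, checking and fetching would normally happen in a later session:

```r
library(tidyllm)
library(purrr)

# Prepare several independent message histories
msgs <- map(
  c("Summarise the history of R in one paragraph.",
    "Summarise the history of S in one paragraph."),
  llm_message
)

# Submit all of them as one batch to the Anthropic batch API
batch <- send_claude_batch(msgs)

# Later: poll the processing status, then fetch the finished results
batch   <- check_claude_batch(batch)
results <- fetch_claude_batch(batch)

# Each result is a message history; extract the replies as text
map_chr(results, get_reply)
```

The `send_openai_batch` / `check_openai_batch` / `fetch_openai_batch` trio mirrors this workflow for the OpenAI Batch API.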