Title: | Rating Text Using Large Language Models |
---|---|
Description: | Generates ratings for textual stimuli using large language models. It allows users to evaluate idioms and similar texts by combining context, prompts, and stimulus inputs. The package supports both 'OpenAI' and 'DeepSeek' APIs by enabling users to switch models simply by specifying the model parameter. It implements methods for constructing the request payload and parsing numeric ratings from the model outputs. |
Authors: | Shiyang Zheng [aut, cre] |
Maintainer: | Shiyang Zheng <[email protected]> |
License: | MIT + file LICENSE |
Version: | 1.0.0 |
Built: | 2025-02-17 13:33:39 UTC |
Source: | CRAN |
This function generates ratings for a given stimulus using a Large Language Model (LLM). It supports both OpenAI and DeepSeek APIs. When the model parameter is set to "deepseek-chat", the DeepSeek API endpoint will be used.
generate_ratings(
  model = "gpt-3.5-turbo",
  stim = "kick the bucket",
  prompt = "...",
  question = "...",
  top_p = 1,
  temp = 0,
  n_iterations = 30,
  api_key = "",
  debug = FALSE
)
model |
A character string specifying the LLM model to use. Use "deepseek-chat" to call the DeepSeek API. |
stim |
A character string giving the stimulus text to be rated (e.g., an idiom). |
prompt |
A character string providing context or an identity for the LLM (e.g., "You are a native English speaker."). |
question |
A character string that provides the rating instructions for the LLM. |
top_p |
A numeric value for nucleus sampling: token selection is limited to the smallest set of tokens whose cumulative probability mass reaches top_p. |
temp |
A numeric value specifying the sampling temperature for the API call; lower values make the output more deterministic. |
n_iterations |
An integer indicating the number of times to query the LLM for the stim. |
api_key |
Your API key for the selected provider (OpenAI, or DeepSeek when model = "deepseek-chat"). |
debug |
Logical, whether to run in debug mode. Defaults to FALSE. |
A data frame containing the stim, rating, and iteration number for each API call.
## Not run:
generate_ratings(
  model = "gpt-3.5-turbo",
  stim = "kick the bucket",
  prompt = "You are a native English speaker.",
  question = "Please rate the following stim:",
  top_p = 1,
  temp = 0,
  n_iterations = 30,
  api_key = "your_api_key",
  debug = TRUE
)
## End(Not run)
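Because the documented return value is a data frame with one row per API call, the repeated ratings for a stim can be collapsed with base R. A minimal sketch using a mock data frame in the documented shape (the column names stim, rating, and iteration follow the return value described above; the numeric ratings are invented placeholders, not real model output):

```r
# Mock return value in the documented shape: one row per API call,
# with columns stim, rating, and iteration (ratings here are invented).
ratings <- data.frame(
  stim      = rep("kick the bucket", 3),
  rating    = c(6, 7, 6),
  iteration = 1:3
)

# Collapse the repeated calls into a single mean rating for the stim.
summary_df <- aggregate(rating ~ stim, data = ratings, FUN = mean)
```

With a real result from generate_ratings, the same aggregate() call would yield one averaged score per stim.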
This function iterates over a vector of stims (e.g., idioms) and generates ratings for each by calling the generate_ratings function. It aggregates all results into a single data frame.
generate_ratings_for_all(
  model = "gpt-3.5-turbo",
  stim_list,
  prompt = "...",
  question = "...",
  top_p = 1,
  temp = 0,
  n_iterations = 30,
  api_key = "",
  debug = FALSE
)
model |
A character string specifying the LLM model to use. |
stim_list |
A character vector of stims (e.g., idioms) for which ratings will be generated. |
prompt |
A character string providing context or an identity for the LLM (e.g., "You are a native English speaker."). |
question |
A character string that provides the rating instructions for the LLM. |
top_p |
A numeric value for nucleus sampling: token selection is limited to the smallest set of tokens whose cumulative probability mass reaches top_p. |
temp |
A numeric value specifying the sampling temperature for the API call; lower values make the output more deterministic. |
n_iterations |
An integer indicating the number of times to query the LLM for each stim. |
api_key |
Your API key for the selected provider (OpenAI, or DeepSeek when model = "deepseek-chat"). |
debug |
Logical, whether to run in debug mode. Defaults to FALSE. |
A data frame containing the stim, rating, and iteration number for each API call.
## Not run:
generate_ratings_for_all(
  model = "gpt-3.5-turbo",
  stim_list = c("kick the bucket", "spill the beans"),
  prompt = "You are a native English speaker.",
  question = "Please rate the following stim:",
  top_p = 1,
  temp = 0,
  n_iterations = 30,
  api_key = "your_api_key",
  debug = TRUE
)
## End(Not run)
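The combined data frame returned for several stims can likewise be summarized per stim, for example to get the mean rating and its spread across iterations. A sketch with mock data in the documented shape (column names per the return value above; the ratings themselves are invented placeholders):

```r
# Mock combined output for two stims, three iterations each, matching
# the documented return shape (stim, rating, iteration); values invented.
ratings <- data.frame(
  stim      = rep(c("kick the bucket", "spill the beans"), each = 3),
  rating    = c(6, 7, 6, 4, 5, 4),
  iteration = rep(1:3, times = 2)
)

# Per-stim mean and standard deviation across the repeated queries.
means <- aggregate(rating ~ stim, data = ratings, FUN = mean)
sds   <- aggregate(rating ~ stim, data = ratings, FUN = sd)
```

The standard deviation gives a quick check on how consistent the model's ratings are across iterations, which is useful when deciding whether n_iterations is large enough.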