Title: | Interface for 'Google Gemini' API |
---|---|
Description: | Provides a comprehensive interface for Google Gemini API, enabling users to access and utilize Gemini Large Language Model (LLM) functionalities directly from R. This package facilitates seamless integration with Google Gemini, allowing for advanced language processing, text generation, and other AI-driven capabilities within the R environment. For more information, please visit <https://ai.google.dev/docs/gemini_api_overview>. |
Authors: | Jinhwan Kim [aut, cre, cph] |
Maintainer: | Jinhwan Kim <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.9.2 |
Built: | 2025-03-12 13:30:04 UTC |
Source: | CRAN |
Add history for chatting context
addHistory(history, role = NULL, item = NULL)
history |
The chat history |
role |
The role of the chat message: "user" or "model" |
item |
The content of the chat message: "prompt" or "output" |
The updated chat history
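As a minimal sketch of how a history might be built up turn by turn (assuming, consistent with the arguments documented above, that addHistory() returns the updated history list):

```r
# Hypothetical sketch: accumulate a chat history one message at a time.
# The exact shape of the returned history object is an assumption.
library(gemini.R)

history <- list()
history <- addHistory(history, role = "user", item = "Hello, Gemini!")
history <- addHistory(history, role = "model", item = "Hello! How can I help you today?")
```

A history built this way can then be passed to gemini_chat() to continue the conversation.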
Generate text from text with Gemini
gemini( prompt, model = "2.0-flash", temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
prompt |
The prompt to generate text from |
model |
The model to use. Options are "2.0-flash", "2.0-flash-lite", "1.5-flash", "1.5-flash-8b", and "1.5-pro". Default is "2.0-flash". See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature |
The temperature to use. Default is 1; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192; 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40; the value should be between 0 and 100. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95; the value should be between 0 and 1. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234; the value should be an integer. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
Generated text
https://ai.google.dev/docs/gemini_api_overview#text_input
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini("Explain dplyr's mutate function")
## End(Not run)
This function sends audio to the Gemini API and returns a text description.
gemini_audio( audio = NULL, prompt = "Describe this audio", model = "2.0-flash", temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
audio |
Path to the audio file (default: uses a sample file). Must be an MP3. |
prompt |
A string describing what to do with the audio. |
model |
The model to use. Options are "2.0-flash", "2.0-flash-lite", "1.5-flash", "1.5-flash-8b", and "1.5-pro". Default is "2.0-flash". See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature |
The temperature to use. Default is 1; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192; 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40; the value should be between 0 and 100. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95; the value should be between 0 and 1. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234; the value should be an integer. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
A character vector containing the Gemini API's response.
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini_audio(audio = system.file("docs/reference/helloworld.mp3", package = "gemini.R"))
## End(Not run)
This function sends audio to the Gemini API and returns a text description.
gemini_audio.vertex( audio = NULL, prompt = "Describe this audio", tokens = NULL, temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
audio |
Path to the audio file (character string). Only "mp3" is supported. |
prompt |
A prompt to guide the Gemini API's analysis (character string, defaults to "Describe this audio"). |
tokens |
A list containing the API URL and key from token.vertex() function. |
temperature |
The temperature to use. Default is 1 value should be between 0 and 2 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192 and 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40 value should be between 0 and 100 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95 value should be between 0 and 1 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234 value should be integer see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
A character vector containing the Gemini API's description of the audio.
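No example is given for gemini_audio.vertex(); a hedged sketch, assuming a Vertex AI service-account key file and a local MP3 (both "YOURAPIKEY.json" and "hello.mp3" are placeholders):

```r
## Not run: 
library(gemini.R)
# Placeholder paths: substitute your own key file and audio file.
tokens <- token.vertex(jsonkey = "YOURAPIKEY.json", model_id = "1.5-flash")
gemini_audio.vertex(audio = "hello.mp3", prompt = "Transcribe this audio", tokens = tokens)
## End(Not run)
```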
Generate text in a multi-turn chat with Gemini
gemini_chat( prompt, history = list(), model = "2.0-flash", temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
prompt |
The prompt to generate text from |
history |
history object to keep track of the conversation |
model |
The model to use. Options are "2.0-flash", "2.0-flash-lite", "1.5-flash", "1.5-flash-8b", and "1.5-pro". Default is "2.0-flash". See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature |
The temperature to use. Default is 1; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192; 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40; the value should be between 0 and 100. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95; the value should be between 0 and 1. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234; the value should be an integer. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
Generated text
https://ai.google.dev/docs/gemini_api_overview#chat
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
chats <- gemini_chat("Pretend you're a snowman and stay in character for each")
print(chats$outputs)
chats <- gemini_chat("What's your favorite season of the year?", chats$history)
print(chats$outputs)
chats <- gemini_chat("How do you think about summer?", chats$history)
print(chats$outputs)
## End(Not run)
Generate text from text and image with Gemini
gemini_image( image = NULL, prompt = "Explain this image", model = "2.0-flash", temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234, type = "png" )
image |
The image to generate text |
prompt |
The prompt to generate text from. Default is "Explain this image" |
model |
The model to use. Options are "2.0-flash", "2.0-flash-lite", "1.5-flash", "1.5-flash-8b", and "1.5-pro". Default is "2.0-flash". See https://ai.google.dev/gemini-api/docs/models/gemini |
temperature |
The temperature to use. Default is 1; the value should be between 0 and 2. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192; 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40; the value should be between 0 and 100. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95; the value should be between 0 and 1. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234; the value should be an integer. See https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
type |
The type of image. Options are 'png', 'jpeg', 'webp', 'heic', 'heif'. Default is 'png' |
Generated text
https://ai.google.dev/docs/gemini_api_overview#text_image_input
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
gemini_image(image = system.file("docs/reference/figures/image.png", package = "gemini.R"))
## End(Not run)
Generate text from text and image with Gemini Vertex API
gemini_image.vertex( image = NULL, prompt = "Explain this image", type = "png", tokens = NULL, temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
image |
The image to generate text |
prompt |
A character string specifying the prompt to use with the image. Defaults to "Explain this image". |
type |
A character string specifying the image type ("png", "jpeg", "webp", "heic", "heif"). Defaults to "png". |
tokens |
A list containing the API URL and key from token.vertex() function. |
temperature |
The temperature to use. Default is 1 value should be between 0 and 2 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192 and 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40 value should be between 0 and 100 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95 value should be between 0 and 1 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234 value should be integer see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
A character string containing Gemini's description of the image.
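No example is given for gemini_image.vertex(); a hedged sketch, assuming a Vertex AI service-account key file and a local PNG (both "YOURAPIKEY.json" and "figure.png" are placeholders):

```r
## Not run: 
library(gemini.R)
# Placeholder paths: substitute your own key file and image file.
tokens <- token.vertex(jsonkey = "YOURAPIKEY.json", model_id = "1.5-flash")
gemini_image.vertex(image = "figure.png", type = "png", tokens = tokens)
## End(Not run)
```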
Generate text from text with Gemini Vertex API
gemini.vertex( prompt = NULL, tokens = NULL, temperature = 1, maxOutputTokens = 8192, topK = 40, topP = 0.95, seed = 1234 )
prompt |
A character string containing the prompt for the Gemini model. |
tokens |
A list containing the API URL and key from token.vertex() function. |
temperature |
The temperature to use. Default is 1 value should be between 0 and 2 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
maxOutputTokens |
The maximum number of tokens to generate. Default is 8192 and 100 tokens correspond to roughly 60-80 words. |
topK |
The top-k value to use. Default is 40 value should be between 0 and 100 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
topP |
The top-p value to use. Default is 0.95 value should be between 0 and 1 see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
seed |
The seed to use. Default is 1234 value should be integer see https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters |
A character string containing the generated text.
https://ai.google.dev/docs/gemini_api_overview#text_input
## Not run: 
# A token should be created before this, using the token.vertex() function.
prompt <- "What is Sachin's jersey number?"
gemini.vertex(prompt, tokens)
## End(Not run)
Generates Roxygen2 documentation for an R function based on the currently selected code.
gen_docs(prompt = NULL)
prompt |
A character string specifying additional instructions for the LLM. Defaults to a prompt requesting Roxygen2 documentation without the original code. |
A character string containing the generated Roxygen2 documentation.
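A hedged usage sketch, assuming gen_docs() operates on code selected in the source editor (as the description above states) and that an API key has already been set with setAPI():

```r
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
# Select an R function in the source editor, then:
gen_docs()
# Or pass extra instructions for the generated documentation:
gen_docs("Write Roxygen2 documentation and include an @examples section")
## End(Not run)
```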
Generates unit test code for an R function.
gen_tests(prompt = NULL)
prompt |
A character string specifying the prompt for the Gemini model. If NULL, a default prompt is used. |
A character string containing the generated unit test code.
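A hedged usage sketch, assuming (by analogy with gen_docs()) that gen_tests() also operates on the function currently selected in the source editor and that an API key has been set with setAPI():

```r
## Not run: 
library(gemini.R)
setAPI("YOUR_API_KEY")
# Select an R function in the source editor, then:
gen_tests()
# Or pass a custom prompt instead of the default:
gen_tests("Write testthat unit tests covering edge cases")
## End(Not run)
```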
Generates an access token for the Gemini model and constructs the corresponding endpoint URL.
token.vertex( jsonkey = NULL, model_id = NULL, expTime = 3600, region = "us-central1" )
jsonkey |
Path to a JSON file containing the service account key from Vertex AI. |
model_id |
The ID of the Gemini model. This will be prepended with "gemini-". |
expTime |
The expiration time of the access token in seconds (default is 3600 seconds, or 1 hour). |
region |
The Google Cloud region where your Vertex AI resources are located (default is "us-central1"). See https://cloud.google.com/vertex-ai/docs/general/locations for available regions. |
A list containing:
key |
The generated access token. |
url |
The endpoint URL for the Gemini model. |
## Not run: 
library(gemini.R)
tokens <- token.vertex(jsonkey = "YOURAPIKEY.json", model_id = "1.5-flash")
# Specify a different region
tokens <- token.vertex(jsonkey = "YOURAPIKEY.json", model_id = "1.5-flash", region = "europe-west4")
## End(Not run)