Title: Interface to the 'ValidMind' Platform
Description: Deploy, execute, and analyze the results of models hosted on the 'ValidMind' platform <https://validmind.com>. This package interfaces with the 'Python' client library to allow advanced diagnostics and insight into trained models, all from an 'R' environment.
Authors: Andres Rodriguez [aut, cre, cph]
Maintainer: Andres Rodriguez <[email protected]>
License: AGPL-3
Version: 0.1.2
Built: 2024-12-08 07:20:21 UTC
Source: CRAN
Build an R Plotly figure from a JSON representation
Usage:
  build_r_plotly(plotly_figure)
Arguments:
  plotly_figure: A nested list containing plotly elements
Value:
  An R Plotly object derived from the JSON representation
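A sketch of the expected input shape (the nested list below is an illustrative minimal plotly-style specification, not taken from the package documentation):

```r
## Not run:
# A minimal plotly-style nested list: one scatter trace plus a layout
plotly_figure <- list(
  data = list(
    list(x = c(1, 2, 3), y = c(4, 1, 7), type = "scatter", mode = "lines")
  ),
  layout = list(title = "Example trace")
)

# Convert the JSON-style representation into a native R Plotly object
fig <- build_r_plotly(plotly_figure)
fig
## End(Not run)
```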
Produce RMarkdown-compatible output of all results
Usage:
  display_report(processed_results)
Arguments:
  processed_results: A list of processed result objects
Value:
  A formatted list of RMarkdown widgets
Examples:
  ## Not run:
  vm_dataset <- vm_r$init_dataset(
    dataset = data,
    target_column = "Exited",
    class_labels = list("0" = "Did not exit", "1" = "Exited")
  )
  tabular_suite_results <- vm_r$run_test_suite("tabular_dataset", dataset = vm_dataset)
  processed_results <- process_result(tabular_suite_results)
  all_widgets <- display_report(processed_results)
  for (widget in all_widgets) {
    print(widget)
  }
  ## End(Not run)
Print a summary table of the ValidMind results
Usage:
  print_summary_tables(result_summary)
Arguments:
  result_summary: A summary of the results
Value:
  A data frame containing the summary of the ValidMind results
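A sketch of how this might fit into the processing flow (assuming `processed_results` comes from `process_result()` and that an element's summary, as produced by `summarize_result()`, is accepted by this function):

```r
## Not run:
processed_results <- process_result(tabular_suite_results)

# Summarize a single result, then render it as a summary table
result_summary <- summarize_result(processed_results[[1]])
summary_df <- print_summary_tables(result_summary)
summary_df
## End(Not run)
```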
Process a set of ValidMind results into parseable data
Usage:
  process_result(results)
Arguments:
  results: A list of ValidMind result objects
Value:
  A nested list of ValidMind results (dataframes, plotly plots, and matplotlib plots)
Examples:
  ## Not run:
  vm_dataset <- vm_r$init_dataset(
    dataset = data,
    target_column = "Exited",
    class_labels = list("0" = "Did not exit", "1" = "Exited")
  )
  tabular_suite_results <- vm_r$run_test_suite("tabular_dataset", dataset = vm_dataset)
  processed_results <- process_result(tabular_suite_results)
  processed_results
  ## End(Not run)
Registers an R function as a custom test within the ValidMind testing framework, allowing it to be used as a custom metric for model validation.
Usage:
  register_custom_test(
    func,
    test_id = NULL,
    description = NULL,
    required_inputs = NULL
  )
Arguments:
  func: An R function to be registered as a custom test.
  test_id: A unique identifier for the test. If NULL, a default identifier is used.
  description: A description of the test. If NULL, a default description is used.
  required_inputs: A character vector specifying the required inputs for the test. If NULL, no required inputs are specified.
Details:
  The provided R function is converted into a Python callable using r_to_py. A Python class is then defined, inheriting from ValidMind's Metric class, which wraps this callable. This custom test is registered within ValidMind's test store and can be used in the framework for model validation purposes.
Value:
  The test store object containing the newly registered custom test.
See also:
  r_to_py, import_main, py_run_string
Examples:
  ## Not run:
  # Define a custom test function in R
  my_custom_metric <- function(predictions, targets) {
    # Custom metric logic
    mean(abs(predictions - targets))
  }

  # Register the custom test
  register_custom_test(
    func = my_custom_metric,
    test_id = "custom.mae",
    description = "Custom Mean Absolute Error",
    required_inputs = c("predictions", "targets")
  )
  ## End(Not run)
This function runs a custom test using the ValidMind framework through Python's 'validmind.vm_models'. It retrieves a custom test by 'test_id', executes it with the provided 'inputs', and optionally displays the result. The result is also logged.
Usage:
  run_custom_test(test_id, inputs, test_registry, show = FALSE)
Arguments:
  test_id: A string representing the ID of the custom test to run.
  inputs: A list of inputs required for the custom test.
  test_registry: A reference to the test registry object which provides the custom test class.
  show: A logical value. If TRUE, the result is displayed. Defaults to FALSE.
Value:
  An object representing the result of the test, with an additional log function.
Examples:
  ## Not run:
  result <- run_custom_test("test123", my_inputs, test_registry, show = TRUE)
  ## End(Not run)
This function saves a given R model object to a randomly named '.RData' file in the '/tmp/' directory. The file is saved with a unique name generated using random letters.
Usage:
  save_model(model)
Arguments:
  model: The R model object to be saved.
Value:
  A string representing the full file path to the saved '.RData' file.
Examples:
  model <- lm(mpg ~ cyl, data = mtcars)
  file_path <- save_model(model)
Provide a summarization of a single metric result
Usage:
  summarize_metric_result(result)
Arguments:
  result: The ValidMind result object
Value:
  A list containing the summary of the ValidMind metric results
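A minimal sketch, assuming `processed_results` was produced by `process_result()` and that the chosen element is a metric result:

```r
## Not run:
# Summarize a single metric result and inspect its structure
metric_summary <- summarize_metric_result(processed_results[[1]])
str(metric_summary)
## End(Not run)
```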
Provide a summarization of a single result (test or metric)
Usage:
  summarize_result(result)
Arguments:
  result: The ValidMind result object
Value:
  Based on the type of 'result', either a list containing the summary of the ValidMind metric results or a list containing the summary of the ValidMind test results
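Because the summarization dispatches on the result type, the same call handles both kinds of result. A sketch, assuming `processed_results` was produced by `process_result()`:

```r
## Not run:
# Works for test and metric results alike; each element is summarized
# according to its own type
summaries <- lapply(processed_results, summarize_result)
## End(Not run)
```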
Provide a summarization of a single test result
Usage:
  summarize_test_result(result)
Arguments:
  result: The ValidMind result object
Value:
  A list containing the summary of the ValidMind test results
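A minimal sketch, assuming `processed_results` was produced by `process_result()` and that the chosen element is a test result:

```r
## Not run:
# Summarize a single test result and inspect its structure
test_summary <- summarize_test_result(processed_results[[1]])
str(test_summary)
## End(Not run)
```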
Retrieve a ValidMind (vm) connection object using reticulate
Usage:
  vm(
    api_key,
    api_secret,
    model,
    python_version,
    api_host = "http://localhost:3000/api/v1/tracking"
  )
Arguments:
  api_key: The ValidMind API key
  api_secret: The ValidMind API secret
  model: The ValidMind model
  python_version: The Python version to use
Value:
  A ValidMind connection object, obtained from 'reticulate', which orchestrates the connection to the ValidMind API
Examples:
  ## Not run:
  vm_r <- vm(
    api_key = "<your_api_key_here>",
    api_secret = "<your_api_secret_here>",
    model = "<your_model_id_here>",
    python_version = python_version,
    api_host = "https://api.dev.vm.validmind.ai/api/v1/tracking"
  )
  ## End(Not run)