Title: Result Model Manager
Description: Database data model management utilities for R packages in the Observational Health Data Sciences and Informatics program <https://ohdsi.org>. 'ResultModelManager' provides utility functions that allow package maintainers to migrate existing SQL database models and to export and import results in consistent patterns.
Authors: Jamie Gilbert [aut, cre]
Maintainer: Jamie Gilbert <[email protected]>
License: Apache License (== 2.0)
Version: 0.5.11
Built: 2024-11-01 11:37:23 UTC
Source: CRAN
Class for handling DatabaseConnector::connection objects with a consistent R6 interface for pooled and non-pooled connections. Allows a connection to be cleanly opened, closed, and stored within class/object variables.
DatabaseConnector connection instance
boolean TRUE if connection is valid
connectionDetails
DatabaseConnector connectionDetails object
con
DatabaseConnector connection object
isActive
Is the connection active or not
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
new()
ConnectionHandler$new( connectionDetails, loadConnection = TRUE, snakeCaseToCamelCase = TRUE )
connectionDetails
DatabaseConnector::connectionDetails class
loadConnection
Boolean option to load connection right away
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
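A minimal construction sketch, assuming a local SQLite file via DatabaseConnector (the file name is illustrative; any supported dbms works):

```r
library(ResultModelManager)

# Illustrative connection details for a local SQLite database
connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "sqlite",
  server = "example_db.sqlite"
)

handler <- ConnectionHandler$new(
  connectionDetails = connectionDetails,
  loadConnection = FALSE,        # defer connecting until first use
  snakeCaseToCamelCase = TRUE
)
```

The handler can then be passed to other classes (e.g. QueryNamespace) that expect a ConnectionHandler instance.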
dbms()
Get the dbms type of the connection
ConnectionHandler$dbms()
tbl()
Get a dplyr table object (i.e. lazily loaded)
ConnectionHandler$tbl(table, databaseSchema = NULL)
table
table name
databaseSchema
databaseSchema to which the table belongs
renderTranslateSql()
Masked call to SqlRender
ConnectionHandler$renderTranslateSql(sql, ...)
sql
Sql query string
...
Ellipsis
initConnection()
Load the connection
ConnectionHandler$initConnection()
getConnection()
Returns connection for use with standard DatabaseConnector calls. Connects automatically if it isn't yet loaded
ConnectionHandler$getConnection()
closeConnection()
Closes the connection (if active)
ConnectionHandler$closeConnection()
finalize()
Closes the connection (if active)
ConnectionHandler$finalize()
dbIsValid()
Masks call to DBI::dbIsValid. Returns FALSE if the connection is NULL
ConnectionHandler$dbIsValid()
queryDb()
Query the database and return the resulting data.frame.
If the environment variable LIMIT_ROW_COUNT is set, returned rows are limited to this value (no default). The row limit is intended for web applications that may cause a denial of service if they consume too many resources.
ConnectionHandler$queryDb( sql, snakeCaseToCamelCase = self$snakeCaseToCamelCase, overrideRowLimit = FALSE, ... )
sql
sql query string
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
overrideRowLimit
(Optional) Boolean. In some cases where a row limit is enforced on the system, you may wish to ignore it.
...
Additional query parameters
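A sketch of queryDb in use, assuming a seeded SQLite database (the table and its contents are invented here purely for illustration):

```r
library(ResultModelManager)

connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "sqlite",
  server = "example_db.sqlite"
)

# Seed a small table so the query has something to return
conn <- DatabaseConnector::connect(connectionDetails)
DatabaseConnector::insertTable(
  connection = conn,
  tableName = "cohort",
  data = data.frame(cohort_id = 1:3, cohort_name = c("a", "b", "c"))
)
DatabaseConnector::disconnect(conn)

handler <- ConnectionHandler$new(connectionDetails = connectionDetails)

# @-parameters are rendered by SqlRender before execution
results <- handler$queryDb(
  "SELECT * FROM main.cohort WHERE cohort_id = @cohort_id;",
  cohort_id = 1
)

handler$closeConnection()
unlink("example_db.sqlite")
```

With snakeCaseToCamelCase left at its default, the returned columns are cohortId and cohortName.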
executeSql()
Execute a set of database queries
ConnectionHandler$executeSql(sql, ...)
sql
sql query string
...
Additional query parameters
queryFunction()
Query function that can be overridden in subclasses (e.g. to use a different base function or to intercept the query). Does not translate or render the SQL.
ConnectionHandler$queryFunction( sql, snakeCaseToCamelCase = self$snakeCaseToCamelCase, connection = self$getConnection() )
sql
sql query string
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
connection
(Optional) connection object
executeFunction()
Execute function that can be overridden in subclasses (e.g. to use a different base function or to intercept the query). Does not translate or render the SQL.
ConnectionHandler$executeFunction(sql, connection = self$getConnection())
sql
sql query string
connection
connection object
clone()
The objects of this class are cloneable with this method.
ConnectionHandler$clone(deep = FALSE)
deep
Whether to make a deep clone.
Create a QueryNamespace instance from either a connection handler or a connectionDetails object. Allows construction with various options not handled by QueryNamespace$new.
Note: having multiple table prefixes for multiple table namespaces is currently not supported.
createQueryNamespace( connectionDetails = NULL, connectionHandler = NULL, usePooledConnection = FALSE, tableSpecification = NULL, resultModelSpecificationPath = NULL, tablePrefix = "", snakeCaseToCamelCase = TRUE, ... )
connectionDetails |
An object of type |
connectionHandler |
ResultModelManager ConnectionHandler or PooledConnectionHandler instance |
usePooledConnection |
Use Pooled database connection instead of standard DatabaseConnector single connection. |
tableSpecification |
Table specification data.frame |
resultModelSpecificationPath |
(optional) csv file or files for tableSpecifications - must conform to table spec format. |
tablePrefix |
String to prefix table names with - default is empty string |
snakeCaseToCamelCase |
convert snakecase results to camelCase field names (TRUE by default) |
... |
Ellipsis - use for any additional string keys to replace |
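A sketch of assembling a namespace directly from connectionDetails; the table specification and schema name are hypothetical:

```r
library(ResultModelManager)

connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "sqlite",
  server = "example_db.sqlite"
)

# Hypothetical single-table specification
tableSpecification <- data.frame(
  tableName = "cohort",
  columnName = c("cohort_id", "cohort_name"),
  primaryKey = c(TRUE, FALSE),
  dataType = c("int", "varchar")
)

namespace <- createQueryNamespace(
  connectionDetails = connectionDetails,
  tableSpecification = tableSpecification,
  tablePrefix = "cd_",
  result_schema = "main"   # extra key passed through the ellipsis
)

# @cohort is rewritten to the prefixed table name
rendered <- namespace$render("SELECT * FROM @result_schema.@cohort;")
```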
For a given table specification file, create an export manager instance for creating results data sets that conform to the data model.
This checks that, at export time, internal validity is assured for the data (e.g. primary keys are valid, data types are compatible).
In addition, this utility will create a manifest object that can be used to maintain the validity of data.
If an instance of a DataMigrationManager is present and available, a packageVersion reference (where applicable) and migration set will be referenced, allowing data to be imported into a database schema at a specific version.
createResultExportManager( tableSpecification, exportDir, minCellCount = getOption("ohdsi.minCellCount", default = 5), databaseId = NULL )
tableSpecification |
Table specification data.frame |
exportDir |
Directory files are being exported to |
minCellCount |
Minimum cell count - recommended that you set this with options("ohdsi.minCellCount" = count) in all R projects. Default is 5 |
databaseId |
database identifier - required when exporting according to many specs |
R6 class for management of database migration
Data frame of all migrations, including file name, order and execution status
migrationPath
Path migrations exist in
databaseSchema
Database schema to execute on
packageName
packageName, can be null
tablePrefix
tablePrefix, can be empty character vector
packageTablePrefix
packageTablePrefix, can be empty character vector
new()
DataMigrationManager$new( connectionDetails, databaseSchema, tablePrefix = "", packageTablePrefix = "", migrationPath, packageName = NULL, migrationRegexp = .defaultMigrationRegexp )
connectionDetails
DatabaseConnector connection details object
databaseSchema
Database Schema to execute on
tablePrefix
Optional table prefix for all tables (e.g. plp, cm, cd etc)
packageTablePrefix
A table prefix when used in conjunction with other package results schema, e.g. "cd_", "sccs_", "plp_", "cm_"
migrationPath
Path to the location of migration sql files. If in package mode, this should just be a folder (e.g. "migrations") that lives in the location "sql/sql_server" (and other database platforms). If in folder mode, the folder must include "sql_server" in the relative path (e.g. if migrationPath = 'migrations' then the folder 'migrations/sql_server' should exist)
packageName
If in package mode, the name of the R package
migrationRegexp
(Optional) regular expression pattern; default is (Migration_([0-9]+))-(.+).sql
Migration table exists
migrationTableExists()
Check if migration table is present in schema
DataMigrationManager$migrationTableExists()
boolean
getMigrationsPath()
Get path to sql migration files
DataMigrationManager$getMigrationsPath(dbms = "sql server")
dbms
Optionally specify the dbms that the migration fits under
getStatus()
Get status of all migrations (executed or not)
DataMigrationManager$getStatus()
getConnectionHandler()
Return connection handler instance
DataMigrationManager$getConnectionHandler()
ConnectionHandler instance
check()
Check if file names are valid for migrations
DataMigrationManager$check()
executeMigrations()
Execute any unexecuted migrations
DataMigrationManager$executeMigrations(stopMigrationVersion = NULL)
stopMigrationVersion
(Optional) Migrate to a specific migration number
isPackage()
Is a package folder structure or not
DataMigrationManager$isPackage()
finalize()
close database connection
DataMigrationManager$finalize()
clone()
The objects of this class are cloneable with this method.
DataMigrationManager$clone(deep = FALSE)
deep
Whether to make a deep clone.
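A hedged workflow sketch in folder mode; the migration path and file names below are assumptions, not files shipped with the package:

```r
library(ResultModelManager)

connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "sqlite",
  server = "example_db.sqlite"
)

# Folder mode: "migrations/sql_server" is assumed to contain files
# matching the default regexp, e.g. Migration_1-create_tables.sql
manager <- DataMigrationManager$new(
  connectionDetails = connectionDetails,
  databaseSchema = "main",
  tablePrefix = "cd_",
  migrationPath = "migrations"
)

# Validate file names, then run any unexecuted migrations in order
if (manager$check()) {
  manager$executeMigrations()
}
manager$finalize()
```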
See ConnectionHandler for information on the returned class
Delete all rows for database id
deleteAllRowsForDatabaseId( connection, schema, tableName, databaseId, idIsInt = TRUE )
connection |
DatabaseConnector connection instance |
schema |
The schema on the postgres server where the results table exists |
tableName |
Database table name |
databaseId |
Results source database identifier |
idIsInt |
Identifier is a numeric type? If not, character is used |
Only PostgreSQL servers are supported.
Delete results rows for primary key values from database server tables
deleteAllRowsForPrimaryKey(connection, schema, tableName, keyValues)
connection |
DatabaseConnector connection instance |
schema |
The schema on the postgres server where the results table exists |
tableName |
Database table name |
keyValues |
Key values of results rows to be deleted |
Only PostgreSQL servers are supported.
Take a csv schema definition and create a basic sql script with it. Returns a string containing the sql for the table.
generateSqlSchema( csvFilepath = NULL, schemaDefinition = NULL, sqlOutputPath = NULL, overwrite = FALSE )
csvFilepath |
Path to schema file. Csv file must have the columns: "table_name", "column_name", "data_type", "primary_key" |
schemaDefinition |
A schemaDefinition data.frame with the columns: tableName, columnName, dataType, isRequired, primaryKey |
sqlOutputPath |
File to write sql to. |
overwrite |
Boolean - overwrite existing file? |
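A sketch using an in-memory schemaDefinition rather than a csv file; the Yes/No flag encoding follows the OHDSI results specification convention, so check it against your own spec files:

```r
library(ResultModelManager)

# Minimal hypothetical schema definition for a single table
schemaDefinition <- data.frame(
  tableName = "cohort",
  columnName = c("cohort_id", "cohort_name"),
  dataType = c("int", "varchar"),
  isRequired = c("Yes", "No"),
  primaryKey = c("Yes", "No")
)

# Returns the DDL as a string; no file is written when sqlOutputPath is NULL
sql <- generateSqlSchema(schemaDefinition = schemaDefinition)
cat(sql)
```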
Grant a given permission for all tables on a given tableSpecification
Very useful if you're hosting studies on data.ohdsi.org or other postgresql instances
NOTE: only tested on postgresql; users of other platforms may have sql translation issues
grantTablePermissions( connectionDetails = NULL, connection = NULL, tableSpecification, databaseSchema, tablePrefix = "", permissions = "SELECT", user )
connectionDetails |
An object of type |
connection |
DatabaseConnector connection instance |
tableSpecification |
data.frame conforming to table spec (must contain tableName field) |
databaseSchema |
database schema to run this on |
tablePrefix |
String to prefix table names with - default is empty string |
permissions |
permissions to generate; must be one of SELECT, INSERT, DELETE or UPDATE |
user |
database user to grant permissions to |
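A hedged sketch of granting read access; the PostgreSQL server, schema and user names here are all hypothetical:

```r
library(ResultModelManager)

# Hypothetical PostgreSQL admin connection
connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "postgresql",
  server = "localhost/ohdsi",
  user = "admin",
  password = Sys.getenv("OHDSI_ADMIN_PASSWORD")
)

# Spec only needs a tableName field for permission grants
tableSpecification <- data.frame(tableName = c("cohort", "cohort_count"))

grantTablePermissions(
  connectionDetails = connectionDetails,
  tableSpecification = tableSpecification,
  databaseSchema = "my_results_schema",
  tablePrefix = "cd_",
  permissions = "SELECT",
  user = "shiny_readonly_user"   # hypothetical read-only account
)
```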
Get specifications from a given file path
loadResultsDataModelSpecifications(filePath)
filePath |
path to a valid csv file |
A tibble data frame object with specifications
Transparently works the same way as a standard connection handler but stores pooled connections.
Useful for long running applications that serve multiple concurrent requests.
Note that a side effect of using this is that each call to this increments the .GlobalEnv attribute RMMPooledHandlerCount
boolean TRUE if connection is valid
ResultModelManager::ConnectionHandler
-> PooledConnectionHandler
new()
PooledConnectionHandler$new( connectionDetails = NULL, snakeCaseToCamelCase = TRUE, loadConnection = TRUE, dbConnectArgs = NULL, forceJdbcConnection = TRUE )
connectionDetails
DatabaseConnector::connectionDetails class
snakeCaseToCamelCase
(Optional) Boolean. return the results columns in camel case (default)
loadConnection
Boolean option to load connection right away
dbConnectArgs
Optional arguments to call pool::dbPool overrides default usage of connectionDetails
forceJdbcConnection
Force JDBC connection (requires using DatabaseConnector ConnectionDetails)
initConnection()
Overrides ConnectionHandler call; initializes a pooled db connection.
PooledConnectionHandler$initConnection()
getCheckedOutConnectionPath()
Used for getting a checked-out connection from a given environment (if one exists)
PooledConnectionHandler$getCheckedOutConnectionPath()
.deferedFrame
defaults to the parent frame of the calling block
getConnection()
Returns a connection from the pool. When the desired frame exits, the connection will be returned to the pool. As a side effect, the connection is stored as an attribute within the calling frame (e.g. the same function) to prevent multiple connections being spawned, which limits performance.
If you call this, you need to think about returning the object, or you may create a connection that is never returned to the pool.
PooledConnectionHandler$getConnection(.deferedFrame = parent.frame(n = 2))
.deferedFrame
defaults to the parent frame of the calling block
dbms()
Get the dbms type of the connection
PooledConnectionHandler$dbms()
closeConnection()
Overrides ConnectionHandler call - closes all active connections opened with getConnection
PooledConnectionHandler$closeConnection()
queryDb()
Query the database and return the resulting data.frame.
If the environment variable LIMIT_ROW_COUNT is set, returned rows are limited to this value (no default). The row limit is intended for web applications that may cause a denial of service if they consume too many resources.
PooledConnectionHandler$queryDb( sql, snakeCaseToCamelCase = self$snakeCaseToCamelCase, overrideRowLimit = FALSE, ... )
sql
sql query string
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
overrideRowLimit
(Optional) Boolean. In some cases where a row limit is enforced on the system, you may wish to ignore it.
...
Additional query parameters
executeSql()
Execute a set of database queries
PooledConnectionHandler$executeSql(sql, ...)
sql
sql query string
...
Additional query parameters
queryFunction()
Overrides ConnectionHandler Call. Does not translate or render sql.
PooledConnectionHandler$queryFunction( sql, snakeCaseToCamelCase = self$snakeCaseToCamelCase, connection )
sql
sql query string
snakeCaseToCamelCase
(Optional) Boolean. Return the result columns in camelCase (default TRUE)
connection
db connection assumes pooling is handled outside of call
executeFunction()
Overrides ConnectionHandler Call. Does not translate or render sql.
PooledConnectionHandler$executeFunction(sql, connection)
sql
sql query string
connection
DatabaseConnector connection. Assumes pooling is handled outside of call
clone()
The objects of this class are cloneable with this method.
PooledConnectionHandler$clone(deep = FALSE)
deep
Whether to make a deep clone.
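A minimal pooled-handler sketch; the SQLite file is illustrative, and in a real long-running application (e.g. a Shiny server) the handler would typically live for the lifetime of the process:

```r
library(ResultModelManager)

connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "sqlite",
  server = "example_db.sqlite"
)

pooledHandler <- PooledConnectionHandler$new(
  connectionDetails = connectionDetails,
  snakeCaseToCamelCase = TRUE,
  loadConnection = TRUE
)

# Each query checks a connection out of the pool; it is returned to the
# pool when the calling frame exits
result <- pooledHandler$queryDb("SELECT 1 AS n;")

pooledHandler$closeConnection()
unlink("example_db.sqlite")
```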
Given a results specification and a ConnectionHandler instance, this class allows queries to be namespaced within any tables specified within a list of pre-determined tables. This allows the encapsulation of queries, using specific table names in a consistent manner that is straightforward to maintain over time.
tablePrefix
tablePrefix to use
new()
initialize class
QueryNamespace$new( connectionHandler = NULL, tableSpecification = NULL, tablePrefix = "", ... )
connectionHandler
ConnectionHandler instance. See also: ConnectionHandler
tableSpecification
tableSpecification data.frame
tablePrefix
constant string to prefix all tables with
...
additional replacement variables e.g. database_schema, vocabulary_schema etc
setConnectionHandler()
Set the connection handler object for this object
QueryNamespace$setConnectionHandler(connectionHandler)
connectionHandler
ConnectionHandler instance
getConnectionHandler()
Get the connection handler object or throw an error if not set
QueryNamespace$getConnectionHandler()
addReplacementVariable()
add a variable to automatically be replaced in query strings (e.g. @database_schema.@table_name becomes 'database_schema.table_1')
QueryNamespace$addReplacementVariable(key, value, replace = FALSE)
key
variable name string (without @) to be replaced, e.g. "table_name"
value
atomic value for replacement
replace
if a variable of the same key is found, overwrite it
addTableSpecification()
add the table names from a table specification as variables to automatically be replaced in query strings (e.g. @database_schema.@table_name becomes 'database_schema.table_1')
QueryNamespace$addTableSpecification( tableSpecification, useTablePrefix = TRUE, tablePrefix = self$tablePrefix, replace = TRUE )
tableSpecification
table specification data.frame conforming to column names tableName, columnName, dataType and primaryKey
useTablePrefix
prefix the results with the tablePrefix (TRUE)
tablePrefix
prefix string - defaults to class variable set during initialization
replace
replace existing variables of the same name
render()
Call to SqlRender::render replacing names stored in this class
QueryNamespace$render(sql, ...)
sql
query string
...
additional variables to be passed to SqlRender::render - will overwrite anything in namespace
queryDb()
Call to the connection handler's queryDb, with namespace replacement variables applied
QueryNamespace$queryDb(sql, ...)
sql
query string
...
additional variables to send to SqlRender::render
executeSql()
Call to execute sql within namespaced queries
QueryNamespace$executeSql(sql, ...)
sql
query string
...
additional variables to send to SqlRender::render
getVars()
returns full list of variables that will be replaced
QueryNamespace$getVars()
finalize()
Close connections etc
QueryNamespace$finalize()
clone()
The objects of this class are cloneable with this method.
QueryNamespace$clone(deep = FALSE)
deep
Whether to make a deep clone.
library(ResultModelManager)

# Create some junk test data
connectionDetails <- DatabaseConnector::createConnectionDetails(
  server = "test_db.sqlite",
  dbms = "sqlite"
)
conn <- DatabaseConnector::connect(connectionDetails)
DatabaseConnector::insertTable(
  connection = conn,
  tableName = "cd_cohort",
  data = data.frame(
    cohort_id = c(1, 2, 3),
    cohort_name = c("cohort one", "cohort two", "cohort three"),
    json = "{}",
    sql = "SELECT 1"
  )
)
DatabaseConnector::disconnect(conn)

connectionHandler <- ConnectionHandler$new(connectionDetails = connectionDetails)
tableSpecification <- data.frame(
  tableName = "cohort",
  columnName = c("cohort_id", "cohort_name", "json", "sql"),
  primaryKey = c(TRUE, FALSE, FALSE, FALSE),
  dataType = c("int", "varchar", "varchar", "varchar")
)
cohortNamespace <- QueryNamespace$new(
  connectionHandler = connectionHandler,
  tableSpecification = tableSpecification,
  result_schema = "main",
  tablePrefix = "cd_"
)
sql <- "SELECT * FROM @result_schema.@cohort WHERE cohort_id = @cohort_id"

# Returns: "SELECT * FROM main.cd_cohort WHERE cohort_id = @cohort_id"
print(cohortNamespace$render(sql))

# Returns query result
result <- cohortNamespace$queryDb(sql, cohort_id = 1)

# cleanup test data
unlink("test_db.sqlite")
EXPERIMENTAL - this feature is still in the design stage and it is not recommended that you implement this for your package at this stage. Utility for simplifying the export of results to files from sql queries.
Note that this utility is not strictly thread safe, though separate processes can export separate tables without issue. When exporting the same table across multiple threads, primary key checks may create issues.
exportDir
directory path to export files to
new()
Create a class for exporting results from a study in a standard, consistent manner
ResultExportManager$new( tableSpecification, exportDir, minCellCount = getOption("ohdsi.minCellCount", default = 5), databaseId = NULL )
tableSpecification
Table specification data.frame
exportDir
Directory files are being exported to
minCellCount
Minimum cell count - recommended that you set this with options("ohdsi.minCellCount" = count) in all R projects. Default is 5
databaseId
database identifier - required when exporting according to many specs
getTableSpec()
Get specification of table
ResultExportManager$getTableSpec(exportTableName)
exportTableName
table name
getMinColValues()
Columns to convert to minimum for a given table name
ResultExportManager$getMinColValues(rows, exportTableName)
rows
data.frame of rows
exportTableName
string table name - must be defined in spec
checkRowTypes()
Check types of rows before exporting
ResultExportManager$checkRowTypes(rows, exportTableName)
rows
data.frame of rows to export
exportTableName
table name
listTables()
list all tables in schema
ResultExportManager$listTables()
checkPrimaryKeys()
Checks to see if the rows conform to the valid primary keys. If the same table has already been checked in the life of this object, set "invalidateCache" to TRUE, as the keys will be cached in a temporary file on disk.
ResultExportManager$checkPrimaryKeys( rows, exportTableName, invalidateCache = FALSE )
rows
data.frame to export
exportTableName
Table name (must be in spec)
invalidateCache
logical - if starting a fresh export use this to delete the cache of primary keys
exportDataFrame()
This method is intended for use when exporting a data.frame, not a query from an rdbms table. For example, if you perform a transformation in R, this method will check primary keys, min cell counts and data types before writing the file according to the table spec.
ResultExportManager$exportDataFrame(rows, exportTableName, append = FALSE)
rows
Rows to export
exportTableName
Table name
append
logical - if TRUE will append the result to a file, otherwise the file will be overwritten
exportQuery()
Writes files in batches to stop overflowing system memory. Checks primary keys on write. Checks minimum cell counts.
ResultExportManager$exportQuery( connection, sql, exportTableName, transformFunction = NULL, transformFunctionArgs = list(), append = FALSE, ... )
connection
DatabaseConnector connection instance
sql
OHDSI sql string to export tables
exportTableName
Name of table to export (in snake_case format)
transformFunction
(optional) callback to transform the data set; must take two parameters - rows and pos.
Following this transformation callback, results will be verified against the data model, primary keys will be checked and minCellValue rules will be enforced
transformFunctionArgs
arguments to be passed to the transformation function
append
Logical add results to existing file, if FALSE (default) creates a new file and removes primary key validation cache
...
extra parameters passed to sql
getManifestList()
Create a metadata set for each collection of result files, with a sha256 hash for all files
ResultExportManager$getManifestList( packageName = NULL, packageVersion = NULL, migrationsPath = NULL, migrationRegexp = .defaultMigrationRegexp )
packageName
if an R analysis package, specify the name
packageVersion
if an analysis package, specify the version
migrationsPath
path to sql migrations (use the top-level folder, e.g. sql/sql_server/migrations)
migrationRegexp
(optional) regular expression to search for sql files. It is not recommended to change the default.
writeManifest()
Write manifest json
ResultExportManager$writeManifest(...)
...
See getManifestList
clone()
The objects of this class are cloneable with this method.
ResultExportManager$clone(deep = FALSE)
deep
Whether to make a deep clone.
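A hedged sketch of the export workflow for an in-memory data.frame; the table specification and rows below are invented for illustration, and since this feature is experimental the details may change:

```r
library(ResultModelManager)

# Hypothetical spec, mirroring the format used elsewhere in this document
tableSpecification <- data.frame(
  tableName = "cohort",
  columnName = c("cohort_id", "cohort_name"),
  primaryKey = c(TRUE, FALSE),
  dataType = c("int", "varchar")
)

exportDir <- file.path(tempdir(), "export_example")
exportManager <- createResultExportManager(
  tableSpecification = tableSpecification,
  exportDir = exportDir
)

# Rows are checked against primary keys, types and minCellCount on export
rows <- data.frame(cohort_id = 1:2, cohort_name = c("a", "b"))
exportManager$exportDataFrame(rows, "cohort")
exportManager$writeManifest()
```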
This function will unzip the zipFile to the resultsFolder and assert that the file resultsDataModelSpecification.csv exists in the resultsFolder, to ensure that it will work with uploadResults.
unzipResults(zipFile, resultsFolder)
zipFile |
The location of the .zip file that holds the results to upload |
resultsFolder |
The folder to use when unzipping the .zip file. If this folder does not exist, this function will attempt to create the folder. |
Requires that the results data model tables have been created following the specifications; see the generateSqlSchema function.
Results files should use the snake_case format for table headers, not camelCase.
Set the POSTGRES_PATH environmental variable to the path of the folder containing the psql executable to enable bulk upload (recommended).
uploadResults( connection = NULL, connectionDetails = NULL, schema, resultsFolder, tablePrefix = "", forceOverWriteOfSpecifications = FALSE, purgeSiteDataBeforeUploading = TRUE, databaseIdentifierFile = "cdm_source_info.csv", runCheckAndFixCommands = FALSE, warnOnMissingTable = TRUE, purgeDataModel = FALSE, specifications )
connection |
An object of type |
connectionDetails |
An object of type |
schema |
The schema on the postgres server where the tables have been created. |
resultsFolder |
The path to the folder containing the results to upload.
See |
tablePrefix |
String to prefix table names with - default is empty string |
forceOverWriteOfSpecifications |
If TRUE, specifications of the phenotypes, cohort definitions, and analysis will be overwritten if they already exist on the database. Only use this if these specifications have changed since the last upload. |
purgeSiteDataBeforeUploading |
If TRUE, before inserting data for a specific databaseId all the data for that site will be dropped. This assumes the results folder contains the full data for that data site. |
databaseIdentifierFile |
File that references the databaseId field (used when purgeSiteDataBeforeUploading == TRUE). You may specify a relative path for the cdmSourceFile and the function will assume it resides in the resultsFolder. Alternatively, you can provide a path outside of the resultsFolder for this file. |
runCheckAndFixCommands |
If TRUE, the upload code will attempt to fix column names, data types and duplicate rows. This parameter is kept for legacy reasons - it is strongly recommended that you correct errors in your results where those results are assembled instead of relying on this option to try and fix it during upload. |
warnOnMissingTable |
Boolean, print a warning if a table file is missing. |
purgeDataModel |
This function will purge all data from the tables in the specification prior to upload. Use with care. If interactive this will require further input. |
specifications |
A tibble data frame object with specifications. |
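A hedged end-to-end upload sketch; the server, schema, zip file and credentials are all hypothetical and would come from your own study infrastructure:

```r
library(ResultModelManager)

# Hypothetical PostgreSQL results database
connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "postgresql",
  server = "localhost/ohdsi",
  user = "results_writer",
  password = Sys.getenv("RESULTS_DB_PASSWORD")
)

# Unzip a results archive and check resultsDataModelSpecification.csv exists
unzipResults(zipFile = "results.zip", resultsFolder = "results")

specifications <- loadResultsDataModelSpecifications(
  file.path("results", "resultsDataModelSpecification.csv")
)

uploadResults(
  connectionDetails = connectionDetails,
  schema = "my_results_schema",
  resultsFolder = "results",
  tablePrefix = "cd_",
  purgeSiteDataBeforeUploading = FALSE,
  specifications = specifications
)
```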