Title: C++ Interface to PostgreSQL
Description: Fully DBI-compliant C++-backed interface to PostgreSQL <https://www.postgresql.org/>, an open-source relational database.
Authors: Hadley Wickham [aut], Jeroen Ooms [aut], Kirill Müller [aut, cre], RStudio [cph], R Consortium [fnd], Tomoaki Nishiyama [ctb] (code for encoding vectors into strings derived from RPostgreSQL)
Maintainer: Kirill Müller <[email protected]>
License: MIT + file LICENSE
Version: 1.4.7
Built: 2024-11-24 06:32:30 UTC
Source: CRAN
Fully DBI-compliant Rcpp-backed interface to PostgreSQL <https://www.postgresql.org/>, an open-source relational database.
Maintainer: Kirill Müller <[email protected]>
Authors:
Hadley Wickham
Jeroen Ooms
Other contributors:
RStudio [copyright holder]
R Consortium [funder]
Tomoaki Nishiyama (Code for encoding vectors into strings derived from RPostgreSQL) [contributor]
Useful links:
Report bugs at https://github.com/r-dbi/RPostgres/issues
DBI::dbConnect() establishes a connection to a database. Set drv = Postgres() to connect to a PostgreSQL(-ish) database, or drv = Redshift() to connect to an AWS Redshift cluster.
Manually disconnecting a connection is not necessary with RPostgres, but it is still recommended: if you delete the object containing the connection, it is disconnected automatically during the next garbage collection, with a warning.
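A sketch of a connection with explicit parameters; the host, dbname, user, and password values are placeholders for your own settings:

library(DBI)

# Connect with explicit parameters instead of relying on environment variables
con <- dbConnect(
  RPostgres::Postgres(),
  host = "localhost",      # placeholder
  port = 5432,
  dbname = "mydb",         # placeholder
  user = "myuser",         # placeholder
  password = "mypassword", # placeholder
  bigint = "integer64"
)

# Disconnect explicitly rather than relying on garbage collection
dbDisconnect(con)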
Postgres()

## S4 method for signature 'PqDriver'
dbConnect(
  drv,
  dbname = NULL,
  host = NULL,
  port = NULL,
  password = NULL,
  user = NULL,
  service = NULL,
  ...,
  bigint = c("integer64", "integer", "numeric", "character"),
  check_interrupts = FALSE,
  timezone = "UTC",
  timezone_out = NULL
)

## S4 method for signature 'PqConnection'
dbDisconnect(conn, ...)
drv: DBI::DBIDriver. Use Postgres() to connect to a PostgreSQL(-ish) database, or Redshift() to connect to an AWS Redshift cluster.
dbname: Database name. If NULL, defaults to the user name.
host, port: Host and port. If NULL, values are taken from the PGHOST and PGPORT environment variables.
user, password: User name and password. If NULL, values are taken from the PGUSER and PGPASSWORD environment variables, or from the appropriate line in ~/.pgpass.
service: Name of service to connect as. If NULL, the service file is ignored; otherwise, connection parameters are loaded from pg_service.conf.
...: Other name-value pairs that describe additional connection options as described at https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
bigint: The R type that 64-bit integer types should be mapped to; the default is bit64::integer64, which allows the full range of 64-bit integers.
check_interrupts: Should user interrupts be checked during the query execution (before the first row of data is available)? Setting to TRUE allows a long-running query to be interrupted.
timezone: Sets the timezone for the connection. The default is "UTC". If NULL, no timezone is set and the server's time zone is used.
timezone_out: The time zone returned to R; defaults to the value of timezone.
conn: Connection to disconnect.
library(DBI)

# Pass more arguments as necessary to dbConnect()
con <- dbConnect(RPostgres::Postgres())
dbDisconnect(con)
To retrieve results a chunk at a time, use dbSendQuery(), dbFetch(), then dbClearResult(). Alternatively, if you want all the results (and they fit in memory), use dbGetQuery(), which sends, fetches, and clears for you.
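A sketch of the chunked pattern; the query against the built-in pg_catalog.pg_tables view and the chunk size are arbitrary:

library(DBI)

con <- dbConnect(RPostgres::Postgres())
res <- dbSendQuery(con, "SELECT * FROM pg_catalog.pg_tables")

# Fetch in chunks of 50 rows until the result set is exhausted
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 50)
  print(nrow(chunk))
}

dbClearResult(res)
dbDisconnect(con)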
## S4 method for signature 'PqResult'
dbBind(res, params, ...)

## S4 method for signature 'PqResult'
dbClearResult(res, ...)

## S4 method for signature 'PqResult'
dbFetch(res, n = -1, ..., row.names = FALSE)

## S4 method for signature 'PqResult'
dbHasCompleted(res, ...)

## S4 method for signature 'PqConnection'
dbSendQuery(conn, statement, params = NULL, ..., immediate = FALSE)
res: A PqResult produced by DBI::dbSendQuery().
params: A list of query parameters to be substituted into a parameterised query. Query parameters are sent as strings, and the correct type is imputed by PostgreSQL. If this fails, you can manually cast the parameter, e.g. with "$1::bigint".
...: Other arguments needed for compatibility with generic (currently ignored).
n: Number of rows to return. If less than zero, returns all rows.
row.names: Either TRUE, FALSE, NA or a string. If TRUE, always translate row names to a column called "row_names". If FALSE, never translate row names. If NA, translate row names only if they are a character vector. A string is equivalent to TRUE, but allows you to override the default column name. For backward compatibility, NULL is equivalent to FALSE.
conn: A PqConnection created by dbConnect().
statement: An SQL string to execute.
immediate: If TRUE, the statement is executed immediately instead of being prepared; this allows passing multiple statements separated by semicolons, but disables query parameters (see below).
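As a sketch of parameter binding with $1-style placeholders (the cities table and its values are made up for illustration):

library(DBI)

con <- dbConnect(RPostgres::Postgres())
dbWriteTable(
  con, "cities",
  data.frame(name = c("Oslo", "Lima"), pop = c(700000L, 9700000L)),
  temporary = TRUE
)

# Parameters are passed positionally and bound to $1, $2, ...
res <- dbSendQuery(con, "SELECT * FROM cities WHERE pop > $1", params = list(1000000L))
dbFetch(res)
dbClearResult(res)

# Alternatively, bind after sending the query
res <- dbSendQuery(con, "SELECT * FROM cities WHERE name = $1")
dbBind(res, list("Oslo"))
dbFetch(res)
dbClearResult(res)

dbRemoveTable(con, "cities")
dbDisconnect(con)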
With immediate = TRUE, it is possible to pass multiple queries or statements, separated by semicolons. For multiple statements, the resulting value of dbGetRowsAffected() corresponds to the total number of affected rows. If multiple queries are used, all queries must return data with the same column names and types. Queries and statements can be mixed.
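A sketch of the multi-statement form, assuming dbExecute() forwards immediate to the underlying dbSendStatement() method; the scratch table is a throwaway example:

library(DBI)

con <- dbConnect(RPostgres::Postgres())

# Several statements in one call; requires immediate = TRUE
dbExecute(
  con,
  "CREATE TEMPORARY TABLE scratch (x integer);
   INSERT INTO scratch VALUES (1), (2), (3);
   DELETE FROM scratch WHERE x > 2;",
  immediate = TRUE
)

dbDisconnect(con)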
library(DBI)

db <- dbConnect(RPostgres::Postgres())
dbWriteTable(db, "usarrests", datasets::USArrests, temporary = TRUE)

# Run query to get results as data frame
dbGetQuery(db, "SELECT * FROM usarrests LIMIT 3")

# Send query to pull results in batches
res <- dbSendQuery(db, "SELECT * FROM usarrests")
dbFetch(res, n = 2)
dbFetch(res, n = 2)
dbHasCompleted(res)
dbClearResult(res)

dbRemoveTable(db, "usarrests")
dbDisconnect(db)
dbAppendTable() is overridden because RPostgres uses placeholders of the form $1, $2, etc. instead of ?.
dbWriteTable() executes several SQL statements that create/overwrite a table and fill it with values. RPostgres does not use parameterised queries to insert rows, because benchmarks revealed that this was considerably slower than using a single SQL string.
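A brief sketch of appending rows with dbAppendTable(); the scratch_df table and data frame are made up for illustration:

library(DBI)

con <- dbConnect(RPostgres::Postgres())

# Create an empty table from zero rows, then append the data
df <- data.frame(id = 1:3, label = c("a", "b", "c"))
dbWriteTable(con, "scratch_df", df[0, ], temporary = TRUE)
dbAppendTable(con, "scratch_df", df)
dbReadTable(con, "scratch_df")

dbDisconnect(con)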
## S4 method for signature 'PqConnection'
dbAppendTable(conn, name, value, copy = NULL, ..., row.names = NULL)

## S4 method for signature 'PqConnection,Id'
dbExistsTable(conn, name, ...)

## S4 method for signature 'PqConnection,character'
dbExistsTable(conn, name, ...)

## S4 method for signature 'PqConnection,Id'
dbListFields(conn, name, ...)

## S4 method for signature 'PqConnection,character'
dbListFields(conn, name, ...)

## S4 method for signature 'PqConnection'
dbListObjects(conn, prefix = NULL, ...)

## S4 method for signature 'PqConnection'
dbListTables(conn, ...)

## S4 method for signature 'PqConnection,character'
dbReadTable(conn, name, ..., check.names = TRUE, row.names = FALSE)

## S4 method for signature 'PqConnection,character'
dbRemoveTable(conn, name, ..., temporary = FALSE, fail_if_missing = TRUE)

## S4 method for signature 'PqConnection,character,data.frame'
dbWriteTable(
  conn,
  name,
  value,
  ...,
  row.names = FALSE,
  overwrite = FALSE,
  append = FALSE,
  field.types = NULL,
  temporary = FALSE,
  copy = NULL
)

## S4 method for signature 'PqConnection'
sqlData(con, value, row.names = FALSE, ...)
conn: a PqConnection object, produced by DBI::dbConnect().
name: a character string specifying a table name. Names will be automatically quoted, so you can use any sequence of characters, not just any valid bare table name. Alternatively, pass a name quoted with dbQuoteIdentifier() or an identifier created with Id().
value: A data.frame to write to the database.
copy: If TRUE, the data is transferred with the COPY protocol, which is fast but not supported by all servers; if FALSE, plain SQL INSERT statements are used. The default NULL chooses automatically based on the connection.
...: Ignored.
row.names: Either TRUE, FALSE, NA or a string. If TRUE, always translate row names to a column called "row_names". If FALSE, never translate row names. If NA, translate row names only if they are a character vector. A string is equivalent to TRUE, but allows you to override the default column name. For backward compatibility, NULL is equivalent to FALSE.
prefix: A fully qualified path in the database's namespace, or NULL.
check.names: If TRUE, column names of the returned data frame are converted to valid R identifiers.
temporary: If TRUE, dbWriteTable() creates a temporary table; for dbRemoveTable(), only temporary tables are considered.
fail_if_missing: If FALSE, dbRemoveTable() succeeds when the table does not exist.
overwrite: a logical specifying whether to overwrite an existing table or not. Its default is FALSE.
append: a logical specifying whether to append to an existing table in the DBMS. Its default is FALSE.
field.types: character vector of named SQL field types, where the names are the names of the new table's columns. If missing, types are inferred with DBI::dbDataType().
con: A database connection.
Pass an identifier created with Id() as the name argument to specify the schema or catalog, e.g. name = Id(catalog = "my_catalog", schema = "my_schema", table = "my_table").
To specify the tablespace, use dbExecute(conn, "SET default_tablespace TO my_tablespace") before creating the table.
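A sketch of writing into a specific schema; my_schema and my_tablespace are placeholders and must already exist on your server:

library(DBI)

con <- dbConnect(RPostgres::Postgres())

# Write into an existing schema; "my_schema" is a placeholder
dbWriteTable(con, Id(schema = "my_schema", table = "cars_copy"), mtcars)
dbExistsTable(con, Id(schema = "my_schema", table = "cars_copy"))

# Optionally direct new tables to a specific tablespace first
# dbExecute(con, "SET default_tablespace TO my_tablespace")

dbRemoveTable(con, Id(schema = "my_schema", table = "cars_copy"))
dbDisconnect(con)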
library(DBI)

con <- dbConnect(RPostgres::Postgres())
dbListTables(con)
dbWriteTable(con, "mtcars", mtcars, temporary = TRUE)
dbReadTable(con, "mtcars")

dbListTables(con)
dbExistsTable(con, "mtcars")

# A zero row data frame just creates a table definition.
dbWriteTable(con, "mtcars2", mtcars[0, ], temporary = TRUE)
dbReadTable(con, "mtcars2")

dbDisconnect(con)
dbBegin() starts a transaction. dbCommit() and dbRollback() end the transaction by either committing or rolling back the changes.
## S4 method for signature 'PqConnection'
dbBegin(conn, ..., name = NULL)

## S4 method for signature 'PqConnection'
dbCommit(conn, ..., name = NULL)

## S4 method for signature 'PqConnection'
dbRollback(conn, ..., name = NULL)
conn: a PqConnection object, produced by DBI::dbConnect().
...: Unused, for extensibility.
name: If provided, uses the SAVEPOINT SQL syntax to establish, commit, or roll back to a named savepoint within an ongoing transaction.
A boolean, indicating success or failure.
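A sketch of named savepoints via the name argument; the tx_demo table is made up for illustration:

library(DBI)

con <- dbConnect(RPostgres::Postgres())
dbWriteTable(con, "tx_demo", data.frame(x = 1:3), temporary = TRUE)

dbBegin(con)
dbExecute(con, "DELETE FROM tx_demo WHERE x = 1")

# Establish a savepoint, make a change, then undo just that change
dbBegin(con, name = "sp1")
dbExecute(con, "DELETE FROM tx_demo")
dbRollback(con, name = "sp1")

dbCommit(con)
dbGetQuery(con, "SELECT count(*) FROM tx_demo")  # first DELETE persists, second is undone

dbRemoveTable(con, "tx_demo")
dbDisconnect(con)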
library(DBI)

con <- dbConnect(RPostgres::Postgres())
dbWriteTable(con, "USarrests", datasets::USArrests, temporary = TRUE)
dbGetQuery(con, 'SELECT count(*) from "USarrests"')

dbBegin(con)
dbExecute(con, 'DELETE from "USarrests" WHERE "Murder" > 1')
dbGetQuery(con, 'SELECT count(*) from "USarrests"')
dbRollback(con)

# Rolling back changes leads to original count
dbGetQuery(con, 'SELECT count(*) from "USarrests"')

dbRemoveTable(con, "USarrests")
dbDisconnect(con)
RPostgres examples and tests connect to a default database via dbConnect(Postgres()). postgresHasDefault() checks if that database is available and, if not, displays an informative message. postgresDefault() works similarly, but returns a connection on success and throws a testthat skip condition on failure, making it suitable for use in tests.
postgresHasDefault(...)

postgresDefault(...)
...: Additional arguments passed on to dbConnect().
if (postgresHasDefault()) {
  db <- postgresDefault()
  print(dbListTables(db))
  dbDisconnect(db)
} else {
  message("No database connection.")
}
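A sketch of using postgresDefault() inside a testthat test; the test name and expectation are illustrative only:

library(testthat)

test_that("default database lists tables", {
  # Skips (rather than fails) when no default database is reachable
  con <- RPostgres::postgresDefault()

  expect_type(DBI::dbListTables(con), "character")

  DBI::dbDisconnect(con)
})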
Detect whether a transaction is active for the given connection. A transaction might be started with dbBegin() or wrapped within DBI::dbWithTransaction().
postgresIsTransacting(conn)
conn: a PqConnection object, produced by DBI::dbConnect().
A boolean, indicating if a transaction is ongoing.
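A minimal sketch of checking the transaction state:

library(DBI)

con <- dbConnect(RPostgres::Postgres())

RPostgres::postgresIsTransacting(con)  # FALSE outside a transaction

dbBegin(con)
RPostgres::postgresIsTransacting(con)  # TRUE inside the transaction
dbRollback(con)

dbDisconnect(con)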
Once you subscribe to notifications with LISTEN, use this to wait for responses on each channel.
postgresWaitForNotify(conn, timeout = 1)
conn: a PqConnection object, produced by DBI::dbConnect().
timeout: How long to wait, in seconds. Defaults to 1.
If a notification was available, a list with the following elements:
channel: name of the channel
pid: PID of the notifying server process
payload: content of the notification
If no notifications are available, returns NULL.
library(DBI)
library(callr)

# Listen for messages on the grapevine
db_listen <- dbConnect(RPostgres::Postgres())
dbExecute(db_listen, "LISTEN grapevine")

# Start another process, which sends a message after a delay
rp <- r_bg(function() {
  library(DBI)
  Sys.sleep(0.3)
  db_notify <- dbConnect(RPostgres::Postgres())
  dbExecute(db_notify, "NOTIFY grapevine, 'psst'")
  dbDisconnect(db_notify)
})

# Sleep until we get the message
n <- NULL
while (is.null(n)) {
  n <- RPostgres::postgresWaitForNotify(db_listen, 60)
}
stopifnot(n$payload == "psst")

# Tidy up
rp$wait()
dbDisconnect(db_listen)
If an object of class Id is used for dbQuoteIdentifier(), it needs at most one table component and at most one schema component.
## S4 method for signature 'PqConnection,Id'
dbQuoteIdentifier(conn, x, ...)

## S4 method for signature 'PqConnection,SQL'
dbQuoteIdentifier(conn, x, ...)

## S4 method for signature 'PqConnection,character'
dbQuoteIdentifier(conn, x, ...)

## S4 method for signature 'PqConnection'
dbQuoteLiteral(conn, x, ...)

## S4 method for signature 'PqConnection,SQL'
dbQuoteString(conn, x, ...)

## S4 method for signature 'PqConnection,character'
dbQuoteString(conn, x, ...)

## S4 method for signature 'PqConnection,SQL'
dbUnquoteIdentifier(conn, x, ...)
conn: A PqConnection created by dbConnect().
x: A character vector to be quoted.
...: Other arguments needed for compatibility with generic (currently ignored).
library(DBI)

con <- dbConnect(RPostgres::Postgres())
x <- c("a", "b c", "d'e", "\\f")
dbQuoteString(con, x)
dbQuoteIdentifier(con, x)
dbDisconnect(con)
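A brief sketch of quoting a schema-qualified identifier created with Id(); the schema and table names are placeholders:

library(DBI)

con <- dbConnect(RPostgres::Postgres())

# At most one schema and one table component are allowed
dbQuoteIdentifier(con, Id(schema = "my_schema", table = "my_table"))
# expected to print a quoted identifier along the lines of "my_schema"."my_table"

dbDisconnect(con)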
Use drv = Redshift() instead of drv = Postgres() to connect to an AWS Redshift cluster. All methods in RPostgres and downstream packages can be called on such connections. Some have different behavior for Redshift connections, to ensure better interoperability.
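A sketch of connecting to a cluster; all connection details below are placeholders:

library(DBI)

# Placeholder endpoint, database, and credentials for an existing cluster
con <- dbConnect(
  RPostgres::Redshift(),
  host = "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
  port = 5439,
  dbname = "dev",
  user = "awsuser",
  password = "my_password"
)

dbListTables(con)
dbDisconnect(con)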
Redshift()

## S4 method for signature 'RedshiftDriver'
dbConnect(
  drv,
  dbname = NULL,
  host = NULL,
  port = NULL,
  password = NULL,
  user = NULL,
  service = NULL,
  ...,
  bigint = c("integer64", "integer", "numeric", "character"),
  check_interrupts = FALSE,
  timezone = "UTC"
)
drv: DBI::DBIDriver. Use Postgres() to connect to a PostgreSQL(-ish) database, or Redshift() to connect to an AWS Redshift cluster.
dbname: Database name. If NULL, defaults to the user name.
host, port: Host and port. If NULL, values are taken from the PGHOST and PGPORT environment variables.
user, password: User name and password. If NULL, values are taken from the PGUSER and PGPASSWORD environment variables, or from the appropriate line in ~/.pgpass.
service: Name of service to connect as. If NULL, the service file is ignored; otherwise, connection parameters are loaded from pg_service.conf.
...: Other name-value pairs that describe additional connection options as described at https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
bigint: The R type that 64-bit integer types should be mapped to; the default is bit64::integer64, which allows the full range of 64-bit integers.
check_interrupts: Should user interrupts be checked during the query execution (before the first row of data is available)? Setting to TRUE allows a long-running query to be interrupted.
timezone: Sets the timezone for the connection. The default is "UTC". If NULL, no timezone is set and the server's time zone is used.