Title: | Working with netCDF for Essential Biodiversity Variables |
Description: | The concept of Essential Biodiversity Variables (EBV, <https://geobon.org/ebvs/what-are-ebvs/>) comes with a data structure based on the Network Common Data Form (netCDF). The 'ebvcube' 'R' package provides functionality to easily create, access and visualise this data. The EBV netCDFs can be downloaded from the EBV Data Portal: Christian Langer/ iDiv (2020) <https://portal.geobon.org/>. |
Authors: | Luise Quoss [aut, cre], Nestor Fernandez [aut], Christian Langer [aut], Jose Valdez [aut], Henrique Miguel Pereira [aut] |
Maintainer: | Luise Quoss <[email protected]> |
License: | GPL (>= 3) |
Version: | 0.3.3 |
Built: | 2024-12-17 12:50:09 UTC |
Source: | CRAN |
EBV netCDF properties class (S4)
S4 class containing the EBV netCDF properties
general
Named list. Elements: title, description, doi, ebv_class, ebv_name, ebv_domain, references, source, project_name, project_url, creator_name, creator_institution, creator_email, contributor_name, publisher_name, publisher_institution, publisher_email, comment, keywords, id, history, licence, conventions, naming_authority, date_created, date_issued, entity_names, entity_type, entity_scope, entity_classification_name, entity_classification_url, taxonomy, taxonomy_lsid
spatial
Named list. Elements: wkt2, epsg, extent, resolution, crs_units, dimensions, scope, description
temporal
Named list. Elements: resolution, units, timesteps, dates
metric
Named list. Elements: name, description, units
scenario
Named list. Elements: name, description
ebv_cube
Named list. Elements: units, coverage_content_type, fillvalue, type
If the properties class holds no information for a slot (e.g. no scenario), this is indicated by an element called 'status' in that list. If you read an EBV netCDF based on an older standard, the properties will differ from the definition above. If the dataset holds no taxonomic information, 'taxonomy' is NA; even if it does, 'taxonomy_lsid' may still be NA.
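A minimal sketch of inspecting such a properties object (slot access via '@' is assumed from the S4 class definition above; the example file ships with the package):
#read the properties of an example EBV netCDF shipped with the package
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
prop <- ebv_properties(file, verbose = FALSE)
#access the named lists stored in the S4 slots
prop@general$title    #dataset title
prop@temporal$dates   #available timesteps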
Add data to your EBV netCDF from GeoTiffs or in-memory arrays. First, create a new EBV netCDF using ebv_create().
ebv_add_data( filepath_nc, datacubepath = NULL, entity = NULL, timestep = 1, data, band = 1, scenario = NULL, metric = NULL, ignore_RAM = FALSE, raw = FALSE, verbose = TRUE )
filepath_nc |
Character. Path to the self-created netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default: NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. The character string can be obtained using ebv_properties() (see the 'entity_names' element in the general slot). |
timestep |
Integer or character. Default: 1. Define to which timestep or timesteps the data should be added. If several timesteps are given, they have to be continuous and in order, i.e. c(4,5,6) is valid but c(2,5,6) is not. Alternatively, you can provide a date or list of dates in ISO format, such as '2015-01-01' (also in order). |
data |
Character or matrix or array. If character: Path to the GeoTiff file containing the data. Ending needs to be *.tif. If matrix or array: in-memory object holding the data. |
band |
Integer. Default: 1. Define which band(s) to read from the GeoTiff. Can be several. Unlike the timesteps, they do not have to be in order. |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
ignore_RAM |
Logical. Default: FALSE. Checks whether there is enough space in your memory to read the data. Can be switched off by setting it to TRUE. This argument has no effect when 'data' is an array or a matrix. |
raw |
Logical. Default: FALSE. If you set it to TRUE the offset and scale value in the GeoTiff are ignored. Only relevant if you give a path to a GeoTiff. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Adds data to the EBV netCDF. Check your results using ebv_read(), ebv_analyse(), ebv_map() and/or ebv_trend().
If the data exceeds your memory, the RAM check will throw an error. No block-processing or similar method is implemented so far; move to a machine with more memory if needed.
#set path to EBV netCDF file <- system.file(file.path("extdata","test.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubepaths <- ebv_datacubepaths(file, verbose=FALSE) #set path to GeoTiff with data tif <- system.file(file.path("extdata","entity1.tif"), package="ebvcube") # add data to the timestep 1, 2 and 3 using the first three bands of the GeoTiff ## Not run: #use datacubepath argument and define timestep by integer ebv_add_data(filepath_nc = file, datacubepath = datacubepaths[1,1], entity = 1, timestep = 1:3, data = tif, band = 1:3) #use metric argument and define timestep by ISO-format ebv_add_data(filepath_nc = file, entity = 1, timestep = paste0(as.character(seq(1900,1920,10)), '-01-01'), metric = 1, data = tif, band = 1:3, verbose = FALSE) ## End(Not run)
#set path to EBV netCDF file <- system.file(file.path("extdata","test.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubepaths <- ebv_datacubepaths(file, verbose=FALSE) #set path to GeoTiff with data tif <- system.file(file.path("extdata","entity1.tif"), package="ebvcube") # add data to the timestep 1, 2 and 3 using the first three bands of the GeoTiff ## Not run: #use datacubepath argument and define timestep by integer ebv_add_data(filepath_nc = file, datacubepath = datacubepaths[1,1], entity = 1, timestep = 1:3, data = tif, band = 1:3) #use metric argument and define timestep by ISO-format ebv_add_data(filepath_nc = file, entity = 1, timestep = paste0(as.character(seq(1900,1920,10)), '-01-01'), metric = 1, data = tif, band = 1:3, verbose = FALSE) ## End(Not run)
Get basic measurements of the data, including min, max, mean, sd, n, #NAs, q25, q50, q75 (no mean for categorical data).
ebv_analyse( filepath, datacubepath = NULL, entity = NULL, timestep = 1, subset = NULL, touches = TRUE, epsg = 4326, scenario = NULL, metric = NULL, numerical = TRUE, na_rm = TRUE, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep |
Integer or character. Select one or several timestep(s). Either provide an integer value or list of values that refer(s) to the index of the timestep(s) (minimum value: 1) or provide a date or list of dates in ISO format, such as '2015-01-01'. |
subset |
Optional. Restricts the measurements to a smaller subset, defined either by the path to a shapefile (character) or by a bounding box (vector of four numeric values). Otherwise the whole area is analysed. |
touches |
Logical. Optional. Default: TRUE. Only relevant if the subset is indicated by a shapefile. See ebv_read_shp(). |
epsg |
Numeric. Optional. Only relevant if the subset is indicated by a bounding box and the coordinate reference system differs from WGS84. See ebv_read_bb(). |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
numerical |
Logical. Default: TRUE. Change to FALSE if the data covered by the netCDF contains categorical data. |
na_rm |
Logical. Default: TRUE. NA values are removed in the analysis. Change to FALSE to include NAs. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Returns a named list containing the measurements.
See ebv_read_bb() and ebv_read_shp() for the usage of subsets.
#set path to EBV netCDF file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubes <- ebv_datacubepaths(file, verbose=FALSE) #set path to shp file shp_path <- system.file(file.path("extdata","cameroon.shp"), package="ebvcube") #get measurements for full extent and the first three timesteps data_global <- ebv_analyse(filepath = file, datacubepath = datacubes[1,1], entity = 1, timestep = 1:3, verbose = FALSE) #get measurements for subset of Africa only (using bounding box) and one timestep data_1910 <- ebv_analyse(filepath = file, datacubepath = datacubes[1,1], entity = 1, timestep = "1900-01-01", subset = c(-26, 64, 30, 38), verbose = FALSE) #get measurements for cameroon only (using shp) and one timestep data_1930 <- ebv_analyse(filepath = file, entity = 1, timestep = "1930-01-01", subset = shp_path, verbose = FALSE, metric = 'Absolute change in the number of species',)
#set path to EBV netCDF file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubes <- ebv_datacubepaths(file, verbose=FALSE) #set path to shp file shp_path <- system.file(file.path("extdata","cameroon.shp"), package="ebvcube") #get measurements for full extent and the first three timesteps data_global <- ebv_analyse(filepath = file, datacubepath = datacubes[1,1], entity = 1, timestep = 1:3, verbose = FALSE) #get measurements for subset of Africa only (using bounding box) and one timestep data_1910 <- ebv_analyse(filepath = file, datacubepath = datacubes[1,1], entity = 1, timestep = "1900-01-01", subset = c(-26, 64, 30, 38), verbose = FALSE) #get measurements for cameroon only (using shp) and one timestep data_1930 <- ebv_analyse(filepath = file, entity = 1, timestep = "1930-01-01", subset = shp_path, verbose = FALSE, metric = 'Absolute change in the number of species',)
Write a new attribute value to an EBV netCDF. Not all attributes can be changed: some are always created automatically, e.g. the attributes belonging to the crs, time and var_entity datasets. In those cases you have to re-create the netCDF file.
ebv_attribute( filepath, attribute_name, value, levelpath = NULL, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
attribute_name |
Character. Name of the attribute that should be changed. |
value |
New value that should be assigned to the attribute. |
levelpath |
Character. Default: NULL. Indicates the location of the attribute. The default means that the attribute is located at the global level. If the attribute is located at the datacube level, give the datacubepath, e.g. 'metric_1/ebv_cube'. For the metric level the value may be 'metric_1' or 'scenario_1/metric_1', depending on whether the netCDF hierarchy has scenarios or not. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Adds the new value to the attribute. Check your results using
ebv_properties()
.
You can change the ebv_class and the ebv_name. In this case you need to change the ebv_class first. Don't forget to change the ebv_name accordingly!
#set path to EBV netCDF file <- system.file(file.path("extdata","baisero_spepop_id5_20220405_v1_empty.nc"), package="ebvcube") ## Not run: try({ #change the standard_name of the metric attribute1 <- 'standard_name' value1 <- 'habitat availability' level1 <- 'scenario_1/metric_1' ebv_attribute(filepath = file, attribute_name = attribute1, value = value1, level = level1) #change the units of the ebv_cube attribute2 <- 'units' value2 <- 'Land-use of 5,090 mammals calculated in sqkm' level2 <- 'scenario_1/metric_1/ebv_cube' #equal to the datacubepath ebv_attribute(filepath = file, attribute_name = attribute2, value = value2, level = level2) #change the name of the creator at the global level attribute3 <- 'creator_name' value3 <- 'Jane Doe' ebv_attribute(filepath = file, attribute_name = attribute3, value = value3) }, TRUE) ## End(Not run)
#set path to EBV netCDF file <- system.file(file.path("extdata","baisero_spepop_id5_20220405_v1_empty.nc"), package="ebvcube") ## Not run: try({ #change the standard_name of the metric attribute1 <- 'standard_name' value1 <- 'habitat availability' level1 <- 'scenario_1/metric_1' ebv_attribute(filepath = file, attribute_name = attribute1, value = value1, level = level1) #change the units of the ebv_cube attribute2 <- 'units' value2 <- 'Land-use of 5,090 mammals calculated in sqkm' level2 <- 'scenario_1/metric_1/ebv_cube' #equal to the datacubepath ebv_attribute(filepath = file, attribute_name = attribute2, value = value2, level = level2) #change the name of the creator at the global level attribute3 <- 'creator_name' value3 <- 'Jane Doe' ebv_attribute(filepath = file, attribute_name = attribute3, value = value3) }, TRUE) ## End(Not run)
Create the core structure of the EBV netCDF based on the JSON file from the EBV Data Portal. Data will be added afterwards using ebv_add_data().
ebv_create( jsonpath, outputpath, entities, epsg = 4326, extent = c(-180, 180, -90, 90), resolution = c(1, 1), timesteps = NULL, fillvalue, prec = "double", sep = ",", force_4D = TRUE, overwrite = FALSE, verbose = TRUE )
jsonpath |
Character. Path to the json file downloaded from the EBV Data Portal. Login to the page and click on 'Uploads' and 'New Upload' to start the process. |
outputpath |
Character. Set path where the netCDF file should be created. |
entities |
Character string or vector of character strings. In case of a single character string: path to the csv table holding the entity names. Default: comma-separated delimiter, else change the 'sep' argument. |
epsg |
Integer. Default: 4326 (WGS84). Defines the coordinate reference system via the corresponding epsg code. |
extent |
Numeric. Default: c(-180,180,-90,90). Defines the extent of the data: c(xmin, xmax, ymin, ymax). |
resolution |
Numerical. Vector of two numerical values defining the longitudinal and latitudinal resolution of the pixel: c(lon,lat). |
timesteps |
Character. Vector of the timesteps in the dataset. Default: NULL - in this case the time will be calculated from the start point, end point and temporal resolution given in the metadata file (json). Else, the dates must be given in ISO format 'YYYY-MM-DD' or shortened 'YYYY' in case of yearly timesteps. |
fillvalue |
Numeric. Value of the missing data (NoData value) in the array. Has to be a single numeric value or NA. |
prec |
Character. Default: 'double'. Precision of the data set. Valid options: 'short' 'integer' 'float' 'double' 'char' 'byte'. |
sep |
Character. Default: ','. If the delimiter of the csv specifying the entity-names differs from the default, indicate here. |
force_4D |
Logical. Default: TRUE. If the argument is TRUE, there will be 4D cubes (lon, lat, time, entity) per metric. If this argument is changed to FALSE, there will be 3D cubes (lon, lat, time) per entity (per metric). The latter yields a larger number of cubes and does not bundle all information per metric. In the future the standard will be restricted to the 4D version; recommendation: go with the 4D cubes! |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the output file defined by 'outputpath'. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Creates the netCDF file at the 'outputpath' location.
To check the results, take a look at your netCDF file with Panoply, provided by NASA.
#set path to JSON file json <- system.file(file.path("extdata","metadata.json"), package="ebvcube") #set output path of the new EBV netCDF out <- file.path(system.file(package='ebvcube'),"extdata","sCAR_new.nc") #set path to the csv holding the entity names entities <- file.path(system.file(package='ebvcube'),"extdata","entities.csv") #create new EBV netCDF ## Not run: ebv_create(jsonpath = json, outputpath = out, entities = entities, fillvalue=-3.4E38) ## End(Not run)
#set path to JSON file json <- system.file(file.path("extdata","metadata.json"), package="ebvcube") #set output path of the new EBV netCDF out <- file.path(system.file(package='ebvcube'),"extdata","sCAR_new.nc") #set path to the csv holding the entity names entities <- file.path(system.file(package='ebvcube'),"extdata","entities.csv") #create new EBV netCDF ## Not run: ebv_create(jsonpath = json, outputpath = out, entities = entities, fillvalue=-3.4E38) ## End(Not run)
Create the core structure of the EBV netCDF based on the JSON file from the EBV Data Portal. Additionally, you can add the hierarchy of the taxonomy, which is not provided by the ebv_create() function. Use ebv_create() if your dataset holds no taxonomic information. Data will be added afterwards using ebv_add_data().
ebv_create_taxonomy( jsonpath, outputpath, taxonomy, lsid = FALSE, epsg = 4326, extent = c(-180, 180, -90, 90), resolution = c(1, 1), timesteps = NULL, fillvalue, prec = "double", sep = ",", force_4D = TRUE, overwrite = FALSE, verbose = TRUE )
jsonpath |
Character. Path to the json file downloaded from the EBV Data Portal. Login to the page and click on 'Uploads' and 'New Upload' to start the process. |
outputpath |
Character. Set path where the netCDF file should be created. |
taxonomy |
Character. Path to the csv table holding the taxonomy. Default: comma-separated delimiter, else change the 'sep' argument. |
lsid |
Logical. Default: FALSE. Set to TRUE if the last column in your taxonomy csv file defines the lsid for each entity. For more info check CF convention 1.8: Taxon Names and Identifiers. |
epsg |
Integer. Default: 4326 (WGS84). Defines the coordinate reference system via the corresponding epsg code. |
extent |
Numeric. Default: c(-180,180,-90,90). Defines the extent of the data: c(xmin, xmax, ymin, ymax). |
resolution |
Numerical. Vector of two numerical values defining the longitudinal and latitudinal resolution of the pixel: c(lon,lat). |
timesteps |
Character. Vector of the timesteps in the dataset. Default: NULL - in this case the time will be calculated from the start point, end point and temporal resolution given in the metadata file (json). Else, the dates must be given in ISO format 'YYYY-MM-DD' or shortened 'YYYY' in case of yearly timesteps. |
fillvalue |
Numeric. Value of the missing data (NoData value) in the array. Has to be a single numeric value or NA. |
prec |
Character. Default: 'double'. Precision of the data set. Valid options: 'short' 'integer' 'float' 'double' 'char' 'byte'. |
sep |
Character. Default: ','. If the delimiter of the csv specifying the entity-names differs from the default, indicate here. |
force_4D |
Logical. Default: TRUE. If the argument is TRUE, there will be 4D cubes (lon, lat, time, entity) per metric. If this argument is changed to FALSE, there will be 3D cubes (lon, lat, time) per entity (per metric). The latter yields a larger number of cubes and does not bundle all information per metric. In the future the standard will be restricted to the 4D version; recommendation: go with the 4D cubes! |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the output file defined by 'outputpath'. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Creates the netCDF file at the 'outputpath' location including the taxonomy information.
To check the results, take a look at your netCDF file with Panoply, provided by NASA.
You can check the taxonomy info with ebv_properties() in the slot 'general' under the names 'taxonomy' and 'taxonomy_lsid'.
#set path to JSON file json <- system.file(file.path("extdata/testdata","5.json"), package="ebvcube") #set output path of the new EBV netCDF out <- tempfile(fileext='.nc') #set path to the csv holding the taxonomy names taxonomy <- file.path(system.file(package='ebvcube'),"extdata/testdata","id5_entities.csv") #create new EBV netCDF with taxonomy ## Not run: ebv_create_taxonomy(jsonpath = json, outputpath = out, taxonomy = taxonomy, fillvalue = -127, resolution = c(0.25, 0.25), verbose = FALSE) #remove file file.remove(out) ## End(Not run)
#set path to JSON file json <- system.file(file.path("extdata/testdata","5.json"), package="ebvcube") #set output path of the new EBV netCDF out <- tempfile(fileext='.nc') #set path to the csv holding the taxonomy names taxonomy <- file.path(system.file(package='ebvcube'),"extdata/testdata","id5_entities.csv") #create new EBV netCDF with taxonomy ## Not run: ebv_create_taxonomy(jsonpath = json, outputpath = out, taxonomy = taxonomy, fillvalue = -127, resolution = c(0.25, 0.25), verbose = FALSE) #remove file file.remove(out) ## End(Not run)
Get the paths to the datacubes of the EBV netCDF to access the data.
ebv_datacubepaths(filepath, verbose = TRUE)
filepath |
Character. Path to the netCDF file. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Dataframe containing the paths to access the datacubes and descriptions of scenario, metric and entity if existing.
#set path to EBV netCDF file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubes <- ebv_datacubepaths(file)
#set path to EBV netCDF file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube") #get all datacubepaths of EBV netCDF datacubes <- ebv_datacubepaths(file)
Returns the list of all available datasets at the EBV Data Portal if no arguments are given. If an ID is given, the corresponding netCDF file and its metadata (JSON file) will be downloaded to the given output directory.
ebv_download(id = NULL, outputdir, overwrite = FALSE, verbose = TRUE)
id |
Integer or Character. There are three options to identify the dataset to be downloaded. (1) It can be a single integer value representing the ID of the dataset. (2) It can be a character string representing the title of the dataset. (3) It can be a character string representing the DOI of the dataset in the format '10.25829/f2rdp4' (dataset 'Habitat availability for African great apes' by Jessica Junker from the EBV Data Portal). All three identifiers can be retrieved by running ebv_download() without any arguments. |
outputdir |
Character. Output directory of the downloaded files. |
overwrite |
Logical. Default: FALSE. Set to TRUE if you want to overwrite the netCDF and json. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Downloads a netCDF and JSON file (ACDD metadata) to the given output directory. If run without arguments, returns a data.frame of all available datasets with their ID, title and DOI.
#get all available datasets
datasets <- ebv_download()
#download the first dataset
ebv_download(id = datasets$id[1], outputdir = tempdir(), overwrite = TRUE,
             verbose = FALSE)
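A hedged sketch of downloading by DOI instead of by ID, using the DOI format described above (the DOI shown is the example dataset mentioned there):
## Not run: 
#download a dataset by its DOI (format '10.25829/xxxxxx')
ebv_download(id = '10.25829/f2rdp4', outputdir = tempdir(), verbose = FALSE)
## End(Not run)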
Map plot of the data of one timestep in one datacube of an EBV netCDF.
ebv_map( filepath, datacubepath = NULL, entity = NULL, timestep = 1, countries = TRUE, col_rev = FALSE, classes = 5, scenario = NULL, metric = NULL, all_data = FALSE, ignore_RAM = FALSE, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep |
Integer or character. Select a timestep. Either provide an integer value that refers to the index of the timestep (minimum value: 1) or provide a date in ISO format, such as '2015-01-01'. |
countries |
Logical. Default: TRUE. Simple country outlines will be plotted on top of the raster data. Disable by setting this option to FALSE. |
col_rev |
Logical. Default: FALSE. Set to TRUE if you want the color ramp to be reversed. |
classes |
Integer. Default: 5. Define the number of classes (quantiles) for the symbology. Currently restricted to a maximum of 11 classes (the maximum allowed for the RdYlBu palette). |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
all_data |
Logical. Default: FALSE. By default, the quantiles are based on the one timestep you chose. Set this argument to TRUE to base them on the full data of the datacube, so that several maps share the same color scale and entities or timesteps can be compared visually (does not cover different datacubes). |
ignore_RAM |
Logical. Default: FALSE. Checks if there is enough space in your memory to read the data. Can be switched off (set to TRUE). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Plots a map.
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#plot a map for the 3rd timestep, divided into 7 classes
ebv_map(filepath = file, datacubepath = datacubes[1,1], entity = 1,
        timestep = 3, classes = 7, verbose = FALSE)
ebv_map(filepath = file, entity = 'all bird species', timestep = "1950-01-01",
        metric = 'Relative change in the number of species (%)',
        classes = 7, verbose = FALSE)
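A sketch of producing maps on a common colour scale across timesteps via 'all_data' (the quantiles then come from the full datacube, as described above):
#maps of two timesteps sharing one colour scale
ebv_map(filepath = file, datacubepath = datacubes[1,1], entity = 1,
        timestep = 1, classes = 7, all_data = TRUE, verbose = FALSE)
ebv_map(filepath = file, datacubepath = datacubes[1,1], entity = 1,
        timestep = 3, classes = 7, all_data = TRUE, verbose = FALSE)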
This function collects the metadata terms for the EBV netCDF you want to create and stores them in a text file in JSON format. Use ebv_create() with the output file of this function to create your EBV netCDF. During the actual creation you will be asked for more information regarding the spatial resolution, CRS etc.
ebv_metadata(
  outputpath, title, summary, references = NULL, source, project_name = NULL,
  project_url = NULL, date_created, creator_name, creator_email = NULL,
  creator_institution, contributor_name = NULL, license, comment = NULL,
  ebv_class, ebv_name, ebv_scenario_classification_name = NULL,
  ebv_scenario_classification_version = NULL,
  ebv_scenario_classification_url = NULL, ebv_spatial_scope,
  ebv_spatial_description = NULL, ebv_domain, coverage_content_type,
  ebv_entity_type, ebv_entity_scope, ebv_entity_classification_name = NULL,
  ebv_entity_classification_url = NULL, time_coverage_start, time_coverage_end,
  time_coverage_resolution,
  metric = list(standard_name = "", long_name = "", units = ""),
  scenario = list(standard_name = "", long_name = ""),
  overwrite = FALSE, verbose = TRUE
)
outputpath |
Character. Outputpath of the text-file (JSON format) containing the metadata. File ending: *.json |
title |
Character. A short phrase or sentence describing the dataset. |
summary |
Character. A paragraph describing the dataset, analogous to an abstract for a paper. Maximum character length: 1500. |
references |
Character. Optional. DOIs (URLs) that describe the data or methods used to produce it. Multiple DOIs can be given in a vector. |
source |
Character. The method of production of the original data. If it was model-generated, source should name the model and its version. If it is observational, source should characterize it. Corresponding term in the EBV Data Portal: 'methods'. |
project_name |
Character. Optional. The name of the project principally responsible for originating this data. |
project_url |
Character. Optional. The URL of the project. |
date_created |
Character. The date on which this version of the data was created in YYYY-MM-DD format. |
creator_name |
Character. The name of the person principally responsible for creating this data. |
creator_email |
Character. Optional. The email address of the person principally responsible for creating this data. |
creator_institution |
Character. The institution of the creator; should uniquely identify the creator's institution. |
contributor_name |
Character. Optional. The name of any individuals, projects, or institutions that contributed to the creation of this data. Corresponding term in the EBV Data Portal: 'Co-creators'. Multiple contributors can be given in a vector. |
license |
Character. License of the dataset. Will be stored in the netCDF as the URL to the CC license. Choose one of: 'CC0', 'CC BY', 'CC BY-SA', 'CC BY-NC', 'CC BY-NC-SA', 'CC BY-ND', 'CC BY-NC-ND'. |
comment |
Character. Optional. Miscellaneous information about the data, not captured elsewhere. |
ebv_class |
Character. EBV Class of the data set. One of: 'Genetic composition', 'Species populations', 'Species traits', 'Community composition', 'Ecosystem functioning', 'Ecosystem structure', 'Ecosystem services'. For more info see note. |
ebv_name |
Character. EBV Name of the data set. The possible options depend on the EBV Class. One of: 'Intraspecific genetic diversity', 'Genetic differentiation', 'Effective population size', 'Inbreeding', 'Species distributions', 'Species abundances', 'Morphology', 'Physiology', 'Phenology', 'Movement', 'Community abundance', 'Taxonomic and phylogenetic diversity', 'Trait diversity', 'Interaction diversity', 'Primary productivity', 'Ecosystem phenology', 'Ecosystem disturbances', 'Live cover fraction', 'Ecosystem distribution', 'Ecosystem Vertical Profile', 'Pollination'. For more info see note. |
ebv_scenario_classification_name |
Character. Optional. Name of the classification system used for the scenarios. |
ebv_scenario_classification_version |
Character. Optional. Version of the classification system used for the scenarios. |
ebv_scenario_classification_url |
Character. Optional. URL of the classification system used for the scenarios. |
ebv_spatial_scope |
Character. Spatial scope of the data set. One of: 'Continental/Regional', 'National', 'Sub-national/Local' or 'Global'. |
ebv_spatial_description |
Character. Specific information about the spatial scope. Mandatory if spatial scope is not Global. |
ebv_domain |
Character. Environmental domain of the data set. Choose one or several of: 'Terrestrial', 'Marine' or 'Freshwater'. Multiple domains can be given in a vector. |
coverage_content_type |
Character. Describes the source of the data based on an ISO 19115-1 code. Choose one or several of: 'image', 'thematicClassification', 'physicalMeasurement', 'auxiliaryInformation', 'qualityInformation', 'referenceInformation', 'modelResult', or 'coordinate'. Multiple types can be given in a vector. |
ebv_entity_type |
Character. EBV entity type. This is a free entry field. Still, if one of the following terms fits your entity scope please use: 'Species', 'Communities', 'Ecosystems' or 'None'. |
ebv_entity_scope |
Character. Specifies the entity scope in more detail, e.g., 'African great apes'. If the entity type is 'None' this term is not applied to the metadata. |
ebv_entity_classification_name |
Character. Optional. Name of the classification system used for the entity types. |
ebv_entity_classification_url |
Character. Optional. URL of the classification system used for the entity types. |
time_coverage_start |
Character. Start date of the time span covered by the dataset in YYYY-MM-DD format. |
time_coverage_end |
Character. End date of the time span covered by the dataset in YYYY-MM-DD format. |
time_coverage_resolution |
Character. Describes the targeted time period between each value in the data set (ISO 8601:2004 date format). Provide the definition of your temporal resolution in the ISO 8601:2004 duration format P(YYYY)-(MM)-(DD). Examples: decadal: 'P0010-00-00', daily: 'P0000-00-01', single timestep: 'P0000-00-00' and irregular time intervals: 'Irregular'. For the latter the timesteps can be directly defined when using ebv_create(). |
metric |
List. Metric attributes defined accordingly: list(standard_name='', long_name='', units=''). If you have several metrics, give a list of lists, e.g. for two: list(list(standard_name='', long_name='', units=''), list(standard_name='', long_name='', units='')). At least one metric is mandatory. |
scenario |
List. Optional. Scenario attributes defined accordingly: list(standard_name='', long_name=''). If you have several scenarios, give a list of lists, e.g. for two: list(list(standard_name='', long_name=''), list(standard_name='', long_name='')). It is not mandatory to have a scenario. |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the output file defined by 'outputpath'. |
verbose |
Logical. Default: TRUE. Turn off additional prints and warnings by setting it to FALSE. |
See EBV Classes and their corresponding EBV Names:
EBV Class | EBV Names |
Genetic composition | 'Intraspecific genetic diversity', 'Genetic differentiation','Effective population size', 'Inbreeding' |
Species populations | 'Species distributions', 'Species abundances' |
Species traits | 'Morphology', 'Physiology', 'Phenology', 'Movement' |
Community composition | 'Community abundance', 'Taxonomic and phylogenetic diversity', 'Trait diversity', 'Interaction diversity' |
Ecosystem functioning | 'Primary productivity', 'Ecosystem phenology', 'Ecosystem disturbances' |
Ecosystem structure | 'Live cover fraction', 'Ecosystem distribution', 'Ecosystem Vertical Profile' |
Ecosystem services | 'Pollination', 'other' |
Returns the outputpath of the JSON file with the metadata.
Metadata Conventions: The metadata terms of the EBV netCDFs are based on the CF 1.8 and ACDD 1.3 Conventions. Find more help in the How-To on the EBV Data Portal website.
EBV Class: The EBV Class is GEO BON's
classification system for biodiversity monitoring. It categorizes and
organizes essential ecological variables across scales and biological
levels. This standardized framework helps identify and prioritize key
variables representing biodiversity aspects like species composition,
population dynamics, ecosystem functioning, and habitat quality.
EBV Name: The EBV Name is a unique identifier for a specific
variable in the EBV Class, representing a distinct aspect of biodiversity.
It helps identify and categorize measured or monitored biodiversity
information. The EBV Name reflects the ecological attribute being assessed,
such as "Species Richness," "Population Abundance," "Functional Diversity,"
"Habitat Fragmentation," or "Ecosystem Productivity".
Authorship: Besides the creator and the contributors there is a
third metadata term regarding authorship: the 'publisher'. This is the
person that logs into the EBV Data Portal and uploads the dataset. This
person's personal data is automatically added to the metadata from the
account information.
#set output path of the metadata JSON file
out <- tempfile(fileext='.json')
#create minimal metadata
## Not run: 
ebv_metadata(outputpath = out, overwrite = TRUE,
  title = 'Not a real title',
  summary = 'Summary summary summary',
  source = 'this was created by doing...',
  date_created = as.Date('2024-07-10'),
  creator_name = 'Name Name',
  creator_institution = 'lame name',
  license = 'CC0',
  ebv_class = 'Genetic composition',
  ebv_name = 'Intraspecific genetic diversity',
  ebv_spatial_scope = 'National',
  ebv_spatial_description = 'Finland',
  ebv_domain = 'Terrestrial',
  ebv_entity_type = 'Species',
  ebv_entity_scope = '50 mammal species',
  coverage_content_type = c('modelResult'),
  time_coverage_start = as.Date('1900-01-01'),
  time_coverage_end = as.Date('1950-01-01'),
  time_coverage_resolution = 'P0010-00-00',
  metric = list(
    list(standard_name='relative change of habitat',
         long_name='relative change to year 1800',
         units='percentage'),
    list(standard_name='absolute change habitat',
         long_name='absolute change since year 1800',
         units='square kilometers')),
  scenario = list(
    list(standard_name='SSP1xRCP2.6 LU',
         long_name='Global Sustainability (SSP1xRCP2.6), with only effects of land-use.'),
    list(standard_name='SSP3xRCP6.0 LU',
         long_name='Regional Rivalry (SSP3xRCP6.0), with only effects of land-use')),
  ebv_scenario_classification_name = 'SSP-RCP',
  ebv_scenario_classification_version = 'LUH2, CMIP5/ISIMIP2a',
  verbose = TRUE)
## End(Not run)
Structured access to all attributes of the netCDF file.
ebv_properties( filepath, datacubepath = NULL, scenario = NULL, metric = NULL, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
S4 class containing information about the file, or about the file and one datacube, depending on the input.
#define path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#get properties only for the file
prop_file <- ebv_properties(file, verbose=FALSE)
#get properties for the file and a specific datacube - use datacubepath
prop_dc <- ebv_properties(file, datacubepath = datacubes[1,1], verbose=FALSE)
#get properties for the file and a specific datacube - use scenario & metric
#note: this dataset has no scenario -> only metric is defined
prop_dc <- ebv_properties(file, metric = 2, verbose=FALSE)
Read one or more layers from one datacube of the netCDF file. Decide between in-memory array, in-memory SpatRaster or an array-like object (DelayedMatrix) pointing to the on-disk netCDF file. The latter is useful for data that exceeds your memory.
ebv_read( filepath, datacubepath = NULL, entity = NULL, timestep = 1, type = "r", scenario = NULL, metric = NULL, sparse = FALSE, ignore_RAM = FALSE, verbose = FALSE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep |
Integer or character. Select one or several timestep(s). Either provide an integer value or list of values that refer(s) to the index of the timestep(s) (minimum value: 1) or provide a date or list of dates in ISO format, such as '2015-01-01'. |
type |
Character. Choose between 'a', 'r' and 'da'. The first returns an array or matrix object. The 'r' indicates that a SpatRaster object from the terra package will be returned (default). The latter ('da') returns a DelayedArray or DelayedMatrix object. |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
sparse |
Logical. Default: FALSE. Set to TRUE if the data contains a lot of empty raster cells. Only relevant for the DelayedArray return value. |
ignore_RAM |
Logical. Default: FALSE. Checks if there is enough space in your memory to read the data. Can be switched off (set to TRUE). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Array, SpatRaster or DelayedArray object containing the data of the corresponding datacube and timestep(s).
For working with the DelayedMatrix, take a look at DelayedArray::DelayedArray() and the DelayedArray-utils.
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#read data as DelayedArray
cSAR.delayedarray <- ebv_read(filepath = file, datacubepath = datacubes[1,1],
                              entity = 1, timestep = c(1,3), type = 'da',
                              sparse = TRUE)
#read data as SpatRaster
cSAR.raster <- ebv_read(filepath = file, entity = 1, timestep = "2000-01-01",
                        type = 'r', metric = 1)
#read data as array
cSAR.array <- ebv_read(filepath = file, datacubepath = datacubes[1,1],
                       entity = 1, timestep = 1, type = 'a')
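A minimal sketch of working with the returned DelayedArray without loading it fully into memory (functions from the DelayedArray package, which is assumed to be installed; the object comes from the example above):
#operate on the DelayedArray block-wise, the data stays on disk
library(DelayedArray)
dim(cSAR.delayedarray)                   #dimensions without reading the values
range(cSAR.delayedarray, na.rm = TRUE)   #block-processed summary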
Read a subset of one or more layers from one datacube of the netCDF file. The subset is defined by a bounding box.
ebv_read_bb( filepath, datacubepath = NULL, entity = NULL, timestep = 1, bb, outputpath = NULL, epsg = 4326, scenario = NULL, metric = NULL, overwrite = FALSE, ignore_RAM = FALSE, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep |
Integer or character. Select one or several timestep(s). Either provide an integer value or list of values that refer(s) to the index of the timestep(s) (minimum value: 1) or provide a date or list of dates in ISO format, such as '2015-01-01'. |
bb |
Integer Vector. Definition of subset by bounding box: c(xmin, xmax, ymin, ymax). |
outputpath |
Character. Default: NULL, returns the data as a SpatRaster object in memory. Optional: set path to write subset as GeoTiff on disk. |
epsg |
Integer. Default: 4326 (WGS84). Change accordingly if your bounding box coordinates are based on a different coordinate reference system. |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the outputfile defined by 'outputpath'. |
ignore_RAM |
Logical. Default: FALSE. Checks if there is enough space in your memory to read the data. Can be switched off (set to TRUE). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Returns a SpatRaster object if no outputpath is given. Otherwise the subset is written onto the disk and the outputpath is returned.
If the EPSG codes of the bounding box and the netCDF differ, the data is returned based on the EPSG code of the netCDF dataset.
See ebv_read_shp() for subsetting via shapefile.
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#set outputpath
out <- file.path(system.file(package='ebvcube'),"extdata","subset_bb.tif")
#define two different bounding boxes based on different EPSG codes
bb_wgs84 <- c(-26, 64, 30, 38)
bb_utm32 <- c(-2383703, 5532302, 3643854, 4564646)
## Not run: 
#read bb (based on EPSG 4326) - return SpatRaster
cSAR_subset <- ebv_read_bb(filepath = file, datacubepath = datacubes[1,1],
                           entity = 1, timestep = 1:3, bb = bb_wgs84)
#read bb (based on EPSG 4326) - write to GeoTiff
path <- ebv_read_bb(filepath = file, datacubepath = datacubes[1,1],
                    entity = 1, timestep = 1, bb = bb_wgs84,
                    outputpath = out, overwrite = TRUE)
#read bb (based on EPSG 32632) - write to GeoTiff
path <- ebv_read_bb(filepath = file, datacubepath = datacubes[1,1],
                    entity = 1, timestep = 1:2, bb = bb_utm32, epsg = 32632,
                    outputpath = out, overwrite = TRUE)
## End(Not run)
Read a subset of one or more layers from one datacube of the netCDF file. Subset definition by a shapefile.
ebv_read_shp( filepath, datacubepath = NULL, entity = NULL, timestep = 1, shp, outputpath = NULL, touches = TRUE, scenario = NULL, metric = NULL, overwrite = FALSE, ignore_RAM = FALSE, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep |
Integer or character. Select one or several timestep(s). Either provide an integer value or list of values that refer(s) to the index of the timestep(s) (minimum value: 1) or provide a date or list of dates in ISO format, such as '2015-01-01'. |
shp |
Character. Path to the shapefile defining the subset. Ending needs to be *.shp. |
outputpath |
Character. Default: NULL, returns the data as a SpatRaster object in memory. Optional: set path to write subset as GeoTiff on disk. |
touches |
Logical. Default: TRUE, all pixels touched by the polygon(s) will be updated. Set to FALSE to only include pixels that are on the line render path or have center points inside the polygon(s). |
scenario |
Character or integer. Optional. Default: NULL. Define the scenario you want to access. If the EBV netCDF has no scenarios, leave the default value (NULL). You can use an integer value defining the scenario or give the name of the scenario as a character string. To check the available scenarios and their names or numbers (integer), use ebv_datacubepaths(). |
metric |
Character or integer. Optional. Define the metric you want to access. You can use an integer value defining the metric or give the name of the metric as a character string. To check the available metrics and their names or numbers (integer), use ebv_datacubepaths(). |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the outputfile defined by 'outputpath'. |
ignore_RAM |
Logical. Default: FALSE. Checks if there is enough space in your memory to read the data. Can be switched off (set to TRUE). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Returns a SpatRaster object if no output path is given. Otherwise the subset is written onto the disk and the output path is returned.
See ebv_read_bb() for subsetting via bounding box.
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"), package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#set path to shp file - cameroon country borders
shp_path <- system.file(file.path("extdata","cameroon.shp"), package="ebvcube")
#read subset - return SpatRaster
cSAR_cameroon <- ebv_read_shp(filepath = file, datacubepath = datacubes[1,1],
                              entity = 1, timestep = 1, shp = shp_path,
                              outputpath = NULL, ignore_RAM = TRUE)
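A hedged sketch of writing the same subset to disk instead of returning it in memory (the output path is illustrative, reusing the objects from the example above):
## Not run: 
#write the subset for three timesteps to a GeoTiff instead
out <- tempfile(fileext = '.tif')
ebv_read_shp(filepath = file, datacubepath = datacubes[1,1], entity = 1,
             timestep = 1:3, shp = shp_path, outputpath = out,
             overwrite = TRUE)
## End(Not run)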
Change the resolution of one datacube of an EBV netCDF based on another EBV netCDF or a given resolution.
ebv_resample( filepath_src, datacubepath_src = NULL, entity_src = NULL, timestep_src = 1, resolution, outputpath, method = "bilinear", scenario = NULL, metric = NULL, return_raster = FALSE, overwrite = FALSE, ignore_RAM = FALSE, verbose = TRUE )
filepath_src |
Character. Path to the netCDF file whose resolution should be changed. |
datacubepath_src |
Character. Optional. Default: NULL. Path to the datacube (use ebv_datacubepaths() to get the available paths). |
entity_src |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
timestep_src |
Integer or character. Select one or several timestep(s). Either provide an integer value or list of values that refer(s) to the index of the timestep(s) (minimum value: 1) or provide a date or list of dates in ISO format, such as '2015-01-01'. |
resolution |
Character or Numeric. Either the path to an EBV netCDF file that determines the resolution (character) or the resolution defined directly (numeric). The vector defining the resolution directly must contain three elements: the x-resolution, the y-resolution and the corresponding EPSG code, e.g. c(0.25, 0.25, 4326). |
outputpath |
Character. Set path to write data as GeoTiff on disk. |
method |
Character. Default: bilinear. Define resampling method. Choose
from: "near","bilinear","cubic", "cubicspline", "lanczos", "sum", "min",
"q1", "med", "q3", "max", "average", "mode" and "rms". For categorical
data, use 'near'. Based on |
scenario |
Character or integer. Optional. Default: NULL. Define the
scenario you want to access. If the EBV netCDF has no scenarios, leave the
default value (NULL). You can use an integer value defining the scenario or
give the name of the scenario as a character string. To check the available
scenarios and their name or number (integer), use
|
metric |
Character or integer. Optional. Define the metric you want to
access. You can use an integer value defining the metric or give the name
of the metric as a character string. To check the available metrics and
their name or number (integer), use |
return_raster |
Logical. Default: FALSE. Set to TRUE to directly get the corresponding SpatRaster object. |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the output file defined by 'outputpath'. |
ignore_RAM |
Logical. Default: FALSE. Checks if there is enough space in your memory to read the data. Can be switched off (set to TRUE). |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
By default, returns the output path of the GeoTiff with the new resolution. If return_raster = TRUE, the SpatRaster object with the new resolution is returned instead.
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"),
                    package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#define different resolutions
res1 <- system.file(file.path("extdata",
                    "baisero_spepop_id5_20220405_v1_empty.nc"),
                    package="ebvcube")
res2 <- c(0.5,0.5,4326)
#define output path
out <- file.path(system.file(package='ebvcube'),"extdata","changeRes.tif")
## Not run:
#resample defining the resolution and EPSG code by hand - return SpatRaster
data_raster <- ebv_resample(filepath_src = file, datacubepath_src = datacubes[1,1],
                            entity_src = 1, timestep_src = 1, resolution = res2,
                            outputpath = out, method = 'near',
                            return_raster = TRUE, overwrite = TRUE)
#resample using a netCDF file - return GeoTiff
ebv_resample(filepath_src = file, datacubepath_src = datacubes[1,1],
             entity_src = 1, timestep_src = 1, resolution = res1,
             outputpath = out, overwrite = TRUE)
## End(Not run)
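To verify the result, the written GeoTiff can be read back with the terra package; a brief sketch continuing the example above once it has been run (the terra calls are illustrative, not part of the package examples):

#check the resolution of the resampled GeoTiff
#(should be 0.5 x 0.5 degrees for the res2 case above)
resampled <- terra::rast(out)
terra::res(resampled)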
Plot the trend of one datacube of an EBV netCDF over time
(x-axis). Different options can be chosen based on the method
argument.
ebv_trend( filepath, datacubepath = NULL, entity = NULL, method = "mean", subset = NULL, color = "dodgerblue4", touches = TRUE, scenario = NULL, metric = NULL, verbose = TRUE )
filepath |
Character. Path to the netCDF file. |
datacubepath |
Character. Optional. Default: NULL. Path to the datacube
(use |
entity |
Character or Integer. Default is NULL. If the structure is 3D, the entity argument is set to NULL. Else, a character string or single integer value must indicate the entity of the 4D structure of the EBV netCDFs. |
method |
Character. Default: mean. Choose one of the following options for different plots: mean, min, max, boxplot. See Note for more information. |
subset |
Character. Default: NULL. If you want to look at the trend for a spatial subset, define the path to the shapefile encompassing the area. Ending needs to be *.shp. |
color |
Character. Default: dodgerblue4. Change to any color known by R
|
touches |
Logical. Optional. Default: TRUE. Only relevant if the subset is
indicated by a shapefile. See |
scenario |
Character or integer. Optional. Default: NULL. Define the
scenario you want to access. If the EBV netCDF has no scenarios, leave the
default value (NULL). You can use an integer value defining the scenario or
give the name of the scenario as a character string. To check the available
scenarios and their name or number (integer), use
|
metric |
Character or integer. Optional. Define the metric you want to
access. You can use an integer value defining the metric or give the name
of the metric as a character string. To check the available metrics and
their name or number (integer), use |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Returns plots and, depending on the method argument, also values. See Note for more information.
More information on the method argument: using mean results in a plot of the mean over time; additionally, a vector of the mean values is returned. If the data encompasses only one timestep, a single mean is returned. Corresponding behavior can be expected for min and max. The boxplot option results in boxplots over time (no values are returned).
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"),
                    package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
#plot the change of the mean over time of the first datacube
ebv_trend(filepath = file, datacubepath = datacubes[1,1], entity = 1)
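As described in the Note, other method options and a spatial subset can be combined; a short sketch continuing the example above (the boxplot call and the Cameroon shapefile subset are illustrative, not from the package examples):

#boxplots over time instead of the mean
ebv_trend(filepath = file, datacubepath = datacubes[1,1], entity = 1,
          method = "boxplot")
#mean over time restricted to a spatial subset (Cameroon borders)
shp_path <- system.file(file.path("extdata","cameroon.shp"), package="ebvcube")
ebv_trend(filepath = file, datacubepath = datacubes[1,1], entity = 1,
          method = "mean", subset = shp_path)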
After you have extracted data from the EBV netCDF and worked with it, this function allows you to write it to disk as a GeoTiff.
ebv_write( data, outputpath, epsg = 4326, extent = c(-180, 180, -90, 90), type = "FLT8S", overwrite = FALSE, verbose = TRUE )
data |
Your data object. May be SpatRaster, array, DelayedMatrix or list
of DelayedMatrix (see return values of |
outputpath |
Character. Set the path where you want to write the data to disk as a GeoTiff. Ending needs to be *.tif. |
epsg |
Integer. Default: 4326 (WGS84). Defines the coordinate reference system via the corresponding EPSG code. |
extent |
Numeric. Default: c(-180,180,-90,90). Defines the extent of the data: c(xmin, xmax, ymin, ymax). |
type |
Character. Default: FLT8S. Indicates the datatype of the GeoTiff file. Possible values: INT1S, INT2S, INT2U, INT4S, INT4U, FLT4S, FLT8S. |
overwrite |
Logical. Default: FALSE. Set to TRUE to overwrite the output file defined by 'outputpath'. |
verbose |
Logical. Default: TRUE. Turn off additional prints by setting it to FALSE. |
Returns the outputpath.
If the nodata value of your data is not detected correctly, this could be due to the wrong choice of the datatype (type argument).
#set path to EBV netCDF
file <- system.file(file.path("extdata","martins_comcom_subset.nc"),
                    package="ebvcube")
#get all datacubepaths of EBV netCDF
datacubes <- ebv_datacubepaths(file, verbose=FALSE)
## Not run:
#read data
data <- ebv_read(filepath = file, datacubepath = datacubes[1,1],
                 timestep = 1, entity = 1)
# HERE YOU CAN WORK WITH YOUR DATA
#write data to disk as GeoTiff
out <- file.path(system.file(package='ebvcube'),"extdata","write_data.tif")
ebv_write(data = data, outputpath = out, overwrite = TRUE)
#read a subset
data_bb <- ebv_read_bb(filepath = file, datacubepath = datacubes[1,1],
                       entity = 1, timestep = 1:3, bb = c(-26, 64, 30, 38))
#write subset to disk as GeoTiff
ebv_write(data = data_bb, outputpath = out, extent = c(-26, 64, 30, 38),
          overwrite = TRUE)
## End(Not run)
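Following the note on the type argument, integer data should be written with a matching integer datatype so the nodata value is detected correctly. A minimal sketch with illustrative values (array input per the data argument description; not from the package examples):

#write a small integer array with a matching datatype and custom extent
int_data <- array(1L:12L, dim = c(3, 4))
out_int <- file.path(tempdir(), "write_int.tif")
ebv_write(data = int_data, outputpath = out_int, epsg = 4326,
          extent = c(0, 4, 0, 3), type = "INT2S", overwrite = TRUE)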
This package can be used to easily access the data of the EBV netCDFs, which can be downloaded from the Geobon Portal. It also provides some basic visualisation of the data. Advanced users can build their own netCDFs following the EBV standard using this package.
This package covers three main use cases: accessing the data, visualising it and creating your own data in the EBV netCDF standard. A brief sketch of these use cases follows.
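As a rough orientation, the sketch below maps the three use cases to functions documented in this manual (the calls only hint at the required arguments; see the individual help pages for details):

#1. access the data
file <- system.file(file.path("extdata","martins_comcom_subset.nc"),
                    package="ebvcube")
prop <- ebv_properties(file)                        #read the metadata
datacubes <- ebv_datacubepaths(file, verbose=FALSE) #list the datacubes
data <- ebv_read(filepath = file, datacubepath = datacubes[1,1],
                 entity = 1, timestep = 1)          #read one slice
#2. visualise the data
ebv_trend(filepath = file, datacubepath = datacubes[1,1], entity = 1)
#3. create your own EBV netCDF: set it up with ebv_create() and fill it
#   with ebv_add_data() (see the respective help pages for the arguments)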
Maintainer: Luise Quoss [email protected] (ORCID)
Authors:
Nestor Fernandez [email protected] (ORCID)
Christian Langer [email protected] (ORCID)
Jose Valdez [email protected] (ORCID)
Henrique Miguel Pereira [email protected] (ORCID)
Simple outlines of world countries
world_boundaries
A data.frame with 177 entries and geometry stored as WKT.
Data downloaded from Natural Earth (version 4.0.0), with a reduced set of attributes.
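A brief sketch of how the bundled outlines might be loaded (assuming the dataset is exported as a lazy-loaded data object named world_boundaries):

#load the bundled country outlines and inspect the first entries
data("world_boundaries", package = "ebvcube")
head(world_boundaries)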