Title: Tools for Highfrequency Data Analysis
Description: Provide functionality to manage, clean and match highfrequency trades and quotes data, calculate various liquidity measures, estimate and forecast volatility, detect price jumps and investigate microstructure noise and intraday periodicity. A detailed vignette can be found in the open-access paper "Analyzing Intraday Financial Data in R: The highfrequency Package" by Boudt, Kleen, and Sjoerup (2022, <doi:10.18637/jss.v104.i08>).
Authors: Kris Boudt [aut, cre], Jonathan Cornelissen [aut], Scott Payseur [aut], Giang Nguyen [ctb], Onno Kleen [aut], Emil Sjoerup [aut]
Maintainer: Kris Boudt <[email protected]>
License: GPL (>= 2)
Version: 1.0.1
Built: 2024-10-31 06:57:52 UTC
Source: CRAN
The highfrequency package provides numerous tools for analyzing high-frequency financial data, including functionality to:
Clean, handle, and manage high-frequency trades and quotes data
Calculate liquidity measures
Calculate (multivariate) realized measures of the distribution of high-frequency returns
Estimate models for realized measures of volatility and the corresponding forecasts
Detect jumps in prices
Analyze market microstructure noise in asset prices
Estimate spot volatility and drift, as well as analyze intraday periodicity of spot volatility
Authors: Kris Boudt, Jonathan Cornelissen, Onno Kleen, Scott Payseur, Emil Sjoerup
Maintainer: Kris Boudt <[email protected]>
Contributors: Giang Nguyen
Thanks: We would like to thank Brian Peterson, Chris Blakely, Dirk Eddelbuettel, Maarten Schermer, and Eric Zivot.
Useful links:
Report bugs at https://github.com/jonathancornelissen/highfrequency/issues
Function to aggregate high-frequency data by last-tick aggregation to an arbitrary periodicity based on wall-clock time. Alternatively, the aggregation can be done by number of ticks. When we do not do tick-based aggregation, this function accepts an arbitrary number of symbols over an arbitrary number of days. Although the function has the word "Price" in its name, it is general and works on arbitrary time series, either xts or data.table objects; the latter requires a DT column containing POSIXct time stamps.
aggregatePrice(pData, alignBy = "minutes", alignPeriod = 1, marketOpen = "09:30:00", marketClose = "16:00:00", fill = FALSE, tz = NULL)
pData: a data.table or xts object containing the intraday price series, possibly for multiple symbols over multiple days.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "ticks", "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. E.g., to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
marketOpen: the market opening time, by default: marketOpen = "09:30:00".
marketClose: the market closing time, by default: marketClose = "16:00:00".
fill: indicates whether rows without trades should be added with the most recent value, FALSE by default.
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
The time stamps of the new time series are the closing times and/or days of the intervals. The element of the returned series with e.g. time stamp 09:35:00 contains the last observation up to that point, including the value at 09:35:00 itself.
In case alignBy = "ticks"
, the sampling is done such the sampling starts on the first tick, and the last tick is always included.
For example, if 14 observations are made on one day, and these are 1, 2, 3, ... 14.
Then, with alignBy = "ticks"
and alignPeriod = 3
, the output will be 1, 4, 7, 10, 13, 14.
A data.table or xts object containing the aggregated time series.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
# Aggregate price data to the 30-second frequency
aggregatePrice(sampleTData, alignBy = "secs", alignPeriod = 30)
# Aggregate price data to the 30-minute frequency, including zero-return price changes
aggregatePrice(sampleTData, alignBy = "minutes", alignPeriod = 30, fill = TRUE)
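Tick-based aggregation is also supported; a minimal sketch using the same sampleTData object (with alignPeriod = 3, every third tick is kept, and the last tick is always included):
# Aggregate by ticks instead of wall-clock time
aggregatePrice(sampleTData, alignBy = "ticks", alignPeriod = 3)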
Aggregate tick-by-tick quote data and return a data.table or xts object containing the aggregated quote data. See sampleQData for an example of the argument qData. This function accepts an arbitrary number of symbols over an arbitrary number of days.
aggregateQuotes(qData, alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL)
qData: a data.table or xts object containing the quote data to aggregate, possibly for multiple symbols over multiple days.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. E.g., to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
marketOpen: the market opening time, by default: marketOpen = "09:30:00".
marketClose: the market closing time, by default: marketClose = "16:00:00".
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
The output "BID" and "OFR" columns are constructed using previous tick aggregation.
The variables "BIDSIZ" and "OFRSIZ" are aggregated by taking the sum of the respective inputs over each interval.
The timestamps of the new time series are the closing times of the intervals.
Please note: returned objects always contain the first observation (i.e., the opening quotes).
A data.table or an xts object containing the aggregated quote data.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
# Aggregate quote data to the 30-second frequency
qDataAggregated <- aggregateQuotes(sampleQData, alignBy = "seconds", alignPeriod = 30)
qDataAggregated # Show the aggregated data
Aggregate tick-by-tick trade data and return a time series as a data.table or xts object, where the first observation is always the opening price and subsequent observations are the closing prices over the interval. This function accepts an arbitrary number of symbols over an arbitrary number of days.
aggregateTrades(tData, alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL)
tData: a data.table or xts object containing the trade data to aggregate, possibly for multiple symbols over multiple days.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. For example, to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
marketOpen: the market opening time, by default: marketOpen = "09:30:00".
marketClose: the market closing time, by default: marketClose = "16:00:00".
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
The time stamps of the new time series are the closing times and/or days of the intervals. The output "PRICE" column is constructed using previous-tick aggregation. The variable "SIZE" is aggregated by taking the sum over each interval. The variable "VWPRICE" is the aggregated price weighted by volume. In case of previous-tick aggregation or alignBy = "seconds"/"minutes"/"hours", the element of the returned series with e.g. time stamp 09:35:00 contains the last observation up to that point, including the value at 09:35:00 itself.
A data.table or xts object containing the aggregated time series.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
# Aggregate trade data to the 5-minute frequency
tDataAggregated <- aggregateTrades(sampleTData, alignBy = "minutes", alignPeriod = 5)
tDataAggregated
Aggregate a time series as an xts or data.table object. It can handle irregularly spaced time series and returns a regularly spaced one. Use univariate time series as input for this function; see aggregateTrades and aggregateQuotes to aggregate trade or quote data objects.
aggregateTS(ts, FUN = "previoustick", alignBy = "minutes", alignPeriod = 1, weights = NULL, dropna = FALSE, tz = NULL, ...)
ts: an xts or data.table object containing the time series to aggregate.
FUN: function to apply over each interval. By default, previous-tick aggregation is done. Alternatively one can set e.g. FUN = "mean". In case weights are supplied, this argument is ignored and a weighted average is taken.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "ticks", "secs", "seconds", "mins", "minutes", "hours", "days", "weeks".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. For example, to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
weights: by default, no weighting scheme is used. When you assign an xts object with weights to this argument, a weighted mean is taken over each interval.
dropna: boolean, which determines whether empty intervals should be dropped. By default, an NA is returned in case an interval is empty, except when the user opts for previous-tick aggregation, by setting FUN = "previoustick" (the default).
tz: character denoting which time zone the output should be in. Defaults to NULL.
...: extra parameters passed on to FUN.
The time stamps of the new time series are the closing times and/or days of the intervals. For example, for a weekly aggregation the new time stamp is the last day in that particular week (namely Sunday).
In case of previous-tick aggregation, when alignBy is "seconds", "minutes", or "hours", the element of the returned series with e.g. time stamp 09:35:00 contains the last observation up to that point, including the value at 09:35:00 itself.
Please note: in case an interval is empty, an NA is returned by default. In case of e.g. previous-tick aggregation, it makes sense to fill these NAs with the function na.locf (last observation carried forward) from the zoo package.
In case alignBy = "ticks", the sampling is done such that it starts on the first tick and the last tick is always included. For example, if 14 observations are made on one day, numbered 1, 2, 3, ..., 14, then with alignBy = "ticks" and alignPeriod = 3 the output will be observations 1, 4, 7, 10, 13, and 14.
An xts object containing the aggregated time series.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
# Load sample price data
## Not run: 
library(xts)
ts <- as.xts(sampleTData[, list(DT, PRICE, SIZE)])

# Previous tick aggregation to the 5-minute sampling frequency:
tsagg5min <- aggregateTS(ts, alignBy = "minutes", alignPeriod = 5)
head(tsagg5min)
# Previous tick aggregation to the 30-second sampling frequency:
tsagg30sec <- aggregateTS(ts, alignBy = "seconds", alignPeriod = 30)
tail(tsagg30sec)
tsagg3ticks <- aggregateTS(ts, alignBy = "ticks", alignPeriod = 3)

## End(Not run)
This test examines the presence of jumps in high-frequency price series. It is based on the theory of Ait-Sahalia and Jacod (2009). It compares the realized power variation of equispaced returns computed at a fast time scale ($\Delta_n$), $PV_{t,N}(p, \Delta_n)$, with the one computed at the slower time scale ($k\Delta_n$), $PV_{t,N}(p, k\Delta_n)$. They found that the limit (for $\Delta_n \to 0$) of the realized power variation is invariant for different sampling scales and that the ratio of the two is $1$ in case of jumps and $k^{p/2-1}$ if there are no jumps. Therefore the AJ test detects the presence of jumps using the ratio of realized power variation sampled from two scales. The null hypothesis is no jumps.
The function returns three outcomes: 1. the z-test value, 2. the critical value under the chosen confidence level, and 3. the p-value.
Assume there are $N$ equispaced returns in period $t$; let $r_{t,i}$ be a return (with $i = 1, \ldots, N$) in period $t$. At the slower time scale there are $N/k$ equispaced returns in period $t$; let $r_{t,j}$ be a return (with $j = 1, \ldots, N/k$) in period $t$.
Then the AJ jump test is given by:
$$AJjumpTest_{t,N} = \frac{S_t(p, k, \Delta_n) - k^{p/2 - 1}}{\sqrt{V_{t,N}}},$$
in which
$$S_t(p, k, \Delta_n) = \frac{PV_{t,N}(p, k\Delta_n)}{PV_{t,N}(p, \Delta_n)}, \qquad PV_{t,N}(p, \Delta_n) = \sum_{i=1}^{N} |r_{t,i}|^p,$$
where $V_{t,N}$ is an estimator of the asymptotic variance of the ratio statistic, built from truncated power variations and the moments $\mu_p = \mathrm{E}|U|^p$ of independent standard normal random variables $U$; $p$ and $k$ are the parameters of the test.
AJjumpTest(pData, p = 4, k = 2, alignBy = NULL, alignPeriod = NULL, alphaMultiplier = 4, alpha = 0.975, ...)
pData: either an xts or a data.table containing the prices of a single asset, possibly over multiple days.
p: can be chosen among 2, 3, or 4. The author suggests 4; 4 by default.
k: can be chosen among 2, 3, or 4. The author suggests 2; 2 by default.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. For example, to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
alphaMultiplier: alpha multiplier; 4 by default.
alpha: numeric of length one with the significance level to use for the jump test(s). Defaults to 0.975.
...: used internally, do not set.
The theoretical framework underlying the jump test is that the logarithmic price process $X_t$ belongs to the class of Brownian semimartingales, which can be written as:
$$X_t = \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s + J_t,$$
where $\mu$ is the drift term, $\sigma$ denotes the spot volatility process, $W$ is a standard Brownian motion, and $J$ is a jump process defined by:
$$J_t = \sum_{j=1}^{N_t} \kappa_j,$$
where the $\kappa_j$ are nonzero random variables and the counting process $N_t$ can be either finite or infinite, for finite- or infinite-activity jumps.
Using the convergence properties of power variation and its dependence on the time scale on which it is measured, Ait-Sahalia and Jacod (2009) define a new variable which converges to 1 in the presence of jumps in the underlying return series, or to another deterministic and known number in the absence of jumps (Theodosiou and Zikes, 2009).
A list or xts object, depending on whether the input prices span more than one day.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Ait-Sahalia, Y. and Jacod, J. (2009). Testing for jumps in a discretely observed process. The Annals of Statistics, 37(1), 184-222.
Theodosiou, M., & Zikes, F. (2009). A comprehensive comparison of alternative tests for jumps in asset prices. Unpublished manuscript, Graduate School of Business, Imperial College London.
jt <- AJjumpTest(sampleTData[, list(DT, PRICE)], p = 2, k = 3, alignBy = "seconds", alignPeriod = 5, makeReturns = TRUE)
Filters raw quote data and returns only data that stems from the exchange with the highest value for the sum of "BIDSIZ" and "OFRSIZ", i.e., the highest quote volume.
autoSelectExchangeQuotes(qData, printExchange = TRUE)
qData: a data.table or xts object containing the raw quote data.
printExchange: indicates whether the chosen exchange is printed on the console, default is TRUE.
A data.table or xts object depending on the input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
autoSelectExchangeQuotes(sampleQDataRaw)
Filters raw trade data and returns only data that stems from the exchange with the highest value for the variable "SIZE", i.e., the highest trade volume.
autoSelectExchangeTrades(tData, printExchange = TRUE)
tData: an xts or data.table object containing the raw trade data.
printExchange: indicates whether the chosen exchange is printed on the console, default is TRUE.
A data.table or xts object depending on the input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
autoSelectExchangeTrades(sampleTDataRaw)
This test examines the presence of jumps in high-frequency price series. It is based on the theory of Barndorff-Nielsen and Shephard (2006). The null hypothesis is that there are no jumps.
BNSjumpTest(rData, IVestimator = "BV", IQestimator = "TP", type = "linear", logTransform = FALSE, max = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, alpha = 0.975)
rData: either an xts or a data.table containing the return (or price) data of a single asset, possibly over multiple days.
IVestimator: can be chosen among jump-robust integrated variance estimators, e.g. "BV" (bipower variance), "rMinRVar", or "rMedRVar". "BV" by default.
IQestimator: can be chosen among jump-robust integrated quarticity estimators, e.g. "TP" (tripower quarticity), "rMinRQuar", or "rMedRQuar". "TP" by default.
type: a method of BNS testing: can be "linear" or "ratio". "linear" by default.
logTransform: boolean, should be TRUE when the log transform of the test statistic is used. FALSE by default.
max: boolean, should be TRUE when the max adjustment of the test statistic is used. FALSE by default.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. For example, to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
makeReturns: boolean, should be TRUE when rData contains prices instead of returns. FALSE by default.
alpha: numeric of length one with the significance level to use for the jump test(s). Defaults to 0.975.
Assume there are $N$ equispaced returns in period $t$, and that the realized variance (RV), IVestimator, and IQestimator are based on these $N$ equispaced returns. Let $r_{t,i}$ be a return (with $i = 1, \ldots, N$) in period $t$.
Then the BNS jump test (linear version) is given by
$$BNSjumpTest_t = \frac{RV_t - IVestimator_t}{\sqrt{\theta \cdot \frac{1}{N} \cdot IQestimator_t}}.$$
The options for IVestimator and IQestimator are listed above. The constant $\theta$ depends on the chosen IVestimator (Huang and Tauchen, 2005).
The theoretical framework underlying the jump test is that the logarithmic price process $X_t$ belongs to the class of Brownian semimartingales, which can be written as:
$$X_t = \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s + J_t,$$
where $\mu$ is the drift term, $\sigma$ denotes the spot volatility process, $W$ is a standard Brownian motion, and $J$ is a jump process defined by:
$$J_t = \sum_{j=1}^{N_t} \kappa_j,$$
where the $\kappa_j$ are nonzero random variables and the counting process $N_t$ can be either finite or infinite, for finite- or infinite-activity jumps.
Since the realized volatility converges to the sum of integrated variance and jump variation, while the robust IVestimator converges to the integrated variance, the difference between RV and the IVestimator captures the jump part only; this observation underlies the BNS test for jumps (Theodosiou and Zikes, 2009).
A list or xts object (depending on whether the input prices span more than one day) with the following values:
the z-test value;
the critical value (with confidence level of 95%);
the p-value of the test.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Barndorff-Nielsen, O. E., and Shephard, N. (2006). Econometrics of testing for jumps in financial economics using bipower variation. Journal of Financial Econometrics, 4, 1-30.
Corsi, F., Pirino, D., and Reno, R. (2010). Threshold bipower variation and the impact of jumps on volatility forecasting. Journal of Econometrics, 159, 276-288.
Huang, X., and Tauchen, G. (2005). The relative contribution of jumps to total price variance. Journal of Financial Econometrics, 3, 456-499.
Theodosiou, M., and Zikes, F. (2009). A comprehensive comparison of alternative tests for jumps in asset prices. Unpublished manuscript, Graduate School of Business, Imperial College London.
bns <- BNSjumpTest(sampleTData[, list(DT, PRICE)], IVestimator = "rMinRVar",
                   IQestimator = "rMedRQuar", type = "linear", makeReturns = TRUE)
bns
Time series aggregation based on 'business time' statistics. Instead of sampling equidistantly in calendar time during a trading day, business-time sampling constructs an activity measure and samples equidistantly in that measure instead. For example, when sampling based on volume, business-time aggregation results in a time series that has an (approximately) equal amount of volume between each pair of consecutive observations (if possible).
businessTimeAggregation(pData, measure = "volume", obs = 390, bandwidth = 0.075, tz = NULL, ...)
pData: a data.table or xts object containing the intraday price series of a single asset, possibly over multiple days.
measure: character denoting which measure to use. Valid options are "volume", "intensity", and "vol"; "volume" by default.
obs: integer-valued numeric of length 1 denoting how many observations are wanted after the aggregation procedure.
bandwidth: numeric of length one, denoting which bandwidth parameter to use in the trade-intensity process estimation of Oomen (2006).
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
...: extra arguments passed on to the underlying measure estimation, e.g., the spot volatility estimation when measure = "vol".
A list containing "pData"
which is the aggregated data and a list containing the intensity process, split up day by day.
Emil Sjoerup.
Dong, Y., and Tse, Y. K. (2017). Business time sampling scheme with applications to testing semi-martingale hypothesis and estimating integrated volatility. Econometrics, 5, 51.
Oomen, R. C. A. (2006). Properties of realized variance under alternative sampling schemes. Journal of Business & Economic Statistics, 24, 219-237.
pData <- sampleTData[, list(DT, PRICE, SIZE)]
# Aggregate based on the trade intensity measure, getting 390 observations.
agged <- businessTimeAggregation(pData, measure = "intensity", obs = 390, bandwidth = 0.075)
# Plot the trade intensity measure
plot.ts(agged$intensityProcess$`2018-01-02`)
rCov(agged$pData[, list(DT, PRICE)], makeReturns = TRUE)
rCov(pData[, list(DT, PRICE)], makeReturns = TRUE, alignBy = "minutes", alignPeriod = 1)
# Aggregate based on the volume measure, getting 78 observations.
agged <- businessTimeAggregation(pData, measure = "volume", obs = 78)
rCov(agged$pData[, list(DT, PRICE)], makeReturns = TRUE)
rCov(pData[, list(DT, PRICE)], makeReturns = TRUE, alignBy = "minutes", alignPeriod = 5)
Calculates the test statistic for the drift burst hypothesis.
Let the efficient log-price be defined as:
$$dp_t = \mu_t \, dt + \sigma_t \, dW_t + dJ_t,$$
where $\mu_t$, $\sigma_t$, and $J_t$ are the spot drift, the spot volatility, and a jump process, respectively. However, due to microstructure noise, the observed log-price is
$$\tilde{p}_t = p_t + \varepsilon_t.$$
In order to robustify the results to the presence of market microstructure noise, the pre-averaged returns are used:
$$\Delta_i^n \bar{p} = \sum_{j=1}^{k_n - 1} g\!\left(\frac{j}{k_n}\right) \Delta_{i+j}^n \tilde{p},$$
where $g(x) = \min(x, 1-x)$ is a weighting function and $k_n$ is the pre-averaging horizon.
The test statistic for the drift burst hypothesis can then be calculated as
$$\bar{T}_t^n = \sqrt{\frac{h_n}{K_2}} \, \frac{\hat{\bar{\mu}}_t^n}{\sqrt{\hat{\bar{\sigma}}_t^n}},$$
where $K_2 = \int K(x)^2 \, dx$,
$$\hat{\bar{\mu}}_t^n = \frac{1}{h_n} \sum_{i} K\!\left(\frac{t_{i-1} - t}{h_n}\right) \Delta_{i-1}^n \bar{p},$$
and
$$\hat{\bar{\sigma}}_t^n = \frac{1}{h'_n} \left[ \sum_{i} K\!\left(\frac{t_{i-1} - t}{h'_n}\right)^2 \left(\Delta_{i-1}^n \bar{p}\right)^2 + 2 \sum_{L=1}^{L_n} \omega\!\left(\frac{L}{L_n}\right) \sum_{i} K\!\left(\frac{t_{i-1} - t}{h'_n}\right) K\!\left(\frac{t_{i-1+L} - t}{h'_n}\right) \Delta_{i-1}^n \bar{p} \, \Delta_{i-1+L}^n \bar{p} \right],$$
where $\omega(\cdot)$ is a smooth kernel function, in this case the Parzen kernel, $L_n$ is the lag length for adjusting for auto-correlation, and $K(\cdot)$ is a kernel weighting function, which in this case is the left-sided exponential kernel.
driftBursts(pData, testTimes = seq(34260, 57600, 60), preAverage = 5, ACLag = -1L, meanBandwidth = 300L, varianceBandwidth = 900L, parallelize = FALSE, nCores = NA, warnings = TRUE)
pData: either a data.table or an xts object containing the prices of a single asset, possibly over multiple days.
testTimes: a numeric vector containing the times at which to test for drift bursts, denoted in seconds after midnight. The default, seq(34260, 57600, 60), tests every minute between 09:31:00 and 16:00:00.
preAverage: a positive integer denoting the length of the pre-averaging window for the log-prices. Default is 5.
ACLag: a positive integer greater than 1 denoting how many lags are used in the HAC estimator of the variance; the default of -1L denotes automatic lag selection.
meanBandwidth: an integer denoting the bandwidth of the left-sided exponential kernel used for estimating the local mean. Default is 300L.
varianceBandwidth: an integer denoting the bandwidth of the left-sided exponential kernel used for estimating the local variance. Default is 900L.
parallelize: a logical denoting whether the computations should be parallelized. Default is FALSE.
nCores: an integer denoting the number of cores to use when parallelize is TRUE. Default is NA.
warnings: a logical denoting whether warnings should be shown. Default is TRUE.
If the testTimes vector contains instructions to test before the first trade, or more than 15 minutes after the last trade, these entries will be deleted, as not doing so may cause crashes.
The test statistic is unstable before max(meanBandwidth, varianceBandwidth) seconds have passed.
The lags from the Newey-West algorithm are increased by 2 * (preAverage - 1) because, due to the pre-averaging, we know at least this many lags should be corrected for. The maximum of 20 lags is also increased by this factor for the same reason.
An object of class DBH and list containing the series of the drift burst hypothesis test statistic as well as the estimated spot drift and variance series. The list also contains information such as the variance and mean bandwidths, the pre-averaging setting, and the number of observations. Additionally, the list will contain information on whether testing happened for all testTimes entries.
Objects of class DBH have the methods print.DBH, plot.DBH, and getCriticalValues.DBH, which print, plot, and retrieve critical values for the test described in appendix B of Christensen, Oomen, and Reno (2020).
Emil Sjoerup
Christensen, K., Oomen, R., and Reno, R. (2020) The drift burst hypothesis. Journal of Econometrics. Forthcoming.
# Usage with data.table object
dat <- sampleTData[as.Date(DT) == "2018-01-02"]
# Testing every 60 seconds after 09:45:00
DBH1 <- driftBursts(dat, testTimes = seq(35100, 57600, 60), preAverage = 2, ACLag = -1L,
                    meanBandwidth = 300L, varianceBandwidth = 900L)
print(DBH1)
plot(DBH1, pData = dat)

# Usage with xts object (1 column)
library("xts")
dat <- xts(sampleTData[as.Date(DT) == "2018-01-03"]$PRICE,
           order.by = sampleTData[as.Date(DT) == "2018-01-03"]$DT)
# Testing every 60 seconds after 09:45:00
DBH2 <- driftBursts(dat, testTimes = seq(35100, 57600, 60), preAverage = 2, ACLag = -1L,
                    meanBandwidth = 300L, varianceBandwidth = 900L)
plot(DBH2, pData = dat)

## Not run: 
# This block takes some time
dat <- xts(sampleTDataEurope$PRICE, order.by = sampleTDataEurope$DT)
# Testing every 60 seconds after 09:00:00
system.time({DBH4 <- driftBursts(dat, testTimes = seq(32400 + 900, 63000, 60), preAverage = 2,
                                 ACLag = -1L, meanBandwidth = 300L, varianceBandwidth = 900L)})
system.time({DBH4 <- driftBursts(dat, testTimes = seq(32400 + 900, 63000, 60), preAverage = 2,
                                 ACLag = -1L, meanBandwidth = 300L, varianceBandwidth = 900L,
                                 parallelize = TRUE, nCores = 8)})
plot(DBH4, pData = dat)

# The print method for DBH objects takes an argument alpha that determines
# the confidence level of the test performed
print(DBH4, alpha = 0.99)
# Additionally, criticalValue can be passed directly
print(DBH4, criticalValue = 3)
max(abs(DBH4$tStat)) > getCriticalValues(DBH4, 0.99)$quantile

## End(Not run)
Filter raw trade data and return only data between market open and market close. By default, marketOpen = "09:30:00" and marketClose = "16:00:00" (see Brownlees and Gallo (2006) for more information on good choices for these arguments).
exchangeHoursOnly(data, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL)
data: a data.table or xts object containing the time series data, with one or multiple days of data.
marketOpen: character in the format of "HH:MM:SS", specifying the opening time of the exchange(s).
marketClose: character in the format of "HH:MM:SS", specifying the closing time of the exchange(s).
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
An xts or data.table object depending on the input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Brownlees, C. T. and Gallo, G. M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, pages 2232-2245.
exchangeHoursOnly(sampleTDataRaw)
Convenience function to gather data from one xts or data.table with at least a "DT" column and d columns containing price data, into a data.table with a "DT", a "SYMBOL", and a "PRICE" column. This function is the opposite of spreadPrices.
gatherPrices(data)
data: an xts or data.table object containing the data to gather.
A data.table with columns DT, SYMBOL, and PRICE.
Emil Sjoerup
## Not run: 
library(data.table)
data1 <- copy(sampleTData)[, `:=`(PRICE = PRICE * runif(.N, min = 0.99, max = 1.01),
                                  DT = DT + runif(.N, 0.01, 0.02))]
data2 <- copy(sampleTData)[, SYMBOL := 'XYZ']

dat1 <- rbind(data1[, list(DT, SYMBOL, PRICE)], data2[, list(DT, SYMBOL, PRICE)])
setkeyv(dat1, c("DT", "SYMBOL"))
dat1

dat <- spreadPrices(dat1) # Easy to use for realized measures
dat

dat <- gatherPrices(dat)
dat
all.equal(dat1, dat) # We have changed to RM format and back.

## End(Not run)
Function to retrieve high-frequency data from Alpha Vantage; a wrapper around quantmod's getSymbols.av function.
getAlphaVantageData(symbols = NULL, interval = "5min", outputType = "xts", apiKey = NULL, doSleep = TRUE)
symbols: character vector with the symbols to import.
interval: the sampling interval of the data retrieved. Should be one of "1min", "5min", "15min", "30min", or "60min".
outputType: string, either "xts" or "DT", denoting whether the output should be an xts object or a data.table. "xts" by default.
apiKey: string with the API key provided by Alpha Vantage.
doSleep: logical; when the length of symbols > 5, the function will by default sleep for 12 seconds between API calls.
The doSleep argument is set to TRUE by default because Alpha Vantage has a limit of five API calls per minute. The function does not try to detect when the last API call was made, which means that if you made successive calls to get 3 symbols in rapid succession, the function may not retrieve all the data.
An object of type xts or data.table in case the length of symbols is 1. If the length of symbols > 1, the xts or data.table objects are put into a list.
Emil Sjoerup (wrapper only), Paul Teetor (for quantmod's getSymbols.av)
The getSymbols.av function in the quantmod package
## Not run: 
# Get data for SPY at an interval of 1 minute in the standard xts format.
data <- getAlphaVantageData(symbols = "SPY", apiKey = "yourKey", interval = "1min")

# Get data for 3M and Goldman Sachs at a 5 minute interval in the data.table format.
# The data.tables will be put in a list.
data <- getAlphaVantageData(symbols = c("MMM", "GS"), interval = "5min",
                            outputType = "DT", apiKey = 'yourKey')

# Get data for JPM and Citicorp at a 15 minute interval in the xts format.
# The xts objects will be put in a list.
data <- getAlphaVantageData(symbols = c("JPM", "C"), interval = "15min",
                            outputType = "xts", apiKey = "yourKey")

## End(Not run)
Method for DBH objects to calculate the critical value for the presence of a burst of drift. The critical value is that of the test described in appendix B of Christensen, Oomen, and Reno (2020).
getCriticalValues(x, alpha = 0.95)
x: an object of class DBH.
alpha: numeric denoting the confidence level for the critical value. Defaults to 0.95.
Emil Sjoerup
Christensen, K., Oomen, R., and Reno, R. (2020) The drift burst hypothesis. Journal of Econometrics. Forthcoming.
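A minimal usage sketch, assuming a DBH object fitted with driftBursts as in the examples above:
# Fit the drift burst test on one day of sample data, then retrieve critical values
dbh <- driftBursts(sampleTData[as.Date(DT) == "2018-01-02"],
                   testTimes = seq(35100, 57600, 60), preAverage = 2)
getCriticalValues(dbh, alpha = 0.95)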
Function returns an xts or data.table object containing 23 liquidity measures. Please see details below. Note that this function assumes a regular time grid.
getLiquidityMeasures(tqData, win = 300)
tqData: a data.table or xts object containing the matched trades and quotes data, i.e., the output of matchTradesQuotes.
win: a window length for the forward prices used for the 'realized' spread; 300 by default.
NOTE: the input xts or data.table should only contain one day of observations.
Some markets publish information about whether it was a buyer or a seller who initiated the trade. This information can be passed in a column DIRECTION; this column must only have 1 or -1 as values.
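If such a column is not available, trade directions can first be inferred and attached; a minimal sketch, assuming the matched sample data used in the examples below:
# Infer directions with the Lee-Ready algorithm and attach them before computing measures
tqData <- matchTradesQuotes(sampleTData[as.Date(DT) == "2018-01-02"],
                            sampleQData[as.Date(DT) == "2018-01-02"])
tqData[, DIRECTION := getTradeDirection(tqData)] # 1 = buy, -1 = sell
res <- getLiquidityMeasures(tqData)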
The respective liquidity measures are defined as follows. In the formulas, $P_t$ denotes the trade price at time $t$, $SIZE_t$ the trade size, $BID_t$ and $OFR_t$ the matched bid and offer quotes, $BIDSIZ_t$ and $OFRSIZ_t$ the corresponding quoted depths, $m_t = (BID_t + OFR_t)/2$ the midquote, and $D_t$ is 1 (-1) if trade $t$ was a buy (sell) (see Boehmer (2005), Bessembinder (2003)). Note that the input of this function consists of the matched trades and quotes, so the time indication refers to the trade time stamp (and thus not to the registered quote time stamp), except where stated otherwise.

effectiveSpread: effective spread,
$$\mathrm{effectiveSpread}_t = 2 D_t (P_t - m_t).$$

realizedSpread: realized spread,
$$\mathrm{realizedSpread}_t = 2 D_t (P_t - m_{t + \mathrm{win}}),$$
where the midquote win seconds (300 by default) after the trade is used; the time indication of $BID$ and $OFR$ here refers to the registered time of the quote in seconds.

valueTrade: trade value,
$$\mathrm{valueTrade}_t = SIZE_t \times P_t.$$

signedValueTrade: signed trade value,
$$\mathrm{signedValueTrade}_t = D_t \times (SIZE_t \times P_t).$$

depthImbalanceDifference: depth imbalance (as a difference),
$$\mathrm{depthImbalanceDifference}_t = \frac{D_t (OFRSIZ_t - BIDSIZ_t)}{OFRSIZ_t + BIDSIZ_t}.$$

depthImbalanceRatio: depth imbalance (as a ratio),
$$\mathrm{depthImbalanceRatio}_t = \left(\frac{OFRSIZ_t}{BIDSIZ_t}\right)^{D_t}.$$

proportionalEffectiveSpread: proportional effective spread,
$$\mathrm{proportionalEffectiveSpread}_t = \frac{\mathrm{effectiveSpread}_t}{m_t}$$
(Venkataraman, 2001).

proportionalRealizedSpread: proportional realized spread,
$$\mathrm{proportionalRealizedSpread}_t = \frac{\mathrm{realizedSpread}_t}{m_t}$$
(Venkataraman, 2001).

priceImpact: price impact,
$$\mathrm{priceImpact}_t = \frac{\mathrm{effectiveSpread}_t - \mathrm{realizedSpread}_t}{2}$$
(see Boehmer (2005), Bessembinder (2003)).

proportionalPriceImpact: proportional price impact,
$$\mathrm{proportionalPriceImpact}_t = \frac{\mathrm{priceImpact}_t}{m_t}$$
(Venkataraman, 2001).

halfTradedSpread: half traded spread,
$$\mathrm{halfTradedSpread}_t = D_t (P_t - m_t).$$

proportionalHalfTradedSpread: proportional half traded spread,
$$\mathrm{proportionalHalfTradedSpread}_t = \frac{\mathrm{halfTradedSpread}_t}{m_t}.$$

squaredLogReturn: squared log return on trade prices,
$$\mathrm{squaredLogReturn}_t = (\log P_t - \log P_{t-1})^2.$$

absLogReturn: absolute log return on trade prices,
$$\mathrm{absLogReturn}_t = |\log P_t - \log P_{t-1}|.$$

quotedSpread: quoted spread,
$$\mathrm{quotedSpread}_t = OFR_t - BID_t.$$

proportionalQuotedSpread: proportional quoted spread,
$$\mathrm{proportionalQuotedSpread}_t = \frac{\mathrm{quotedSpread}_t}{m_t}$$
(Venkataraman, 2001).

logQuotedSpread: log quoted spread,
$$\mathrm{logQuotedSpread}_t = \log\!\left(\frac{OFR_t}{BID_t}\right)$$
(Hasbrouck and Seppi, 2001).

logQuotedSize: log quoted size,
$$\mathrm{logQuotedSize}_t = \log(OFRSIZ_t) + \log(BIDSIZ_t)$$
(Hasbrouck and Seppi, 2001).

quotedSlope: quoted slope,
$$\mathrm{quotedSlope}_t = \frac{\mathrm{quotedSpread}_t}{\mathrm{logQuotedSize}_t}$$
(Hasbrouck and Seppi, 2001).

logQSlope: log quoted slope,
$$\mathrm{logQSlope}_t = \frac{\mathrm{logQuotedSpread}_t}{\mathrm{logQuotedSize}_t}.$$

midQuoteSquaredReturn: midquote squared return,
$$\mathrm{midQuoteSquaredReturn}_t = (\log m_t - \log m_{t-1})^2.$$

midQuoteAbsReturn: midquote absolute return,
$$\mathrm{midQuoteAbsReturn}_t = |\log m_t - \log m_{t-1}|.$$

signedTradeSize: signed trade size,
$$\mathrm{signedTradeSize}_t = D_t \times SIZE_t.$$
A modified (enlarged) xts or data.table with the new measures.
Bessembinder, H. (2003). Issues in assessing trade execution costs. Journal of Financial Markets, 223-257.
Boehmer, E. (2005). Dimensions of execution quality: Recent evidence for US equity markets. Journal of Financial Economics, 78, 553-582.
Hasbrouck, J. and Seppi, D. J. (2001). Common factors in prices, order flows and liquidity. Journal of Financial Economics, 59, 383-411.
Venkataraman, K. (2001). Automated versus floor trading: An analysis of execution costs on the Paris and New York exchanges. The Journal of Finance, 56, 1445-1485.
tqData <- matchTradesQuotes(sampleTData[as.Date(DT) == "2018-01-02"],
                            sampleQData[as.Date(DT) == "2018-01-02"])
res <- getLiquidityMeasures(tqData)
res
Function returns a vector with the inferred trade direction which is determined using the Lee and Ready algorithm (Lee and Ready, 1991).
getTradeDirection(tqData)
tqData: a data.table or xts object containing the matched trades and quotes data, i.e., the output of matchTradesQuotes.
NOTE: By convention the first observation is always marked as a buy.
A vector with values 1 or -1, where 1 (-1) indicates that the inferred trade direction is a buy (sell).
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup. Special thanks to Dirk Eddelbuettel.
Lee, C. M. C. and Ready, M. J. (1991). Inferring trade direction from intraday data. Journal of Finance, 46, 733-746.
# Generate matched trades and quote data set
tqData <- matchTradesQuotes(sampleTData[as.Date(DT) == "2018-01-02"],
                            sampleQData[as.Date(DT) == "2018-01-02"])
directions <- getTradeDirection(tqData)
head(directions)
Function returns the estimates for the heterogeneous autoregressive model (HAR) for realized volatility discussed in Andersen et al. (2007) and Corsi (2009). This model is mainly used to forecast the next day's volatility based on the high-frequency returns of the past.
HARmodel(data, periods = c(1, 5, 22), periodsJ = c(1, 5, 22), periodsQ = c(1), leverage = NULL, RVest = c("rCov", "rBPCov", "rQuar"), type = "HAR", inputType = "RM", jumpTest = "ABDJumptest", alpha = 0.05, h = 1, transform = NULL, externalRegressor = NULL, periodsExternal = c(1), ...)
data: an xts object containing a realized measure of the daily volatility (when inputType = "RM") or the intraday high-frequency returns (when inputType = "returns").
periods: a vector of integers indicating over how many days the realized measures in the model should be aggregated. By default periods = c(1, 5, 22), which corresponds to a daily, weekly, and monthly frequency.
periodsJ: a vector of integers indicating over what time periods the jump components in the model should be aggregated. By default periodsJ = c(1, 5, 22).
periodsQ: a vector of integers indicating over what time periods the realized quarticity in the model should be aggregated. By default periodsQ = c(1).
leverage: a vector of integers indicating over what periods the negative returns should be aggregated. See Corsi and Reno (2012) for more information. By default leverage = NULL and the model assumes no leverage effect.
RVest: a character vector with one, two, or three elements. The first element always refers to the name of the function to estimate the daily integrated variance (non-jump-robust). The second and third elements depend on which type of model is estimated: for type = "HARJ" and type = "HARCJ", the second element refers to the jump-robust estimator; for type = "HARQ", the second element refers to the realized quarticity estimator; for type = "HARQJ", the second element refers to the jump-robust estimator and the third to the realized quarticity estimator. By default RVest = c("rCov", "rBPCov", "rQuar").
type: a string referring to the type of HAR model you would like to estimate. By default type = "HAR", the standard model of Corsi (2009); other options include "HARJ", "HARCJ", "HARQ", "HARQJ", "CHAR", and "CHARQ".
inputType: a string denoting if the input data consists of realized measures or high-frequency returns. The default "RM" is the only way to denote realized measures; everything else denotes returns.
jumpTest: the function name of a function used to test whether the jump variability is significant that day. By default jumpTest = "ABDJumptest".
alpha: a real indicating the confidence level used in testing for jumps. By default alpha = 0.05.
h: an integer indicating over how many days the dependent variable should be aggregated. By default h = 1, i.e., no aggregation is done and one models the daily realized volatility.
transform: optionally a string referring to a function that transforms both the dependent and explanatory variables in the model. By default transform = NULL, so no transformation is done. Examples are "log" or "sqrt".
externalRegressor: an xts object of the same frequency as the data, containing an external regressor to include in the model. By default externalRegressor = NULL.
periodsExternal: a vector of integers indicating over how many days externalRegressor should be aggregated. By default periodsExternal = c(1).
...: extra arguments for the jump test.
The basic specification in Corsi (2009) is as follows. Let $RV_t$ be the realized variance at day $t$ and let
$$RV_{t-k+1:t} = k^{-1} \sum_{i=0}^{k-1} RV_{t-i}$$
be the average realized variance between $t-k+1$ and $t$. The dynamics of the model are given by
$$RV_{t+1} = \beta_0 + \beta_1 \, RV_t + \beta_2 \, RV_{t-4:t} + \beta_3 \, RV_{t-21:t} + \varepsilon_{t+1},$$
which is estimated by ordinary least squares under the assumption that at time $t$, the conditional mean of $\varepsilon_{t+1}$ is equal to zero.
For other specifications, please refer to the cited papers.
The standard errors reported in the print and summary methods are Newey-West standard errors calculated with the sandwich package.
The function outputs an object of class HARmodel and lm (so HARmodel is a subclass of lm). Objects of class HARmodel have the methods plot.HARmodel, predict.HARmodel, print.HARmodel, and summary.HARmodel.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Andersen, T. G., Bollerslev, T., and Diebold, F. (2007). Roughing it up: Including jump components in the measurement, modelling and forecasting of return volatility. The Review of Economics and Statistics, 89, 701-720.
Corsi, F. (2009). A simple approximate long memory model of realized volatility. Journal of Financial Econometrics, 7, 174-196.
Corsi, F. and Reno R. (2012). Discrete-time volatility forecasting with persistent leverage effect and the link with continuous-time volatility modeling. Journal of Business & Economic Statistics, 30, 368-380.
Bollerslev, T., Patton, A., and Quaedvlieg, R. (2016). Exploiting the errors: A simple approach for improved volatility forecasting, Journal of Econometrics, 192, 1-18.
# Example 1: HAR
# Forecasting daily realized volatility for the S&P 500 using the basic HAR model
library(xts)
RVSPY <- as.xts(SPYRM$RV5, order.by = SPYRM$DT)

x <- HARmodel(data = RVSPY, periods = c(1, 5, 22), RVest = c("rCov"),
              type = "HAR", h = 1, transform = NULL, inputType = "RM")
class(x)
x
summary(x)
plot(x)
predict(x)

# Example 2: HARQ
# Get the high-frequency returns
dat <- as.xts(sampleOneMinuteData[, makeReturns(STOCK), by = list(DATE = as.Date(DT))])
x <- HARmodel(dat, periods = c(1, 5, 10), periodsJ = c(1, 5, 10), periodsQ = c(1),
              RVest = c("rCov", "rQuar"), type = "HARQ", inputType = "returns")
# Estimate the HAR model of type HARQ
class(x)
x
# plot(x)
# predict(x)

# Example 3: HARQJ with already computed realized measures
dat <- SPYRM[, list(DT, RV5, BPV5, RQ5)]
x <- HARmodel(as.xts(dat), periods = c(1, 5, 22), periodsJ = c(1), periodsQ = c(1),
              type = "HARQJ")
# Estimate the HAR model of type HARQJ
class(x)
x
# plot(x)
predict(x)

# Example 4: CHAR with already computed realized measures
dat <- SPYRM[, list(DT, RV5, BPV5)]
x <- HARmodel(as.xts(dat), periods = c(1, 5, 22), type = "CHAR")
# Estimate the HAR model of type CHAR
class(x)
x
# plot(x)
predict(x)

# Example 5: CHARQ with already computed realized measures
dat <- SPYRM[, list(DT, RV5, BPV5, RQ5)]
x <- HARmodel(as.xts(dat), periods = c(1, 5, 22), periodsQ = c(1), type = "CHARQ")
# Estimate the HAR model of type CHARQ
class(x)
x
# plot(x)
predict(x)

# Example 6: HARCJ with pre-computed test statistics
## BNSjumpTest manually calculated
testStats <- sqrt(390) * (SPYRM$RV1 - SPYRM$BPV1) / sqrt((pi^2/4 + pi - 3 - 2) * SPYRM$medRQ1)
model <- HARmodel(cbind(as.xts(SPYRM[, list(DT, RV5, BPV5)]), testStats), type = "HARCJ")
This function calculates the High frEquency bAsed VolatilitY (HEAVY) model proposed in Shephard and Sheppard (2010).
HEAVYmodel(data, startingValues = NULL)
data: an xts object where the first column is a vector of returns and the second column is a vector of realized stock market variation.
startingValues: a vector of alternative starting values: the first three elements for the variance equation and the last three for the measurement equation.
Let $r_t$ and $RM_t$ be series of demeaned returns and realized measures of daily stock price variation. The HEAVY model is a two-component model. We assume $r_t = h_t^{1/2} Z_t$, where $Z_t$ is an i.i.d. zero-mean and unit-variance innovation term. The dynamics of the HEAVY model are given by
$$h_{t+1} = \omega + \alpha \, RM_t + \beta \, h_t$$
and
$$\mu_{t+1} = \omega_R + \alpha_R \, RM_t + \beta_R \, \mu_t.$$
The two equations are estimated separately, as mentioned in Shephard and Sheppard (2010). We report robust standard errors based on the matrix product of inverted Hessians and the outer product of gradients.
Note that we always demean the returns in the data input, as we do not include a constant in the mean equation.
The function outputs an object of class HEAVYmodel, a list containing:
coefficients = estimated coefficients.
se = robust standard errors based on inverted Hessian matrix.
residuals = the residuals in the return equation.
llh = the two-component log-likelihood values.
varCondVariances = conditional variances in the variance equation.
RMCondVariances = conditional variances in the RM equation.
data = the input data.
The class HEAVYmodel has the following methods: plot.HEAVYmodel, predict.HEAVYmodel, print.HEAVYmodel, and summary.HEAVYmodel.
Onno Kleen and Emil Sjoerup.
Shephard, N. and Sheppard, K. (2010). Realising the future: Forecasting with high frequency based volatility (HEAVY) models. Journal of Applied Econometrics 25, 197–231.
# Calculate returns in percentages
logReturns <- 100 * makeReturns(SPYRM$CLOSE)[-1]

# Combine both returns and realized measures into one xts
# Due to return calculation, the first observation is missing
dataSPY <- xts::xts(cbind(logReturns, SPYRM$BPV5[-1] * 10000), order.by = SPYRM$DT[-1])

# Fit the HEAVY model
fittedHEAVY <- HEAVYmodel(dataSPY)

# Examine the estimated coefficients and robust standard errors
fittedHEAVY

# Calculate iterative multi-step-ahead forecasts
predict(fittedHEAVY, stepsAhead = 12)
This documentation page functions as a point of reference to quickly look up the estimators of the integrated covariance provided in the highfrequency package.
The implemented estimators are:
Realized covariance rCov
Realized bipower covariance rBPCov
Hayashi-Yoshida realized covariance rHYCov
Realized kernel covariance rKernelCov
Realized outlyingness-weighted covariance rOWCov
Realized threshold covariance rThresholdCov
Realized two-scale covariance rTSCov
Robust realized two-scale covariance rRTSCov
Subsampled realized covariance rAVGCov
Realized semi-covariance rSemiCov
Modulated Realized covariance rMRCov
Realized Cholesky covariance rCholCov
Beta-adjusted realized covariance rBACov
See IVar for a list of implemented estimators of the integrated variance.
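As a quick orientation, a minimal sketch comparing two of the listed estimators; the sampleOneMinuteData object (a DT column plus one price column per asset) is assumed:
# Daily realized covariance and bipower covariance matrices from one-minute prices
rcov <- rCov(sampleOneMinuteData, makeReturns = TRUE)
rbpcov <- rBPCov(sampleOneMinuteData, makeReturns = TRUE)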
This function can be used to test for jumps in intraday price paths. The tests are of the form
$$L(t) = \frac{r_t - \hat{\mu}_t}{\hat{\sigma}_t},$$
i.e., a local return standardized by estimates of the spot drift and spot volatility. See spotVol and spotDrift for the available estimators of $\hat{\sigma}_t$ and $\hat{\mu}_t$, respectively.
intradayJumpTest(pData, volEstimator = "RM", driftEstimator = "none", alpha = 0.95, alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL, n = NULL, ...)
pData: a data.table or xts object containing the intraday price series of a single asset, possibly over multiple days.
volEstimator: character denoting which volatility estimator to use for the tests. See spotVol. Default is "RM", denoting realized measures.
driftEstimator: character denoting which drift estimator to use for the tests. See spotDrift. Default is "none".
alpha: numeric of length one determining what confidence level to use when constructing the critical values.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. E.g., to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
marketOpen: the market opening time. This should be in the time zone specified by tz. By default, marketOpen = "09:30:00".
marketClose: the market closing time. This should be in the time zone specified by tz. By default, marketClose = "16:00:00".
tz: fallback time zone used in case we are unable to identify the time zone of the data, by default: tz = NULL.
n: number of observations to use in the calculation of the critical values of the test statistic. If this is left as NULL, it will be determined from the data.
...: extra arguments passed on to the spot volatility and drift estimation (see spotVol and spotDrift).
The null hypothesis of the tests in this function is that there are no jumps in the price series.
Emil Sjoerup
Christensen, K., Oomen, R. C. A., Podolskij, M. (2014): Fact or Friction: Jumps at ultra high frequency. Journal of Financial Economics, 114, 576-599.
## Not run: 
# We can easily make a Lee-Mykland jump test.
LMtest <- intradayJumpTest(pData = sampleTData[, list(DT, PRICE)],
                           volEstimator = "RM", driftEstimator = "none",
                           RM = "rBPCov", lookBackPeriod = 20,
                           alignBy = "minutes", alignPeriod = 5,
                           marketOpen = "09:30:00", marketClose = "16:00:00")
plot(LMtest)

# We can just as easily use the pre-averaged version from the "Fact or Friction" paper
FoFtest <- intradayJumpTest(pData = sampleTData[, list(DT, PRICE)],
                            volEstimator = "PARM", driftEstimator = "none",
                            RM = "rBPCov", lookBackPeriod = 20, theta = 1.2,
                            marketOpen = "09:30:00", marketClose = "16:00:00")
plot(FoFtest)

## End(Not run)
This documentation page functions as a point of reference to quickly look up the estimators of the integrated variance provided in the highfrequency package.
The implemented estimators are:
Realized Variance rRVar
Median realized variance rMedRVar
Minimum realized variance rMinRVar
Realized quadpower variance rQPVar
Realized multipower variance rMPVar
Realized semivariance rSVar
Note that almost all estimators listed in ICov also yield estimates of the integrated variance on their diagonals.
See ICov for a list of implemented estimators of the integrated covariance.
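A minimal sketch comparing a plain and a jump-robust estimator on a five-minute grid, assuming the sampleTData object used throughout this manual:
# Realized variance and the jump-robust median realized variance
rv <- rRVar(sampleTData[, list(DT, PRICE)], alignBy = "minutes", alignPeriod = 5, makeReturns = TRUE)
medrv <- rMedRVar(sampleTData[, list(DT, PRICE)], alignBy = "minutes", alignPeriod = 5, makeReturns = TRUE)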
This function supplies information about the standard error and confidence band of integrated variance (IV) estimators under the Brownian semimartingale model, such as bipower variation, rMinRV, and rMedRV. Depending on the users' choices of estimator (integrated variance (IVestimator), integrated quarticity (IQestimator)) and confidence level, the function returns the result (Barndorff-Nielsen, 2002). The function returns three outcomes: 1. the value of the IV estimator, 2. the standard error of the IV estimator, and 3. the confidence band of the IV estimator.
Assume there are $N$ equispaced returns in period $t$.
Then the IV inference is given by the confidence band
$$\hat{IV}_t \pm cv \cdot SE,$$
in which:
$cv$: the critical value at the chosen confidence level;
$SE$: the standard error, $SE = \sqrt{\frac{1}{N} \, \theta \cdot \hat{IQ}_t}$;
$\theta$: depending on the IQestimator, $\theta$ can take different values (Andersen et al. (2012));
$\hat{IQ}_t$: the integrated quarticity estimator.
IVinference(rData, IVestimator = "RV", IQestimator = "rQuar", confidence = 0.95, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData: an xts or data.table object containing the price or return series of a single asset, possibly over multiple days.
IVestimator: can be chosen among integrated variance estimators: RV, BV, rMinRV or rMedRV. RV by default.
IQestimator: can be chosen among integrated quarticity estimators: rQuar, realized tri-power quarticity (TPQ), quad-power quarticity (QPQ), rMinRQuar or rMedRQuar. TPQ by default.
confidence: confidence level set by the users. 0.95 by default.
alignBy: character, indicating the time scale in which alignPeriod is expressed. Possible values are "secs", "seconds", "mins", "minutes", "hours".
alignPeriod: positive numeric, indicating the number of periods to aggregate over. E.g., to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes".
makeReturns: boolean, should be TRUE when rData contains prices instead of returns. FALSE by default.
...: additional arguments.
The theoretical framework is that the logarithmic price process $X_t$ belongs to the class of Brownian semimartingales, which can be written as:
$$X_t = \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s,$$
where $\mu$ is the drift term, $\sigma$ denotes the spot volatility process, and $W$ is a standard Brownian motion (assuming that there are no jumps).
list
Giang Nguyen, Jonathan Cornelissen and Kris Boudt
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
Barndorff-Nielsen, O. E. (2002). Econometric analysis of realized volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64, 253-280.
## Not run: library("xts") # This function only accepts xts data currently ivInf <- IVinference(as.xts(sampleTData[, list(DT, PRICE)]), IVestimator= "rMinRV", IQestimator = "rMedRQ", confidence = 0.95, makeReturns = TRUE) ivInf ## End(Not run)
This test examines jumps in high-frequency price data. It is based on the theory of Jiang and Oomen (JO), who show that, in the absence of jumps in the underlying sample path, the accumulated difference between simple returns and logarithmic returns captures one half of the integrated variance. The null hypothesis is that there are no jumps.

The function returns three outcomes: (1) the z-test value, (2) the critical value under the chosen confidence level, and (3) the p-value.

Assume there are \(N\) equispaced returns in period \(t\). Let \(r_{t,i}\) be a logarithmic return (with \(i = 1, \ldots, N\)) in period \(t\), and let \(R_{t,i}\) be a simple return (with \(i = 1, \ldots, N\)) in period \(t\). Then the JOjumpTest is given by

\[ \mathrm{JOjumpTest}_{t,N} = \frac{N \, BV_t}{\sqrt{\Omega_{SwV}}} \left( 1 - \frac{RV_t}{SwV_t} \right), \]

in which \(BV_t\) is the bipower variance, \(RV_t\) is the realized variance (as defined in Andersen et al., 2012), and

\[ SwV_t = 2 \sum_{i=1}^{N} (R_{t,i} - r_{t,i}), \qquad \Omega_{SwV} = \frac{\mu_6}{9} \, \frac{N^3 \mu_{6/p}^{-p}}{N - p - 1} \sum_{i=0}^{N-p} \prod_{k=1}^{p} |r_{t,i+k}|^{6/p}, \qquad \mu_p = \mathbb{E}[|U|^p] = 2^{p/2} \frac{\Gamma\!\left(\tfrac{1}{2}(p+1)\right)}{\Gamma\!\left(\tfrac{1}{2}\right)}, \]

where \(U\) denotes an independent standard normal random variable and \(p\) is the power parameter.
JOjumpTest( pData, power = 4, alignBy = NULL, alignPeriod = NULL, alpha = 0.975, ... )
pData |
a zoo/xts object containing all prices in period t for one asset. |
power |
can be chosen among 4 or 6. 4 by default. |
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. E.g. to aggregate
based on a 5 minute frequency, set |
alpha |
numeric of length one with the significance level to use for the jump test(s). Defaults to 0.975. |
... |
Used internally, do not set. |
The theoretical framework underlying the jump test is that the logarithmic price process belongs to the class of Brownian semimartingales and can be written as

\[ p_t = \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s + J_t, \]

where \(\mu\) is the drift term, \(\sigma\) denotes the spot volatility process, \(W\) is a standard Brownian motion, and \(J\) is a jump process defined by

\[ J_t = \sum_{j=1}^{N_t} \kappa_j, \]

where the \(\kappa_j\) are nonzero random variables and the counting process \(N_t\) can be either finite or infinite, for finite- or infinite-activity jumps.

The intuition behind the Jiang and Oomen test is that, in the absence of jumps, the accumulated difference between the simple returns and the log returns captures half of the integrated variance (Theodosiou and Zikes, 2009). If this difference is too large, the null hypothesis of no jumps is rejected.
list
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75- 93.
Jiang, J. G., and Oomen, R. C. A (2008). Testing for jumps when asset prices are observed with noise- a "swap variance" approach. Journal of Econometrics, 144, 352-370.
Theodosiou, M., Zikes, F. (2009). A comprehensive comparison of alternative tests for jumps in asset prices. Unpublished manuscript, Graduate School of Business, Imperial College London.
joDT <- JOjumpTest(sampleTData[, list(DT, PRICE)])
Function to choose the tuning parameter kn used in ReMeDI estimation.

The optimal parameter kn is the smallest value for which an estimation-error criterion (algorithm B.2 of Li and Linton, 2019) is perceived to be zero. The tuning parameter tol sets the tolerance of the perception of 'close to zero'; a higher tolerance leads to a higher optimal value.
knChooseReMeDI( pData, knMax = 10, tol = 0.05, size = 3, lower = 2, upper = 5, plot = FALSE )
pData |
|
knMax |
max value of |
tol |
tolerance for the minimizing value. If |
size |
size of the local window. |
lower |
lower boundary for the method if it fails to find an optimal value. If this is the case, the best kn between lower and upper is returned |
upper |
upper boundary for the method if it fails to find an optimal value. If this is the case, the best kn between lower and upper is returned |
plot |
logical whether to plot the errors. |
This is algorithm B.2 in the appendix of the Li and Linton (2019) working paper.
integer containing the optimal kn
We thank Merrick Li for contributing his Matlab code for this estimator.
Emil Sjoerup.
Li, M. and Linton, O. (2019). A ReMeDI for microstructure noise. Cambridge Working Papers in Economics 1908.
optimalKn <- knChooseReMeDI(sampleTData[as.Date(DT) == "2018-01-02",],
                            knMax = 10, tol = 0.05, size = 3,
                            lower = 2, upper = 5, plot = TRUE)
optimalKn
## Not run:
# We can also have a much larger search-space
optimalKn <- knChooseReMeDI(sampleTDataEurope, knMax = 50, tol = 0.05,
                            size = 3, lower = 2, upper = 5, plot = TRUE)
optimalKn
## End(Not run)
Function that estimates whether one series leads (or lags) another.
Let \(X_t\) and \(Y_t\) be two observed prices over the time interval \([0, T]\). For every lag \(\vartheta\) we form the shifted time series \(Y_{t - \vartheta}\); for an interval \(I = (a, b]\), define the shifted interval \(I_\vartheta = (a - \vartheta, b - \vartheta]\). The shifted Hayashi-Yoshida contrast function is then

\[ \tilde U(\vartheta) = \sum_{i, j} \left( X_{t_{i+1}} - X_{t_i} \right) \left( Y_{s_{j+1}} - Y_{s_j} \right) \mathbf{1}_{\left\{ (t_i,\, t_{i+1}] \,\cap\, (s_j - \vartheta,\, s_{j+1} - \vartheta] \neq \emptyset \right\}}, \]

i.e., increments of the two series are multiplied whenever their (shifted) observation intervals overlap. This contrast function is then calculated for all the lags passed in the argument lags
leadLag( price1 = NULL, price2 = NULL, lags = NULL, resolution = "seconds", normalize = TRUE, parallelize = FALSE, nCores = NA )
price1 |
|
price2 |
|
lags |
a numeric denoting which lags (in units of |
resolution |
the resolution at which the lags is measured. The default is "seconds", use this argument to gain 1000 times resolution by setting it to either "ms", "milliseconds", or "milli-seconds". |
normalize |
logical denoting whether the contrasts should be normalized by the product of the L2 norms of both the prices. Default = TRUE. This does not change the value of the lead-lag-ratio. |
parallelize |
logical denoting whether to use a parallelized version of the C++ code (parallelized using OPENMP). Default = FALSE |
nCores |
integer valued numeric denoting how many cores to use for the lead-lag estimation procedure in case parallelize is TRUE. Default is NA, which does not parallelize the code. |
The lead-lag-ratio (LLR) can be used to see if one asset leads the other. If LLR < 1, then price1 MAY be leading price2 and vice versa if LLR > 1.
A list with class leadLag which contains contrasts, lead-lag-ratio, and lags, denoting the estimated contrast values for each lag calculated, the lead-lag-ratio, and the tested lags, respectively.
Hoffmann, M., Rosenbaum, M., and Yoshida, N. (2013). Estimation of the lead-lag parameter from non-synchronous data. Bernoulli, 19, 1-37.
## Not run:
# Toy example to show the usage
# Spread prices
spread <- spreadPrices(sampleMultiTradeData[SYMBOL %in% c("ETF", "AAA")])
# Use lead-lag estimator
llEmpirical <- leadLag(spread[!is.na(AAA), list(DT, PRICE = AAA)],
                       spread[!is.na(ETF), list(DT, PRICE = ETF)], seq(-15, 15))
plot(llEmpirical)
## End(Not run)
Returns a vector of the available kernels.
listAvailableKernels()
The available kernels are:

Rectangular: \(k(x) = 1\).
Bartlett: \(k(x) = 1 - x\).
Second-order: \(k(x) = 1 - 2x + x^2\).
Epanechnikov: \(k(x) = 1 - x^2\).
Cubic: \(k(x) = 1 - 3x^2 + 2x^3\).
Fifth: \(k(x) = 1 - 10x^3 + 15x^4 - 6x^5\).
Sixth: \(k(x) = 1 - 15x^4 + 24x^5 - 10x^6\).
Seventh: \(k(x) = 1 - 21x^5 + 35x^6 - 15x^7\).
Eighth: \(k(x) = 1 - 28x^6 + 48x^7 - 21x^8\).
Parzen: \(k(x) = 1 - 6x^2 + 6x^3\) if \(0 \le x \le \tfrac{1}{2}\) and \(k(x) = 2(1 - x)^3\) if \(\tfrac{1}{2} < x \le 1\).
TukeyHanning: \(k(x) = \sin^2\!\left(\tfrac{\pi}{2}(1 - x)\right)\).
ModifiedTukeyHanning: \(k(x) = \sin^2\!\left(\tfrac{\pi}{2}(1 - x)^2\right)\).
a character vector.
Scott Payseur.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., and Shephard, N. (2008). Designing realized kernels to measure the ex post variation of equity prices in the presence of noise. Econometrica, 76, 1481-1536.
listAvailableKernels
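The returned names can be passed to rKernelCov via its kernelType argument. A minimal sketch, assuming the sampleTData object used elsewhere in this manual (kernel name spelling as returned by listAvailableKernels()):

listAvailableKernels()
rKernelCov(rData = sampleTData[, list(DT, PRICE)], kernelType = "Parzen",
           alignBy = "minutes", alignPeriod = 5, makeReturns = TRUE)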
Utility function listing the available estimators for the CholCov estimation
listCholCovEstimators()
This function returns a character vector containing the available estimators.
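For example, to inspect the options before calling rCholCov:

listCholCovEstimators()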
This function makes OHLC-V bars at arbitrary intervals. If the SIZE column is not present in the input, no volume column is created.
makeOHLCV(pData, alignBy = "minutes", alignPeriod = 5, tz = NULL)
pData |
|
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5 minute frequency, set |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: |
Emil Sjoerup
## Not run:
minuteBars <- makeOHLCV(sampleTDataEurope, alignBy = "minutes", alignPeriod = 1)
# We can use the quantmod package's chartSeries function to plot the ohlcv data
quantmod::chartSeries(minuteBars)
minuteBars <- makeOHLCV(sampleTDataEurope[,], alignBy = "minutes", alignPeriod = 1)
# Again we plot the series with chartSeries
quantmod::chartSeries(minuteBars)
# We can also handle data across multiple days.
fiveMinuteBars <- makeOHLCV(sampleTData)
# Again we plot the series with chartSeries
quantmod::chartSeries(fiveMinuteBars)
# We can use arbitrary alignPeriod, here we choose pi
bars <- makeOHLCV(sampleTDataEurope, alignBy = "seconds", alignPeriod = pi)
# Again we plot the series with chartSeries
quantmod::chartSeries(bars)
## End(Not run)
Function returns the positive semidefinite projection of a symmetric matrix using the eigenvalue method.
makePsd(S, method = "covariance")
S |
a non-PSD matrix. |
method |
character, indicating whether the negative eigenvalues of the correlation or covariance should be replaced by zero. Possible values are "covariance" and "correlation". |
We use the eigenvalue method to transform \(S\) into a positive semidefinite covariance matrix (see, e.g., Barndorff-Nielsen and Shephard, 2004, and Rousseeuw and Molenberghs, 1993). Let \(\Gamma\) be the orthogonal matrix consisting of the \(p\) eigenvectors of \(S\), and denote by \(\lambda_1^+, \ldots, \lambda_p^+\) its \(p\) eigenvalues, whereby the negative eigenvalues have been replaced by zeroes. Under this approach, the positive semidefinite projection of \(S\) is

\[ S^+ = \Gamma \, \mathrm{diag}(\lambda_1^+, \ldots, \lambda_p^+) \, \Gamma'. \]

If method = "correlation", the eigenvalues of the correlation matrix corresponding to the matrix \(S\) are transformed instead, see Fan et al. (2012).
A matrix containing the positive semi definite matrix.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Barndorff-Nielsen, O. E. and Shephard, N. (2004). Measuring the impact of jumps in multivariate price processes using bipower covariation. Discussion paper, Nuffield College, Oxford University.
Fan, J., Li, Y., and Yu, K. (2012). Vast volatility matrix estimation using high frequency data for portfolio selection. Journal of the American Statistical Association, 107, 412-428
Rousseeuw, P. and Molenberghs, G. (1993). Transformation of non positive semidefinite correlation matrices. Communications in Statistics - Theory and Methods, 22, 965-984.
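A minimal sketch of the repair, using only base R and makePsd(); the matrix below mixes incompatible correlations and is therefore indefinite (constructed purely for illustration).

S <- matrix(c( 1.0,  0.9, -0.9,
               0.9,  1.0,  0.3,
              -0.9,  0.3,  1.0), nrow = 3)
min(eigen(S)$values)      # negative, so S is not positive semidefinite
Spsd <- makePsd(S)
min(eigen(Spsd)$values)   # now non-negative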
Convenience function to calculate log-returns, also used extensively internally.
Accepts xts
and matrix
-like objects. If you use this with a data.table
object, remember to not pass the DT
column.
makeReturns(ts)
ts |
a possibly multivariate matrix-like object containing prices in levels. If |
Note: the first (row of) observation(s) is set to zero.
Depending on input, either a matrix
or an xts
object containing the log returns.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup
spreadPrices
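A minimal sketch, assuming the sampleTData object used elsewhere in this manual; the DT column is kept only as the xts index, per the note above.

library(xts)
prices <- as.xts(sampleTData[, list(DT, PRICE)])
rets <- makeReturns(prices)
head(rets)   # the first observation is set to zero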
DEPRECATED: use spreadPrices instead.
makeRMFormat(data)
data |
DEPRECATED |
Match the trades and quotes of the input data. All trades are retained, and the latest bids and offers are matched to them, while older quotes are discarded.
matchTradesQuotes( tData, qData, lagQuotes = 0, BFM = FALSE, backwardsWindow = 3600, forwardsWindow = 0.5, plot = FALSE, ... )
tData |
|
qData |
|
lagQuotes |
numeric, number of seconds the quotes are registered faster than the trades (should be round and positive). Default is 0. For older datasets, i.e. before 2010, it may be a good idea to set this to e.g. 2. See Vergote (2005) |
BFM |
a logical determining whether to conduct 'Backwards - Forwards matching' of trades and quotes. The algorithm tries to match trades that fall outside the bid-ask spread: it first tries to match within a small forwards window, and if this fails, it tries to match backwards within a bigger window. The small window is a tolerance for inaccuracies in the timestamps of bids and asks. The backwards window allows for matching of late-reported trades, e.g., block trades. |
backwardsWindow |
a numeric denoting the length of the backwards window used when |
forwardsWindow |
a numeric denoting the length of the forwards window used when |
plot |
a logical denoting whether to visualize the forwards, backwards, and unmatched trades in a plot. |
... |
used internally. Don't set this parameter |
Depending on the input data type, we return either a data.table
or an xts
object containing the matched trade and quote data.
When using the BFM algorithm, a report of the matched and unmatched trades are also returned (This is omitted when we call this function from the tradesCleanupUsingQuotes
function).
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Vergote, O. (2005). How to match trades and quotes for NYSE stocks? K.U.Leuven working paper.
Christensen, K., Oomen, R. C. A., and Podolskij, M. (2014). Fact or friction: Jumps at ultra high frequency. Journal of Financial Economics, 114, 576-599.
# Multi-day input allowed
tqData <- matchTradesQuotes(sampleTData, sampleQData)
# Show output
tqData
Merge quote entries that have the same time stamp to a single one and returns an xts
or a data.table
object
with unique time stamps only.
mergeQuotesSameTimestamp(qData, selection = "median")
qData |
an |
selection |
indicates how the bid and ask price for a certain time stamp should be calculated in case of multiple observations for that time stamp. By default, selection = "median", i.e. the median bid and ask prices are taken. |
Depending on the input data type, we return either a data.table
or an xts
object containing the quote data which has been cleaned.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
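A minimal sketch, assuming the sampleQData object used elsewhere in this manual (data.table in, data.table out):

mq <- mergeQuotesSameTimestamp(sampleQData, selection = "median")
head(mq)   # one row per time stamp, with the median bid and ask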
Merge trade entries that have the same time stamp to a single one and returns an xts
or a data.table
object
with unique time stamps only.
mergeTradesSameTimestamp(tData, selection = "median")
tData |
an |
selection |
indicates how the price for a certain time stamp should be calculated in case of multiple observations for that time stamp. By default, selection = "median", i.e. the median price is taken. |
data.table
or xts
object depending on input.
Previously (pre-version 0.7, and when not using max.volume as the criterion) this function returned the mean of the sizes of the merged trades; now it returns the sum.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
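Analogously for trades, a minimal sketch assuming the sampleTData object:

mt <- mergeTradesSameTimestamp(sampleTData, selection = "median")
head(mt)   # sizes of merged trades are summed, per the note above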
Function deletes the observations where the price is zero.
noZeroPrices(tData)
tData |
an |
an xts
or data.table
object depending on input.
Jonathan Cornelissen and Kris Boudt.
Function deletes the observations where the bid or ask is zero.
noZeroQuotes(qData)
qData |
an |
xts
object or data.table
depending on type of input.
Jonathan Cornelissen and Kris Boudt.
Plotting method for DBH objects
## S3 method for class 'DBH' plot(x, ...)
x |
an object of class |
... |
optional arguments, see details |
The plotting method has the following optional parameters:
pData
A data.table
or an xts
object, containing the prices and timestamps of the data used to calculate the test statistic.
If specified, and which = "tStat"
, the price will be shown on the right y-axis along with the test statistic
which
A string denoting which of four plots to make. "tStat"
denotes plotting the test statistic. "sigma"
denotes plotting the
estimated volatility process. "mu"
denotes plotting the estimated drift process. If which = c("sigma", "mu")
or which = c("mu", "sigma")
,
both the drift and volatility processes are plotted. The argument is not case-sensitive.
Emil Sjoerup
# Testing every 60 seconds after 09:15:00
DBH <- driftBursts(sampleTDataEurope, testTimes = seq(32400 + 900, 63000, 60),
                   preAverage = 2, ACLag = -1L, meanBandwidth = 300L,
                   varianceBandwidth = 900L)
plot(DBH)
plot(DBH, pData = sampleTDataEurope)
plot(DBH, which = "sigma")
plot(DBH, which = "mu")
plot(DBH, which = c("sigma", "mu"))
Plotting method for HARmodel objects
## S3 method for class 'HARmodel' plot(x, ...)
x |
an object of class |
... |
extra arguments, see details |
The plotting method has the following optional parameter:
legend.loc
A string denoting the location of the legend passed on to addLegend
of the xts package
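A minimal sketch, assuming the SPYRM realized-measure data set shipped with the package (not documented in this section); it fits a HAR model and plots the fitted against the realized values, placing the legend via legend.loc. The call here is illustrative only.

library(xts)
fit <- HARmodel(data = as.xts(SPYRM[, list(DT, RV5)]), periods = c(1, 5, 22),
                type = "HAR", inputType = "RM")
plot(fit, legend.loc = "topleft")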
Plotting method for HEAVYmodel objects
## S3 method for class 'HEAVYmodel' plot(x, ...)
x |
an object of class |
... |
extra arguments, see details. |
The plotting method has the following optional parameter:
legend.loc
A string denoting the location of the legend passed on to addLegend
of the xts package
type
A string denoting the type of plot to be made. If type
is "condVar"
the fitted values of the conditional variance of the returns
is shown. If type
is different from "condVar"
, the fitted values of the realized measure is shown. Default is "condVar"
Plot trade and quote data, trades are marked by crosses, and quotes are plotted as boxes denoting the bid-offer spread for all the quotes.
plotTQData( tData, qData = NULL, xLim = NULL, tradeCol = "black", quoteCol = "darkgray", format = "%H:%M:%S", axisCol = "black", ... )
tData |
cleaned trades data |
qData |
cleaned quotes data |
xLim |
timestamps for the start and the end of the plots. |
tradeCol |
color in which to paint the trade crosses. |
quoteCol |
color in which to fill out the bid-offer spread. |
format |
format string to pass to |
axisCol |
string to denote which color to use for the x axis |
... |
passed to |
## Not run:
cleanedQuotes <- quotesCleanup(qDataRaw = sampleQDataRaw, report = FALSE,
                               printExchange = FALSE)
cleanedTrades <- tradesCleanupUsingQuotes(
  tData = tradesCleanup(tDataRaw = sampleTDataRaw, report = FALSE,
                        printExchange = FALSE),
  qData = cleanedQuotes
)[as.Date(DT) == "2018-01-03"]
xLim <- range(as.POSIXct(c("2018-01-03 15:30:00", "2018-01-03 16:00:00"), tz = "EST"))
plotTQData(cleanedTrades, cleanedQuotes, xLim = xLim,
           main = "Raw trade and quote data from NYSE TAQ")
## End(Not run)
Predict method for objects of type HARmodel
## S3 method for class 'HARmodel' predict(object, ...)
object |
an object of class |
... |
extra arguments. See details |
The predict method has the following optional parameters:
newdata
new data to use for forecasting
warnings
A logical denoting whether to display warnings; default is TRUE
backtransform
A string. If the model is estimated with a transformation, this parameter can be set to transform the prediction back into a variance. The possible values are "simple", which applies the inverse of the transformation (i.e., exp when the log-transformation is applied). If the log-transformation is used, the option "parametric" can also be used to transform back; the parametric method adds a correction that stems from using the log-transformation
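A minimal sketch of the predict method, reusing the hypothetical HAR fit from the plotting example above (which assumes the SPYRM data set shipped with the package; names and settings are illustrative only).

library(xts)
fit <- HARmodel(data = as.xts(SPYRM[, list(DT, RV5)]), periods = c(1, 5, 22),
                type = "HAR", inputType = "RM")
predict(fit)   # forecast based on the most recent in-sample observations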
Calculates forecasts for \(\sigma^2_{T+k}\), where \(T\) denotes the end of the estimation period for fitting the HEAVYmodel and \(k = 1, \ldots, \mathrm{stepsAhead}\).
## S3 method for class 'HEAVYmodel' predict(object, stepsAhead = 10, ...)
object |
an object of class HEAVYmodel. |
stepsAhead |
the number of days iterative forecasts are calculated for (default 10). |
... |
further arguments passed to or from other methods. |
Printing method for DBH objects
## S3 method for class 'DBH' print(x, ...)
x |
an object of class |
... |
optional arguments, see details |
The print method has the following optional parameters:
criticalValue
A numeric denoting a custom critical value of the test.
alpha
A numeric denoting the confidence level of the test. The alpha value is passed on to getCriticalValues
.
The default value is 0.95
Emil Sjoerup
## Not run:
DBH <- driftBursts(sampleTDataEurope, testTimes = seq(32400 + 900, 63000, 300),
                   preAverage = 2, ACLag = -1L, meanBandwidth = 300L,
                   varianceBandwidth = 900L)
print(DBH)
print(DBH, criticalValue = 1)  # This value doesn't make sense - don't actually use it!
print(DBH, alpha = 0.95)       # 5% confidence level - this is the standard
print(DBH, alpha = 0.99)       # 1% confidence level
## End(Not run)
Printing method for HARmodel objects
## S3 method for class 'HARmodel' print(x, ...)
x |
object of type |
... |
extra options |
The printing method has the extra option digits, which can be used to set the number of digits for printing. Pass lag to determine the maximum order of the Newey-West estimator; default is 22.
This is a wrapper function for cleaning the quote data in the entire folder dataSource
.
The result is saved in the folder dataDestination
.
In case you supply the argument qDataRaw
, the on-disk functionality is ignored
and the function returns the cleaned quotes as xts
or data.table
object (see examples).
The following cleaning functions are performed sequentially:
noZeroQuotes
, exchangeHoursOnly
, autoSelectExchangeQuotes
or selectExchange
, rmNegativeSpread
, rmLargeSpread
mergeQuotesSameTimestamp
, rmOutliersQuotes
.
quotesCleanup( dataSource = NULL, dataDestination = NULL, exchanges = "auto", qDataRaw = NULL, report = TRUE, selection = "median", maxi = 50, window = 50, type = "standard", marketOpen = "09:30:00", marketClose = "16:00:00", rmoutliersmaxi = 10, printExchange = TRUE, saveAsXTS = FALSE, tz = NULL )
dataSource |
character indicating the folder in which the original data is stored. |
dataDestination |
character indicating the folder in which the cleaned data is stored. |
exchanges |
vector of stock exchange symbols for all data in dataSource,
e.g.
. The default value is |
qDataRaw |
|
report |
boolean and |
selection |
argument to be passed on to the cleaning routine |
maxi |
spreads which are greater than median spreads of the day times |
window |
argument to be passed on to the cleaning routine |
type |
argument to be passed on to the cleaning routine |
marketOpen |
passed to |
marketClose |
passed to |
rmoutliersmaxi |
argument to be passed on to the cleaning routine |
printExchange |
Argument passed to |
saveAsXTS |
indicates whether data should be saved in |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: |
Using the on-disk functionality with .csv.zip files, which is the standard from the WRDS database, will write temporary files on your machine - we try to clean up after it, but cannot guarantee that there won't be files that slip through the cracks if the permission settings on your machine do not match ours.
If the input data.table
does not contain a DT
column but it does contain DATE
and TIME_M
columns, we create the DT
column by REFERENCE, altering the data.table
that may be in the user's environment!
The function converts every (compressed) csv (or rds) file in dataSource
into multiple xts
or data.table
files.
In dataDestination
, there will be one folder for each symbol containing .rds files with cleaned data stored either in data.table
or xts
format.
In case you supply the argument qDataRaw
, the on-disk functionality is ignored
and the function returns a list with the cleaned quotes as an xts
or data.table
object depending on input (see examples).
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., and Shephard, N. (2009). Realized kernels in practice: Trades and quotes. Econometrics Journal 12, C1-C32.
Brownlees, C.T. and Gallo, G.M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, pages 2232-2245.
Falkenberry, T.N. (2002). High frequency data filtering. Unpublished technical report.
# Consider you have raw quote data for 1 stock for 2 days
head(sampleQDataRaw)
dim(sampleQDataRaw)
qDataAfterCleaning <- quotesCleanup(qDataRaw = sampleQDataRaw, exchanges = "N")
qDataAfterCleaning$report
dim(qDataAfterCleaning$qData)
# In case you have more data it is advised to use the on-disk functionality
# via "dataSource" and "dataDestination" arguments
Calculate the rank jump test of Li et al. (2019). The procedure tests for the rank of the jump matrix at simultaneous jump events in market returns as well as individual assets.
rankJumpTest( marketPrice, stockPrices, alpha = c(5, 3), coarseFreq = 10, localWindow = 30, rank = 1, BoxCox = 1, quantiles = c(0.9, 0.95, 0.99), nBoot = 1000, dontTestAtBoundaries = TRUE, alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL )
marketPrice |
data.table or |
stockPrices |
list containing the individual stock prices in either data.table or |
alpha |
significance level (in standard deviations) to use for the jump detections. Default is |
coarseFreq |
numeric denoting the coarse sampling frequency. Default is |
localWindow |
numeric denoting the local window for the bootstrap algorithm. Default is |
rank |
rank of the jump matrix under the null hypothesis. Default is |
BoxCox |
numeric of exponents for the Box-Cox transformation, default is |
quantiles |
numeric denoting which quantiles of the bootstrapped critical values to return and compare against. Default is |
nBoot |
numeric denoting how many replications to be used for the bootstrap algorithm. Default is |
dontTestAtBoundaries |
logical determining whether to exclude data across different days. Default is |
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. E.g. to aggregate
based on a 5 minute frequency, set |
marketOpen |
the market opening time, by default: |
marketClose |
the market closing time, by default: |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: |
Jump times are first detected in the market returns by thresholding (at the significance level given by alpha), and the estimated jump matrix is formed by collecting the local returns of the individual assets at these events. Let \(\hat\lambda_1 \geq \hat\lambda_2 \geq \ldots\) denote the ordered singular values of the estimated jump matrix; the test statistic is a (Box-Cox transformed) function of the singular values beyond the hypothesized rank. The critical values are computed by a bootstrap method: the singular value decomposition of the jump matrix is used to trim jumps, and, for each detected jump event, perturbations are drawn with equal probability from the local windows around the event, after which the test statistic is recomputed on the perturbed jump matrix. This is repeated for a large number of iterations, and the critical value is set as the chosen quantile of the statistics in the simulated sample.
A list containing criticalValues
which are the bootstrapped critical values, testStatistic
the test statistic of the jump test, dimensions
which are the dimensions of the jump matrix,
marketJumpDetections
the jumps detected in the market prices, stockJumpDetections
the co-jumps detected in the individual stock prices, and jumpIndices
which are the indices of the detected jumps.
Emil Sjoerup, based on Matlab code provided by Li et al. (2019)
Li, J., Todorov, V., Tauchen, G., and Lin, H. (2019). Rank tests at jump events. Journal of Business & Economic Statistics, 37, 312-321.
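A hypothetical minimal sketch, assuming the sampleMultiTradeData object and the "ETF" and "AAA" symbols used elsewhere in this manual, with the ETF series standing in for the market:

## Not run:
market <- sampleMultiTradeData[SYMBOL == "ETF", list(DT, PRICE)]
stocks <- list(sampleMultiTradeData[SYMBOL == "AAA", list(DT, PRICE)])
rjt <- rankJumpTest(market, stocks, alignBy = "minutes", alignPeriod = 5)
rjt$testStatistic
rjt$criticalValues
## End(Not run)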
Calculates realized variances via averaging across partially
overlapping grids, first introduced by Zhang et al. (2005).
This estimator is basically an average across different rCov
estimates that start at
different points in time, see details below.
rAVGCov( rData, cor = FALSE, alignBy = "minutes", alignPeriod = 5, k = 1, makeReturns = FALSE, ... )
rData |
an |
cor |
boolean, in case it is |
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set |
k |
numeric denoting which horizon to use for the subsamples. This can be a fraction as long as |
makeReturns |
boolean, should be |
... |
used internally, do not change. |
Suppose that in period \(t\) there are \(N\) equispaced returns \(r_{t,i}\) and let \(\Delta\) be equal to alignPeriod. For \(i = 1, \ldots, \Delta\), we define the subsampled \(\Delta\)-period returns as

\[ \tilde r^{(i)}_{t,j} = \sum_{k=0}^{\Delta-1} r_{t,\, i + (j-1)\Delta + k}, \qquad j = 1, \ldots, \left\lfloor \frac{N - i + 1}{\Delta} \right\rfloor. \]

Each offset \(i\) yields an rCov estimate computed on the subgrid that starts at the \(i\)-th observation,

\[ \widehat{RV}^{(i)}_t = \sum_{j} \left( \tilde r^{(i)}_{t,j} \right)^2, \]

and taking the average across the \(\Delta\) different subgrids defines the rAVGCov estimator. The multivariate version follows analogously, with outer products of return vectors replacing squared returns.
Note that Liu et al. (2015) show that rAVGCov
is not only theoretically but also empirically a more reliable estimator than rCov.
In case the input is an xts or data.table object containing data from one day, an \(N\) by \(N\) matrix is returned. If the data is a univariate xts object with multiple days, an xts object is returned. If the data is multivariate and contains multiple days (xts or data.table), the function returns a list containing \(N\) by \(N\) matrices. Each item in the list has a name which corresponds to the date for the matrix.
Scott Payseur, Onno Kleen, and Emil Sjoerup.
Liu, L. Y., Patton, A. J., Sheppard, K. (2015). Does anything beat 5-minute RV? A comparison of realized measures across multiple asset classes. Journal of Econometrics, 187, 293-311.
Zhang, L., Mykland, P. A. , and Ait-Sahalia, Y. (2005). A tale of two time scales: Determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 100, 1394-1411.
ICov
for a list of implemented estimators of the integrated covariance.
# Average subsampled realized variance/covariance aligned at one minute returns at
# 5 sub-grids (5 minutes).
# Univariate subsampled realized variance
rvAvgSub <- rAVGCov(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                    makeReturns = TRUE)
rvAvgSub
# Multivariate subsampled realized variance
rvAvgCovSub <- rAVGCov(rData = sampleOneMinuteData[1:391], makeReturns = TRUE)
rvAvgCovSub
The Beta Adjusted Covariance (BAC) equals the pre-estimator plus a minimal adjustment matrix such that the covariance-implied stock-ETF beta equals a target beta.
The BAC estimator works by applying a minimum correction factor to a pre-estimated covariance matrix such that a target beta derived from the ETF is reached.
Let \(\beta^{\Sigma}\) denote the implied beta derived from the pre-estimator \(\Sigma\) and \(\beta^{*}\) denote the target beta. The correction matrix \(L\) is the minimal adjustment (in a weighted norm) for which the implied beta of the corrected estimator equals \(\beta^{*}\); the weights in this norm depend on the number of assets \(d\) in the ETF, the number of trades \(n_i\) in the \(i\)-th asset, and the weight \(w_i\) of the \(i\)-th asset in the ETF, and the dimensions of the building blocks differ between the unrestricted and the restricted version of the estimator. The Beta-Adjusted Covariance is then

\[ \mathrm{BAC} = \Sigma + L, \]

where \(\Sigma\) is the pre-estimated covariance matrix.
rBACov( pData, shares, outstanding, nonEquity, ETFNAME = "ETF", unrestricted = TRUE, targetBeta = c("HY", "VAB", "expert"), expertBeta = NULL, preEstimator = "rCov", noiseRobustEstimator = rTSCov, noiseCorrection = FALSE, returnL = FALSE, ... )
pData |
a named list. Each list-item contains an |
shares |
a |
outstanding |
number of shares outstanding of the ETF |
nonEquity |
aggregated value of the additional components (like cash, money-market funds, bonds, etc.) of the ETF which are not included in the components in |
ETFNAME |
a |
unrestricted |
a |
targetBeta |
a |
expertBeta |
a |
preEstimator |
a |
noiseRobustEstimator |
a |
noiseCorrection |
a |
returnL |
a |
... |
extra arguments passed to |
Emil Sjoerup (Kris Boudt and Kirill Dragun for the Python version).
Boudt, K., Dragun, K., Omauri, S., and Vanduffel, S. (2021) Beta-Adjusted Covariance Estimation (working paper).
ICov
for a list of implemented estimators of the integrated covariance.
## Not run:
# Since we don't have any data in this package that is of the required format
# we must simulate it.
library(xts)
library(highfrequency)
# The mvtnorm package is needed for this example
# Please install this package before running this example
library("mvtnorm")
# Set the seed for replication
set.seed(123)
iT <- 23400 # Number of observations
# Simulate returns
rets <- rmvnorm(iT * 3 + 1, mean = rep(0, 4),
                sigma = matrix(c( 0.1, -0.03,  0.02,  0.04,
                                 -0.03,  0.05, -0.03,  0.02,
                                  0.02, -0.03,  0.05, -0.03,
                                  0.04,  0.02, -0.03,  0.08), ncol = 4))
# We assume that the assets don't trade in a synchronous manner
w1 <- rets[sort(sample(1:nrow(rets), size = nrow(rets) * 0.5)), 1]
w2 <- rets[sort(sample(1:nrow(rets), size = nrow(rets) * 0.75)), 2]
w3 <- rets[sort(sample(1:nrow(rets), size = nrow(rets) * 0.65)), 3]
w4 <- rets[sort(sample(1:nrow(rets), size = nrow(rets) * 0.8)), 4]
w5 <- rnorm(nrow(rets) * 0.9, mean = 0, sd = 0.005)
timestamps1 <- seq(34200, 57600, length.out = length(w1))
timestamps2 <- seq(34200, 57600, length.out = length(w2))
timestamps3 <- seq(34200, 57600, length.out = length(w3))
timestamps4 <- seq(34200, 57600, length.out = length(w4))
timestamps5 <- seq(34200, 57600, length.out = length(w5))
w1 <- xts(w1 * c(0, sqrt(diff(timestamps1) / (max(timestamps1) - min(timestamps1)))),
          as.POSIXct(timestamps1, origin = "1970-01-01"), tzone = "UTC")
w2 <- xts(w2 * c(0, sqrt(diff(timestamps2) / (max(timestamps2) - min(timestamps2)))),
          as.POSIXct(timestamps2, origin = "1970-01-01"), tzone = "UTC")
w3 <- xts(w3 * c(0, sqrt(diff(timestamps3) / (max(timestamps3) - min(timestamps3)))),
          as.POSIXct(timestamps3, origin = "1970-01-01"), tzone = "UTC")
w4 <- xts(w4 * c(0, sqrt(diff(timestamps4) / (max(timestamps4) - min(timestamps4)))),
          as.POSIXct(timestamps4, origin = "1970-01-01"), tzone = "UTC")
w5 <- xts(w5 * c(0, sqrt(diff(timestamps5) / (max(timestamps5) - min(timestamps5)))),
          as.POSIXct(timestamps5, origin = "1970-01-01"), tzone = "UTC")
p1 <- exp(cumsum(w1))
p2 <- exp(cumsum(w2))
p3 <- exp(cumsum(w3))
p4 <- exp(cumsum(w4))
weights <- runif(4) * 1:4
weights <- weights / sum(weights)
p5 <- xts(rowSums(cbind(w1 * weights[1], w2 * weights[2], w3 * weights[3],
                        w4 * weights[4]), na.rm = TRUE),
          index(cbind(p1, p2, p3, p4)))
p5 <- xts(cumsum(rowSums(cbind(p5, w5), na.rm = TRUE)), index(cbind(p5, w5)))
p5 <- exp(p5[sort(sample(1:length(p5), size = nrow(rets) * 0.9))])
BAC <- rBACov(pData = list("ETF" = p5, "STOCK 1" = p1, "STOCK 2" = p2,
                           "STOCK 3" = p3, "STOCK 4" = p4),
              shares = 1:4, outstanding = 1, nonEquity = 0, ETFNAME = "ETF",
              unrestricted = FALSE, preEstimator = "rCov", noiseCorrection = FALSE,
              returnL = FALSE, K = 2, J = 1)
# Noise robust version of the estimator
noiseRobustBAC <- rBACov(pData = list("ETF" = p5, "STOCK 1" = p1, "STOCK 2" = p2,
                                      "STOCK 3" = p3, "STOCK 4" = p4),
                         shares = 1:4, outstanding = 1, nonEquity = 0,
                         ETFNAME = "ETF", unrestricted = FALSE,
                         preEstimator = "rCov", noiseCorrection = TRUE,
                         noiseRobustEstimator = rHYCov, returnL = FALSE,
                         K = 2, J = 1)
# Use the Variance Adjusted Beta method
# Also use a different pre-estimator.
VABBAC <- rBACov(pData = list("ETF" = p5, "STOCK 1" = p1, "STOCK 2" = p2,
                              "STOCK 3" = p3, "STOCK 4" = p4),
                 shares = 1:4, outstanding = 1, nonEquity = 0, ETFNAME = "ETF",
                 unrestricted = FALSE, targetBeta = "VAB", preEstimator = "rHYCov",
                 noiseCorrection = FALSE, returnL = FALSE, Lin = FALSE, L = 0,
                 K = 2, J = 1)
## End(Not run)
Depending on the user's choices of estimator (realized covariance (RCOVestimator) and realized variance (RVestimator)), the function returns the realized beta, defined as the ratio between the two.

The realized beta is given by

\[ \hat\beta_{jM,t} = \frac{\widehat{RCOV}_{jM,t}}{\widehat{RVAR}_{M,t}}, \]

in which \(\widehat{RCOV}_{jM,t}\) is the realized covariance of asset \(j\) and the market index \(M\) on day \(t\), and \(\widehat{RVAR}_{M,t}\) is the realized variance of the market index.
rBeta( rData, rIndex, RCOVestimator = "rCov", RVestimator = "rRVar", makeReturns = FALSE, ... )
rData |
a |
rIndex |
a |
RCOVestimator |
can be chosen among realized covariance estimators: |
RVestimator |
can be chosen among realized variance estimators: |
makeReturns |
boolean, should be |
... |
arguments passed to |
Suppose there are \(N\) equispaced returns on day \(t\) for the asset \(j\) and the index \(M\). Denote \(r_{j,i,t}\) and \(r_{M,i,t}\) as the \(i\)-th return on day \(t\) for asset \(j\) and index \(M\) (with \(i = 1, \ldots, N\)). By default, the rCov is used and the realized beta coefficient is computed as

\[ \hat\beta_{jM,t} = \frac{\sum_{i=1}^{N} r_{j,i,t} \, r_{M,i,t}}{\sum_{i=1}^{N} r_{M,i,t}^2}. \]

Note: the function does not support calculating betas across multiple days.
numeric
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Barndorff-Nielsen, O. E. and Shephard, N. (2004). Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics. Econometrica, 72, 885-925.
## Not run:
library("xts")
a <- as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)])
b <- as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, STOCK)])
rBeta(a, b, RCOVestimator = "rBPCov", RVestimator = "rMinRVar", makeReturns = TRUE)
## End(Not run)
Calculate the Realized BiPower Covariance (rBPCov), defined in Barndorff-Nielsen and Shephard (2004).

Let \(r_{t,i}\) be an intraday \(N \times 1\) return vector and \(M\) the number of intraday returns. The rBPCov is defined as the process whose value at time \(t\) is the \(N\)-dimensional square matrix with \((k, q)\)-th element equal to

\[ \mathrm{rBPCov}[k, q]_t = \frac{\pi}{8} \sum_{i=2}^{M} \Big( \big| r_{(k)t,i} + r_{(q)t,i} \big| \, \big| r_{(k)t,i-1} + r_{(q)t,i-1} \big| - \big| r_{(k)t,i} - r_{(q)t,i} \big| \, \big| r_{(k)t,i-1} - r_{(q)t,i-1} \big| \Big), \]

where \(r_{(k)t,i}\) is the \(k\)-th component of the return vector \(r_{t,i}\).
rBPCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, makePsd = FALSE, ... )
rData |
an |
cor |
boolean, in case it is |
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. E.g. to aggregate
based on a 5-minute frequency, set |
makeReturns |
boolean, should be |
makePsd |
boolean, in case it is |
... |
used internally, do not change. |
In case the input is an xts or data.table object containing data from one day, an \(N\) by \(N\) matrix is returned. If the data is a univariate xts object with multiple days, an xts object is returned. If the data is multivariate and contains multiple days (xts or data.table), the function returns a list containing \(N\) by \(N\) matrices. Each item in the list has a name which corresponds to the date for the matrix.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Barndorff-Nielsen, O. E., and Shephard, N. (2004). Measuring the impact of jumps in multivariate price processes using bipower covariation. Discussion paper, Nuffield College, Oxford University.
ICov
for a list of implemented estimators of the integrated covariance.
# Realized Bipower Variance/Covariance for a price series aligned
# at 5 minutes.
# Univariate:
rbpv <- rBPCov(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
               alignPeriod = 5, makeReturns = TRUE)
# Multivariate:
rbpc <- rBPCov(rData = sampleOneMinuteData, makeReturns = TRUE, makePsd = TRUE)
rbpc
Positive semi-definite covariance estimation using the CholCov algorithm. The algorithm estimates the integrated covariance matrix by sequentially adding series and using 'refreshTime' to synchronize the observations. This is done in order of liquidity, which means that the algorithm uses more data points than most other estimation techniques.
rCholCov( pData, IVest = "rMRCov", COVest = "rMRCov", criterion = "squared duration", ... )
pData |
a list. Each list-item i contains an |
IVest |
integrated variance estimator, default is |
COVest |
covariance estimator, default is |
criterion |
criterion to use for sorting the data according to liquidity.
Possible values are |
... |
additional arguments to pass to |
Additional arguments for IVest
and COVest
should be passed in the ... argument.
For the rMRCov
estimator, which is the default, the theta
and delta
parameters can be set. These default to 1 and 0.1 respectively.
The CholCov estimation algorithm is useful for estimating covariances of series that are sampled asynchronously and with different liquidities.
The CholCov estimation algorithm is as follows:
First sort the series in terms of decreasing liquidity according to a liquidity criterion, such that series 1 is the most liquid and series \(d\) the least.

Step 1: Apply refresh-time on \(\{1\}\) to obtain the grid \(\tau^1\). Estimate \(\hat g_{11}\) using an IV estimator on the returns of series 1 sampled at \(\tau^1\).

Step 2: Apply refresh-time on \(\{1, 2\}\) to obtain the grid \(\tau^2\). Estimate \(\hat l_{21}\) as the realized beta between the first projection \(\hat f_1\) and the returns of series 2, both sampled at \(\tau^2\). Estimate \(\hat g_{22}\) using an IV estimator on the residual projection \(\hat f_2 = u_2 - \hat l_{21} \hat f_1\) on \(\tau^2\).

Step 3: Apply refresh-time on \(\{1, 3\}\) to obtain the grid \(\tau^3\). Estimate \(\hat l_{31}\) as the realized beta between \(\hat f_1\) and the returns of series 3 on \(\tau^3\). Apply refresh-time on \(\{1, 2, 3\}\) to obtain the grid \(\tau^4\). Re-estimate \(\hat f_2\) at the new grid, such that the projections \(\hat f_1\) and \(\hat f_2\) are orthogonal. Estimate \(\hat l_{32}\) as the realized beta between \(\hat f_2\) and the returns of series 3 on \(\tau^4\). Estimate \(\hat g_{33}\) using an IV estimator on the residual projection \(\hat f_3 = u_3 - \hat l_{31} \hat f_1 - \hat l_{32} \hat f_2\).

Step 4 to \(d\): Continue in the same fashion, sampling over increasingly large sets of series so that each \(\hat l_{ij}\) is estimated using the smallest possible refresh-time set, re-estimating the projections \(\hat f_j\), \(j < i\), at every new grid to keep them orthogonal, and estimating each \(\hat g_{ii}\) as the IV of the residual projection based on the final estimates.
a list containing the covariance matrix "CholCov", and the Cholesky decomposition components "L" and "G" such that \(\mathrm{CholCov} = L G L'\).
Emil Sjoerup
Boudt, K., Laurent, S., Lunde, A., Quaedvlieg, R., and Sauri, O. (2017). Positive semidefinite integrated covariance estimation, factorizations and asynchronicity. Journal of Econometrics, 196, 347-367.
ICov
for a list of implemented estimators of the integrated covariance.
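A hypothetical minimal sketch, assuming the sampleMultiTradeData object and the "ETF" and "AAA" symbols used elsewhere in this manual:

## Not run:
pData <- list(sampleMultiTradeData[SYMBOL == "ETF", list(DT, PRICE)],
              sampleMultiTradeData[SYMBOL == "AAA", list(DT, PRICE)])
cc <- rCholCov(pData)
cc$CholCov   # the positive semidefinite covariance estimate
## End(Not run)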
Function returns the Realized Covariation (rCov).

Let \(r_{t,i}\) be an intraday \(N \times 1\) return vector and \(M\) the number of intraday returns. Then, the rCov is given by

\[ \mathrm{rCov}_t = \sum_{i=1}^{M} r_{t,i} \, r'_{t,i}. \]
rCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ... )
rData |
an |
cor |
boolean, in case it is |
alignBy |
character, indicating the time scale in which |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set |
makeReturns |
boolean, should be |
... |
used internally, do not change. |
In case the input is an xts or data.table object containing data from one day, an \(N\) by \(N\) matrix is returned. If the data is a univariate xts object with multiple days, an xts object is returned. If the data is multivariate and contains multiple days (xts or data.table), the function returns a list containing \(N\) by \(N\) matrices. Each item in the list has a name which corresponds to the date for the matrix.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
ICov
for a list of implemented estimators of the integrated covariance.
# Realized Variance/Covariance for prices aligned at 5 minutes.
# Univariate:
rv <- rCov(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
           alignPeriod = 5, makeReturns = TRUE)
rv
# Multivariate:
rc <- rCov(rData = sampleOneMinuteData, makeReturns = TRUE)
rc
This function implements the refresh time synchronization scheme proposed by Harris et al. (1995). It picks the so-called refresh times at which all assets have traded at least once since the last refresh time point. For example, the first refresh time corresponds to the first time at which all stocks have traded. The subsequent refresh time is defined as the first time when all stocks have traded again. This process is repeated until the end of one time series is reached.
refreshTime(pData, sort = FALSE, criterion = "squared duration")
pData |
a list. Each list-item contains an |
sort |
logical determining whether to sort the index based on a criterion (will only sort descending; i.e., most liquid first). Default is |
criterion |
character determining which criterion used. Currently supports |
An xts
or data.table
object containing the synchronized time series - depending on the input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Harris, F., T. McInish, Shoesmith, G., and Wood, R. (1995). Cointegration, error correction, and price discovery on informationally linked security markets. Journal of Financial and Quantitative Analysis, 30, 563-581.
# Suppose irregular timepoints:
start <- as.POSIXct("2010-01-01 09:30:00")
ta <- start + c(1, 2, 4, 5, 9)
tb <- start + c(1, 3, 6, 7, 8, 9, 10, 11)
# Yielding the following timeseries:
a <- xts::as.xts(1:length(ta), order.by = ta)
b <- xts::as.xts(1:length(tb), order.by = tb)
# Calculate the synchronized timeseries:
refreshTime(list(a, b))
This function estimates the auto-covariance of market microstructure noise.

Let the observed price be given as \(P_t = X_t + \varepsilon_t\), where \(X_t\) is the efficient price and \(\varepsilon_t\) is the market microstructure noise. The ReMeDI estimator of the \(j\)-th lag of the noise auto-covariance is constructed from products of price differences taken over non-overlapping intervals whose lengths are controlled by the tuning parameter \(k_n\). The function knChooseReMeDI provides a data-driven estimate of the optimal value of this parameter.
ReMeDI(pData, kn = 1, lags = 1, makeCorrelation = FALSE)
pData |
|
kn |
numeric of length 1 determining the tuning parameter kn this controls the lengths of the non-overlapping interval in the ReMeDI estimation |
lags |
numeric containing integer values indicating the lags for which to estimate the (co)variance |
makeCorrelation |
logical indicating whether to transform the autocovariances into autocorrelations.
The estimate of the variance is imprecise; thus, constructing the correlation this way may show correlations that fall outside the interval \([-1, 1]\). |
We thank Merrick Li for contributing his Matlab code for this estimator.
Emil Sjoerup.
Li, M. and Linton, O. (2021). A ReMeDI for microstructure noise. Econometrica, forthcoming
remed <- ReMeDI(sampleTData[as.Date(DT) == "2018-01-02", ], kn = 2, lags = 1:8)
# We can also use the algorithm for choosing the kn tuning parameter
optimalKn <- knChooseReMeDI(sampleTData[as.Date(DT) == "2018-01-02", ],
                            knMax = 10, tol = 0.05, size = 3,
                            lower = 2, upper = 5, plot = TRUE)
optimalKn
remed <- ReMeDI(sampleTData[as.Date(DT) == "2018-01-02", ], kn = optimalKn, lags = 1:8)
Estimates the asymptotic variance of the ReMeDI estimator.
ReMeDIAsymptoticVariance(pData, kn, lags, phi, i)
pData |
|
kn |
numerical value determining the tuning parameter kn this controls the lengths of the non-overlapping interval in the ReMeDI estimation |
lags |
numeric containing integer values indicating the lags for which to estimate the (co)variance |
phi |
tuning parameter phi |
i |
tuning parameter i |
The estimator of the asymptotic covariance of the ReMeDI estimator is built from sample analogues of higher-order moments of the noise process, computed over non-overlapping windows whose placement is governed by the tuning parameters kn, phi, and i; see Li and Linton (2021) for the precise definitions of the building blocks and indices.
a list with components ReMeDI and asympVar, containing the ReMeDI estimates and their asymptotic variances, respectively
We thank Merrick Li for contributing his Matlab code for this estimator.
kn <- knChooseReMeDI(sampleTDataEurope[, list(DT, PRICE)])
remedi <- ReMeDI(sampleTDataEurope[, list(DT, PRICE)], kn = kn, lags = 0:15)
asympVar <- ReMeDIAsymptoticVariance(sampleTDataEurope[, list(DT, PRICE)],
                                     kn = kn, lags = 0:15, phi = 0.9, i = 2)
Calculates the Hayashi-Yoshida Covariance estimator
rHYCov( rData, cor = FALSE, period = 1, alignBy = "seconds", alignPeriod = 1, makeReturns = FALSE, makePsd = TRUE, ... )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
period |
Sampling period. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
makePsd |
boolean, in case it is TRUE, the positive semidefinite version of the estimator is returned. TRUE by default. |
... |
used internally. Do not set. |
Scott Payseur and Emil Sjoerup.
Hayashi, T. and Yoshida, N. (2005). On covariance estimation of non-synchronously observed diffusion processes. Bernoulli, 11, 359-379.
ICov
for a list of implemented estimators of the integrated covariance.
library("xts") hy <- rHYCov(rData = as.xts(sampleOneMinuteData)["2001-08-05"], period = 5, alignBy = "minutes", alignPeriod = 5, makeReturns = TRUE)
library("xts") hy <- rHYCov(rData = as.xts(sampleOneMinuteData)["2001-08-05"], period = 5, alignBy = "minutes", alignPeriod = 5, makeReturns = TRUE)
Realized covariance calculation using a kernel estimator.
The different types of kernels available can be found using listAvailableKernels
.
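For convenience, the kernel names accepted by the kernelType argument can be printed directly (a one-line sketch; listAvailableKernels is the helper referenced above and is called without arguments):

# Print the kernel names accepted by the kernelType argument
listAvailableKernels()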
rKernelCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, kernelType = "rectangular", kernelParam = 1, kernelDOFadj = TRUE, ... )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over.
For example, to aggregate based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
kernelType |
Kernel name. |
kernelParam |
Kernel parameter. |
kernelDOFadj |
Kernel degree of freedom adjustment. |
... |
used internally, do not change. |
Let $r_1, \dots, r_n$ be the $n$ returns in period $t$. The returns or prices do not have to be equidistant. The kernel estimator of the realized variance is given by

$\widehat{K} = \gamma_0 + 2 \sum_{h=1}^{H} k\!\left(\frac{h-1}{H}\right) \gamma_h,$

where $k(\cdot)$ is the chosen kernel function, $H$ the bandwidth, and $\gamma_h$ the empirical autocovariance function at lag $h$. The multivariate version employs the cross-covariances instead.
In case the input is multivariate and contains data from one day, an N by N
matrix is returned.
If the data is a univariate
xts
object with multiple days, an xts
is returned.
If the data is multivariate and contains multiple days (xts
or data.table
), the function returns a list containing by
matrices.
Each item in the list has a name which corresponds to the date for the matrix.
Scott Payseur, Onno Kleen, and Emil Sjoerup.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., and Shephard, N. (2008). Designing realized kernels to measure the ex post variation of equity prices in the presence of noise. Econometrica, 76, 1481-1536.
Hansen, P. and Lunde, A. (2006). Realized variance and market microstructure noise. Journal of Business and Economic Statistics, 24, 127-218.
Zhou, B. (1996). High-frequency data and volatility in foreign-exchange rates. Journal of Business & Economic Statistics, 14, 45-52.
ICov
for a list of implemented estimators of the integrated covariance.
# Univariate:
rvKernel <- rKernelCov(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                       alignPeriod = 5, makeReturns = TRUE)
rvKernel
# Multivariate:
rcKernel <- rKernelCov(rData = sampleOneMinuteData, makeReturns = TRUE)
rcKernel
Calculate the realized kurtosis as defined in Amaya et al. (2015).

Assume there are $N$ equispaced returns in period $t$. Let $r_{t,i}$ be a return (with $i = 1, \dots, N$) in period $t$.
Then,
rKurt
is given by

$\mathrm{rKurt}_t = \frac{N \sum_{i=1}^{N} r_{t,i}^4}{\left(\sum_{i=1}^{N} r_{t,i}^2\right)^2}.$
rKurt(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Amaya, D., Christoffersen, P., Jacobs, K., and Vasquez, A. (2015). Does realized skewness and kurtosis predict the cross-section of equity returns? Journal of Financial Economics, 118, 135-167.
rk <- rKurt(sampleTData[, list(DT, PRICE)], alignBy = "minutes",
            alignPeriod = 5, makeReturns = TRUE)
rk
DEPRECATED USE rMedRQuar
rMedRQ(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED USE rMedRQuar |
alignBy |
DEPRECATED USE rMedRQuar |
alignPeriod |
DEPRECATED USE rMedRQuar |
makeReturns |
DEPRECATED USE rMedRQuar |
Calculate the rMedRQ, defined in Andersen et al. (2012). Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$.
Then, the rMedRQ is given by

$\mathrm{rMedRQ}_t = \frac{3\pi N}{9\pi + 72 - 52\sqrt{3}} \left(\frac{N}{N-2}\right) \sum_{i=2}^{N-1} \mathrm{med}\left(|r_{t,i-1}|, |r_{t,i}|, |r_{t,i+1}|\right)^4.$
rMedRQuar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
rq <- rMedRQuar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                alignPeriod = 5, makeReturns = TRUE)
rq
DEPRECATED USE rMedRVar
rMedRV(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED USE rMedRVar |
alignBy |
DEPRECATED USE rMedRVar |
alignPeriod |
DEPRECATED USE rMedRVar |
makeReturns |
DEPRECATED USE rMedRVar |
Calculate the rMedRVar, defined in Andersen et al. (2012).
Let $r_{t,i}$ be a return (with $i = 1, \dots, N$) in period $t$.
Then, the rMedRVar is given by

$\mathrm{rMedRVar}_t = \frac{\pi}{6 - 4\sqrt{3} + \pi} \left(\frac{N}{N-2}\right) \sum_{i=2}^{N-1} \mathrm{med}\left(|r_{t,i-1}|, |r_{t,i}|, |r_{t,i+1}|\right)^2.$
rMedRVar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
The rMedRVar belongs to the class of realized volatility measures in this package
that use the series of high-frequency returns of a day $t$
to produce an ex post estimate of the realized volatility of that day.
rMedRVar is designed to be robust to price jumps.
The difference between RV and rMedRVar is an estimate of the realized jump
variability. Disentangling the continuous and jump components in RV
can lead to more precise volatility forecasts,
as shown in Andersen et al. (2012).
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
IVar
for a list of implemented estimators of the integrated variance.
medrv <- rMedRVar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                  alignPeriod = 5, makeReturns = TRUE)
medrv
DEPRECATED USE rMinRQuar
rMinRQ(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED USE rMinRQuar |
alignBy |
DEPRECATED USE rMinRQuar |
alignPeriod |
DEPRECATED USE rMinRQuar |
makeReturns |
DEPRECATED USE rMinRQuar |
Calculate the rMinRQuar, defined in Andersen et al. (2012).
Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$.
Then, the rMinRQuar is given by

$\mathrm{rMinRQuar}_t = \frac{\pi N}{3\pi - 8} \left(\frac{N}{N-1}\right) \sum_{i=1}^{N-1} \min\left(|r_{t,i}|, |r_{t,i+1}|\right)^4.$
rMinRQuar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
rq <- rMinRQuar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                alignPeriod = 5, makeReturns = TRUE)
rq
DEPRECATED USE rMinRVar
rMinRV(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED USE rMinRVar |
alignBy |
DEPRECATED USE rMinRVar |
alignPeriod |
DEPRECATED USE rMinRVar |
makeReturns |
DEPRECATED USE rMinRVar |
Calculate the rMinRVar, defined in Andersen et al. (2012).
Let $r_{t,i}$ be a return (with $i = 1, \dots, N$) in period $t$.
Then, the rMinRVar is given by

$\mathrm{rMinRVar}_t = \frac{\pi}{\pi - 2} \left(\frac{N}{N-1}\right) \sum_{i=1}^{N-1} \min\left(|r_{t,i}|, |r_{t,i+1}|\right)^2.$
rMinRVar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Jonathan Cornelissen, Kris Boudt, Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
IVar
for a list of implemented estimators of the integrated variance.
minrv <- rMinRVar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                  alignPeriod = 5, makeReturns = TRUE)
minrv
Function deletes entries for which the spread is more than maxi times the median spread on that day.
rmLargeSpread(qData, maxi = 50, tz = NULL)
qData |
an xts or data.table object at least containing the columns "BID" and "OFR". |
maxi |
an integer. By default maxi = 50, meaning that entries are deleted if the spread is more than 50 times the daily median spread. |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: NULL. |
xts
or data.table
object depending on input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
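A minimal usage sketch (assuming the bundled sampleQDataRaw quote data, which carries BID and OFR columns in the TAQ format):

# Delete quotes whose spread exceeds 50 times the daily median spread
spreadCleaned <- rmLargeSpread(sampleQDataRaw, maxi = 50)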
Function deletes entries for which the spread is negative.
rmNegativeSpread(qData)
qData |
an xts or data.table object at least containing the columns "BID" and "OFR". |
data.table
or xts
object
Jonathan Cornelissen, Kris Boudt and Onno Kleen
rmNegativeSpread(sampleQDataRaw)
Delete entries for which the mid-quote is outlying with respect to surrounding entries.
rmOutliersQuotes(qData, maxi = 10, window = 50, type = "advanced", tz = NULL)
qData |
a data.table or xts object at least containing the columns "BID" and "OFR". |
maxi |
an integer, indicating the maximum number of median absolute deviations allowed. |
window |
an integer, indicating the time window for which the "outlyingness" is considered. |
type |
character, indicating whether the "standard" or the "advanced" method should be used; "advanced" by default. See ‘Details’. |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: NULL. |
If type = "standard"
: Function deletes entries for which the mid-quote deviates by more than "maxi"
median absolute deviations from a rolling centered median (excluding
the observation under consideration) of window observations.
If type = "advanced"
: Function deletes entries for which the mid-quote deviates by more than "maxi"
median absolute deviations from the value closest to the mid-quote of
these three options:
Rolling centered median (excluding the observation under consideration)
Rolling median of the following window of observations
Rolling median of the previous window of observations
The advantage of this procedure compared to the "standard" proposed by Barndorff-Nielsen et al. (2010) is that it will not incorrectly remove large price jumps. Therefore this procedure has been set as the default for removing outliers.
Note that the median absolute deviation is taken over the entire day. In case it is zero (which can happen if mid-quotes don't change much), the median absolute deviation is taken over a subsample without constant mid-quotes.
xts
object or data.table
depending on type of input.
Jonathan Cornelissen and Kris Boudt.
Barndorff-Nielsen, O. E., P. R. Hansen, A. Lunde, and N. Shephard (2009). Realized kernels in practice: Trades and quotes. Econometrics Journal, 12, C1-C32.
Brownlees, C.T., and Gallo, G.M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, 2232-2245.
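A minimal usage sketch (assuming the bundled sampleQData cleaned quote data):

# Drop quotes whose mid-quote is more than 10 MADs away from the rolling medians
qCleaned <- rmOutliersQuotes(sampleQData, maxi = 10, window = 50, type = "advanced")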
Delete entries for which the price is outlying with respect to surrounding entries. In comparison to tradesCleanupUsingQuotes, this function doesn't need quote data.
rmOutliersTrades(pData, maxi = 10, window = 50, type = "advanced", tz = NULL)
pData |
a data.table or xts object at least containing the column "PRICE". |
maxi |
an integer, indicating the maximum number of median absolute deviations allowed. |
window |
an integer, indicating the time window for which the "outlyingness" is considered. |
type |
character, indicating whether the "standard" or the "advanced" method should be used; "advanced" by default. See ‘Details’. |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: NULL. |
If type = "standard"
: Function deletes entries for which the price deviates by more than "maxi"
median absolute deviations from a rolling centered median (excluding
the observation under consideration) of window observations.
If type = "advanced"
: Function deletes entries for which the price deviates by more than "maxi"
median absolute deviations from the value closest to the price of
these three options:
Rolling centered median (excluding the observation under consideration)
Rolling median of the following window of observations
Rolling median of the previous window of observations
The advantage of this procedure compared to the "standard" proposed by Barndorff-Nielsen et al. (2010, footnote 8) is that it will not incorrectly remove large price jumps. Therefore this procedure has been set as the default for removing outliers.
Note that the median absolute deviation is taken over the entire day. In case it is zero (which can happen if prices don't change much), the median absolute deviation is taken over a subsample without constant prices.
xts
object or data.table
depending on type of input.
Jonathan Cornelissen, Kris Boudt, and Onno Kleen.
Barndorff-Nielsen, O. E., P. R. Hansen, A. Lunde, and N. Shephard (2009). Realized kernels in practice: Trades and quotes. Econometrics Journal, 12, C1-C32.
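A minimal usage sketch (assuming the bundled sampleTData trade data):

# Drop trades whose price is more than 10 MADs away from the rolling medians
tCleaned <- rmOutliersTrades(sampleTData, maxi = 10, window = 50, type = "advanced")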
DEPRECATED USE rMPVar
rMPV(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED |
alignBy |
DEPRECATED |
alignPeriod |
DEPRECATED |
makeReturns |
DEPRECATED |
Calculate the Realized Multipower Variation rMPVar, defined in Andersen et al. (2012).
Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$. Then, the rMPVar is given by

$\mathrm{rMPVar}_N(m, p) = d_{m,p} \, \frac{N^{p/2}}{N - m + 1} \sum_{i=1}^{N - m + 1} |r_{t,i}|^{p/m} \cdots |r_{t,i+m-1}|^{p/m},$

in which $d_{m,p} = \mu_{p/m}^{-m}$ with $\mu_r = \mathbb{E}\left[|Z|^r\right]$ for a standard normal $Z$;
$m$: the window size of return blocks;
$p$: the power of the variation;
and $m > p/2$.
rMPVar( rData, m = 2, p = 2, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ... )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
m |
the window size of return blocks. 2 by default. |
p |
the power of the variation. 2 by default. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
numeric
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
IVar
for a list of implemented estimators of the integrated variance.
mpv <- rMPVar(sampleTData[, list(DT, PRICE)], m = 2, p = 3, alignBy = "minutes",
              alignPeriod = 5, makeReturns = TRUE)
mpv
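Note that with m = 2 and p = 2, the formula above reduces to the realized bipower variation, so the following sketch (same sample data as above) reproduces that familiar estimator:

# m = 2, p = 2: multipower variation equals realized bipower variation
bpv <- rMPVar(sampleTData[, list(DT, PRICE)], m = 2, p = 2, alignBy = "minutes",
              alignPeriod = 5, makeReturns = TRUE)
bpv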
DEPRECATED USE rMRCov
rMRC(pData, pairwise = FALSE, makePsd = FALSE, theta = 0.8, ...)
pData |
DEPRECATED USE rMRCov |
pairwise |
DEPRECATED USE rMRCov |
makePsd |
DEPRECATED USE rMRCov |
theta |
DEPRECATED USE rMRCov |
... |
DEPRECATED USE rMRCov |
Calculate the univariate or multivariate pre-averaged estimator, as defined in Hautsch and Podolskij (2013).
rMRCov( pData, pairwise = FALSE, makePsd = FALSE, theta = 0.8, crossAssetNoiseCorrection = FALSE, ... )
pData |
a list. Each list-item contains an xts or data.table object with the intraday price data of an asset. |
pairwise |
boolean, should be TRUE when the estimator is computed pairwise, using pairwise refresh times. FALSE by default. |
makePsd |
boolean, in case it is TRUE, the positive semidefinite version of the estimator is returned. FALSE by default. |
theta |
a numeric controlling the preaveraging horizon; theta = 0.8 by default, as recommended in Hautsch and Podolskij (2013). |
crossAssetNoiseCorrection |
a logical denoting whether to apply the noise bias correction to the cross-asset (off-diagonal) elements as well. FALSE by default. |
... |
used internally, do not change. |
In practice, market microstructure noise leads to a departure from the pure semimartingale model. We consider the process $Y$ in period $\tau$:

$Y_\tau = X_\tau + \varepsilon_\tau,$

where the observed $d$-dimensional log-prices are the sum of the underlying Brownian semimartingale process $X$ and a noise term $\varepsilon_\tau$.
$\varepsilon_\tau$ is an i.i.d. process with mean zero, independent of $X$.

It is intuitive that under mean zero i.i.d. microstructure noise some form of smoothing of the observed log-price should tend to diminish the impact of the noise.
Effectively, we are going to approximate a continuous function by an average of observations of $Y$ in a neighborhood, the noise being averaged away.

Assume there are $N$ equispaced returns in period $\tau$ of a list (after refreshing data). Let $r_{\tau_i}$ be a return (with $i = 1, \dots, N$) of an asset in period $\tau$. Assume there are $d$ assets.

In order to define the univariate pre-averaging estimator, we first define the pre-averaged returns as

$\bar{r}_{\tau_j} = \sum_{h=1}^{k_N - 1} g\!\left(\frac{h}{k_N}\right) r_{\tau_{j+h}},$

where $g$ is a non-zero real-valued function $g: [0, 1] \to \mathbb{R}$ given by $g(x) = \min(x, 1 - x)$.
$k_N$ is a sequence of integers satisfying $k_N = \lfloor \theta N^{1/2} \rfloor$.
We use $\theta = 0.8$ as recommended in Hautsch and Podolskij (2013). The pre-averaged returns are simply a weighted average over the returns in a local window.
This averaging diminishes the influence of the noise. The order of the window size $k_N$ is chosen to lead to optimal convergence rates.

The pre-averaging estimator is then simply the analogue of the realized variance but based on pre-averaged returns and an additional term to remove bias due to noise:

$\widehat{C} = \frac{N^{-1/2}}{\theta \psi_2} \sum_{i=0}^{N - k_N + 1} \bar{r}_{\tau_i}^2 - \frac{\psi_1^{k_N} N^{-1}}{2 \theta^2 \psi_2^{k_N}} \sum_{i=1}^{N} r_{\tau_i}^2,$

with

$\psi_1^{k_N} = k_N \sum_{j=1}^{k_N} \left( g\!\left(\frac{j}{k_N}\right) - g\!\left(\frac{j-1}{k_N}\right) \right)^2, \qquad \psi_2^{k_N} = \frac{1}{k_N} \sum_{j=1}^{k_N - 1} g^2\!\left(\frac{j}{k_N}\right).$

The multivariate counterpart is very similar. The estimator is called the Modulated Realized Covariance (rMRCov) and is defined as

$\mathrm{MRC} = \frac{N}{N - k_N + 2} \, \frac{1}{\psi_2 k_N} \sum_{i=0}^{N - k_N + 1} \bar{r}_{\tau_i} \cdot \bar{r}_{\tau_i}' - \frac{\psi_1^{k_N}}{\theta^2 \psi_2^{k_N}} \, \hat{\Psi},$

where $\hat{\Psi} = \frac{1}{2N} \sum_{i=1}^{N} r_{\tau_i} r_{\tau_i}'$. It is a bias correction to make it consistent.
However, due to this correction, the estimator is not ensured PSD.
An alternative is to slightly enlarge the bandwidth such that $k_N = \lfloor \theta N^{1/2 + \delta} \rfloor$.
$\delta = 0.1$ results in a consistent estimate without the bias correction and a PSD estimate, in which case:

$\mathrm{MRC}^{\delta} = \frac{N}{N - k_N + 2} \, \frac{1}{\psi_2 k_N} \sum_{i=0}^{N - k_N + 1} \bar{r}_{\tau_i} \cdot \bar{r}_{\tau_i}'.$
A covariance matrix.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Hautsch, N., and Podolskij, M. (2013). Preaveraging-based estimation of quadratic variation in the presence of noise and jumps: theory, implementation, and empirical Evidence. Journal of Business & Economic Statistics, 31, 165-183.
ICov
for a list of implemented estimators of the integrated covariance.
## Not run: library("xts") # Note that this ought to be tick-by-tick data and this example is only to show the usage. a <- list(as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)]), as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, STOCK)])) rMRCov(a, pairwise = TRUE, makePsd = TRUE) # We can also add use data.tables and use a named list to convey asset names a <- list(foo = sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)], bar = sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, STOCK)]) rMRCov(a, pairwise = TRUE, makePsd = TRUE) ## End(Not run)
## Not run: library("xts") # Note that this ought to be tick-by-tick data and this example is only to show the usage. a <- list(as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)]), as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, STOCK)])) rMRCov(a, pairwise = TRUE, makePsd = TRUE) # We can also add use data.tables and use a named list to convey asset names a <- list(foo = sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)], bar = sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, STOCK)]) rMRCov(a, pairwise = TRUE, makePsd = TRUE) ## End(Not run)
Function deletes entries with prices that are above the ask plus the bid-ask spread. Similar for entries with prices below the bid minus the bid-ask spread.
rmTradeOutliersUsingQuotes( tData, qData, lagQuotes = 0, nSpreads = 1, BFM = FALSE, backwardsWindow = 3600, forwardsWindow = 0.5, plot = FALSE, ... )
tData |
a data.table or xts object containing the cleaned trade data. |
qData |
a data.table or xts object containing the cleaned quote data. |
lagQuotes |
numeric, number of seconds the quotes are registered faster than the trades (should be a positive whole number). Default is 0. For older datasets, i.e. before 2010, it may be a good idea to set this to e.g. 2. See Vergote (2005). |
nSpreads |
numeric of length 1 denoting how far above the offer and below the bid we allow outliers to be. Trades are filtered out if they are more than nSpreads * spread above (below) the offer (bid). |
BFM |
a logical determining whether to conduct 'Backwards - Forwards matching' of trades and quotes. The algorithm tries to match trades that fall outside the bid-ask spread; it first tries to match a small window forwards and, if this fails, it tries to match backwards in a bigger window. The small window is a tolerance for inaccuracies in the timestamps of bids and asks. The backwards window allows for matching of late-reported trades, i.e. block trades. |
backwardsWindow |
a numeric denoting the length of the backwards window. Default is 3600, corresponding to one hour. |
forwardsWindow |
a numeric denoting the length of the forwards window. Default is 0.5, corresponding to one half second. |
plot |
a logical denoting whether to visualize the forwards, backwards, and unmatched trades in a plot. |
... |
used internally |
Note: in order to work correctly, the input data of this function should be cleaned trade (tData) and quote (qData) data respectively. In older high frequency datasets the trades frequently lag the quotes. In newer datasets this tends to happen only during extreme market activity when exchange networks are at maximum capacity.
xts
or data.table
object depending on input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Vergote, O. (2005). How to match trades and quotes for NYSE stocks? K.U.Leuven working paper.
Christensen, K., Oomen, R. C. A., Podolskij, M. (2014): Fact or Friction: Jumps at ultra high frequency. Journal of Financial Economics, 144, 576-599
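A minimal usage sketch (assuming the bundled sampleTData and sampleQData objects; note that sampleTData has already been through this cleaning step, so the call is for illustration only):

# Remove trades with prices outside [bid - spread, offer + spread]
tqCleaned <- rmTradeOutliersUsingQuotes(tData = sampleTData, qData = sampleQData,
                                        lagQuotes = 0)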
Calculate the Realized Outlyingness Weighted Covariance (rOWCov), defined in Boudt et al. (2008).

Let $r_{t,i}$, for $i = 1, \dots, M$ be a sample
of $M$ high-frequency
return vectors and $d_{t,i}$
their outlyingness given by the squared Mahalanobis distance between
the return vector and zero in terms of the reweighted MCD covariance
estimate based on these returns.

Then, the rOWCov is given by

$\mathrm{rOWCov}_t = c_w \, \frac{\sum_{i=1}^{M} w(d_{t,i}) \, r_{t,i} r_{t,i}'}{\frac{1}{M} \sum_{i=1}^{M} w(d_{t,i})}.$

The weight $w(d_{t,i})$ is one if the multivariate jump test statistic for $r_{t,i}$
in Boudt et al. (2008) is less
than the 99.9% percentile of the chi-square distribution with $N$
degrees of freedom and zero otherwise.
The scalar $c_w$
is a correction factor ensuring consistency of the rOWCov for the Integrated Covariance,
under the Brownian Semimartingale with Finite Activity Jumps model.
rOWCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, seasadjR = NULL, wFunction = "HR", alphaMCD = 0.75, alpha = 0.001, ... )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
seasadjR |
an xts or data.table object containing the seasonally adjusted returns; this is an optional argument. |
wFunction |
determines the weight function used: by default "HR", a zero-one (hard rejection) weight function that is one if no jump is detected based on the outlyingness and zero otherwise; alternatively, a soft rejection ("SR") weight function can be used. |
alphaMCD |
a numeric parameter, controlling the size of
the subsets over which the determinant is minimized.
Allowed values are between 0.5 and 1 and
the default is 0.75. See Boudt et al. (2008) or the covMcd function in the robustbase package. |
alpha |
is a parameter between 0 and 0.5, that determines the rejection threshold value (see Boudt et al. (2008) for details). |
... |
used internally, do not change. |
Advantages of the rOWCov compared to the rBPCov
include a higher statistical efficiency, positive semi-definiteness and affine equi-variance.
However, the rOWCov suffers from a curse of dimensionality.
The rOWCov gives a zero weight to a return vector
if at least one of the components is affected by a jump.
In the case of independent jump occurrences, the average proportion of observations
with at least one component being affected by jumps increases fast with the dimension
of the series. This means that a potentially large proportion of the returns receives
a zero weight, due to which the rOWCov can have a low finite sample efficiency in higher dimensions.
an N by N matrix
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Boudt, K., Croux, C., and Laurent, S. (2008). Outlyingness weighted covariation. Journal of Financial Econometrics, 9, 657–684.
ICov
for a list of implemented estimators of the integrated covariance.
## Not run: library("xts") # Realized Outlyingness Weighted Variance/Covariance for prices aligned # at 1 minutes. # Univariate: row <- rOWCov(rData = as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)]), makeReturns = TRUE) row # Multivariate: rowc <- rOWCov(rData = as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04",]), makeReturns = TRUE) rowc ## End(Not run)
## Not run: library("xts") # Realized Outlyingness Weighted Variance/Covariance for prices aligned # at 1 minutes. # Univariate: row <- rOWCov(rData = as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04", list(DT, MARKET)]), makeReturns = TRUE) row # Multivariate: rowc <- rOWCov(rData = as.xts(sampleOneMinuteData[as.Date(DT) == "2001-08-04",]), makeReturns = TRUE) rowc ## End(Not run)
Calculate the realized quad-power variation, defined in Andersen et al. (2012).
Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$. Then, the rQPVar is given by

$\mathrm{rQPVar}_t = N \, \frac{\pi^2}{4} \left(\frac{N}{N-3}\right) \sum_{i=4}^{N} |r_{t,i}| \, |r_{t,i-1}| \, |r_{t,i-2}| \, |r_{t,i-3}|.$
rQPVar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
IVar
for a list of implemented estimators of the integrated variance.
qpv <- rQPVar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
              alignPeriod = 5, makeReturns = TRUE)
qpv
Calculate the realized quarticity (rQuar), defined in Andersen et al. (2012).
Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$.
Then, the rQuar is given by

$\mathrm{rQuar}_t = \frac{N}{3} \sum_{i=1}^{N} r_{t,i}^4.$
rQuar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
rq <- rQuar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
            alignPeriod = 5, makeReturns = TRUE)
rq
Calculate the robust two time scale covariance matrix proposed in Boudt and Zhang (2010).
Unlike the rOWCov
, but similarly to the rThresholdCov
, the rRTSCov
uses univariate jump detection rules
to truncate the effect of jumps on the covariance
estimate. By the use of two time scales, this covariance estimate
is not only robust to price jumps, but also to microstructure noise and non-synchronous trading.
rRTSCov( pData, cor = FALSE, startIV = NULL, noisevar = NULL, K = 300, J = 1, KCov = NULL, JCov = NULL, KVar = NULL, JVar = NULL, eta = 9, makePsd = FALSE, ... )
pData |
a list. Each list-item i contains an xts object with the intraday price data of stock i. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
startIV |
vector containing the first step estimates of the integrated variance of the assets, needed in the truncation. Is NULL by default. |
noisevar |
vector containing the estimates of the noise variance of the assets, needed in the truncation. Is NULL by default. |
K |
positive integer, slow time scale returns are computed on prices that are K steps apart. |
J |
positive integer, fast time scale returns are computed on prices that are J steps apart. |
KCov |
positive integer, for the extradiagonal covariance elements the slow time scale returns are computed on prices that are KCov steps apart. |
JCov |
positive integer, for the extradiagonal covariance elements the fast time scale returns are computed on prices that are JCov steps apart. |
KVar |
vector of positive integers, for the diagonal variance elements the slow time scale returns are computed on prices that are KVar steps apart. |
JVar |
vector of positive integers, for the diagonal variance elements the fast time scale returns are computed on prices that are JVar steps apart. |
eta |
positive real number, squared standardized high-frequency returns that exceed eta are detected as jumps. |
makePsd |
boolean, in case it is TRUE, the positive semidefinite version of the estimator is returned. FALSE by default. |
... |
used internally, do not change. |
The rRTSCov requires the tick-by-tick transaction prices.
(Co)variances are then computed using log-returns calculated on a rolling basis
on stock prices that are $K$ (slow time scale) and $J$ (fast time scale) steps apart.

The diagonal elements of the rRTSCov matrix are the variances, computed for a log-price series $X$ with $n$ price observations
at times $\tau_1, \tau_2, \dots, \tau_n$ as follows:

$\left(1 - \frac{\bar{n}_K}{\bar{n}_J}\right)^{-1} \left( [X, X]_T^{(K)*} - \frac{\bar{n}_K}{\bar{n}_J} [X, X]_T^{(J)*} \right),$

where $\bar{n}_K = (n - K + 1)/K$, $\bar{n}_J = (n - J + 1)/J$ and

$[X, X]_T^{(K)*} = \frac{c_\eta^*}{K} \sum_{i=1}^{n - K + 1} \left(X_{\tau_{i+K}} - X_{\tau_i}\right)^2 I_X^K(i; \eta).$

The constant $c_\eta^*$ adjusts for the bias due to the thresholding and $I_X^K(i; \eta)$ is a jump indicator function
that is one if

$\frac{\left(X_{\tau_{i+K}} - X_{\tau_i}\right)^2}{\int_{\tau_i}^{\tau_{i+K}} \sigma_s^2 \, \mathrm{d}s + 2\sigma_\varepsilon^2} \le \eta$

and zero otherwise. The elements in the denominator are the integrated variance (estimated recursively) and noise variance (estimated by the method in Zhang et al., 2005).
The extradiagonal elements of the rRTSCov are the covariances.
For their calculation, the data is first synchronized by the refresh time method proposed by Harris et al. (1995).
It uses the function
refreshTime
to collect first the so-called refresh times at which all assets have traded at least once
since the last refresh time point. Suppose we have two log-price series: $X$ and $Y$. Let $\Gamma = \{\tau_1, \tau_2, \dots\}$ and $\Theta = \{\theta_1, \theta_2, \dots\}$ be the sets of transaction times of these assets.
The first refresh time corresponds to the first time at which both stocks have traded, i.e.
$\phi_1 = \max(\tau_1, \theta_1)$. The subsequent refresh time is defined as the first time when both stocks have again traded, i.e.
$\phi_{j+1} = \max\left(\tau_{N^{\tau}_{\phi_j} + 1}, \theta_{N^{\theta}_{\phi_j} + 1}\right)$. The
complete refresh time sample grid is
$\Phi = \{\phi_1, \phi_2, \dots, \phi_{M_N + 1}\}$, where $M_N$ is the total number of paired returns. The
sampling points of asset $X$ and $Y$
are defined to be
$t_i = \max\{\tau \in \Gamma : \tau \le \phi_i\}$ and $s_i = \max\{\theta \in \Theta : \theta \le \phi_i\}$.

Given these refresh times, the covariance is computed as follows:

$c_N^* \left( [X, Y]_T^{(K)*} - \frac{\bar{n}_K}{\bar{n}_J} [X, Y]_T^{(J)*} \right),$

where

$[X, Y]_T^{(K)*} = \frac{1}{K} \sum_{i=1}^{M_N - K + 1} \left(X_{t_{i+K}} - X_{t_i}\right)\left(Y_{s_{i+K}} - Y_{s_i}\right) I_X^K(i; \eta) \, I_Y^K(i; \eta),$

with the same jump indicator function as for the variance and $c_N^*$
a constant to adjust for the bias due to the thresholding.
Unfortunately, the rRTSCov is not always positive semidefinite.
By setting the argument makePsd = TRUE
, the function makePsd
is used to return a positive semidefinite
matrix. This function replaces the negative eigenvalues with zeroes.
an N by N matrix
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Boudt, K. and Zhang, J. (2010). Jump robust two time scale covariance estimation and realized volatility budgets. Mimeo.
Harris, F., McInish, T., Shoesmith, G., and Wood, R. (1995). Cointegration, error correction, and price discovery on informationally linked security markets. Journal of Financial and Quantitative Analysis, 30, 563-581.
Zhang, L., Mykland, P. A., and Ait-Sahalia, Y. (2005). A tale of two time scales: Determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 100, 1394-1411.
ICov
for a list of implemented estimators of the integrated covariance.
## Not run:
library(xts)
set.seed(123)
start <- strptime("1970-01-01", format = "%Y-%m-%d", tz = "UTC")
timestamps <- start + seq(34200, 57600, length.out = 23401)
dat <- cbind(rnorm(23401) * sqrt(1/23401), rnorm(23401) * sqrt(1/23401))
dat <- exp(cumsum(xts(dat, timestamps)))
price1 <- dat[, 1]
price2 <- dat[, 2]
rcRTS <- rRTSCov(pData = list(price1, price2))  # Note: list of prices as input
rcRTS
## End(Not run)
Calculates the daily Realized Variance.
Let $r_i$ be an intraday return, with $N$ the number of intraday returns.
Then, the realized variance is given by

$\mathrm{RV} = \sum_{i=1}^{N} r_i^2.$
rRVar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
IVar
for a list of implemented estimators of the integrated variance.
rv <- rRVar(sampleOneMinuteData, makeReturns = TRUE)
plot(rv[, DT], rv[, MARKET], xlab = "Date", ylab = "Realized Variance", type = "l")
Calculate the Realized Semicovariances (rSemiCov).

Let $r_i$ be the $i$-th intraday return vector of $d$ assets, $i = 1, \dots, N$, with $N$ the number of intraday returns. Then, let $r_i^{+} = \max(r_i, 0)$ and $r_i^{-} = \min(r_i, 0)$, applied element-wise.
The realized semicovariance is given by the following three matrices:

$P = \sum_{i=1}^{N} r_i^{+} \left(r_i^{+}\right)', \qquad \mathcal{N} = \sum_{i=1}^{N} r_i^{-} \left(r_i^{-}\right)', \qquad \mathcal{M} = \sum_{i=1}^{N} \left( r_i^{+} \left(r_i^{-}\right)' + r_i^{-} \left(r_i^{+}\right)' \right).$

The mixed covariance matrix $\mathcal{M}$ will have 0 on the diagonal.
From these three matrices, the realized covariance can be constructed as $\mathrm{rCov} = P + \mathcal{N} + \mathcal{M}$.
The concordant semicovariance matrix is $C = P + \mathcal{N}$.
The off-diagonal elements of the concordant matrix are always positive, while those of the mixed matrix are always negative.
rSemiCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In the case that cor is TRUE, the mixed matrix will be a matrix filled with NAs, as mapping the mixed covariance matrix into correlation space is impossible due to the zero diagonal.
In case the data consists of one day, a list of five matrices is returned. These matrices are named
mixed
, positive
, negative
, concordant
, and rCov
.
The latter matrix corresponds to the realized covariance estimator and is thus named like the function rCov
.
In case the data spans more than one day, the list for each day will be put into another list named according to the date of the estimates.
Emil Sjoerup.
Bollerslev, T., Li, J., Patton, A. J., and Quaedvlieg, R. (2020). Realized semicovariances. Econometrica, 88, 1515-1551.
ICov
for a list of implemented estimators of the integrated covariance.
# Realized semi-variance/semi-covariance for prices aligned
# at 5 minutes.
# Univariate:
rSVar <- rSemiCov(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
                  alignPeriod = 5, makeReturns = TRUE)
rSVar
## Not run:
library("xts")
# Multivariate multi day:
rSC <- rSemiCov(sampleOneMinuteData, makeReturns = TRUE)  # rSC is a list of lists
# We extract the covariance between stock 1 and stock 2 for all three covariances.
mixed <- sapply(rSC, function(x) x[["mixed"]][1, 2])
neg <- sapply(rSC, function(x) x[["negative"]][1, 2])
pos <- sapply(rSC, function(x) x[["positive"]][1, 2])
covariances <- xts(cbind(mixed, neg, pos), as.Date(names(rSC)))
colnames(covariances) <- c("mixed", "neg", "pos")
# We make a quick plot of the different covariances
plot(covariances)
addLegend(lty = 1)  # Add legend so we can distinguish the series.
## End(Not run)
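As a quick check of the decomposition described above, the three semicovariance matrices should add up to the plain realized covariance (a sketch assuming the single-day named-list output documented above):

rSC1 <- rSemiCov(sampleOneMinuteData[as.Date(DT) == "2001-08-04"], makeReturns = TRUE)
# positive + negative + mixed reproduces the realized covariance matrix
all.equal(rSC1$positive + rSC1$negative + rSC1$mixed, rSC1$rCov)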
Calculate the realized skewness, defined in Amaya et al. (2015).

Assume there are $N$ equispaced returns in period $t$. Let $r_{t,i}$ be a return (with $i = 1, \dots, N$) in period $t$. Then,
rSkew
is given by

$\mathrm{rSkew}_t = \frac{\sqrt{N} \sum_{i=1}^{N} r_{t,i}^3}{\left(\sum_{i=1}^{N} r_{t,i}^2\right)^{3/2}}.$
rSkew(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Amaya, D., Christoffersen, P., Jacobs, K., and Vasquez, A. (2015). Does realized skewness and kurtosis predict the cross-section of equity returns? Journal of Financial Economics, 118, 135-167.
rs <- rSkew(sampleTData[, list(DT, PRICE)], alignBy = "minutes",
            alignPeriod = 5, makeReturns = TRUE)
rs
DEPRECATED USE rSVar
rSV(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
DEPRECATED USE rSVar |
alignBy |
DEPRECATED USE rSVar |
alignPeriod |
DEPRECATED USE rSVar |
makeReturns |
DEPRECATED USE rSVar |
Calculate the realized semivariances, defined in Barndorff-Nielsen et al. (2010).

Function returns two outcomes:

Downside realized semivariance

Upside realized semivariance.

Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$.
Then, the
rSVar
is given by

$\mathrm{rSVar}^{-}_t = \sum_{i=1}^{N} r_{t,i}^2 \, I\left(r_{t,i} < 0\right) \quad \text{(downside)}, \qquad \mathrm{rSVar}^{+}_t = \sum_{i=1}^{N} r_{t,i}^2 \, I\left(r_{t,i} > 0\right) \quad \text{(upside)}.$
rSVar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ...)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally |
list with two entries, the realized positive and negative semivariances
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Barndorff-Nielsen, O. E., Kinnebrock, S., and Shephard N. (2010). Measuring downside risk: realised semivariance. In: Volatility and Time Series Econometrics: Essays in Honor of Robert F. Engle, (Edited by Bollerslev, T., Russell, J., and Watson, M.), 117-136. Oxford University Press.
IVar
for a list of implemented estimators of the integrated variance.
sv <- rSVar(sampleTData[, list(DT, PRICE)], alignBy = "minutes",
            alignPeriod = 5, makeReturns = TRUE)
sv
Calculate the threshold covariance matrix proposed in Gobbi and Mancini (2009).
Unlike the
rOWCov
, the rThresholdCov uses univariate jump detection rules to truncate the effect of jumps on the covariance
estimate. As such, it remains feasible in high dimensions, but it is less robust to small cojumps.

Let $r_i$ be an intraday return vector of $d$ assets, $i = 1, \dots, N$, with $N$ being the number of intraday returns.
Then, the $(k, q)$-th element of the threshold covariance matrix is defined as

$\mathrm{thresholdcov}[k, q]_t = \sum_{i=1}^{N} r_{(k), i} \, r_{(q), i} \; I\!\left\{ r_{(k), i}^2 \le TR_N \right\} I\!\left\{ r_{(q), i}^2 \le TR_N \right\},$

with the threshold value $TR_N$ set to a multiple of the daily realized bi-power variation of the corresponding asset,
as suggested in Jacod and Todorov (2009).
rThresholdCov( rData, cor = FALSE, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE, ... )
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
... |
used internally, do not change. |
In case the input is multivariate and contains data from one day, an N by N matrix is returned. If the data is a univariate
xts
object with multiple days, an xts
is returned.
If the data is multivariate and contains multiple days (xts
or data.table
), the function returns a list containing N by N matrices. Each item in the list has a name which corresponds to the date for the matrix.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Barndorff-Nielsen, O. and Shephard, N. (2004). Measuring the impact of jumps in multivariate price processes using bipower covariation. Discussion paper, Nuffield College, Oxford University.
Jacod, J. and Todorov, V. (2009). Testing for common arrival of jumps in discretely-observed multidimensional processes. Annals of Statistics, 37, 1792-1838.
Mancini, C. and Gobbi, F. (2012). Identifying the Brownian covariation from the co-jumps given discrete observations. Econometric Theory, 28, 249-273.
ICov
for a list of implemented estimators of the integrated covariance.
# Realized threshold Variance/Covariance:
# Multivariate:
## Not run:
library("xts")
set.seed(123)
start <- strptime("1970-01-01", format = "%Y-%m-%d", tz = "UTC")
timestamps <- start + seq(34200, 57600, length.out = 23401)
dat <- cbind(rnorm(23401) * sqrt(1/23401), rnorm(23401) * sqrt(1/23401))
dat <- exp(cumsum(xts(dat, timestamps)))
rcThreshold <- rThresholdCov(dat, alignBy = "minutes", alignPeriod = 1, makeReturns = TRUE)
rcThreshold
## End(Not run)
Calculate the rTPQuar, defined in Andersen et al. (2012).
Assume there are $N$ equispaced returns $r_{t,i}$ in period $t$, $i = 1, \dots, N$. Then, the rTPQuar is given by

$\mathrm{rTPQuar}_t = N \, \frac{N}{N-2} \left(\frac{\Gamma(1/2)}{2^{2/3} \, \Gamma(7/6)}\right)^3 \sum_{i=3}^{N} |r_{t,i}|^{4/3} \, |r_{t,i-1}|^{4/3} \, |r_{t,i-2}|^{4/3}.$
rTPQuar(rData, alignBy = NULL, alignPeriod = NULL, makeReturns = FALSE)
rData |
an xts or data.table object containing returns or prices, possibly multivariate. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed (e.g., "minutes" or "seconds"). |
alignPeriod |
positive numeric, indicating the number of periods to aggregate over. For example, to aggregate
based on a 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
makeReturns |
boolean, should be TRUE when rData contains prices instead of returns. FALSE by default. |
In case the input is an xts
object with data from one day, a numeric of the same length as the number of assets is returned.
If the input data spans multiple days and is in xts
format, an xts
will be returned.
If the input data is a data.table
object, the function returns a data.table
with the same column names as the input data, containing the date and the realized measures.
Giang Nguyen, Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Andersen, T. G., Dobrev, D., and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics, 169, 75-93.
tpq <- rTPQuar(rData = sampleTData[, list(DT, PRICE)], alignBy = "minutes",
               alignPeriod = 5, makeReturns = TRUE)
tpq
Calculate the two time scale covariance matrix proposed in Zhang et al. (2005) and Zhang (2011). By the use of two time scales, this covariance estimate is robust to microstructure noise and non-synchronous trading.
rTSCov( pData, cor = FALSE, K = 300, J = 1, KCov = NULL, JCov = NULL, KVar = NULL, JVar = NULL, makePsd = FALSE, ... )
pData |
a list. Each list-item i contains an xts object with the intraday price data of stock i. |
cor |
boolean, in case it is TRUE, the correlation is returned instead of the covariance. FALSE by default. |
K |
positive integer, slow time scale returns are computed on prices that are K steps apart. |
J |
positive integer, fast time scale returns are computed on prices that are J steps apart. |
KCov |
positive integer, for the extradiagonal covariance elements the slow time scale returns are computed on prices that are KCov steps apart. |
JCov |
positive integer, for the extradiagonal covariance elements the fast time scale returns are computed on prices that are JCov steps apart. |
KVar |
vector of positive integers, for the diagonal variance elements the slow time scale returns are computed on prices that are KVar steps apart. |
JVar |
vector of positive integers, for the diagonal variance elements the fast time scale returns are computed on prices that are JVar steps apart. |
makePsd |
boolean, in case it is TRUE, the positive semidefinite version of the estimator is returned. FALSE by default. |
... |
used internally, do not change. |
The rTSCov requires the tick-by-tick transaction prices. (Co)variances are then computed using log-returns calculated on a rolling basis
on stock prices that are $K$ (slow time scale) and $J$ (fast time scale) steps apart.

The diagonal elements of the rTSCov matrix are the variances, computed for a log-price series $X$ with $n$ price observations
at times $\tau_1, \tau_2, \dots, \tau_n$ as follows:

$\left(1 - \frac{\bar{n}_K}{\bar{n}_J}\right)^{-1} \left( [X, X]_T^{(K)} - \frac{\bar{n}_K}{\bar{n}_J} [X, X]_T^{(J)} \right),$

where $\bar{n}_K = (n - K + 1)/K$, $\bar{n}_J = (n - J + 1)/J$ and

$[X, X]_T^{(K)} = \frac{1}{K} \sum_{i=1}^{n - K + 1} \left(X_{\tau_{i+K}} - X_{\tau_i}\right)^2.$
The extradiagonal elements of the rTSCov are the covariances.
For their calculation, the data is first synchronized by the refresh time method proposed by Harris et al. (1995).
It uses the function
refreshTime
to collect first the so-called refresh times at which all assets have traded at least once
since the last refresh time point. Suppose we have two log-price series: $X$ and $Y$. Let $\Gamma = \{\tau_1, \tau_2, \dots\}$ and $\Theta = \{\theta_1, \theta_2, \dots\}$ be the sets of transaction times of these assets.
The first refresh time corresponds to the first time at which both stocks have traded, i.e.
$\phi_1 = \max(\tau_1, \theta_1)$. The subsequent refresh time is defined as the first time when both stocks have again traded, i.e.
$\phi_{j+1} = \max\left(\tau_{N^{\tau}_{\phi_j} + 1}, \theta_{N^{\theta}_{\phi_j} + 1}\right)$. The
complete refresh time sample grid is
$\Phi = \{\phi_1, \phi_2, \dots, \phi_{M_N + 1}\}$, where $M_N$ is the total number of paired returns. The
sampling points of asset $X$ and $Y$
are defined to be
$t_i = \max\{\tau \in \Gamma : \tau \le \phi_i\}$ and $s_i = \max\{\theta \in \Theta : \theta \le \phi_i\}$.

Given these refresh times, the covariance is computed as follows:

$c_N \left( [X, Y]_T^{(K)} - \frac{\bar{n}_K}{\bar{n}_J} [X, Y]_T^{(J)} \right),$

where

$[X, Y]_T^{(K)} = \frac{1}{K} \sum_{i=1}^{M_N - K + 1} \left(X_{t_{i+K}} - X_{t_i}\right)\left(Y_{s_{i+K}} - Y_{s_i}\right),$

and $c_N$ is a constant that adjusts for the finite-sample bias.
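To make the synchronization step concrete, a small sketch of refreshTime on two simulated, irregularly spaced price series (series lengths and time stamps are arbitrary):

library(xts)
set.seed(1)
start <- as.POSIXct("1970-01-01 09:30:00", tz = "UTC")
# Irregular transaction times drawn from exponential waiting times
price1 <- xts(100 + cumsum(rnorm(200, sd = 0.05)), start + cumsum(rexp(200, rate = 1/2)))
price2 <- xts(50 + cumsum(rnorm(150, sd = 0.05)), start + cumsum(rexp(150, rate = 1/3)))
# Both series sampled on the common refresh time grid
synced <- refreshTime(list(price1, price2))
head(synced)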
Unfortunately, the rTSCov is not always positive semidefinite.
By setting the argument makePsd = TRUE, the function makePsd
is used to return a positive semidefinite
matrix. This function replaces the negative eigenvalues with zeroes.
In case the input is multivariate and contains data from one day, an N by N matrix is returned. If the data is a univariate xts
object with multiple days, an xts
is returned.
If the data is multivariate and contains multiple days (xts
or data.table
), the function returns a list containing N by N matrices. Each item in the list has a name which corresponds to the date for the matrix.
Jonathan Cornelissen, Kris Boudt, and Emil Sjoerup.
Harris, F., McInish, T., Shoesmith, G., and Wood, R. (1995). Cointegration, error correction, and price discovery on informationally linked security markets. Journal of Financial and Quantitative Analysis, 30, 563-581.
Zhang, L., Mykland, P. A., and Ait-Sahalia, Y. (2005). A tale of two time scales: Determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association, 100, 1394-1411.
Zhang, L. (2011). Estimating covariation: Epps effect, microstructure noise. Journal of Econometrics, 160, 33-47.
ICov
for a list of implemented estimators of the integrated covariance.
# Robust Realized two timescales Variance/Covariance
# Multivariate:
## Not run:
library(xts)
set.seed(123)
start <- strptime("1970-01-01", format = "%Y-%m-%d", tz = "UTC")
timestamps <- start + seq(34200, 57600, length.out = 23401)
dat <- cbind(rnorm(23401) * sqrt(1/23401), rnorm(23401) * sqrt(1/23401))
dat <- exp(cumsum(xts(dat, timestamps)))
price1 <- dat[, 1]
price2 <- dat[, 2]
rcovts <- rTSCov(pData = list(price1, price2))  # Note: list of prices as input
rcovts
## End(Not run)
DEPRECATED USE rRVar
RV(rData)
rData |
DEPRECATED USE rRVar |
salesCondition is deprecated. Use tradesCondition instead.
salesCondition( tData, validConds = c("", "@", "E", "@E", "F", "FI", "@F", "@FI", "I", "@I") )
tData |
salesCondition is deprecated. Use tradesCondition instead. |
validConds |
salesCondition is deprecated. Use tradesCondition instead. |
Cleaned tick-by-tick data for a sector ETF, called ETF, and two stock components of that ETF; these stocks are named AAA and BBB.
sampleMultiTradeData
A data.table
object
One-minute price data of one stock and a market proxy. This is data from the US market.
sampleOneMinuteData
A data.table
object
A data.table
object containing the quotes for the pseudonymized stock XXX for 2 days. This is the cleaned version of the data sample sampleQDataRaw
, using quotesCleanup
.
sampleQData
data.table object
## Not run:
# The code to create the sampleQData dataset from raw data is
sampleQData <- quotesCleanup(qDataRaw = sampleQDataRaw, exchanges = "N",
                             type = "standard", report = FALSE)
## End(Not run)
A data.table
object containing the raw quotes for the pseudonymized stock XXX for 2 days, in the typical NYSE TAQ database format.
sampleQDataRaw
data.table object
A data.table
object containing the trades for the pseudonymized stock XXX for 2 days, in the typical NYSE TAQ database format.
This is the cleaned version of the data sample sampleTDataRaw
, using tradesCleanupUsingQuotes
.
sampleTData
A data.table object.
## Not run:
# The code to create the sampleTData dataset from raw data is
sampleQData <- quotesCleanup(qDataRaw = sampleQDataRaw, exchanges = "N",
                             type = "standard", report = FALSE)
tradesAfterFirstCleaning <- tradesCleanup(tDataRaw = sampleTDataRaw,
                                          exchanges = "N", report = FALSE)
sampleTData <- tradesCleanupUsingQuotes(
  tData = tradesAfterFirstCleaning,
  qData = sampleQData,
  lagQuotes = 0)[, c("DT", "EX", "SYMBOL", "PRICE", "SIZE")]
# Only some columns are included. These are the ones that were historically included.
# For most applications, we recommend aggregating the data at a high frequency,
# for example, every second.
aggregated <- aggregatePrice(sampleTData[, list(DT, PRICE)],
                             alignBy = "seconds", alignPeriod = 1)
acf(diff(aggregated[as.Date(DT) == "2018-01-02", PRICE]))
acf(diff(aggregated[as.Date(DT) == "2018-01-03", PRICE]))
signature <- function(x, q) {
  res <- x[, (rCov(diff(log(PRICE), lag = q, differences = 1)) / q), by = as.Date(DT)]
  return(res[[2]])
}
rvAgg <- matrix(nrow = 100, ncol = 2)
for (i in 1:100) rvAgg[i, ] <- signature(aggregated, i)
plot(rvAgg[, 1], type = "l")
plot(rvAgg[, 2], type = "l")
## End(Not run)
Trade data of one stock on one day in the European stock market.
sampleTDataEurope
A data.table object
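For a quick look at this dataset, it can be aggregated to a coarser grid with aggregatePrice; a minimal sketch, using the European market hours that also appear in the spotVol examples later in this manual:
euro5min <- aggregatePrice(sampleTDataEurope[, list(DT, PRICE)],
                           alignBy = "minutes", alignPeriod = 5,
                           marketOpen = "09:00:00", marketClose = "17:30:00")
head(euro5min)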
An imaginary data.table object containing the raw trades for the pseudonymized stock XXX for 2 days, in the typical NYSE TAQ database format.
sampleTDataRaw
A data.table object
Filter raw trade data to only contain specified exchanges
selectExchange(data, exch = "N")
selectExchange(data, exch = "N")
data |
an |
exch |
The (vector of) symbol(s) of the stock exchange(s) that should be selected.
By default the NYSE is chosen (
|
xts
or data.table
object depending on input.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
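A minimal usage sketch, keeping only NYSE trades from the raw sample data documented above:
# Keep only trades recorded on the NYSE ("N")
nyseTrades <- selectExchange(sampleTDataRaw, exch = "N")
head(nyseTrades)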
Function used to estimate the spot drift of intraday (tick) stock prices/returns.
spotDrift( data, method = "mean", alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = NULL, ... )
data |
Can be one of two input types, xts or data.table. |
method |
Which method to be used to estimate the spot drift. Currently, three methods are available: rolling mean and median, as well as the kernel method of Christensen et al. (2020). The kernel is a left-sided exponential kernel that weighs newer observations more heavily than older observations. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed. Possible values are: "secs", "seconds", "mins", "minutes", "hours". |
alignPeriod |
How often should the estimation take place? If alignPeriod = 5, the estimation is carried out every fifth unit of alignBy. |
marketOpen |
Opening time of the market, standard is "09:30:00". |
marketClose |
Closing time of the market, standard is "16:00:00". |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: NULL. |
... |
Additional arguments for the individual methods. See ‘Details’. |
The additional arguments for the mean and median methods are:
periods
for the rolling window length, which is 5 by default.
align
controls the alignment. The default is "right".
For the kernel mean estimator, the argument meanBandwidth can be used to control the bandwidth of the drift estimator and the preAverage argument can be used to control the pre-averaging horizon. These arguments default to 300 and 5, respectively.
The following estimation methods can be specified in method:
Rolling window mean ("mean")
Estimates the spot drift by applying a rolling mean over returns,
\hat{\mu}_{t} = \frac{1}{k} \sum_{i=0}^{k-1} r_{t-i},
where k is the argument periods.
Parameters:
periods
how big the window for the estimation should be. The estimator will have periods NAs at the beginning of each trading day.
align
alignment method for returns. Defaults to "right", which includes only past data; the other choices, "center" and "left", are available.
Warning: These choices include future data.
Outputs:
mu
a matrix containing the spot drift estimates
Rolling window median ("median")
Estimates the spot drift by applying a rolling median over returns,
\hat{\mu}_{t} = \mathrm{median}(r_{t-k+1}, \dots, r_{t}),
where k is the argument periods.
Parameters:
periods
How big the window for the estimation should be. The estimator will have periods NAs at the beginning of each trading day.
align
Alignment method for returns. Defaults to "right", which includes only past data; the other choices, "center" and "left", are available.
These choices include FUTURE DATA, so beware!
Outputs:
mu
a matrix containing the spot drift estimates
Kernel spot drift estimator ("driftKernel")
The efficient log-price is assumed to follow the jump-diffusion
dX_{t} = \mu_{t} \, dt + \sigma_{t} \, dW_{t} + dJ_{t},
where \mu_{t}, \sigma_{t}, and J_{t} are the spot drift, the spot volatility, and a jump process, respectively. However, due to microstructure noise, the observed log-price is
Y_{t} = X_{t} + \varepsilon_{t}.
In order to robustify the results to the presence of market microstructure noise, the pre-averaged returns are used:
\bar{\Delta}^{n}_{i} Y = \sum_{j=1}^{k_{n}-1} g\!\left(\frac{j}{k_{n}}\right) \Delta^{n}_{i+j} Y,
where g(x) = \min(x, 1-x) is a weighting function and k_{n} is the pre-averaging horizon. The spot drift estimator is then a left-sided exponential-kernel weighted average of these pre-averaged returns; see Christensen et al. (2020) for the exact definition.
The kernel estimation method has the following parameters:
preAverage
a positive integer denoting the length of the pre-averaging window for the log-prices. Default is 5.
meanBandwidth
an integer denoting the bandwidth for the left-sided exponential kernel for the mean. Default is 300L.
Outputs:
mu
a matrix containing the spot drift estimates
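For intuition only: the rolling mean and median estimators above correspond to applying right-aligned rolling statistics to the intraday returns. A minimal stand-alone sketch with hypothetical prices (not the package's internal implementation):
library(zoo)
# Hypothetical equispaced intraday log-returns
rets <- diff(log(c(100, 100.02, 99.98, 100.05, 100.10, 100.07, 100.12)))
# Rolling mean over k = 5 past returns; align = "right" uses only past data
rollapply(rets, width = 5, FUN = mean, align = "right", fill = NA)
# Rolling median analogue
rollapply(rets, width = 5, FUN = median, align = "right", fill = NA)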
An object of class "spotDrift"
containing at least the estimated spot drift process.
Input on what this class should contain and methods for it is welcome.
Emil Sjoerup.
Christensen, K., Oomen, R., and Reno, R. (2020). The drift burst hypothesis. Journal of Econometrics, forthcoming.
# Example 1: Rolling mean and median estimators for 2 days
meandrift <- spotDrift(data = sampleTData, alignPeriod = 1)
mediandrift <- spotDrift(data = sampleTData, method = "median",
                         alignBy = "seconds", alignPeriod = 30, tz = "EST")
plot(meandrift)
plot(mediandrift)
## Not run: 
# Example 2: Kernel based estimator for one day with data.table format
price <- sampleTData[as.Date(DT) == "2018-01-02", list(DT, PRICE)]
kerneldrift <- spotDrift(sampleTDataEurope, method = "driftKernel",
                         alignBy = "minutes", alignPeriod = 1)
plot(kerneldrift)
## End(Not run)
Estimates a wide variety of spot volatility estimators.
spotVol( data, method = "detPer", alignBy = "minutes", alignPeriod = 5, marketOpen = "09:30:00", marketClose = "16:00:00", tz = "GMT", ... )
data |
Can be one of two input types, xts or data.table. |
method |
specifies which method will be used to estimate the spot volatility. Valid options are "detPer", "stochPer", "kernel", "piecewise", "garch", "RM", and "PARM"; see ‘Details’ below. |
alignBy |
character, indicating the time scale in which alignPeriod is expressed. Possible values are: "secs", "seconds", "mins", "minutes", "hours". |
alignPeriod |
positive integer, indicating the number of periods to aggregate over. For example, to aggregate an xts object to the 5-minute frequency, set alignPeriod = 5 and alignBy = "minutes". |
marketOpen |
the market opening time. This should be in the time zone specified by tz. By default, marketOpen = "09:30:00". |
marketClose |
the market closing time. This should be in the time zone specified by tz. By default, marketClose = "16:00:00". |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: "GMT". |
... |
method-specific parameters (see ‘Details’ below). |
The following estimation methods can be specified in method:
Deterministic periodicity method ("detPer")
Parameters:
dailyVol
A string specifying the estimation method for the daily volatility component s_t. Possible values are "rBPCov", "rRVar", "rMedRVar". "rBPCov" by default.
periodicVol
A string specifying the estimation method for the component f_i of intraday volatility, which depends in a deterministic way on the intraday time at which the return is observed. Possible values are "SD", "WSD", "TML", "OLS". See Boudt et al. (2011) for details. Default = "TML".
P1
A positive integer corresponding to the number of cosine terms used in the flexible Fourier specification of the periodicity function, see Andersen and Bollerslev (1997) for details. Default = 5.
P2
Same as P1, but for the sine terms. Default = 5.
dummies
Boolean: in case it is TRUE, the parametric estimator of the periodic standard deviation specifies the periodicity function as the sum of dummy variables corresponding to each intraday period. If it is FALSE, the parametric estimator uses the flexible Fourier specification. Default is FALSE.
Outputs (see ‘Value’ for a full description of each component):
spot
daily
periodic
Let there be T days of N equally-spaced log-returns r_{t,i}, i = 1, \dots, N and t = 1, \dots, T. In case of method = "detPer", the returns are modeled as
r_{t,i} = f_{i} s_{t} u_{t,i}
with independent u_{t,i} \sim \mathcal{N}(0,1). The spot volatility is decomposed into a deterministic periodic factor f_{i} (identical for every day in the sample) and a daily factor s_{t} (identical for all observations within a day). Both components are then estimated separately, see Taylor and Xu (1997) and Andersen and Bollerslev (1997). The jump robust versions by Boudt et al. (2011) have also been implemented.
If periodicVol = "SD"
, we have
with , cross-daily averages
,
and
being the length of the intraday time intervals.
If periodicVol = "WSD"
, we have another nonparametric estimator that is robust to jumps in contrast to
periodicVol = "SD"
. The definition of this estimator can be found in Boudt et al. (2011, Eqs. 2.9-2.12).
The estimates when periodicVol = "OLS" and periodicVol = "TML" are based on a regression of the log absolute (standardized) returns on the flexible Fourier specification of log f_{i}, with an i.i.d. zero-mean error term. periodicVol = "OLS" employs ordinary-least-squares estimation and periodicVol = "TML" truncated maximum-likelihood estimation (see Boudt et al., 2011, Section 2.2, for further details).
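Since the default method = "detPer" is not shown in the examples at the end of this entry, here is a minimal sketch using the same one-minute sample as the other examples (all method-specific arguments left at their documented defaults):
vol0 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)], method = "detPer")
plot(vol0)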
Stochastic periodicity method ("stochPer")
Parameters:
P1
A positive integer corresponding to the number of cosine terms used in the flexible Fourier specification of the periodicity function. Default = 5.
P2
Same as P1, but for the sine terms. Default = 5.
init
A named list of initial values to be used in the optimization routine ("BFGS" in optim). Default = list(sigma = 0.03, sigma_mu = 0.005, sigma_h = 0.005, sigma_k = 0.05, phi = 0.2, rho = 0.98, mu = c(2, -0.5), delta_c = rep(0, max(1,P1)), delta_s = rep(0, max(1,P2))). The naming of the parameters follows Beltratti and Morana (2001), the corresponding model equations are listed below. init can contain any number of these parameters. For parameters not specified in init, the default initial value will be used.
control
A list of options to be passed down to optim.
Outputs (see ‘Value’ for a full description of each component):
spot
par
This method by Beltratti and Morana (2001) assumes the periodicity factor to be stochastic. The spot volatility estimation is split into four components: a random walk, an autoregressive process, a stochastic cyclical process and a deterministic cyclical process. The model is estimated using a quasi-maximum likelihood method based on the Kalman Filter. The package FKF is used to apply the Kalman filter. In addition to the spot volatility estimates, all parameter estimates are returned.
The model for the intraday change in the return series is given by
r_{t,n} = \sigma_{t,n} \varepsilon_{t,n},
where \sigma_{t,n} is the conditional standard deviation of the n-th interval of day t and \varepsilon_{t,n} is an i.i.d. mean-zero unit-variance process. The conditional standard deviations are modeled as
\sigma_{t,n} = \sigma \exp\!\left(\frac{\mu_{t,n} + h_{t,n} + c_{t,n}}{2}\right)
with \sigma being a scaling factor and \mu_{t,n} the non-stationary volatility component
\mu_{t,n} = \mu_{t,n-1} + \xi_{t,n}
with independent \xi_{t,n} \sim \mathcal{N}(0, \sigma_{\mu}^{2}). h_{t,n} is the stochastic stationary acyclical volatility component
h_{t,n} = \phi h_{t,n-1} + \nu_{t,n}
with independent \nu_{t,n} \sim \mathcal{N}(0, \sigma_{h}^{2}) and |\phi| \leq 1.
The cyclical component c_{t,n} is separated in two components. The first component is written in state-space form, with its two innovation processes being mutually independent zero-mean normal random variables with variance \sigma_{k}^{2}. All other parameters and processes in the state-space representation are only of instrumental use and are not part of the return value, which is why we won't introduce them in detail in this vignette; see Beltratti and Morana (2001, pp. 208-209) for more information. The second component is a deterministic flexible Fourier form with coefficients mu, delta_c, and delta_s; see Beltratti and Morana (2001) for its exact specification.
Nonparametric filtering ("kernel")
Parameters:
type
String specifying the type of kernel to be used. Options include "gaussian", "epanechnikov", "beta". Default = "gaussian".
h
Scalar or vector specifying bandwidth(s) to be used in the kernel. If h is a scalar, it will be assumed equal throughout the sample. If it is a vector, it should contain bandwidths for each day. If left empty, it will be estimated. Default = NULL.
est
String specifying the bandwidth estimation method. Possible values include "cv", "quarticity". Method "cv" equals cross-validation, which chooses the bandwidth that minimizes the Integrated Square Error. "quarticity" multiplies the simple plug-in estimator by a factor based on the daily quarticity of the returns. est is obsolete if h has already been specified by the user. "cv" by default.
lower
Lower bound to be used in the bandwidth optimization routine, when using the cross-validation method. Default is described in ‘Details’ below.
upper
Upper bound to be used in the bandwidth optimization routine, when using the cross-validation method. Default is described in ‘Details’ below.
Outputs (see ‘Value’ for a full description of each component):
spot
par
This method by Kristensen (2010) filters the spot volatility in a nonparametric way by applying kernel weights to the standard realized volatility estimator. Different kernels and bandwidths can be used to focus on specific characteristics of the volatility process.
Estimation results heavily depend on the bandwidth parameter h, so it is important that this parameter is well chosen. However, it is difficult to come up with a method that determines the optimal bandwidth for any kind of data or kernel that can be used. Although some estimation methods are provided, it is advised that you specify h yourself, or make sure that the estimation results are appropriate.
One way to estimate h is by using cross-validation. For each day in the sample, h is chosen so as to minimize the Integrated Square Error, which is a function of h. However, this function often has multiple local minima, or no minima at all (h \to \infty). To ensure a reasonable optimum is reached, strict boundaries have to be imposed on h. These can be specified by lower and upper, which by default are 0.1 n^{-0.2} and n^{-0.2} respectively, where n is the number of observations in a day.
When using the method "kernel", in addition to the spot volatility estimates, all used values of the bandwidth are returned.
A formal definition of the estimator is too extensive for the context of this vignette. Please refer to Kristensen (2010) for more detailed information. Our parameter names are aligned with this reference.
Piecewise constant volatility ("piecewise")
Parameters:
type
string specifying the type of test to be used. Options include "MDa", "MDb", "DM". See Fried (2012) for details. Default = "MDa".
m
number of observations to include in the reference window. Default = 40.
n
number of observations to include in the test window. Default = 20.
alpha
significance level to be used in the tests. Note that the test will be executed many times (roughly equal to the total number of observations), so it is advised to use a small value for alpha, to avoid a lot of false positives. Default = 0.005.
volEst
string specifying the realized volatility estimator to be used in the local windows. Possible values are "rBPCov", "rRVar", "rMedRVar". Default = "rBPCov".
online
boolean indicating whether estimations at a certain point t should be done online (using only information available at time t), or ex post (using all observations between two change points). Default = TRUE.
Outputs (see ‘Value’ for a full description of each component):
spot
cp
This nonparametric method by Fried (2012) is a two-step approach and assumes the volatility to be piecewise constant over local windows. Robust two-sample tests are applied to detect changes in variability between subsequent windows. The spot volatility can then be estimated by evaluating regular realized volatility estimators within each local window.
"MDa", "MDb" refer to different test statistics, see Section 2.2 in Fried (2012).
Along with the spot volatility estimates, this method will return the detected change points in the volatility level. When plotting a spotVol object containing cp, these change points will be visualized.
GARCH models with intraday seasonality ("garch")
Parameters:
model
string specifying the type of GARCH model to be used. Options include "sGARCH", "eGARCH". See ugarchspec in the rugarch package. Default = "eGARCH".
garchorder
numeric value of length 2, containing the order of the GARCH model to be estimated. Default = c(1,1).
dist
string specifying the distribution to be assumed on the innovations. See distribution.model in ugarchspec for possible options. Default = "norm".
solver.control
list containing solver options. See ugarchfit for possible values. Default = list().
P1
a positive integer corresponding to the number of cosine terms used in the flexible Fourier specification of the periodicity function. Default = 5.
P2
same as P1, but for the sine terms. Default = 5.
Outputs (see ‘Value’ for a full description of each component):
spot
ugarchfit
Along with the spot volatility estimates, this method will return the ugarchfit object used by the rugarch package.
In this model, intraday returns are modeled as the product of an intraday seasonality factor f_{i} > 0, a conditional volatility component, and a zero-mean unit-variance error term u_{t,i}. The overall approach is as in Appendix B of Andersen and Bollerslev (1997). This method generates the external regressors needed to model the intraday seasonality with a flexible Fourier form (Andersen and Bollerslev, 1997, Eqs. A.1-A.4). The rugarch package is then employed to estimate the specified intraday GARCH(1,1) model on the deseasonalized residuals.
Realized Measures ("RM"
)
This estimator takes trailing rolling window observations of intraday returns to estimate the spot volatility.
Parameters:
RM
string denoting which realized measure to use to estimate the local volatility.
Possible values are: "rBPCov", "rMedRVar", "rMinRVar", "rCov", "rRVar"
.
Default = "rBPCov"
.
lookBackPeriod
positive integer denoting the amount of sub-sampled returns to use
for the estimation of the local volatility. Default is 10
.
dontIncludeLast
logical indicating whether to omit the last return in the calculation of the local volatility.
This is done in Lee-Mykland (2008) to produce jump-robust estimates of spot volatility.
Setting this to TRUE
will then use lookBackPeriod - 1
returns in the construction of the realized measures. Default = FALSE
.
Outputs (see ‘Value’ for a full description of each component):
spot
RM
lookBackPeriod
This method returns the estimates of the spot volatility, a string containing the realized measure used, and the lookBackPeriod.
(Non-overlapping) Pre-Averaged Realized Measures ("PARM"
)
This estimator takes rolling historical window observations of intraday returns to estimate the spot volatility
as in the option "RM"
but adds return pre-averaging of the realized measures.
For a description of return pre-averaging see the details on spotDrift.
Parameters:
RM
String denoting which realized measure to use to estimate the local volatility.
Possible values are: "rBPCov", "rMedRVar", "rMinRVar", "rCov", and "rRVar"
. Default = "rBPCov"
.
lookBackPeriod
positive integer denoting the amount of sub-sampled returns to use for the estimation of the local volatility. Default = 50.
Outputs (see ‘Value’ for a full description of each component):
spot
RM
lookBackPeriod
kn
A spotVol object, which is a list containing one or more of the following outputs, depending on the method used:
spot
An xts or matrix object (depending on the input) containing spot volatility estimates \sigma_{t,i}, reported for each interval i between marketOpen and marketClose for every day t in data. The length of the intervals is specified by alignPeriod and alignBy. Methods that provide this output: All.
daily
An xts or numeric object (depending on the input) containing estimates of the daily volatility levels for each day t in data, if the used method decomposed spot volatility into a daily and an intraday component. Methods that provide this output: "detPer".
periodic
An xts or numeric object (depending on the input) containing estimates of the intraday periodicity factor for each day interval i between marketOpen and marketClose, if the spot volatility was decomposed into a daily and an intraday component. If the output is in xts format, this periodicity factor will be dated to the first day of the input data, but it is identical for each day in the sample. Methods that provide this output: "detPer".
par
A named list containing parameter estimates, for methods that estimate one or more parameters. Methods that provide this output: "stochPer", "kernel".
cp
A vector containing the change points in the volatility, i.e. the observation indices after which the volatility level changed, according to the applied tests. The vector starts with a 0. Methods that provide this output: "piecewise".
ugarchfit
A ugarchfit object, as used by the rugarch package, containing all output from fitting the GARCH model to the data. Methods that provide this output: "garch".
The spotVol function offers several methods to estimate spot volatility and its intraday seasonality, using high-frequency data. It returns an object of class spotVol, which can contain various outputs, depending on the method used. See ‘Details’ for a description of each method. In any case, the output will contain the spot volatility estimates.
The input can consist of price data or return data, either tick by tick or sampled at set intervals. The data will be converted to equispaced high-frequency returns r_{t,i} (read: the i-th return on day t).
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Andersen, T. G. and Bollerslev, T. (1997). Intraday periodicity and volatility persistence in financial markets. Journal of Empirical Finance, 4, 115-158.
Beltratti, A. and Morana, C. (2001). Deterministic and stochastic methods for estimation of intraday seasonal components with high frequency data. Economic Notes, 30, 205-234.
Boudt K., Croux C., and Laurent S. (2011). Robust estimation of intraweek periodicity in volatility and jump detection. Journal of Empirical Finance, 18, 353-367.
Fried, R. (2012). On the online estimation of local constant volatilities. Computational Statistics and Data Analysis, 56, 3080-3090.
Kristensen, D. (2010). Nonparametric filtering of the realized spot volatility: A kernel-based approach. Econometric Theory, 26, 60-93.
Taylor, S. J. and Xu, X. (1997). The incremental volatility information in one million foreign exchange quotations. Journal of Empirical Finance, 4, 317-340.
## Not run: 
init <- list(sigma = 0.03, sigma_mu = 0.005, sigma_h = 0.007,
             sigma_k = 0.06, phi = 0.194, rho = 0.986, mu = c(1.87, -0.42),
             delta_c = c(0.25, -0.05, -0.2, 0.13, 0.02),
             delta_s = c(-1.2, 0.11, 0.26, -0.03, 0.08))
# Next method will take around 370 iterations
vol1 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "stochPer", init = init)
plot(vol1$spot[1:780])
legend("topright", c("stochPer"), col = c("black"), lty = 1)
## End(Not run)
# Various kernel estimates
## Not run: 
h1 <- bw.nrd0((1:nrow(sampleOneMinuteData[, list(DT, PRICE = MARKET)])) * 60)
vol2 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "kernel", h = h1)
vol3 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "kernel", est = "quarticity")
vol4 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "kernel", est = "cv")
plot(cbind(vol2$spot, vol3$spot, vol4$spot))
xts::addLegend("topright", c("h = simple estimate", "h = quarticity corrected",
                             "h = crossvalidated"), col = 1:3, lty = 1)
## End(Not run)
# Piecewise constant volatility
## Not run: 
vol5 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "piecewise", m = 200, n = 100, online = FALSE)
plot(vol5)
## End(Not run)
# Compare regular GARCH(1,1) model to eGARCH, both with external regressors
## Not run: 
vol6 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "garch", model = "sGARCH")
vol7 <- spotVol(sampleOneMinuteData[, list(DT, PRICE = MARKET)],
                method = "garch", model = "eGARCH")
plot(as.numeric(t(vol6$spot)), type = "l")
lines(as.numeric(t(vol7$spot)), col = "red")
legend("topleft", c("GARCH", "eGARCH"), col = c("black", "red"), lty = 1)
## End(Not run)
## Not run: 
# Compare realized measure spot vol estimation to pre-averaged version
vol8 <- spotVol(sampleTDataEurope[, list(DT, PRICE)], method = "RM",
                marketOpen = "09:00:00", marketClose = "17:30:00", tz = "UTC",
                alignPeriod = 1, alignBy = "mins", lookBackPeriod = 10)
vol9 <- spotVol(sampleTDataEurope[, list(DT, PRICE)], method = "PARM",
                marketOpen = "09:00:00", marketClose = "17:30:00", tz = "UTC",
                lookBackPeriod = 10)
plot(zoo::na.locf(cbind(vol8$spot, vol9$spot)))
## End(Not run)
Convenience function to split data from one xts or data.table with at least "DT", "SYMBOL", and "PRICE" columns to a format that can be used in the functions for calculation of realized measures. This is the opposite of gatherPrices.
spreadPrices(data)
data |
An xts or data.table object with at least "DT", "SYMBOL", and "PRICE" columns. |
An xts or a data.table object with a column "DT" and a column named after each unique entry in the "SYMBOL" column of the input. These columns contain the price of the associated symbol. We drop all other columns, e.g., SIZE.
Emil Sjoerup.
## Not run: 
library(data.table)
data1 <- copy(sampleTData)[, `:=`(PRICE = PRICE * runif(.N, min = 0.99, max = 1.01),
                                  DT = DT + runif(.N, 0.01, 0.02))]
data2 <- copy(sampleTData)[, SYMBOL := 'XYZ']
dat <- rbind(data1, data2)
setkey(dat, "DT")
dat <- spreadPrices(dat)
rCov(dat, alignBy = 'minutes', alignPeriod = 5, makeReturns = TRUE, cor = TRUE)
## End(Not run)
Realized measures for the SPY ETF calculated at 1- and 5-minute sampling frequencies.
SPYRM
A data.table object
The CLOSE column is NOT the official close price, but simply the last recorded price of the day. Thus, this may be slightly different from other sources.
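A quick inspection sketch; the RV5 column (5-minute realized variance) is assumed here, as in the HAR examples elsewhere in this manual:
str(SPYRM)
plot(SPYRM$RV5, type = "l")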
Summary for HARmodel objects
## S3 method for class 'HARmodel' summary(object, ...)
object |
An object of class HARmodel. |
... |
pass lag to determine the maximum order of the Newey-West estimator. |
A modified summary.lm object.
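A minimal sketch of fitting and summarizing a HAR model on the SPYRM realized measures documented above (the RV5 column and the HARmodel defaults are assumed):
library(xts)
rv <- xts(SPYRM$RV5, order.by = SPYRM$DT)
mod <- HARmodel(data = rv, periods = c(1, 5, 22), type = "HAR")
summary(mod)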
This is a wrapper function for cleaning the trade data of all stocks inside the folder dataSource. The result is saved in the folder dataDestination.
In case you supply the argument tDataRaw, the on-disk functionality is ignored and the function returns the cleaned trades as an xts or data.table object, together with a vector indicating how many trades were removed at each cleaning step.
The following cleaning functions are performed sequentially: noZeroPrices, autoSelectExchangeTrades or selectExchange, tradesCondition, and mergeTradesSameTimestamp.
Since the function rmTradeOutliersUsingQuotes also requires cleaned quote data as input, it is not incorporated here and there is a separate wrapper called tradesCleanupUsingQuotes.
tradesCleanup( dataSource = NULL, dataDestination = NULL, exchanges = "auto", tDataRaw = NULL, report = TRUE, selection = "median", validConds = c("", "@", "E", "@E", "F", "FI", "@F", "@FI", "I", "@I"), marketOpen = "09:30:00", marketClose = "16:00:00", printExchange = TRUE, saveAsXTS = FALSE, tz = NULL )
dataSource |
character indicating the folder in which the original data is stored. |
dataDestination |
character indicating the folder in which the cleaned data is stored. |
exchanges |
vector of stock exchange symbols for all data in dataSource. The default value is "auto", in which case the exchange with the most data is selected automatically via autoSelectExchangeTrades. |
tDataRaw |
xts or data.table object containing raw trade data. NULL by default; in that case, the on-disk functionality is used via dataSource and dataDestination. |
report |
boolean and TRUE by default. In case it is TRUE, the function returns a vector indicating how many trades were removed at each cleaning step. |
selection |
argument to be passed on to the cleaning routine mergeTradesSameTimestamp. The default is "median". |
validConds |
character vector containing valid sales conditions. Passed through to tradesCondition. |
marketOpen |
character in the format of "HH:MM:SS", specifying the opening time of the exchange(s). |
marketClose |
character in the format of "HH:MM:SS", specifying the closing time of the exchange(s). |
printExchange |
Argument passed to autoSelectExchangeTrades, indicating whether the chosen exchange is printed on the console. |
saveAsXTS |
indicates whether the data should be saved in xts format instead of data.table when using the on-disk functionality; FALSE by default. |
tz |
fallback time zone used in case we are unable to identify the timezone of the data, by default: NULL. |
Using the on-disk functionality with .csv.zip files from the WRDS database will write temporary files on your machine in order to unzip the files - we try to clean up after it, but cannot guarantee that there won't be files that slip through the cracks if the permission settings on your machine do not match ours.
If the input data.table does not contain a DT column but it does contain DATE and TIME_M columns, we create the DT column by REFERENCE, altering the data.table that may be in the user's environment.
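If you want to keep your raw table untouched, pass a copy; a minimal sketch:
library(data.table)
# copy() prevents the DT column from being added by reference to the original table
tDataRawCopy <- copy(sampleTDataRaw)
cleaned <- tradesCleanup(tDataRaw = tDataRawCopy, exchanges = "N", report = FALSE)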
For each day an xts or data.table object is saved into the folder of that date, containing the cleaned data. This procedure is performed for each stock in "ticker". The function returns a vector indicating how many trades remained after each cleaning step.
In case you supply the argument tDataRaw, the on-disk functionality is ignored and the function returns a list with the cleaned trades as an xts or data.table object (see examples).
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., and Shephard, N. (2009). Realized kernels in practice: Trades and quotes. Econometrics Journal, 12, C1-C32.
Brownlees, C.T. and Gallo, G.M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, 2232-2245.
# Consider you have raw trade data for 1 stock for 2 days
head(sampleTDataRaw)
dim(sampleTDataRaw)
tDataAfterFirstCleaning <- tradesCleanup(tDataRaw = sampleTDataRaw,
                                         exchanges = list("N"))
tDataAfterFirstCleaning$report
dim(tDataAfterFirstCleaning$tData)
# In case you have more data it is advised to use the on-disk functionality
# via "dataSource" and "dataDestination" arguments
Function performs the cleaning procedure rmTradeOutliersUsingQuotes for the trade data of all stocks in "dataDestination". Note that preferably the input data for this function is trade and quote data cleaned by, respectively, e.g. tradesCleanup and quotesCleanup.
tradesCleanupUsingQuotes( tradeDataSource = NULL, quoteDataSource = NULL, dataDestination = NULL, tData = NULL, qData = NULL, lagQuotes = 0, nSpreads = 1, BFM = FALSE, backwardsWindow = 3600, forwardsWindow = 0.5, plot = FALSE )
tradeDataSource |
character indicating the folder in which the original trade data is stored. |
quoteDataSource |
character indicating the folder in which the original quote data is stored. |
dataDestination |
character indicating the folder in which the cleaned data is stored; the folder of tradeDataSource by default. |
tData |
xts or data.table object containing (cleaned) trade data. NULL by default; in that case, the on-disk functionality is used. |
qData |
xts or data.table object containing (cleaned) quote data. NULL by default; in that case, the on-disk functionality is used. |
lagQuotes |
numeric, number of seconds the quotes are registered faster than the trades (should be round and positive). Default is 0. For older datasets, i.e. before 2010, it may be a good idea to set this to, e.g., 2 (see Vergote, 2005). |
nSpreads |
numeric of length 1 that denotes how far above the offer and below the bid we allow outliers to be. Trades are filtered out if they are MORE THAN nSpread * spread above (below) the offer (bid). |
BFM |
a logical determining whether to conduct "Backwards - Forwards matching" of trades and quotes. The algorithm tries to match trades that fall outside the bid - ask spread: it first tries to match within a small window forwards and, if this fails, it tries to match backwards in a bigger window. The small window is a tolerance for inaccuracies in the timestamps of bids and asks. The backwards window allows for matching of late reported trades, i.e. block trades. |
backwardsWindow |
a numeric denoting the length of the backwards window used when BFM = TRUE. Default is 3600, i.e. one hour. |
forwardsWindow |
a numeric denoting the length of the forwards window used when BFM = TRUE. Default is 0.5, i.e. half a second. |
plot |
a logical denoting whether to visualize the forwards, backwards, and unmatched trades in a plot. Passed on to rmTradeOutliersUsingQuotes. |
In case you supply the arguments tData and qData, the on-disk functionality is ignored and the function returns cleaned trades as a data.table or xts object (see examples).
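For illustration, a sketch of backwards-forwards matching on the sample data; the window lengths are simply the documented defaults:
## Not run: 
tData <- tradesCleanup(tDataRaw = sampleTDataRaw, exchanges = "N", report = FALSE)
qData <- quotesCleanup(qDataRaw = sampleQDataRaw, exchanges = "N", report = FALSE)
tDataBFM <- tradesCleanupUsingQuotes(tData = tData, qData = qData, BFM = TRUE,
                                     backwardsWindow = 3600, forwardsWindow = 0.5)
## End(Not run)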
When using the on-disk functionality and tradeDataSource and quoteDataSource are the same, the quote files are all files in the folder that contains 'quote', and the rest are treated as containing trade data.
For each day an xts object is saved into the folder of that date, containing the cleaned data.
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., and Shephard, N. (2009). Realized kernels in practice: Trades and quotes. Econometrics Journal, 12, C1-C32.
Brownlees, C.T., and Gallo, G.M. (2006). Financial econometric analysis at ultra-high frequency: Data handling concerns. Computational Statistics & Data Analysis, 51, 2232-2245.
Christensen, K., Oomen, R. C. A., and Podolskij, M. (2014). Fact or friction: Jumps at ultra high frequency. Journal of Financial Economics, 114, 576-599.
# Consider you have raw trade data for 1 stock for 2 days
## Not run: 
tDataAfterFirstCleaning <- tradesCleanup(tDataRaw = sampleTDataRaw,
                                         exchanges = "N", report = FALSE)
qData <- quotesCleanup(qDataRaw = sampleQDataRaw,
                       exchanges = "N", report = FALSE)
dim(tDataAfterFirstCleaning)
tDataAfterFinalCleaning <- tradesCleanupUsingQuotes(
  qData = qData[as.Date(DT) == "2018-01-02"],
  tData = tDataAfterFirstCleaning[as.Date(DT) == "2018-01-02"])
dim(tDataAfterFinalCleaning)
## End(Not run)
# In case you have more data it is advised to use the on-disk functionality
# via the "tradeDataSource", "quoteDataSource", and "dataDestination" arguments
Delete entries with abnormal trade conditions
tradesCondition( tData, validConds = c("", "@", "E", "@E", "F", "FI", "@F", "@FI", "I", "@I") )
tData |
an xts or data.table object containing the time series data, with at least a column "COND" indicating the sales condition. |
validConds |
a character vector containing valid sales conditions; defaults to c("", "@", "E", "@E", "F", "FI", "@F", "@FI", "I", "@I"). |
To get more information on the sales conditions, see the NYSE documentation (section about the Daily TAQ Trades File). The current version (as of May 2020) can be found online at NYSE's webpage.
An xts or data.table object depending on input.
Some CSV readers and the WRDS API parse empty strings as NAs. We transform NA values in COND to "".
Jonathan Cornelissen, Kris Boudt, Onno Kleen, and Emil Sjoerup.
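A minimal usage sketch on the raw sample data documented earlier in this manual:
# Keep only trades whose sales condition is in the default validConds set
tDataValid <- tradesCondition(tData = sampleTDataRaw)
dim(sampleTDataRaw)
dim(tDataValid)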