A Justification of Conditional Confidence Intervals

Eric Beutner, Alexander Heinemann, Stephan Smeekes

Research output: Working paper › Professional


Abstract

To quantify uncertainty around point estimates of conditional objects such as conditional means or variances, parameter uncertainty has to be taken into account. Attempts to incorporate parameter uncertainty are typically based on the unrealistic assumption of observing two independent processes, where one is used for parameter estimation and the other for conditioning upon. Such an unrealistic foundation raises the question of whether these intervals are theoretically justified in a realistic setting. This paper presents an asymptotic justification for this type of interval that does not require such an unrealistic assumption, but instead relies on a sample-split approach. By showing that our sample-split intervals coincide asymptotically with the standard intervals, we provide a novel, and realistic, justification for confidence intervals of conditional objects. The analysis is carried out for a general class of Markov chains nesting various time series models.
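To illustrate the idea of the sample-split construction described above, the following is a minimal sketch (not the paper's exact procedure): for an AR(1) process, the first part of the sample is used only for parameter estimation, while the conditional mean is formed by conditioning on the last observation of the second part. All numerical choices (the true coefficient, sample size, and the Gaussian-innovation standard error formula) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) path x_t = phi * x_{t-1} + eps_t with an
# illustrative true coefficient phi = 0.5.
n, phi = 2000, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Sample split: estimate on the first half, condition on the second.
est, cond = x[: n // 2], x[n // 2:]

# OLS estimator of phi on the estimation subsample.
y, z = est[1:], est[:-1]
phi_hat = z @ y / (z @ z)

# Asymptotic standard error under Gaussian innovations:
# sqrt(n) * (phi_hat - phi) -> N(0, 1 - phi^2).
se = np.sqrt((1 - phi_hat ** 2) / len(z))

# 95% interval for the conditional mean phi * x_T, conditioning on
# the final observation of the conditioning subsample.
x_T = cond[-1]
point = phi_hat * x_T
half = 1.96 * abs(x_T) * se
lower, upper = point - half, point + half
print(f"conditional mean: {point:.3f}, 95% CI: [{lower:.3f}, {upper:.3f}]")
```

Because the estimation and conditioning subsamples do not overlap, the construction mimics the "two independent processes" setting within a single observed sample, which is the intuition behind the asymptotic equivalence result stated in the abstract.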
Original language: English
Number of pages: 54
Publication status: Published - 2 Oct 2017

Publication series

Series: arXiv e-prints
Number: 1710.00643

JEL classifications

  • C53 - Forecasting and Prediction Methods; Simulation Methods
  • C22 - Single Equation Models; Single Variables: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models
  • C32 - Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models
  • G17 - Financial Forecasting and Simulation

Keywords

  • Conditional confidence intervals
  • Parameter uncertainty
  • Markov chains
  • Prediction
  • Sample-splitting
  • Merging
