A Justification of Conditional Confidence Intervals

Eric Beutner, Alexander Heinemann, Stephan Smeekes

Research output: Working paper / Preprint



To quantify uncertainty around point estimates of conditional objects such as conditional means or variances, parameter uncertainty has to be taken into account. Attempts to incorporate parameter uncertainty are typically based on the unrealistic assumption of observing two independent processes, where one is used for parameter estimation and the other for conditioning upon. Such an unrealistic foundation raises the question whether these intervals are theoretically justified in a realistic setting. This paper presents an asymptotic justification for intervals of this type that does not require such an unrealistic assumption, but instead relies on a sample-split approach. By showing that our sample-split intervals coincide asymptotically with the standard intervals, we provide a novel, and realistic, justification for confidence intervals of conditional objects. The analysis is carried out for a general class of Markov chains nesting various time series models.
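To illustrate the idea of a sample-split conditional interval, the following sketch uses an AR(1) model, which falls within the class of Markov chains the abstract mentions. This is a simplified illustration, not the paper's construction: parameters are estimated on the first part of the sample, the conditioning value is taken from the remaining part, and a delta-method interval for the conditional mean is formed. All variable names and the 95% normal critical value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process x_t = phi * x_{t-1} + eps_t
# (an illustrative member of the Markov-chain class).
phi_true, sigma, T = 0.5, 1.0, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sigma * rng.standard_normal()

# Sample split: parameters are estimated on the first part only ...
m = T // 2
y, z = x[1:m], x[: m - 1]
phi_hat = (z @ y) / (z @ z)  # OLS estimate of phi
resid = y - phi_hat * z
se_phi = np.sqrt((resid @ resid) / (len(y) - 1) / (z @ z))

# ... while the conditioning value comes from the remaining part.
x_T = x[-1]

# 95% interval for the conditional mean E[x_{T+1} | x_T] = phi * x_T,
# with parameter uncertainty propagated via the delta method.
half = 1.96 * abs(x_T) * se_phi
ci = (phi_hat * x_T - half, phi_hat * x_T + half)
print(ci)
```

The split ensures the estimation sample and the conditioning value are (asymptotically) independent, mimicking the two-independent-processes assumption without actually requiring a second process.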
Original language: English
Publisher: Cornell University - arXiv
Number of pages: 54
Publication status: Published - 2 Oct 2017

JEL classifications

  • c53 - Forecasting and Prediction Methods; Simulation Methods
  • c22 - Single Equation Models; Single Variables: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models
  • c32 - Multiple or Simultaneous Equation Models: Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models
  • g17 - Financial Forecasting and Simulation

Keywords

  • Conditional confidence intervals
  • Parameter uncertainty
  • Markov chains
  • Prediction
  • Sample-splitting
  • Merging


