To quantify the uncertainty around point estimates of conditional objects, such as conditional means or variances, parameter uncertainty has to be taken into account. Attempts to incorporate parameter uncertainty are typically based on the unrealistic assumption that two independent processes are observed, one used for parameter estimation and the other to condition upon. Such an unrealistic foundation raises the question of whether these intervals are theoretically justified in a realistic setting. This paper presents an asymptotic justification for this type of interval that does not require such an assumption, but instead relies on a sample-split approach. By showing that our sample-split intervals coincide asymptotically with the standard intervals, we provide a novel, realistic justification for confidence intervals of conditional objects. The analysis is carried out for a rich class of time series models. We also present the results of a simulation study evaluating the performance of the sample-split approach. The results indicate that, in practice as well, sample-split intervals may be more appropriate than the standard intervals.
- Conditional confidence intervals
- Parameter uncertainty
- Bootstrap prediction intervals