Rr‑statistics.co

Time Series Stationarity & Order Picker

Time-series models like ARIMA assume the series is stationary, meaning its statistical behavior does not drift over time. ADF and KPSS tests check this. Paste a series to get both verdicts, ACF and PACF plots, a recommended differencing order, and ranked candidate ARIMA(p,d,q) models with AICc.

New to stationarity? Read the 4-minute primer below.

Stationarity in one sentence. A series is stationary if its mean, variance, and autocorrelation structure don't drift over time. Most forecasting models (ARMA, ARIMA, VAR) assume stationarity, so the first job is to check, and if needed, to difference the series until it looks stationary.
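
A minimal base-R sketch of "difference until it looks stationary": a random walk has a unit root and is non-stationary, but a single `diff()` recovers the white-noise increments. The seed and series length are arbitrary.

```r
# A random walk (unit root) vs. its first difference. Base R only.
set.seed(42)
eps <- rnorm(500)
rw  <- cumsum(eps)   # random walk: y_t = y_{t-1} + eps_t, non-stationary
dy  <- diff(rw)      # first difference recovers eps_t, stationary

# The level typically wanders, so its half-sample means disagree;
# the differenced series hovers around a stable mean near zero.
c(mean(rw[1:250]), mean(rw[251:500]))
c(mean(dy[1:250]), mean(dy[251:499]))
```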

ADF vs KPSS. They flip the null hypothesis. ADF (Augmented Dickey-Fuller) tests H₀: a unit root is present (non-stationary). KPSS tests H₀: the series is stationary. Combine them: ADF rejects + KPSS does not reject ⇒ stationary. KPSS rejects + ADF does not ⇒ non-stationary, difference once. Both reject or both fail to reject ⇒ ambiguous; lean on the visual.
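
The combined decision rule above can be sketched in a few lines of R. The function name and the 0.05 cutoff are our own choices, not part of any package; the p-values would come from whatever ADF/KPSS implementation you use (e.g. tseries::adf.test and tseries::kpss.test).

```r
# Combine the two tests' p-values into one verdict, as described above.
# stationarity_verdict and alpha = 0.05 are illustrative choices.
stationarity_verdict <- function(p_adf, p_kpss, alpha = 0.05) {
  adf_rejects  <- p_adf  < alpha   # rejects H0: unit root present
  kpss_rejects <- p_kpss < alpha   # rejects H0: series is stationary
  if (adf_rejects && !kpss_rejects)      "stationary"
  else if (!adf_rejects && kpss_rejects) "non-stationary: difference once"
  else                                   "ambiguous: inspect the plot"
}

stationarity_verdict(0.01, 0.40)  # "stationary"
stationarity_verdict(0.60, 0.01)  # "non-stationary: difference once"
```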

Reading ACF and PACF. The autocorrelation function (ACF) shows correlation at each lag; the partial autocorrelation function (PACF) shows the same after removing shorter-lag effects. AR(p) shows PACF cutting off at lag p, ACF tailing off. MA(q) shows ACF cutting off at lag q, PACF tailing off. Mixed ARMA shows both decaying. Bars outside ±1.96/√n are the conventional significance bounds.
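
A quick base-R illustration of the cutoff patterns, using a simulated AR(1) with φ = 0.7 (the coefficient, length, and seed are arbitrary): the PACF should spike at lag 1 and the ACF should tail off roughly geometrically.

```r
# AR(1) signature: PACF cuts off at lag 1, ACF tails off.
set.seed(1)
y <- arima.sim(model = list(ar = 0.7), n = 400)
bound <- 1.96 / sqrt(length(y))                  # white-noise 95% band

r   <- as.numeric(acf(y,  lag.max = 10, plot = FALSE)$acf)[-1]  # lags 1..10
phi <- as.numeric(pacf(y, lag.max = 10, plot = FALSE)$acf)      # lags 1..10

which(abs(phi) > bound)  # PACF spikes: lag 1 should dominate
r[1:3]                   # ACF decays roughly like 0.7^k
```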

Picking ARIMA(p,d,q). First fix d via ADF/KPSS. Then read p and q from ACF/PACF cutoff patterns as a starting point. Then refine with AICc, the small-sample-corrected information criterion: lower AICc means a better trade-off between fit and complexity. Auto-suggested orders are starting points; always validate with residual diagnostics before trusting a forecast.

ADF · KPSS · ACF · PACF · Differencing + ARIMA(p,d,q) candidates · Runs in your browser

[Interactive demo: load an example scenario (e.g. AR(1)) or paste your own series. Panels show the series with its rolling mean, the ACF, the PACF, the candidate AICc table (lower is better), and the inference verdict, with runnable R code to reproduce everything.]
Anatomy of stationarity testing & ARIMA order picking
ADF: Δyₜ = α + βt + γyₜ₋₁ + Σᵢ δᵢ Δyₜ₋ᵢ + εₜ H₀: γ = 0 (unit root present)
Augmented Dickey-Fuller. Regress the differenced series on a constant, a linear trend, the previous level, and lagged differences (lags chosen by AIC up to Schwert's default 12⋅(n/100)^(1/4)). The t-statistic on the lagged level uses non-standard Dickey-Fuller critical values, not Student-t. Small p ⇒ reject unit root ⇒ series looks stationary.
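
The regression itself can be hand-rolled in base R. This sketch fixes the lag order at k = 2 for clarity; a real implementation (e.g. tseries::adf.test) chooses the lag order automatically and maps the t-statistic through Dickey-Fuller (MacKinnon) critical values.

```r
# ADF test regression by hand (trend case, fixed lag order k = 2).
set.seed(7)
y <- cumsum(rnorm(300))   # simulated unit-root series
k <- 2

dy  <- diff(y)
z   <- embed(dy, k + 1)               # rows: (Δy_t, Δy_{t-1}, ..., Δy_{t-k})
d   <- z[, 1]                         # response Δy_t
dl  <- z[, -1, drop = FALSE]          # lagged differences
lvl <- y[(k + 1):(length(y) - 1)]     # y_{t-1}, aligned with Δy_t
tt  <- seq_along(d)                   # linear trend term

fit     <- lm(d ~ tt + lvl + dl)
gamma_t <- summary(fit)$coefficients["lvl", "t value"]
gamma_t  # compare to DF critical values (about -3.42 at 5%, trend case), not ±1.96
```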
KPSS: η = (1/n²) Σₜ Sₜ² / σ̂² Sₜ = Σᵢ≤ₜ (yᵢ − ȳ) H₀: series is (level- or trend-) stationary
KPSS. Build the partial sum process of the residuals from a constant (or trend), divide by a long-run variance estimator (Newey-West with bandwidth ~ 4⋅(n/100)^(1/4)). Small p ⇒ reject stationarity. KPSS complements ADF: combining them lets you separate stationary, trend-stationary, and unit-root processes.
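
The level-stationarity version is short enough to hand-roll in base R: demean, take partial sums, divide by a Newey-West (Bartlett-kernel) long-run variance with the bandwidth quoted above. A sketch only; production implementations interpolate tabulated critical values instead of eyeballing a cutoff.

```r
# Level-stationarity KPSS statistic by hand (base R).
kpss_stat <- function(y) {
  n <- length(y)
  e <- y - mean(y)                  # residuals from a constant
  S <- cumsum(e)                    # partial sums S_t
  l <- floor(4 * (n / 100)^0.25)    # Newey-West bandwidth from above
  lrv <- sum(e^2) / n               # lag-0 term of the long-run variance
  for (j in seq_len(l)) {
    w   <- 1 - j / (l + 1)          # Bartlett weight
    lrv <- lrv + 2 * w * sum(e[(j + 1):n] * e[1:(n - j)]) / n
  }
  sum(S^2) / (n^2 * lrv)
}

kpss_stat(as.numeric(1:100))  # deterministic trend: well above the 5%
                              # critical value 0.463, so reject level-stationarity
```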
ACF: ρ(k) = Σᵗ (yᵗ − ȳ)(yᵗ₋ₖ − ȳ) / Σᵗ (yᵗ − ȳ)² PACF: Durbin-Levinson recursion
Autocorrelation diagnostics. ACF is the standard sample autocorrelation. PACF is the last coefficient in an AR(k) fit, computed by the Durbin-Levinson recursion. The dashed bands at ±1.96/√n are the asymptotic 95% bounds under white noise; bars outside flag real dependence.
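
The Durbin-Levinson recursion is short enough to sketch in base R and check against the built-in pacf(); pacf_dl and its inputs are our own names for illustration.

```r
# Durbin-Levinson: turn sample autocorrelations rho[1..K] into the
# partial autocorrelations phi_kk, k = 1..K.
pacf_dl <- function(rho) {
  K   <- length(rho)
  pkk <- numeric(K)
  phi <- numeric(0)               # AR(k-1) coefficients from the previous step
  for (k in 1:K) {
    if (k == 1) {
      pkk[1] <- rho[1]
      phi    <- rho[1]
    } else {
      num    <- rho[k] - sum(phi * rho[(k - 1):1])
      den    <- 1 - sum(phi * rho[1:(k - 1)])
      pkk[k] <- num / den
      phi    <- c(phi - pkk[k] * rev(phi), pkk[k])   # update AR(k) coefficients
    }
  }
  pkk
}

set.seed(5)
y   <- arima.sim(model = list(ar = 0.5), n = 500)
rho <- as.numeric(acf(y, lag.max = 5, plot = FALSE)$acf)[-1]
max(abs(pacf_dl(rho) - as.numeric(pacf(y, lag.max = 5, plot = FALSE)$acf)))
# difference should be negligible
```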
AICc = 2k − 2 log L + 2k(k+1) / (n − k − 1) k = p + q + (intercept)
Order selection. Differencing order d is fixed first by ADF/KPSS combination. ACF/PACF cutoff patterns nominate (p,q) shortlists. Each candidate is fit by exact maximum likelihood (innovations algorithm) and ranked by AICc, the small-sample-corrected information criterion. Hyndman-Khandakar pruning rejects non-causal / non-invertible fits.
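
The ranking step can be sketched with base R alone: fit a small (p, q) grid at d = 0 by exact maximum likelihood with stats::arima and rank by AICc, with k = p + q + intercept as in the formula above. The grid and the simulated ARMA(1,1) series are arbitrary, and the Hyndman-Khandakar pruning of non-causal / non-invertible fits is omitted.

```r
# Rank candidate ARMA(p, q) fits by AICc (base R only).
set.seed(9)
y <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 300)
n <- length(y)

cands <- expand.grid(p = 0:2, q = 0:2)
cands$aicc <- apply(cands, 1, function(o) {
  fit <- tryCatch(arima(y, order = c(o["p"], 0, o["q"])),
                  error = function(e) NULL)   # skip non-convergent fits
  if (is.null(fit)) return(Inf)
  k <- length(fit$coef)                       # p + q + intercept
  -2 * fit$loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
})
head(cands[order(cands$aicc), ], 3)  # shortlist: lowest AICc first
```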
Caveats: when this is the wrong tool

n < 30: Both ADF and KPSS have low power on short series. The verdict will be ambiguous; the visual is your best evidence.

Multiple time series (VAR / VECM / cointegration): Out of scope. Use the vars or urca R packages.

Structural breaks (level shifts, regime change): ADF/KPSS treat the whole series as one regime. Use Zivot-Andrews or Bai-Perron tests via strucchange.

Heteroskedastic series (GARCH-flavoured): Variance non-stationarity is a different beast. Run ARCH-LM tests and consider a GARCH model after fitting the mean.

Missing values mid-series: We strip missing values before testing. If the gaps are large, impute with imputeTS::na_kalman first; biased imputation will bias the tests.

Daily series with weekly + yearly seasonality: Multi-seasonal series need msts + tbats from the forecast package; this tool handles a single seasonal period.
Numerical accuracy: ADF p-value via MacKinnon (1996) response surface; KPSS p-value via interpolation of Kwiatkowski et al. (1992) critical values; ACF/PACF via Durbin-Levinson; ARIMA log-likelihood via the innovations algorithm.