r-statistics.co

Survival Analysis Power Calculator

Survival studies measure time to an event (death, relapse, churn). Power depends on the hazard ratio you want to detect, accrual period, follow-up time, and dropout rate. Specify your design and get the required events and total sample size via Schoenfeld's formula, with a Kaplan-Meier preview of the assumed curves.

Planning a time-to-event trial? Read the 4-min primer.

What survival power answers. A survival power calculation tells you how many subjects to enrol, and how many events (deaths, relapses, failures) to observe, before the log-rank test has a fair chance of detecting the hazard ratio you care about. Power is the probability the trial flags the difference if the difference is real; events, not patient counts, are the currency.

Hazard ratio intuition. The hazard ratio (HR) compares the instantaneous risk in the treatment arm versus the control arm. HR = 0.7 means treatment cuts the hazard by 30 percent at every moment; HR = 1 is no effect; HR = 0.5 is a strong halving of risk. Smaller deviations from 1 require dramatically more events. Halve the log HR and you roughly quadruple the events you need.
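The quadrupling rule follows directly from the fact that required events scale as 1/(log HR)² under Schoenfeld's formula. A quick sketch in Python (names are ours, for illustration):

```python
import math

# Required events scale as 1 / (log HR)^2 (Schoenfeld), so halving
# the log hazard ratio multiplies the events needed by exactly four.
relative_events = lambda hr: 1.0 / math.log(hr) ** 2

hr = 0.7
hr_half_log = math.exp(math.log(hr) / 2)   # the HR whose log is half of log(0.7), ~0.837

ratio = relative_events(hr_half_log) / relative_events(hr)
print(ratio)   # 4.0 (up to floating point)
```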

Events versus sample size. Schoenfeld's formula gives the required total events D directly, no patient count attached. To turn D into a patient count n, divide by the probability that a typical patient has an event during the study window. That probability depends on the median survival, the accrual schedule, the planned follow-up, and any losses. Long follow-up converts patients into events efficiently; short follow-up wastes them.

Picking accrual and follow-up. Two clocks run together: accrual (how long you spend recruiting) and follow-up (how long after the last enrollment you keep watching). Patients enrolled late see less follow-up, so they contribute fewer events. If recruitment is slow but the disease is fast, accrual length dominates; if the disease is slow, you need a long minimum follow-up after enrollment closes.

3 formulas · events, n, or power · Schoenfeld · Lakatos · Freedman · Runs in your browser

Try a real-world example (click to load):

🩺 cancer trial HR=0.7

A typical phase III oncology trial: median survival 12 months in control, 24 months accrual, 12 months follow-up, target 80% power.

Figure: expected Kaplan-Meier curves (interactive), control vs treatment, exponential survival under H1.

Anatomy of a survival power calculation
D = (z_{1-α/2} + z_{1-β})² · (1 + k)² / ( k · (log HR)² )
Schoenfeld events. The log-rank statistic under proportional hazards is asymptotically normal with variance proportional to the number of events. Solving the standard normal power equation for events gives the closed form above. Nothing about the calendar enters here, just events, allocation, alpha and power. The log of the hazard ratio enters squared, so a HR of 0.95 (log HR = -0.05) needs roughly 180 times more events than a HR of 0.5 (log HR = -0.69).
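As an illustrative sketch (Python, standard library only; `schoenfeld_events` is our own helper name), the closed form reproduces the familiar figure of about 247 events for HR = 0.7 at 80% power with 1:1 allocation:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, k=1.0, alpha=0.05, power=0.80):
    """Required total events D; k is the treatment:control allocation ratio."""
    z = NormalDist().inv_cdf
    zsum_sq = (z(1 - alpha / 2) + z(power)) ** 2
    return zsum_sq * (1 + k) ** 2 / (k * math.log(hr) ** 2)

print(math.ceil(schoenfeld_events(0.7)))                  # 247 events
print(schoenfeld_events(0.95) / schoenfeld_events(0.5))   # ~183: tiny effects are expensive
```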
λ_C = log(2) / m_C   (control hazard)
λ_T = HR · λ_C   (treatment hazard)
S(t) = exp(-λ t)   (exponential survival)
Translating median survival into hazard. Under exponential survival, the hazard is constant and equals log(2) divided by the median. The treatment hazard is the control hazard scaled by HR. The Kaplan-Meier curves shown on the right are these two exponentials. Real survival is rarely exactly exponential, but for closed-form planning the assumption is the workhorse.
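A minimal sketch of the median-to-hazard translation (Python; the numbers mirror the oncology example above):

```python
import math

median_c, hr = 12.0, 0.7          # control median 12 months, target HR 0.7
lam_c = math.log(2) / median_c    # control hazard, per month
lam_t = hr * lam_c                # treatment hazard

def surv(t, lam):                 # exponential survival S(t) = exp(-lam * t)
    return math.exp(-lam * t)

print(surv(median_c, lam_c))      # 0.5: survival at the median, by construction
print(math.log(2) / lam_t)        # implied treatment median, ~17.1 months
```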
P(event) = 1 - (1 / (λ · A)) · (exp(-λ F) - exp(-λ (A + F)))
Probability of an event during the study. A patient enrolled uniformly over an accrual window of length A, then followed for F more months after accrual closes, has a calendar time on study between F and A + F. Integrating exponential survival over this uniform enrollment gives the formula above. The total sample size is the required events divided by the average of the two arms' event probabilities, weighted by allocation.
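Putting the pieces together for the example design on this page (median 12 months, HR 0.7, 24 months accrual, 12 months follow-up, 1:1 allocation), a Python sketch of the events-to-patients step (helper names are ours):

```python
import math
from statistics import NormalDist

def p_event(lam, A, F):
    """Closed-form event probability: uniform accrual over [0, A], F extra follow-up."""
    return 1 - (1 / (lam * A)) * (math.exp(-lam * F) - math.exp(-lam * (A + F)))

lam_c = math.log(2) / 12          # control hazard from a 12-month median
lam_t = 0.7 * lam_c               # treatment hazard under HR = 0.7

z = NormalDist().inv_cdf
D = (z(0.975) + z(0.80)) ** 2 * 4 / math.log(0.7) ** 2   # Schoenfeld events, 1:1

p_avg = (p_event(lam_c, A=24, F=12) + p_event(lam_t, A=24, F=12)) / 2
n = math.ceil(D / p_avg)
print(round(p_avg, 3), n)         # ~0.668 average event probability, 370 total patients
```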
P(event | dropout η) computed by numeric integration of λ · exp(-(λ+η) t) over study window
Lakatos with dropout. An annual dropout rate η competes with the event rate. The Lakatos approach approximates the survival under the combined hazard λ + η, but only events (not dropouts) count toward the log-rank. We integrate numerically over the calendar window for each arm. Dropout inflates the required sample size without changing the required events; a 10% per year loss can push n by 15 to 25 percent.
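The numeric integration described above can be sketched as follows (Python; we model dropout as an independent exponential with hazard η, a standard simplification, and average over the accrual window with a simple midpoint rule):

```python
import math

def p_event_with_dropout(lam, eta, A, F, steps=2000):
    """Average, over uniform accrual on [0, A], of the probability that an
    event (not a dropout) is observed; event and dropout compete as
    exponentials with hazards lam and eta."""
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) / steps * A              # enrollment time (midpoint rule)
        T = A + F - a                          # this patient's time on study
        # integral of lam * exp(-(lam + eta) * t) from 0 to T
        total += lam / (lam + eta) * (1 - math.exp(-(lam + eta) * T))
    return total / steps

lam_c = math.log(2) / 12
eta = -math.log(1 - 0.10) / 12                 # 10% annual dropout as a monthly hazard
p_clean = p_event_with_dropout(lam_c, 0.0, A=24, F=12)
p_drop = p_event_with_dropout(lam_c, eta, A=24, F=12)
print(round(p_clean, 3), round(p_drop, 3))     # dropout lowers P(event), so n rises
```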
D_Freedman = ( (1 + k · HR) / (1 - HR) )² · (z_{1-α/2} + z_{1-β})² / k
Freedman classical reference. Freedman (1982) used the ratio of expected event counts directly rather than the log-HR limit. It tracks Schoenfeld closely as HR approaches 1 and sits visibly above it for stronger effects (roughly 8% more events at HR = 0.5, about 2% at HR = 0.7). Some regulatory templates still cite Freedman; we expose it for cross-checks and flag any discrepancy.
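For cross-checking, the two formulas side by side (Python sketch; helper names are ours):

```python
import math
from statistics import NormalDist

_z = NormalDist().inv_cdf

def schoenfeld_events(hr, k=1.0, alpha=0.05, power=0.80):
    zsum_sq = (_z(1 - alpha / 2) + _z(power)) ** 2
    return zsum_sq * (1 + k) ** 2 / (k * math.log(hr) ** 2)

def freedman_events(hr, k=1.0, alpha=0.05, power=0.80):
    zsum_sq = (_z(1 - alpha / 2) + _z(power)) ** 2
    return ((1 + k * hr) / (1 - hr)) ** 2 * zsum_sq / k

for hr in (0.5, 0.7, 0.9):
    print(hr, round(freedman_events(hr) / schoenfeld_events(hr), 3))
# Freedman exceeds Schoenfeld most for strong effects (HR far from 1);
# the two converge as HR approaches 1.
```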
Caveats: when this is the wrong tool

Non-proportional hazards (immunotherapy crossing curves, delayed effect). The log-rank test loses power dramatically when the HR drifts. Use a weighted log-rank (Fleming-Harrington) or a milestone-survival comparison; the closed forms above are not valid. Simulate under the assumed crossing curves.

Competing risks. If patients can experience an event other than the primary one (e.g. death from a second cause that censors the primary endpoint), the cause-specific hazard differs from the subdistribution hazard. Use a Fine-Gray planning calculation or simulation against the cumulative incidence function.

Cluster-randomized survival trial. Clinics or wards randomized as units violate independence. Inflate by the design effect or use the Donner-Klar cluster-survival formulas; total events still drive power, but the variance of the log-rank statistic must include the intracluster correlation.

Group-sequential or adaptive design. Interim analyses with stopping rules require alpha-spending (O'Brien-Fleming, Pocock) and an inflated maximum information target. Use gsDesign::nSurv() or the East software; a single-look closed form will under-size the trial.

Bayesian or simulation-based planning. If the prior on the hazard ratio is informative, a Bayesian sample size approach (assurance, expected power) can be appropriate. Simulation against the Cox model with realistic censoring patterns is usually the safest catch-all when assumptions get messy.
Further reading

Numerical methods: Schoenfeld (1981) closed form for events, Freedman (1982) for the alternative reference, exponential survival with uniform accrual integrated in closed form, Lakatos (1988) approximated by quadrature for dropout adjustment. Cross-checked against R packages powerSurvEpi and gsDesign.