One-Sample t-Test in R: Test a Mean Against a Value
A one-sample t-test in R compares the mean of a single sample to a known value (the null hypothesis mean). Use t.test(x, mu = m0) where x is the sample and m0 is the hypothesized mean.
```r
t.test(x, mu = 100)                            # default two-sided
t.test(x, mu = 100, alternative = "greater")   # one-sided (right)
t.test(x, mu = 100, alternative = "less")      # one-sided (left)
t.test(x, mu = 100, conf.level = 0.99)         # 99% CI
t.test(x, mu = 100)$p.value                    # extract p-value
t.test(x, mu = 100)$conf.int                   # extract CI
shapiro.test(x)                                # check normality first
```
Need explanation? Read on for examples and pitfalls.
What a one-sample t-test does in one sentence
A one-sample t-test asks: "is the mean of this sample DIFFERENT from a hypothesized value?" It computes the t-statistic (mean(x) - mu) / (sd(x) / sqrt(n)) and converts it to a p-value using the t distribution with n - 1 degrees of freedom.
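To see that `t.test()` really computes this formula, here is a minimal sketch (using simulated data as a stand-in for your own sample) that reproduces the t-statistic and two-sided p-value by hand:

```r
# Verify the t-statistic and p-value by hand against t.test()
set.seed(42)
x <- rnorm(25, mean = 102, sd = 10)   # example sample; any numeric vector works

t_manual <- (mean(x) - 100) / (sd(x) / sqrt(length(x)))
p_manual <- 2 * pt(-abs(t_manual), df = length(x) - 1)  # two-sided p-value

res <- t.test(x, mu = 100)
all.equal(unname(res$statistic), t_manual)  # TRUE
all.equal(res$p.value, p_manual)            # TRUE
```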
Used when you have one sample and want to test against a known/expected value. Common in quality control ("is the average length 100 mm as specified?"), product testing ("is mean rating different from 3.5?"), and theoretical comparisons.
Syntax
Pass the sample as x and the hypothesized mean as mu. The result is an object of class "htest" containing the t-statistic, p-value, CI, and more.
Check normality with shapiro.test(x) (Shapiro-Wilk) or a Q-Q plot. If the data are non-normal, use wilcox.test(x, mu = 100) (Wilcoxon signed-rank) instead. For n >= 30, the central limit theorem gives the t-test reasonable robustness even on slightly non-normal data.

Five common patterns
1. Two-sided test (default)
The two-sided test is the default. The alternative hypothesis is "true mean != 100".
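A minimal sketch of the default call, using simulated data in place of a real sample:

```r
# Default two-sided test: H1 is "true mean != 100"
set.seed(7)
x <- rnorm(30, mean = 103, sd = 8)   # example sample
tt <- t.test(x, mu = 100)            # alternative = "two.sided" is the default
tt
```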
2. One-sided test (greater)
alternative = "greater" tests "true mean > 100". Use when you have a directional hypothesis BEFORE seeing the data. Never pick the direction post-hoc; that inflates the false-positive rate.
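A sketch of the directional call on the same kind of simulated sample:

```r
# Directional test: direction chosen BEFORE data collection
set.seed(7)
x <- rnorm(30, mean = 103, sd = 8)
# H1: true mean > 100. When the sample mean falls on the hypothesized
# side, the one-sided p-value is half the two-sided one.
g <- t.test(x, mu = 100, alternative = "greater")
g
```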
3. Custom confidence level
Higher confidence levels (99%) produce wider intervals; lower levels (90%) produce narrower ones.
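Comparing the interval widths directly makes this concrete (simulated data again):

```r
set.seed(7)
x <- rnorm(30, mean = 103, sd = 8)
ci90 <- t.test(x, mu = 100, conf.level = 0.90)$conf.int  # narrower
ci99 <- t.test(x, mu = 100, conf.level = 0.99)$conf.int  # wider
diff(ci99) > diff(ci90)  # TRUE: the 99% interval is always wider
```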
4. Effect size (Cohen's d)
Cohen's d expresses the difference between the sample mean and mu in standard deviation units: d = (mean(x) - mu) / sd(x). Convention: 0.2 = small, 0.5 = medium, 0.8 = large effect. Always report effect size alongside the p-value; significance alone does not measure importance.
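Base R computes this in one line; no extra package is needed (simulated data for illustration):

```r
# Cohen's d for a one-sample test: standardized distance from mu
set.seed(7)
x <- rnorm(30, mean = 103, sd = 8)
d <- (mean(x) - 100) / sd(x)
d   # interpret with Cohen's conventions: 0.2 small, 0.5 medium, 0.8 large
```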
5. Extract specific results
The htest object is a list. Use $ to extract specific values for further analysis or reporting.
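A quick tour of the fields, using the built-in mtcars dataset:

```r
res <- t.test(mtcars$mpg, mu = 20)
res$statistic   # t value (named vector)
res$parameter   # degrees of freedom (n - 1 = 31 for mtcars)
res$p.value     # p-value as a plain number
res$conf.int    # 95% CI; attr(, "conf.level") stores the level
res$estimate    # sample mean
names(res)      # list all available fields
```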
Assumptions and how to check them
The t-test assumes: independent observations, approximately normal distribution (or large enough sample), no extreme outliers.
| Assumption | How to check | What to do if violated |
|---|---|---|
| Independence | Study design | Use a model that handles dependence (mixed model) |
| Normality | shapiro.test(x), Q-Q plot | wilcox.test() (non-parametric) |
| No extreme outliers | boxplot(x), look at data | Trim outliers, transform, or use Wilcoxon |
| Random sampling | Study design | Acknowledge limitation in results |
For sample sizes 30+, the CLT gives the t-test robustness against mild non-normality. For sample sizes under 15, normality really matters; consider Wilcoxon if doubtful.
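A sketch of the checks in code, using deliberately skewed simulated data so the fallback path is visible:

```r
# Check normality, then fall back to the Wilcoxon signed-rank test
set.seed(1)
x <- rexp(20, rate = 1/100)   # skewed example data, true mean 100

sw <- shapiro.test(x)          # small p-value suggests non-normality
qqnorm(x); qqline(x)           # visual check: points should hug the line

# Non-parametric alternative: tests the (pseudo)median against 100
wt <- wilcox.test(x, mu = 100)
wt
```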
Common pitfalls
Pitfall 1: confusing one-sample with two-sample t-test. One-sample compares ONE sample to a value. Two-sample compares TWO samples to each other. Use t.test(x, mu = 100) for one-sample; t.test(x, y) for two-sample.
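Side by side with mtcars, splitting on the transmission column (am) for the two-sample case:

```r
# One-sample: compare one group's mean to a fixed value
t.test(mtcars$mpg, mu = 20)

# Two-sample: compare two groups' means to each other
auto   <- mtcars$mpg[mtcars$am == 0]
manual <- mtcars$mpg[mtcars$am == 1]
t.test(auto, manual)   # Welch two-sample t-test by default
```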
Pitfall 2: post-hoc choice of alternative direction. Picking alternative = "greater" after seeing the data inflates the false-positive rate. Direction must be specified BEFORE seeing data, based on prior hypothesis.
Note: t.test(x, mu = 100) performs Student's one-sample t-test. With a single sample there are no group variances to compare, so Welch's correction does not apply; normality is the main distributional assumption.

Pitfall 3: relying on the p-value alone. A p-value answers "is there evidence of a difference?", not "how big is the difference?". Always report and interpret the effect size and CI alongside it.
Try it yourself
Try it: Run a one-sample t-test on mtcars$mpg to test whether the mean MPG differs from 20. Extract the p-value and CI. Save the result to ex_test.
Click to reveal solution
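One possible solution, matching the values quoted in the explanation below:

```r
ex_test <- t.test(mtcars$mpg, mu = 20)
ex_test$p.value    # ~0.93: no evidence the mean MPG differs from 20
ex_test$conf.int   # ~(17.9, 22.3): includes 20
```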
Explanation: t.test(mtcars$mpg, mu = 20) tests whether the sample mean (20.09) differs from 20. The p-value (0.93) is large, meaning we cannot reject the null. The 95% CI (17.9, 22.3) includes 20, consistent with that conclusion. No evidence the true mean differs from 20.
Related statistical tests
After mastering the one-sample t-test, look at:
- t.test(x, y): two-sample (independent groups) t-test
- t.test(x, y, paired = TRUE): paired-samples t-test (matched pre/post)
- wilcox.test(x, mu = 100): non-parametric alternative for non-normal data
- var.test(): F-test for equal variances
- prop.test(): test a proportion against a value
- binom.test(): exact binomial test for proportions
For sample size planning, use pwr::pwr.t.test() to compute required N for a target power.
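If you prefer to avoid an extra package, base R's stats::power.t.test() handles the same planning question; a sketch for a medium effect:

```r
# Base-R alternative to pwr::pwr.t.test(): sample size for a one-sample
# t-test detecting a 0.5-SD shift with 80% power at alpha = 0.05.
# (delta = 0.5, sd = 1 together encode Cohen's d = 0.5.)
plan <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05,
                     power = 0.80, type = "one.sample")
ceiling(plan$n)   # round up to the next whole observation
```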
FAQ
How do I run a one-sample t-test in R?
Use t.test(x, mu = hypothesized_value). For example, t.test(mtcars$mpg, mu = 20) tests whether the mean MPG differs from 20. The result includes t-statistic, degrees of freedom, p-value, and 95% confidence interval.
What is the difference between one-sample and two-sample t-test?
One-sample compares one sample's mean to a fixed VALUE (mu). Two-sample compares the means of two samples to each other. Both use t.test(); one-sample passes mu, two-sample passes a second vector y.
How do I check assumptions for a t-test in R?
Normality: shapiro.test(x) (Shapiro-Wilk) or a Q-Q plot via qqnorm(x); qqline(x). Outliers: boxplot(x). Independence: study design only. For n < 30 with non-normal data, switch to wilcox.test().
How do I extract the p-value from a t.test in R?
Save the result and use $: result <- t.test(x, mu = 100); result$p.value. Other useful fields: $statistic, $conf.int, $estimate, $parameter (degrees of freedom).
What does mu mean in t.test()?
mu is the hypothesized mean under the null hypothesis. For a one-sample test, it is the value you compare your sample mean against. Default is 0 (test whether mean differs from 0). For a comparison test (two-sample), mu is the hypothesized DIFFERENCE between the two means (default 0).