Critical Value Calculator

Find the cut-off value for your hypothesis test. Enter your significance level and degrees of freedom to get the critical value and rejection region for Z, t, chi-square, or F distributions.
Example (two-tailed Z-test, α = 0.05):

  • Distribution: Z (standard normal)
  • Test type: Two-tailed
  • Significance level (α): 0.05
  • Critical value: ±1.9600 (the cut-off value(s) that define the rejection region(s) for your test)
  • Rejection region: (−∞, −1.9600] ∪ [1.9600, ∞)

Critical values are fundamental components of statistical hypothesis testing that serve as decision thresholds for rejecting or failing to reject null hypotheses. These values mark the boundaries of rejection regions in probability distributions and play a crucial role in determining statistical significance. This article explores what critical values are, how they're calculated for different distributions, and how to interpret them in various testing scenarios.

What are critical values?

A critical value is a point (or points) on the scale of the test statistic beyond which we reject the null hypothesis. In other words, it is the cutoff that separates the region where we fail to reject the null hypothesis from the region where we reject it in favor of the alternative hypothesis.

Critical values are determined by:

  1. The selected significance level (α)
  2. The type of statistical test (two-tailed, left-tailed, or right-tailed)
  3. The probability distribution of the test statistic
  4. The degrees of freedom (for distributions that require this parameter)

Significance level and critical values

The significance level (α) represents the probability of rejecting the null hypothesis when it is actually true (Type I error). Common values include 0.05 (5%), 0.01 (1%), and 0.10 (10%).

The relationship between α and critical values depends on the test type:

  1. Two-tailed test: The critical value divides α equally between both tails of the distribution
  2. Left-tailed test: The critical value places all of α in the left tail
  3. Right-tailed test: The critical value places all of α in the right tail
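Assuming SciPy is available, the three cases above can be sketched with the quantile (inverse CDF) function of the standard normal distribution, `scipy.stats.norm.ppf`:

```python
from scipy.stats import norm

alpha = 0.05

# Two-tailed: alpha is split equally between both tails
z_two = norm.ppf(1 - alpha / 2)   # upper critical value; the lower one is -z_two

# Left-tailed: all of alpha in the left tail
z_left = norm.ppf(alpha)

# Right-tailed: all of alpha in the right tail
z_right = norm.ppf(1 - alpha)

print(round(z_two, 3), round(z_left, 3), round(z_right, 3))
```

The same pattern works for the other distributions covered below by swapping `norm` for `t`, `chi2`, or `f` and supplying degrees of freedom.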

Critical values for different distributions

Z-distribution (Standard Normal)

The Z-distribution is used when the population standard deviation is known or when the sample size is large enough (n ≥ 30).

For a Z-distribution:

  • Two-tailed test: z = ±z_{1−α/2}

  • Left-tailed test: z = z_α (which is negative; z_α = −z_{1−α})

  • Right-tailed test: z = z_{1−α}

Common Z critical values:

  • For α = 0.05, two-tailed: ±1.96
  • For α = 0.05, one-tailed: 1.645 (right-tailed) or −1.645 (left-tailed)
  • For α = 0.01, two-tailed: ±2.576
  • For α = 0.01, one-tailed: 2.326 (right-tailed) or −2.326 (left-tailed)

T-distribution (Student's t)

The t-distribution is used when the population standard deviation is unknown and the sample size is small (typically n < 30).

For a t-distribution with df degrees of freedom:

  • Two-tailed test: t = ±t_{1−α/2, df}

  • Left-tailed test: t = t_{α, df}

  • Right-tailed test: t = t_{1−α, df}

The degrees of freedom (df) for a single sample t-test is n-1, where n is the sample size.
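As a sketch, assuming SciPy and a hypothetical sample of 16 observations, the t critical values follow the same pattern as the Z case but take the degrees of freedom as a second argument:

```python
from scipy.stats import t

alpha, n = 0.05, 16            # hypothetical: alpha = 0.05, sample size 16
df = n - 1                     # df = n - 1 for a single-sample t-test

t_two = t.ppf(1 - alpha / 2, df)    # two-tailed upper critical value
t_right = t.ppf(1 - alpha, df)      # right-tailed critical value

print(round(t_two, 3), round(t_right, 3))
```

Note that the t critical value (2.131 here) is larger than the corresponding Z value (1.96), reflecting the heavier tails of the t-distribution at small sample sizes.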

Chi-Square distribution (χ²)

The chi-square distribution is commonly used for testing goodness-of-fit, independence in contingency tables, and variances.

For a chi-square distribution with df degrees of freedom:

  • Two-tailed test: χ²_{critical, lower} = χ²_{α/2, df} and χ²_{critical, upper} = χ²_{1−α/2, df}

  • Left-tailed test: χ² = χ²_{α, df}

  • Right-tailed test: χ² = χ²_{1−α, df}

The degrees of freedom depend on the specific test:

  • For goodness-of-fit: df = k-1 (where k is the number of categories)
  • For independence: df = (r-1)(c-1) (where r is the number of rows and c is the number of columns)
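Assuming SciPy, the two df rules above can be combined with `scipy.stats.chi2.ppf` to get the usual right-tailed critical values (category and table sizes below are hypothetical):

```python
from scipy.stats import chi2

alpha = 0.05

# Goodness-of-fit with k = 6 categories: df = k - 1
df_gof = 6 - 1
chi2_gof = chi2.ppf(1 - alpha, df_gof)     # right-tailed critical value

# Independence in a 3x4 contingency table: df = (r - 1)(c - 1)
df_ind = (3 - 1) * (4 - 1)
chi2_ind = chi2.ppf(1 - alpha, df_ind)

print(round(chi2_gof, 3), round(chi2_ind, 3))
```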

F-distribution

The F-distribution is used primarily in analysis of variance (ANOVA) and for comparing variances of two populations.

For an F-distribution with df₁ (numerator) and df₂ (denominator) degrees of freedom:

  • Two-tailed test: F_{critical, lower} = F_{α/2, df₁, df₂} and F_{critical, upper} = F_{1−α/2, df₁, df₂}

  • Left-tailed test: F = F_{α, df₁, df₂}

  • Right-tailed test: F = F_{1−α, df₁, df₂}

For an ANOVA test:

  • df₁ = k-1 (where k is the number of groups)
  • df₂ = N-k (where N is the total sample size)
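For a concrete sketch, assume a hypothetical ANOVA with 3 groups and 30 observations in total; the right-tailed F critical value then comes from `scipy.stats.f.ppf`:

```python
from scipy.stats import f

alpha = 0.05
k, N = 3, 30                  # hypothetical: 3 groups, 30 observations total
df1, df2 = k - 1, N - k       # numerator and denominator degrees of freedom

f_crit = f.ppf(1 - alpha, df1, df2)   # ANOVA uses a right-tailed test
print(round(f_crit, 3))
```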

Rejection regions

The rejection region represents the set of values for the test statistic that leads to rejecting the null hypothesis. The critical value defines the boundary of this region.

For different test types:

  1. Two-tailed test:

    • Reject H₀ if the test statistic < lower critical value or > upper critical value
    • For Z and t: Reject if |test statistic| > critical value
  2. Left-tailed test:

    • Reject H₀ if the test statistic < critical value
  3. Right-tailed test:

    • Reject H₀ if the test statistic > critical value
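The three decision rules above can be collected into one hypothetical helper function (the name and interface are illustrative, not from any particular library):

```python
def in_rejection_region(stat, critical, tail):
    """Return True if the test statistic falls in the rejection region.

    tail: "two" (critical is the positive cutoff), "left", or "right".
    """
    if tail == "two":
        return abs(stat) > critical    # reject if |stat| > critical value
    if tail == "left":
        return stat < critical         # reject if stat < critical value
    if tail == "right":
        return stat > critical         # reject if stat > critical value
    raise ValueError("tail must be 'two', 'left', or 'right'")
```

For example, `in_rejection_region(2.5, 1.96, "two")` is True, so a two-tailed Z-test at α = 0.05 would reject H₀ for z = 2.5.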

Example calculations

Example 1: Z-test (Standard Normal)

Suppose we want to test a hypothesis with α = 0.05 using a two-tailed Z-test.

The critical values are: z = ±z_{1−0.05/2} = ±z_{0.975} = ±1.96

The rejection region is: z < −1.96 or z > 1.96 (equivalently, |z| > 1.96)

Example 2: T-test with 15 degrees of freedom

For a two-tailed t-test with df = 15 and α = 0.05:

The critical values are: t = ±t_{1−0.05/2, 15} = ±t_{0.975, 15} = ±2.131

The rejection region is: t < −2.131 or t > 2.131 (equivalently, |t| > 2.131)

Example 3: Chi-square test with 5 degrees of freedom

For a right-tailed chi-square test with df = 5 and α = 0.01:

The critical value is: χ² = χ²_{1−0.01, 5} = χ²_{0.99, 5} = 15.086

The rejection region is: χ² > 15.086
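Assuming SciPy, all three worked examples can be reproduced in a few lines:

```python
from scipy.stats import norm, t, chi2

z_crit = norm.ppf(0.975)        # Example 1: two-tailed Z, alpha = 0.05
t_crit = t.ppf(0.975, 15)       # Example 2: two-tailed t, df = 15
chi2_crit = chi2.ppf(0.99, 5)   # Example 3: right-tailed chi-square, df = 5

print(round(z_crit, 3), round(t_crit, 3), round(chi2_crit, 3))
```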

Practical interpretation

When conducting hypothesis tests, you compare your calculated test statistic to the critical value:

  1. If your test statistic falls in the rejection region (beyond the critical value), you reject the null hypothesis
  2. If your test statistic does not fall in the rejection region, you fail to reject the null hypothesis

For example, if you calculate a Z-statistic of 2.5 for a two-tailed test with α = 0.05, since 2.5 > 1.96, you would reject the null hypothesis.

Critical values vs. p-values

While critical values define rejection regions based on predetermined significance levels, p-values represent the probability of observing a test statistic at least as extreme as the one calculated, assuming the null hypothesis is true.

The relationship between them:

  • If p-value < α, the test statistic falls in the rejection region (reject H₀)
  • If p-value ≥ α, the test statistic does not fall in the rejection region (fail to reject H₀)

Finding critical values

Critical values can be found using:

  1. Statistical tables: Traditional printed tables for Z, t, χ², and F distributions
  2. Statistical software: Programs like R, SPSS, SAS, and Excel
  3. Online calculators: Web-based tools for quick calculations
  4. Statistical functions: Most programming languages have functions to calculate critical values

Applications in different fields

Critical values have diverse applications across multiple disciplines:

  • Medicine: Determining whether a treatment has a significant effect
  • Economics: Testing economic theories and market behaviors
  • Psychology: Assessing the significance of observed behaviors
  • Quality control: Setting control limits for manufacturing processes
  • Environmental science: Testing hypotheses about environmental changes

Frequently asked questions

How do critical values relate to confidence intervals?

Critical values are used to construct confidence intervals. For example, a 95% confidence interval for a mean using the Z-distribution has boundaries x̄ ± 1.96 × (σ/√n), where x̄ is the sample mean and 1.96 is the critical value.
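As a sketch with hypothetical numbers (sample mean 50, known σ = 8, n = 100), the 95% interval is built directly from the Z critical value:

```python
from math import sqrt
from scipy.stats import norm

xbar, sigma, n = 50.0, 8.0, 100    # hypothetical sample mean, known sigma, n
z = norm.ppf(0.975)                # 95% two-sided critical value, ~1.96

half_width = z * sigma / sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(round(ci[0], 3), round(ci[1], 3))
```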

What happens if I change the significance level?

Changing the significance level changes the critical value. A smaller α (e.g., 0.01 instead of 0.05) makes the critical value more extreme, requiring stronger evidence to reject the null hypothesis.

Can critical values be negative?

Yes, for distributions that include negative values (like Z and t distributions), critical values can be negative, especially for left-tailed and two-tailed tests.

How do degrees of freedom affect critical values?

As degrees of freedom increase, the t-distribution approaches the standard normal distribution. Similarly, critical values for chi-square and F-distributions change with degrees of freedom.

Why do two-tailed tests have different critical values than one-tailed tests?

Two-tailed tests distribute the significance level between both tails of the distribution, while one-tailed tests place all of it in one tail. For example, with α = 0.05, a two-tailed Z-test has critical values at ±1.96, while a one-tailed test has a critical value at 1.645 (right-tailed) or −1.645 (left-tailed).

What's the difference between critical values in parametric and non-parametric tests?

Parametric tests (like t-tests) use critical values from specific distributions assuming certain conditions about the data. Non-parametric tests (like the Wilcoxon signed-rank test) may use different methods to determine critical values, often based on rank statistics rather than the data itself.