Standard Deviation Calculator

The standard deviation calculator handles the most-requested statistical calculation in a single tool.

Sample vs population. If your numbers are a SAMPLE drawn from a larger population (the usual case in research, surveys, and A/B tests), use the sample SD — it divides by (n−1) for an unbiased estimate. If your numbers ARE the entire population (every employee in your company, every roll in a finished experiment), use the population SD — it divides by n. The headline shows the sample SD; the population SD is in the breakdown.

How to use

  1. Paste your numbers into the text field.
  2. The summary updates instantly.
  3. Sample standard deviation is the headline figure.
  4. If your dataset is the entire population, use the population standard deviation instead.
  5. Tap Copy to grab a one-line summary.

Frequently asked questions


What is standard deviation?

Standard deviation (SD) measures how spread out a set of numbers is from their average. It's the most-used statistic for "how much variation is there?" in any dataset — exam scores, heights, daily revenues, sensor readings, manufacturing tolerances, anything quantitative.

Two datasets can have the exact same mean but very different standard deviations:

  • {49, 50, 50, 50, 51} — mean 50, SD ≈ 0.7. Tightly clustered.
  • {0, 25, 50, 75, 100} — mean 50, SD ≈ 39.5. Wildly spread.

Both have the same average, but the first dataset is consistent and the second is highly variable. Standard deviation captures this difference in a single number.
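The contrast above is easy to verify with Python's standard statistics module (stdev is the sample SD used here; the numbers match the bullets):

```python
from statistics import mean, stdev

tight = [49, 50, 50, 50, 51]
spread = [0, 25, 50, 75, 100]

print(mean(tight), mean(spread))   # both means are 50
print(round(stdev(tight), 1))      # sample SD ≈ 0.7 — tightly clustered
print(round(stdev(spread), 1))     # sample SD ≈ 39.5 — wildly spread
```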

The two formulas — and which to use

Sample standard deviation (s)

s = √( Σ (xᵢ − x̄)² / (n − 1) )

Use this when your numbers are a sample drawn from a larger population. The (n − 1) divisor is called Bessel's correction; it makes the result an unbiased estimator of the underlying population's true SD. This is the default in research, surveys, A/B testing, science, business analytics, and almost everywhere except certain physics and engineering contexts.

Population standard deviation (σ)

σ = √( Σ (xᵢ − μ)² / n )

Use this when your numbers ARE the entire population — you've measured every member, not just a sample. Examples: every employee in your company, every coin flip in a finite experiment, every product in a finite production batch. Population SD divides by n (since you're not estimating from a sample, no correction needed).

Sample SD is always slightly larger than population SD on the same data. For small datasets the difference matters a lot; for large datasets it shrinks.
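The two formulas transcribe directly into plain Python (a sketch with no libraries beyond math; the example data is illustrative):

```python
import math

def sample_sd(xs):
    # s = sqrt( sum((x - xbar)^2) / (n - 1) ) — Bessel's correction
    xbar = sum(xs) / len(xs)
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1))

def population_sd(xs):
    # sigma = sqrt( sum((x - mu)^2) / n )
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(population_sd(data))  # exactly 2.0 for this dataset
print(sample_sd(data))      # slightly larger, ≈ 2.14
```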

How to use the standard deviation calculator

  1. Paste your numbers into the textarea — separated by commas, spaces, semicolons, or new lines (the calculator accepts any combination).
  2. The summary updates instantly: count, mean, median, sum, variance, both SDs, min, max.
  3. Sample SD is the headline (the most-used variant). Population SD is in the breakdown.
  4. Tap Copy for a one-line summary you can drop into a report.
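The flexible input parsing in step 1 can be sketched with a regex split (a hypothetical helper for illustration, not the calculator's actual code):

```python
import re

def parse_numbers(text: str) -> list[float]:
    # split on any mix of commas, semicolons, whitespace, and newlines
    tokens = re.split(r"[,;\s]+", text.strip())
    return [float(t) for t in tokens if t]

print(parse_numbers("1, 2;3\n4 5"))  # [1.0, 2.0, 3.0, 4.0, 5.0]
```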

Worked example: exam scores

Scores: 65, 72, 78, 80, 81, 85, 88, 90, 92, 95.

  • n = 10. Sum = 826. Mean = 82.6. Median = (81 + 85) / 2 = 83.
  • Squared deviations from mean: (65−82.6)² + (72−82.6)² + ... = 309.76 + 112.36 + 21.16 + 6.76 + 2.56 + 5.76 + 29.16 + 54.76 + 88.36 + 153.76 = 784.4.
  • Population variance = 784.4 / 10 = 78.44. Population SD = √78.44 ≈ 8.86.
  • Sample variance = 784.4 / 9 ≈ 87.16. Sample SD = √87.16 ≈ 9.34.

Interpretation: the 10 scores have mean 82.6 with sample SD about 9.3. Most scores fall within ±9 of the mean (i.e., between 73 and 92).
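The arithmetic above can be replayed with the statistics module; every value matches the worked example:

```python
import statistics as st

scores = [65, 72, 78, 80, 81, 85, 88, 90, 92, 95]

print(sum(scores))                     # 826
print(st.mean(scores))                 # 82.6
print(st.median(scores))               # 83.0
print(round(st.pvariance(scores), 2))  # 78.44
print(round(st.pstdev(scores), 2))     # 8.86  (population SD)
print(round(st.variance(scores), 2))   # 87.16
print(round(st.stdev(scores), 2))      # 9.34  (sample SD)
```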

The empirical rule (68-95-99.7)

For data that follows a normal distribution (bell curve):

  • 68% of values fall within ±1 SD of the mean.
  • 95% within ±2 SD.
  • 99.7% within ±3 SD.

For exam scores with mean 75 and SD 10:

  • About 68% of students score 65–85.
  • About 95% score 55–95.
  • About 99.7% score 45–105 (which would be capped at 100).
  • A score above 95 is in the top 2.5% (more than 2 SDs above mean).

Real-world data isn't always perfectly normal, but the empirical rule is a useful sanity check. If you have data with mean 100 and SD 5 and you see a value of 200, that's 20 SDs out — almost certainly an error or extreme outlier worth investigating.
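A quick simulation illustrates the rule (a sketch using simulated normal data, not real measurements), along with the z-score check from the outlier example:

```python
import random
import statistics as st

random.seed(42)
# 100,000 simulated draws from a normal distribution: mean 100, SD 5
xs = [random.gauss(100, 5) for _ in range(100_000)]
mu, sd = st.mean(xs), st.pstdev(xs)

def frac_within(k):
    return sum(abs(x - mu) <= k * sd for x in xs) / len(xs)

for k, rule in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"within ±{k} SD: {frac_within(k):.1%} (rule says {rule})")

# the outlier check from the text: data with mean 100 and SD 5, value 200
z = (200 - 100) / 5  # 20 SDs from the mean — almost certainly an error
```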

Why divide by (n − 1) for samples?

This is one of the most-asked statistics questions. The intuition: when you compute the deviations (xᵢ − x̄) against the sample mean — itself an estimate — rather than the true population mean, the sum of squared deviations comes out systematically too small: on average, dividing it by n yields only (n−1)/n of the true variance. Dividing by (n − 1) instead of n exactly cancels that bias.

The proof requires some algebra (showing E[Σ(xᵢ − x̄)²] = (n−1)σ²), but the intuition is enough: when your "centre" is itself estimated from the data, you've used up one "degree of freedom," and the divisor should reflect that.
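The bias is easy to see empirically — repeatedly draw small samples from a population with known variance and compare the two divisors (a simulation sketch):

```python
import random
import statistics as st

random.seed(1)
# known population: normal with SD 5, so true variance = 25
n, trials = 5, 20_000
biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(0, 5) for _ in range(n)]
    xbar = st.mean(xs)
    ss = sum((x - xbar) ** 2 for x in xs)
    biased.append(ss / n)          # divide by n
    unbiased.append(ss / (n - 1))  # Bessel's correction

print(st.mean(biased))    # clusters near (n-1)/n * 25 = 20 — too small
print(st.mean(unbiased))  # clusters near the true 25
```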

Variance vs standard deviation

Variance is SD squared. They convey the same information; the choice between them is mostly aesthetic and statistical convention:

  • Standard deviation: easier to interpret because it's in the same units as the data. Income data in dollars → SD in dollars. Easy to say "scores varied by about 10 points."
  • Variance: easier to use mathematically because it adds nicely (variance of a sum of independent variables = sum of variances). Used internally in t-tests, ANOVA, regression. Has the awkwardness of being in squared units (income variance in dollars²).

Practical advice: report SD in summaries, but be aware that the underlying math (when you read papers) often uses variance.
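The "variance adds nicely" property can be demonstrated with two independent simulated variables (a sketch; note that the SDs themselves do not add):

```python
import random
import statistics as st

random.seed(7)
n = 200_000
a = [random.gauss(0, 3) for _ in range(n)]  # variance 9
b = [random.gauss(0, 4) for _ in range(n)]  # variance 16

total = [x + y for x, y in zip(a, b)]
# variance of the sum ≈ 9 + 16 = 25, so its SD ≈ 5 — not 3 + 4 = 7
print(st.pvariance(total))
```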

Where SD shows up in real work

Quality control

Manufacturing tolerances are SD-driven. A "Six Sigma" process keeps its specification limits six SDs away from the process mean — corresponding to about 3.4 defects per million opportunities once the conventional 1.5-SD long-term drift in the mean is accounted for. Quality engineers monitor both the mean (centering) and SD (spread) of their processes.

Financial risk

Stock returns' SD is called "volatility" in finance. A high-volatility stock has wide daily swings; a low-volatility one moves predictably. Risk-adjusted returns (Sharpe ratio) divide return by SD — the higher the ratio, the better the return per unit of risk.
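A minimal Sharpe-ratio sketch (the return figures are made up for illustration; real calculations subtract a risk-free rate and annualize):

```python
import statistics as st

# hypothetical daily returns — illustrative numbers only
daily_returns = [0.012, -0.004, 0.007, 0.001, -0.009, 0.015, 0.003]

mean_r = st.mean(daily_returns)
vol = st.stdev(daily_returns)  # "volatility" = SD of returns
sharpe = mean_r / vol          # risk-free rate taken as ~0 here for simplicity

print(f"mean {mean_r:.4f}, volatility {vol:.4f}, Sharpe {sharpe:.2f}")
```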

A/B testing

The minimum sample size for an A/B test depends on the SD of your metric and the effect size you want to detect. Conversion-rate tests with low variance need smaller samples; revenue-per-user tests with high variance (a few whales dominate) need much larger samples.
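One common back-of-the-envelope version of this is Lehr's rule — n per group ≈ 16σ²/δ² for 80% power at a 5% significance level. This is an assumption sketched here, not the only sample-size formula, and the metric numbers are made up:

```python
import math

def approx_n_per_group(sd: float, min_effect: float) -> int:
    # Lehr's rule of thumb: n ≈ 16 * sigma^2 / delta^2
    return math.ceil(16 * sd ** 2 / min_effect ** 2)

# made-up numbers: low-variance conversion metric vs high-variance revenue metric
print(approx_n_per_group(sd=0.3, min_effect=0.02))  # a few thousand per group
print(approx_n_per_group(sd=50.0, min_effect=2.0))  # far more per group
```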

Polling and surveys

The "margin of error" you see in poll results is roughly 2 × (SD / √n), where SD is the sample SD. Bigger samples shrink the margin; higher variance in responses widens it. A poll's margin of error is, in effect, a direct readout of the spread inherent in the estimate.
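The rough formula in code (a sketch; the factor 2 stands in for the 1.96 z-value at 95% confidence):

```python
import math

def margin_of_error(sd: float, n: int) -> float:
    # roughly 2 * SD / sqrt(n)
    return 2 * sd / math.sqrt(n)

# a yes/no poll answer has SD sqrt(p * (1 - p)); worst case p = 0.5 gives SD = 0.5
print(round(margin_of_error(0.5, 1_000), 3))  # ≈ 0.032, the familiar "±3 points"
print(round(margin_of_error(0.5, 4_000), 3))  # quadrupling n halves the margin
```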

Education

Standardized test scores are designed with mean 100 (or 500 or 1000 depending on the test) and SD 15 (or 100 or 200). Knowing your SD lets you convert your raw score into a percentile (e.g., one SD above mean = 84th percentile).
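The score-to-percentile conversion can be done with statistics.NormalDist (assuming the mean-100, SD-15 scaling mentioned above):

```python
from statistics import NormalDist

# a test scaled to mean 100, SD 15
test_scale = NormalDist(mu=100, sigma=15)

print(round(test_scale.cdf(115) * 100))  # one SD above the mean → 84th percentile
print(round(test_scale.cdf(130) * 100))  # two SDs above → 98th percentile
```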

Sports analytics

Performance metrics (batting averages, save percentages, on-base percentages) are compared to a league mean ± SD. A player one SD above the mean is "good"; two SDs above is elite; three SDs above is generational.

Common mistakes

  • Using population SD when you should use sample SD. The most common error. If you're working with survey data, study results, or any sample, use sample SD (the (n−1) version). Population SD is only correct when you've measured every single member of a closed population.
  • Confusing SD with standard error. Standard error of the mean is SD divided by √n. SE shrinks as n grows; SD doesn't. SE describes how precisely you've estimated the mean; SD describes the spread of the data itself. Different things, often confused in published research.
  • Forgetting to take the square root. The intermediate step (Σ(xᵢ − x̄)²) is the sum of squared deviations. Divide by n or n−1 to get variance. Square root the variance to get SD. Easy to stop one step too early.
  • Reporting SD without the mean. SD alone is meaningless without context. Always report mean ± SD together (e.g., "scores were 75 ± 10").
  • Assuming normal distribution when data is skewed. The 68-95-99.7 rule only works for roughly bell-curve data. For skewed data (incomes, web latencies, financial returns with fat tails), SD is still computable but the rule of thumb breaks down. Use percentiles or robust statistics instead.
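The SD-vs-SE confusion from the list above shows up clearly in a simulation: as n grows, the SE of the mean shrinks while the SD of the data stays put (simulated data, mean 50 and SD 10 assumed):

```python
import math
import random
import statistics as st

random.seed(3)
for n in (100, 10_000):
    xs = [random.gauss(50, 10) for _ in range(n)]
    sd = st.stdev(xs)       # spread of the data — stays near 10
    se = sd / math.sqrt(n)  # precision of the mean estimate — shrinks with n
    print(f"n={n:>6}  SD={sd:.2f}  SE={se:.3f}")
```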

What the calculator gives you, summarized

  • Sample SD — the headline (research-default formula, divides by n−1).
  • Population SD — when your data is the whole population, divides by n.
  • Variance — SD squared, also computed.
  • Mean, median, sum — the standard "centre" measures.
  • Min and max — the range of your data.
  • n — count of values, important context for any SD interpretation.

One pasted list of numbers, nine statistics. The first stop for any "how spread out is this data?" question.