What percent error actually measures
Percent error answers one question: how far off was a measurement from the value it should have been, expressed as a fraction of that expected value? It shows up in chemistry lab reports, physics homework, engineering tolerance checks, accounting reconciliations, and any field where a measured number gets compared against a reference.
The reference value goes by different names depending on the discipline — accepted, theoretical, expected, true, calibrated, textbook — but they all play the same role. They're the number you're claiming the answer should be. Percent error tells you how badly you missed.
The number it produces is unitless. That matters more than it might sound. A raw error of "0.2" means almost nothing without context — 0.2 what, of what scale? But a percent error of 0.5% reads the same whether you're measuring grams of sodium, meters of cable, or watts of power. That's why percent error appears in every lab manual ever written.
How to use the Percent Error Calculator
The calculator runs in your browser. Nothing is sent anywhere, nothing is logged, no sign-up.
- Enter the actual value — the reference, the textbook answer, the calibrated standard, the value your experiment was supposed to produce.
- Enter the measured value — what your experiment, instrument, or calculation actually gave you.
- Pick a mode. Absolute (the default) gives you a positive number whether you over- or under-measured — this is what most school lab reports want. Signed keeps the plus or minus, telling you the direction of the miss.
- Read the result. The calculator also prints the formula with your specific numbers substituted in, which you can copy-paste straight into a lab writeup.
Both modes use the same magnitude — only the sign changes. So if absolute mode says 5%, signed mode says either +5% or −5%, depending on whether your measurement was high or low.
The formula, and why it has those vertical bars
The standard percent error formula:
Percent error = |measured − actual| / |actual| × 100
The vertical bars are absolute value — they strip the sign off whatever's inside. So |−7| = 7 and |+7| = 7. In school lab reports, the convention is to express error as a positive magnitude regardless of direction, which is why the absolute-value version is the default.
The signed version drops the bars in the numerator:
Signed percent error = (measured − actual) / |actual| × 100
Positive means you over-measured (your number was bigger than it should have been). Negative means you under-measured. Engineers, accountants, and quality-control teams use the signed form because the direction of an error has different operational consequences — over-pouring concrete is a different problem than under-pouring it.
Note that the denominator keeps the absolute-value bars in both forms. That's because dividing by a negative reference would flip the sign of the result for the wrong reason. The denominator is just a scale — it normalizes the error against the size of the reference, and scales don't have signs.
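If you want to reproduce the calculation in code, here's a minimal Python sketch of both modes. The function name, signature, and zero-check are assumptions for illustration, not the calculator's actual source:

```python
def percent_error(measured: float, actual: float, signed: bool = False) -> float:
    """Percent error of a measurement against a reference value.

    Absolute mode (the default) returns a positive magnitude;
    signed mode keeps the direction of the miss.
    """
    if actual == 0:
        # No sensible scale exists; see the actual = 0 section below.
        raise ZeroDivisionError("percent error is undefined when actual = 0")
    error = (measured - actual) / abs(actual) * 100
    return error if signed else abs(error)
```

Note how `abs(actual)` in the denominator mirrors the absolute-value bars the formula keeps in both forms, while the numerator only takes `abs()` in absolute mode.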
A worked example, start to finish
Say you ran an experiment expecting a yield of 100 grams of product, and the scale read 95 grams. Plug it in:
- actual = 100
- measured = 95
- |measured − actual| = |95 − 100| = |−5| = 5
- |actual| = |100| = 100
- 5 / 100 × 100 = 5% error
Five percent. In signed mode, it's −5% — the minus sign tells you the experiment came up short. If you'd over-shot to 105 grams instead, absolute mode would still give 5%, but signed mode would give +5%.
Another one. The accepted value of gravitational acceleration is 9.81 m/s². A pendulum experiment in a high school physics lab measured 9.74 m/s². Percent error: |9.74 − 9.81| / |9.81| × 100 = 0.07 / 9.81 × 100 = 0.71%. Under one percent on a pendulum experiment with a stopwatch — solid work for an introductory lab.
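Both examples check out in a few lines of plain Python arithmetic (no helper assumed):

```python
# Yield example: actual = 100 g, measured = 95 g
print(abs(95 - 100) / abs(100) * 100)   # 5.0   (absolute)
print((95 - 100) / abs(100) * 100)      # -5.0  (signed: came up short)

# Pendulum example: accepted g = 9.81 m/s^2, measured 9.74 m/s^2
print(round(abs(9.74 - 9.81) / abs(9.81) * 100, 2))  # 0.71
```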
What counts as a "good" percent error
There's no universal answer. The acceptable threshold depends entirely on the field, the instrument, and the consequences of being wrong. Aerospace machining and pharmaceutical formulation operate at thresholds that would be ludicrous overkill for a kitchen-renovation tile estimate. Here's a rough lay of the land:
| Domain | Typical acceptable percent error | What drives the threshold |
|---|---|---|
| School chemistry lab | Under 5% is solid; under 10% is acceptable | Student-grade equipment, room for learning |
| Introductory physics lab | Under 1% with precise instruments; up to 5% with stopwatches | Instrument precision |
| Analytical chemistry (research) | Under 1%; often 0.1% for HPLC, GC-MS | Calibrated standards and certified reference materials |
| Pharmaceuticals (potency assay) | 1–2% on active ingredient | Regulatory limits (FDA, EMA, USP) |
| Mechanical engineering (general) | 0.1% to 5% depending on tolerance class | The application — bearing fit vs. structural beam |
| Aerospace machining | 0.001% to 0.01% | Flight safety, fatigue stress |
| Construction / carpentry | 1–5% on materials, 0.1% on critical dimensions | Material waste vs. fit-and-finish |
| Survey / accounting reconciliation | 0.1% on totals, often 0% required for exact-match line items | Regulatory and audit requirements |
If your school lab report shows a percent error of 25%, something probably went wrong with the experimental setup: a unit-conversion error, a contaminated reagent, a misread thermometer. Twenty-five percent isn't "a bit off"; it's a signal that the procedure broke. Look at the procedure before blaming the math.
Percent error versus its cousins
Three formulas get confused with each other constantly. They look similar and they produce similar-looking numbers, but they answer different questions:
- Percent error. There's a known correct value. You measured something. How far off was your measurement, as a fraction of the correct value? Denominator: the accepted value.
- Percent difference. You have two measurements of the same thing and neither is "correct" — they're peers. How far apart are they, as a fraction of their average? Denominator: the average of the two values. Formula: |a − b| / ((a + b)/2) × 100.
- Percent change. Something went from value A to value B over time. By what percent did it change? Denominator: the original value. Formula: (new − old) / old × 100. This one is signed by convention because direction matters (a stock dropping 30% means something different from rising 30%).
Pick the right formula based on what you actually have. A reference standard? Use percent error. Two independent measurements? Percent difference. A before-and-after? Percent change. If you find yourself unsure, ask what the denominator represents — the right answer should make physical sense.
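Seen side by side in code, the only real difference between the three is the denominator. All three helper names below are mine, chosen to match the list above:

```python
def percent_error(measured: float, actual: float) -> float:
    return abs(measured - actual) / abs(actual) * 100  # denominator: accepted value

def percent_difference(a: float, b: float) -> float:
    return abs(a - b) / ((a + b) / 2) * 100            # denominator: average of the two

def percent_change(old: float, new: float) -> float:
    return (new - old) / old * 100                     # denominator: original value (signed)

print(percent_error(95, 100))       # 5.0
print(percent_difference(95, 100))  # 5.128... (~5.13)
print(percent_change(100, 95))      # -5.0
```

Same two numbers, three different answers, because each formula measures against a different reference.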
The edge case: when actual = 0
The formula divides by the actual value. If the actual value is zero, the formula breaks — you can't divide by zero, and there's no sensible scale to express the error against. The calculator will tell you the result is undefined.
If your reference value really is zero (you expected an instrument reading of exactly 0 and got something else), report the absolute error directly instead of trying to express it as a percent. Or use a different reference value — sometimes a "noise floor" or "detection threshold" is a more meaningful denominator than zero.
This isn't a bug in the calculator. It's a known limitation of the percent error metric itself, and it's why scientific reports sometimes use absolute error or root-mean-square error when reference values can legitimately be zero.
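One defensive pattern for this case, sketched in Python (the fallback-to-absolute-error behavior is this article's suggestion, not a universal convention):

```python
def error_report(measured: float, actual: float) -> str:
    """Report percent error, falling back to absolute error when actual = 0."""
    if actual == 0:
        # No percent scale exists against a zero reference.
        return f"absolute error = {measured - actual}"
    return f"percent error = {abs(measured - actual) / abs(actual) * 100:.2f}%"

print(error_report(0.03, 0))   # absolute error = 0.03
print(error_report(95, 100))   # percent error = 5.00%
```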
How precisely to report your answer
Match the precision of your inputs. If your measured value has three significant figures, report percent error to three significant figures. If your instrument reads to two decimal places, report percent error to two decimal places. The calculator shows up to four decimals; trim them to match your actual measurement precision when you write it up.
Reporting more decimal places than your instrument can resolve is one of the most common errors in undergraduate lab reports. A percent error of "4.7283%" implies you measured five-digit precision when in fact your scale only reads to the gram. Just report "4.7%" and move on. The fake precision distracts from the actual finding.
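Python's built-in `round` works on decimal places, not significant figures, so a small helper is handy when trimming. This `round_sig` is an illustrative sketch, not a standard-library function:

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the number's order of magnitude.
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(4.7283, 2))   # 4.7  -- what the scale actually supports
print(round_sig(0.71355, 2))  # 0.71
```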
Related tools
Percent error is one statistic in a larger toolkit. If you're cleaning up an experimental writeup or doing engineering-level analysis, these companions are often useful:
- Standard Deviation Calculator — when you have repeated measurements and want to characterize their spread, not just compare against a reference.
- Average Calculator — for averaging repeated trials before computing percent error, which is standard practice in lab work.
- Rounding Calculator — for cleaning up the percent error to the right number of significant figures before writing it into your report.
- Z-Score Calculator — for expressing how unusual a measurement is in standard-deviation units rather than percent units.
Frequently asked questions
What's the formula?
Absolute (most common): percent error = |measured − actual| / |actual| × 100. The vertical bars mean absolute value, so the result is always positive. Signed: percent error = (measured − actual) / |actual| × 100, which keeps the plus or minus sign so you know the direction of the error.
When should I use absolute mode versus signed mode?
Use absolute for school chemistry and physics lab reports — the convention is to express error as a positive magnitude. Use signed for engineering tolerance reports, financial reconciliations, or any case where the direction of the error has different consequences (a part that's too big has a different fix than one that's too small). The magnitude is the same in both modes; only the sign differs.
Why does the formula divide by the actual value, not the measured one?
Because the percentage needs a reference point, and the actual value is the reference — it's the value you're claiming the answer should be. Percent error answers "how far off was the measurement, as a fraction of what was expected?" Dividing by the measurement instead would answer a different question that's rarely useful.
What's the difference between percent error and percent difference?
Percent error: you have a known correct value, and you're checking how close a measurement came to it. Percent difference: you have two measurements of the same thing where neither is the reference, and you're checking how far apart they are. The denominators differ. Percent error uses the accepted value. Percent difference uses the average of the two measurements.
Can percent error be more than 100%?
Yes. If your measurement is more than double the actual value, percent error exceeds 100%. Example: actual = 10, measured = 30, percent error = 200%. This doesn't happen often in careful measurement but does show up when there's a unit-conversion mistake (centimeters vs. millimeters), a misplaced decimal, or a contamination issue.
Why does the calculator say undefined when actual = 0?
Because the formula divides by the actual value, and dividing by zero is undefined — there's no sensible scale to express the error as a fraction of. If your reference value really is zero, report absolute error (just measured − actual) instead, or pick a non-zero reference. This is a known limitation of the percent error metric itself.
How many decimal places should I report?
Match the precision of your measurement. If you measured to three significant figures, report percent error to three significant figures. If your scale reads to the gram, report to the nearest whole percent; don't add decimals. The calculator shows up to four decimal places, but most of those should be trimmed before the number lands in your report.
What's a "good" percent error?
Field-dependent. School chemistry labs: under 5% is solid, under 10% is acceptable, over 20% suggests a procedural problem. Precise physics instruments: under 1%. Pharmaceutical potency: 1–2%. Aerospace machining: 0.001%. There's no universal threshold — it's set by the field's accepted practice and the cost of being wrong.