Bias

Keywords: bias, metrology

Bias in metrology is the systematic difference between the average measured value and the true (reference) value — a constant offset that affects accuracy (not precision), caused by calibration errors, measurement physics, or systematic instrument offsets.

Bias Assessment

- Reference Standard: Measure a certified reference material (CRM) or NIST-traceable standard — compare the average measurement to the certified value.
- Calculation: $\text{Bias} = \bar{x}_{measured} - x_{reference}$ — positive bias means the gage reads high.
- Significance: Perform a t-test to determine if the bias is statistically significant — small biases may be within noise.
- Correction: Apply a bias correction: $x_{corrected} = x_{measured} - Bias$ — calibration removes systematic bias.
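The assessment steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production gage study: the CRM value, readings, and the t critical value (two-sided, $\alpha = 0.05$, 9 degrees of freedom) are assumed for the example.

```python
import statistics

def assess_bias(measurements, reference_value):
    """Estimate bias against a certified reference value.

    Returns (bias, t_statistic) where a positive bias means the
    gage reads high, and the one-sample t statistic tests whether
    the bias is distinguishable from measurement noise.
    """
    n = len(measurements)
    mean = statistics.mean(measurements)
    bias = mean - reference_value              # Bias = x̄_measured − x_reference
    s = statistics.stdev(measurements)         # sample standard deviation
    t_stat = bias / (s / n ** 0.5)             # one-sample t statistic
    return bias, t_stat

# Illustrative values: 10 readings of a CRM certified at 45.00 nm
readings = [45.12, 45.08, 45.15, 45.10, 45.09,
            45.13, 45.11, 45.07, 45.14, 45.12]
bias, t = assess_bias(readings, 45.00)

# |t| > 2.262 (critical value for df = 9) => bias is significant
significant = abs(t) > 2.262

# Bias correction: subtract the estimated bias from each reading
corrected = [x - bias for x in readings]
```

After correction, the mean of `corrected` equals the certified reference value; only the systematic offset is removed, while the random scatter (precision) is unchanged.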

Why It Matters

- Accuracy: Bias is the primary component of measurement inaccuracy — precision (random scatter, repeatability) and bias (systematic offset) are independent error sources.
- Calibration: Regular calibration corrects for drift in bias — calibration intervals must prevent excessive bias accumulation.
- Tool Matching: Bias differences between tools (CD-SEM #1 vs. #2) cause apparent process variation — matching requires bias alignment.
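Tool matching can be illustrated by measuring the same reference artifact on both tools and removing the relative bias. A minimal sketch, with hypothetical readings of a 45.00 nm CRM on two CD-SEMs:

```python
import statistics

# Hypothetical readings of the same 45.00 nm CRM on two CD-SEMs
tool1 = [45.11, 45.09, 45.12, 45.10, 45.08]
tool2 = [45.21, 45.19, 45.23, 45.20, 45.22]

bias1 = statistics.mean(tool1) - 45.00   # tool #1 systematic offset
bias2 = statistics.mean(tool2) - 45.00   # tool #2 systematic offset
relative_bias = bias2 - bias1            # apparent tool-to-tool shift

# Align tool #2 to tool #1 by subtracting the relative bias
tool2_matched = [x - relative_bias for x in tool2]
```

Without this alignment, the 0.11 nm relative bias between the tools would appear as process variation in any dataset that mixes measurements from both.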

Bias is the systematic error — the constant offset between what the measurement tool reports and the true value, correctable through calibration.
