Scan Compression and Test Data Volume Reduction is the DFT methodology that uses on-chip decompressor and compactor hardware to dramatically reduce the amount of test data that must be stored on the ATE (automatic test equipment) and transferred to the chip during manufacturing test. It achieves compression ratios of 100-500x while maintaining fault coverage comparable to full-scan ATPG, and it is essential for keeping test costs manageable as gate counts and scan chain lengths grow with each technology node.
Compression Architecture:
- Decompressor: receives a small number of ATE scan-in channels (typically 4-32) and expands them to fill hundreds or thousands of internal scan chains simultaneously; the decompressor is typically a linear feedback shift register (LFSR) or combinational XOR network that generates pseudo-random patterns seeded by the ATE data, with selective overrides for specified (deterministic) bit positions
- Compressor: collects responses from all internal scan chains and compresses them into a small number of ATE scan-out channels using an XOR-based space compactor; the compactor output is a signature that changes if any scan cell captures an incorrect value, providing near-complete fault observation
- Channel Ratio: the compression ratio approximately equals the number of internal scan chains divided by the number of ATE channels; with 1000 internal chains and 10 ATE channels, the compression ratio is ~100x for scan data volume
- EDT (Embedded Deterministic Test): Mentor Graphics' (now Siemens EDA) EDT, productized as TestKompress, is a widely deployed compression architecture; it uses an LFSR-based decompressor in which a small number of externally supplied "care bits" override the pseudo-random fill to create deterministic test patterns targeting specific faults
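The decompressor/compactor split above can be sketched in a few lines. This is a toy model, not any vendor's architecture: the channel and chain counts, and the random XOR fan-in wiring, are illustrative assumptions. It shows a combinational XOR spreading network expanding a few ATE channels into many chains, and an XOR space compactor in which any single-bit error flips the signature.

```python
import random

CHANNELS = 4   # ATE scan-in channels (illustrative)
CHAINS = 16    # internal scan chains (illustrative)

# Hypothetical spreading network: each internal chain is driven by the
# XOR of a fixed pair of ATE channels (combinational decompressor).
random.seed(1)
FANIN = [random.sample(range(CHANNELS), 2) for _ in range(CHAINS)]

def decompress(channel_bits):
    """Expand one shift cycle of ATE data into all internal chains."""
    return [channel_bits[a] ^ channel_bits[b] for a, b in FANIN]

def compact(chain_bits, outputs=4):
    """XOR space compactor: fold chain responses into few scan-out pins.

    Any single scan cell capturing a wrong value flips exactly one
    output bit, so the signature changes and the fault is observed.
    """
    sig = [0] * outputs
    for i, bit in enumerate(chain_bits):
        sig[i % outputs] ^= bit
    return sig
```

With this wiring, flipping any one chain's response bit changes the compacted signature, which is the "near-complete fault observation" property described above.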
Test Data Volume Challenge:
- Uncompressed Volume: a modern SoC with 100 million gates may have 10-50 million scan flip-flops requiring thousands of test patterns; uncompressed test data can exceed 100 Gbits, requiring excessive ATE memory and test time
- ATE Memory Cost: ATE vector memory is expensive, and upgrading it across hundreds of tester channels is a major capital cost; test data volume directly translates to test cost; compression reduces memory requirements from terabits to gigabits, enabling testing on existing equipment
- Test Time: test time is proportional to (number of patterns × scan chain depth × 1/scan frequency); compression reduces the effective chain depth seen by the ATE by the compression ratio, proportionally reducing test time and associated cost
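The data volume and test time arithmetic above can be made concrete. The numbers below (flop count, pattern count, shift frequency) are illustrative assumptions chosen to match the 1000-chain, 10-channel example given earlier:

```python
# Back-of-the-envelope math; all figures are illustrative assumptions.
flops = 10_000_000    # scan flip-flops on the die
chains = 1000         # internal scan chains
channels = 10         # ATE scan channels
patterns = 20_000     # ATPG patterns
shift_hz = 100e6      # scan shift frequency (100 MHz)

chain_depth = flops // chains          # cells per internal chain
ratio = chains / channels              # ~100x compression ratio

# Shift cycles per pattern equal the deepest chain the ATE must fill:
# chain_depth with compression, flops/channels without.
t_comp = patterns * chain_depth / shift_hz
t_uncomp = patterns * (flops // channels) / shift_hz
print(f"{ratio:.0f}x ratio: shift time {t_uncomp:.0f}s -> {t_comp:.0f}s")
```

Under these assumptions the scan shift time drops from 200 s to 2 s per die, directly tracking the 100x channel ratio.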
Advanced Compression Techniques:
- Adaptive Scan: modifies scan chain architecture to skip don't-care bits during shift, further reducing test time beyond basic compression; chains are partitioned into segments that can be individually enabled or bypassed
- X-Handling: unknown values (X-states) from uninitialized memories, multi-driver bus contention, or analog blocks corrupt the compactor output; X-masking or X-tolerance techniques selectively block X-propagating scan chains from the compactor during affected patterns
- Hierarchical Compression: large SoCs use a two-level compression scheme where each IP block has local compression within a global chip-level compression framework; this modular approach enables independent IP-level test development with efficient chip-level test integration
- Test Point Insertion: controllability and observability test points are inserted at strategic locations in the logic to improve fault detection with fewer patterns; test points are particularly effective for hard-to-detect faults that would otherwise require many additional patterns, reducing the overall pattern count and test data volume
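The X-masking idea can be sketched as gating unknown-producing chains to a known value before the XOR tree. This is a simplified model (a single-output compactor with a per-chain, per-pattern mask supplied as a list); real mask logic is configured per pattern through dedicated control registers.

```python
X = None  # stands for an unknown (X) value, e.g. from an uninitialized RAM

def xor_unknown(a, b):
    """XOR in three-valued logic: any X operand makes the result X."""
    return X if a is X or b is X else a ^ b

def compact_masked(chain_bits, mask):
    """Single-output space compactor with per-chain X-masking.

    mask[i] == True forces chain i to a known 0 before the XOR tree,
    so an X in that chain cannot corrupt the signature (at the cost
    of not observing that chain for this pattern).
    """
    sig = 0
    for bit, masked in zip(chain_bits, mask):
        sig = xor_unknown(sig, 0 if masked else bit)
    return sig
```

Without masking, a single X anywhere makes the whole signature unknown and the pattern's observation is lost; masking sacrifices only the X-producing chain.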
Coverage and Quality:
- Fault Coverage: compressed test sets achieve 97-99%+ stuck-at fault coverage and 85-95% transition delay fault coverage, comparable to uncompressed full-scan test; the small coverage gap arises from encoding limits of the LFSR-based decompressor, whose linear structure cannot realize every combination of specified (care) bits
- Diagnostic Resolution: compressed test responses can be diagnosed to locate failing scan cells and identify defective logic; specialized diagnostic patterns with reduced compression and targeted observation improve the resolution of failure localization
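The coarse first step of compressed-response diagnosis, mapping a failing compactor output back to the chains it observes, can be sketched as follows. The compactor wiring table is a hypothetical example; it simply records which chains feed which XOR output.

```python
# Hypothetical compactor wiring: output j observes chains COMPACT_MAP[j].
COMPACT_MAP = {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}

def candidate_chains(expected_sig, observed_sig):
    """Chains that could explain the mismatching compactor outputs.

    An XOR compactor loses per-chain resolution: a failing output only
    says "one of my chains captured a wrong value". This is why
    follow-up diagnostic patterns with reduced compression or direct
    chain observation are used to narrow the suspect list further.
    """
    suspects = []
    for out, chains in COMPACT_MAP.items():
        if expected_sig[out] != observed_sig[out]:
            suspects.extend(chains)
    return suspects
```

For example, a mismatch only on output 0 implicates chains 0, 2, 4, and 6; the targeted diagnostic patterns mentioned above then isolate the failing cell within that set.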
Scan compression and test data volume reduction is the indispensable DFT technology that keeps manufacturing test economically viable as chip complexity scales. By trading a small amount of silicon area for an orders-of-magnitude reduction in test data bandwidth, its on-chip decompression and compaction hardware enables billions of transistors to be thoroughly tested within practical time and cost constraints.