On the Caching Schemes to Speed Up Program Reduction
Program reduction is a highly practical, widely demanded technique for debugging language tools such as compilers, interpreters, and debuggers. Given a program P that exhibits a property ψ, program reduction conceptually applies various program transformations iteratively, generating a vast number of variants of P by deleting certain tokens, and returns the minimal variant preserving ψ as the result. A program reduction process inevitably generates duplicate variants, and their number can be significant: our study reveals that, on average, 62.3% of the variants generated by HDD, a state-of-the-art program reducer, are duplicates. Checking them against ψ is therefore redundant and unnecessary, wasting time and computation resources. Although simply caching the generated variants would avoid redundant property tests, such a naive method is impractical in the real world due to its significant memory footprint. A memory-efficient caching scheme for program reduction is therefore in great demand. This thesis is the first effort to conduct a systematic, extensive analysis of memory-efficient caching schemes for program reduction. We first propose to compress the generated variants before storing them in the cache, using two well-known general-purpose methods: ZIP compression and SHA hashing. Furthermore, our understanding of the program reduction process motivates a novel, domain-specific, memory- and computation-efficient caching scheme, Refreshable Compact Caching (RCC). Our key insight is two-fold: 1) by leveraging the correlation between each variant and the original program P, we losslessly encode the variant into an equivalent, compact, canonical representation; 2) we periodically remove stale cache entries to minimize the memory footprint over time.
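To make the three caching flavors concrete, the sketch below illustrates one plausible way each kind of cache key could be formed for a token-level variant. This is an illustrative assumption, not the thesis's actual implementation: the function names, the whitespace-joined token serialization, and the index-tuple encoding for RCC are all hypothetical, chosen only to show why a SHA digest is fixed-size, why ZIP is lossless but larger, and why a variant of P can be canonically identified by which token positions of P it keeps.

```python
import hashlib
import zlib

# Hypothetical sketch of duplicate-variant caching in a program reducer.
# A "variant" is modeled as a list of tokens; real reducers serialize
# variants differently.

def sha_key(tokens):
    """SHA-based cache key: a fixed-size (32-byte) digest per variant."""
    return hashlib.sha256(" ".join(tokens).encode()).digest()

def zip_key(tokens):
    """ZIP-style cache entry: lossless, but size grows with the variant."""
    return zlib.compress(" ".join(tokens).encode())

def rcc_key(kept_indices):
    """RCC-style key (illustrative): since every variant is obtained from
    the original program P by deleting tokens, the set of token positions
    of P that the variant keeps determines it uniquely; a sorted tuple of
    those indices is a compact, canonical, lossless encoding."""
    return tuple(sorted(kept_indices))

seen = set()

def is_duplicate(key):
    """Return True if a variant with this key was already tested against ψ,
    recording the key otherwise."""
    if key in seen:
        return True
    seen.add(key)
    return False
```

Under this sketch, the reducer computes a key for each candidate variant and skips the (expensive) property test ψ whenever `is_duplicate` returns True; the schemes differ only in how much memory each stored key costs.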
Our evaluation on 20 real-world C compiler bugs demonstrates that the caching schemes eliminate the 62.3% of queries that are redundant; correspondingly, runtime performance improves notably, by 15.6%. Regarding memory efficiency, all three methods use less memory than the state-of-the-art string-based scheme STR: ZIP and SHA reduce the memory footprint by 73.99% and 99.74%, respectively, compared to STR; more importantly, the highly scalable, domain-specific RCC dominates its peers, outperforming the second-best SHA by 89.0%.
Cite this version of the work
Xueyan Zhang (2023). On the Caching Schemes to Speed Up Program Reduction. UWSpace. http://hdl.handle.net/10012/19028