Show simple item record

dc.contributor.author: Liang, Qian
dc.date.accessioned: 2021-09-21 15:09:14 (GMT)
dc.date.available: 2021-09-21 15:09:14 (GMT)
dc.date.issued: 2021-09-21
dc.date.submitted: 2021-09-03
dc.identifier.uri: http://hdl.handle.net/10012/17458
dc.description.abstract: Unit testing is a widely used tool in modern software development processes. A well-known issue in writing tests is handling dependencies: creating usable objects for dependencies is often complicated, so developers must often introduce mock objects to stand in for dependencies during testing. Test suites are an increasingly important component of the source code of a software system. We believe that static analysis of test suites, alongside the systems under test, can help developers better characterize the behaviours of existing test suites, thus guiding further test suite analysis and manipulation. However, because mock objects are created using reflection, they confound existing static analysis techniques: at present, it is impossible to statically distinguish methods invoked on mock objects from methods invoked on real objects. Static analysis tools therefore cannot currently determine which dependencies' methods are actually tested, as opposed to mock methods merely being called. In this thesis, we introduce MockDetector, a technique to identify mock objects and track method invocations on them. We first built a Soot-based imperative dataflow analysis implementation of MockDetector. Then, to quickly prototype new analysis features and to explore declarative program analysis, we created a Doop-based declarative analysis, added features to it, and ported them back to the Soot-based analysis. Both analyses handle common Java mock libraries' APIs for creating mock objects and propagate mock-object information through test cases. Following our observations of tests in the wild, we also added special-case support for arrays and collections holding mock objects.
On our suite of 8 open-source benchmarks, our imperative dataflow analysis reported 2,095 invocations on mock objects intraprocedurally, while our declarative analysis reported 2,130 (under context-insensitive base analyses in intraprocedural mode), out of a total of 63,017 method invocations in test suites; across benchmarks, mock invocations accounted for between 0.086% and 16.4% of all invocations. Removing confounding mock invocations from consideration as focal methods can improve the precision of focal method analysis, a key prerequisite to further analysis of test cases.
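The abstract's claim that reflection-created mocks are invisible to static analysis can be illustrated with a minimal sketch (not code from the thesis): Java mock libraries rely on runtime machinery such as `java.lang.reflect.Proxy` or bytecode generation, so the mock's concrete class never appears in the source. The `Database` interface and `makeMock` helper below are hypothetical names chosen for illustration.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class MockSketch {
    // Hypothetical dependency interface a test might want to mock.
    interface Database {
        String lookup(String key);
    }

    // Roughly what a mock library does under the hood: the concrete
    // class of the returned object is synthesized at runtime, so a
    // static analysis sees only a call through the Database interface,
    // indistinguishable from a call on a real database object.
    static Database makeMock() {
        InvocationHandler handler =
            (proxy, method, args) -> "stubbed-" + method.getName();
        return (Database) Proxy.newProxyInstance(
            Database.class.getClassLoader(),
            new Class<?>[] { Database.class },
            handler);
    }

    public static void main(String[] args) {
        Database mock = makeMock();
        // An invocation on a mock object; a tool like MockDetector
        // must track the dataflow from makeMock() to recognize it.
        System.out.println(mock.lookup("user42"));  // prints "stubbed-lookup"
    }
}
```

At the call site `mock.lookup("user42")`, nothing in the static type information distinguishes the proxy from a real `Database`; only tracking the flow of values from the mock-creation API forward, as MockDetector does, reveals that the invocation exercises a mock rather than the dependency under test.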
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: Unit Tests
dc.subject: Mock Objects
dc.subject: Static Analysis
dc.title: MockDetector: Detecting and tracking mock objects in unit tests
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Electrical and Computer Engineering
uws-etd.degree.discipline: Electrical and Computer Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws-etd.embargo.terms: 0
uws.contributor.advisor: Lam, Patrick
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate

