Simple item record

dc.contributor.author: Gu, Sunsheng
dc.date.accessioned: 2022-01-18 13:46:58 (GMT)
dc.date.available: 2022-01-18 13:46:58 (GMT)
dc.date.issued: 2022-01-18
dc.date.submitted: 2022-01-05
dc.identifier.uri: http://hdl.handle.net/10012/17899
dc.description.abstract: Explainable AI (XAI) methods are frequently applied to obtain qualitative insights about deep models' predictions. However, such insights need to be interpreted by a human observer to be useful. In this thesis, we aim to use explanations directly to make decisions without human observers. We adopt two gradient-based explanation methods, Integrated Gradients (IG) and backprop, for the task of 3D object detection. Then, we propose a set of quantitative measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks. These scores quantify the concentration of attributions within the boundaries of detected objects. We evaluate the effectiveness of XC scores via the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets. The results demonstrate an improvement of more than 100% on both datasets compared to other heuristics, such as random guessing and the number of LiDAR points in the bounding box, raising confidence in XC's potential for application in more use cases. Our results also indicate that computationally expensive XAI methods like IG may not be more valuable when used quantitatively than simpler methods. Moreover, we apply loss terms based on XC and the pixel attribution prior (PAP), another quantitative measure for attributions, to the task of training a 3D object detection model. We show that a performance boost is possible as long as we select the right subset of predictions for which the attribution-based losses are applied.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.relation.uri: the KITTI Vision Benchmark Suite
dc.relation.uri: the Waymo Open Dataset
dc.subject: explainable AI
dc.subject: deep learning
dc.subject: object detection
dc.subject: machine learning
dc.title: XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Electrical and Computer Engineering
uws-etd.degree.discipline: Electrical and Computer Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws-etd.embargo.terms: 0
uws.contributor.advisor: Czarnecki, Krzysztof
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
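
The abstract above describes Explanation Concentration (XC) scores as quantifying how strongly attributions concentrate inside the boundaries of a detected object. The following is a minimal illustrative sketch of one way such a score could be computed, assuming XC is the fraction of total attribution magnitude that falls inside a detection's box on a 2D (e.g., bird's-eye-view) attribution map; the function name xc_score, the toy arrays, and this exact formulation are assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch of an Explanation Concentration (XC) style score.
# Assumption (not from the record): XC = attribution magnitude inside the
# detection's box divided by the total attribution magnitude over the map.
import numpy as np

def xc_score(attributions: np.ndarray, box_mask: np.ndarray, eps: float = 1e-12) -> float:
    """Concentration of attribution magnitude inside a detected object's box.

    attributions : per-cell attribution values (e.g., from backprop or IG)
                   over a 2D grid such as a bird's-eye-view pseudo-image.
    box_mask     : boolean array of the same shape, True inside the detection.
    Returns a value in [0, 1]; higher values mean attributions are more
    concentrated inside the box, which the thesis uses as a signal for
    separating true positive from false positive detections.
    """
    magnitude = np.abs(attributions)
    inside = magnitude[box_mask].sum()
    total = magnitude.sum()
    return float(inside / (total + eps))

# Toy usage: a 4x4 attribution map with a 2x2 detected box in the top-left corner.
attr = np.array([[0.9, 0.8, 0.0, 0.1],
                 [0.7, 0.6, 0.1, 0.0],
                 [0.0, 0.1, 0.0, 0.0],
                 [0.1, 0.0, 0.0, 0.1]])
mask = np.zeros_like(attr, dtype=bool)
mask[:2, :2] = True
print(f"XC = {xc_score(attr, mask):.3f}")  # most attribution mass lies inside the box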

