BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.4//
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:STANDARD
DTSTART:20191103T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20200308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.377796.field_event_date.0@www.wright.edu
DTSTAMP:20260220T000916Z
CREATED:20191113T170606Z
DESCRIPTION:Ph.D. Committee: Drs. Derek Doran (advisor)\, Pascal Hitzler (Kansas State University)\, Michael Raymer\, and Tanvi Banerjee\n\nAbstract: Deep Neural Networks (DNNs) are powerful tools that have blossomed into a variety of successful real-life applications. While the performance of DNNs is outstanding\, their opaque nature raises growing concern in the community\, casting doubt on the reliability and trustworthiness of the decisions DNNs make. To relieve such concerns and move towards building reliable deep learning systems\, active research efforts span diverse areas such as model interpretation\, model fairness and bias\, and adversarial attacks and defenses. In this dissertation\, we focus on DNN interpretation\, aiming to open the black box and provide explanations in a human-understandable way. We first conduct a categorized literature review introducing the realm of explainable deep learning. Following the review\, two specific problems are tackled: explanation of Convolutional Neural Networks (CNNs)\, which relates CNN decisions to input concepts\, and interpretability of multi-modal interactions\, where an explainable model is built to solve a task similar to visual question answering. Visualization techniques are leveraged to depict the intermediate hidden states of CNNs\, and attention mechanisms are utilized to build an intrinsically explainable model.
Towards increasing the trustworthiness of DNNs\, a certainty measure for decisions is also proposed as a potential future extension of this study.
DTSTART;TZID=America/New_York:20191120T140000
DTEND;TZID=America/New_York:20191120T160000
LAST-MODIFIED:20191118T125910Z
LOCATION:399 Joshi
SUMMARY:Ph.D. Dissertation Proposal Defense: Towards Interpretable and Reliable Deep Neural Networks by Ning Xie
URL;TYPE=URI:/events/phd-dissertation-proposal-defense-towards-interpretable-reliable-deep-neural-networks-ning
END:VEVENT
END:VCALENDAR