Trustworthy Artificial Intelligence (Trusted AI)
Abstract: Trustworthy artificial intelligence (Trusted AI) is essential when autonomous, safety-critical systems use learning-enabled components (LECs) in uncertain environments. When reliant on deep learning, these learning-enabled systems (LESs) must address the reliability, interpretability, and robustness (collectively, the assurance) of their learning models. Three types of uncertainty most significantly affect assurance. First, uncertainty about the physical environment can cause suboptimal, and sometimes catastrophic, results as the system struggles to adapt to unanticipated or poorly understood environmental conditions. For example, when lane markings are occluded (whether on the camera lens or on the physical lanes), lane-management functionality can be critically compromised. Second, uncertainty in the cyber environment can create unexpected and adverse consequences, including not only performance impacts (network load, real-time responsiveness, etc.) but also potential threats or overt (cybersecurity) attacks. Third, uncertainty can exist within the components themselves and affect how they interact upon reconfiguration; left unchecked, it may cause unexpected and unwanted feature interactions. While learning-enabled technologies have made great strides in addressing uncertainty, challenges remain in assuring such systems when they encounter uncertainty not addressed in their training data. Furthermore, we need to consider LESs as first-class software-based systems that should be rigorously developed, verified, and maintained, i.e., software engineered. In addition to specific strategies to address these concerns, appropriate software architectures are needed to coordinate LECs and ensure that they deliver acceptable behavior even under uncertain conditions. To this end, this presentation overviews a number of our multi-disciplinary research projects with industrial collaborators, which collectively support a search-based software engineering, model-based approach to address Trusted AI and provide assurance for learning-enabled systems (i.e., SBSE4LES). In addition to sharing lessons learned from more than two decades of research on assurance for (learning-enabled) self-adaptive systems operating under a range of uncertainty, the presentation will outline near-term and longer-term research challenges for assuring LESs.
Bio: Betty H.C. Cheng is a professor in the Department of Computer Science and Engineering at Michigan State University. She has also served as the Industrial Relations Manager and a senior researcher for BEACON, the National Science Foundation Science and Technology Center for the Study of Evolution in Action. Her research interests include self-adaptive autonomous systems, safe use of AI-enabled systems, requirements engineering, model-driven engineering, automated software engineering, and harnessing evolutionary computation and search-based techniques to address software engineering problems. These research areas support the development and maintenance of high-assurance adaptive systems that must continuously deliver acceptable behavior, even in the face of environmental and system uncertainty. Example applications include intelligent transportation and vehicle systems. She collaborates extensively with industrial partners in her research projects to ensure real-world relevance and to facilitate technology exchange between academia and industry; her collaborators have included Ford, General Motors, ZF, Motorola, and Siemens.

Previously, she was awarded a NASA/JPL Faculty Fellowship to investigate the use of new software engineering techniques for a portion of the NASA space shuttle software. She currently has projects in the areas of assured autonomy (systems with machine learning components), model-driven approaches to autonomous systems and digital twins, cybersecurity for automotive systems, and feature interaction detection and mitigation for autonomic systems, all in the context of operating under uncertainty while maintaining assurance objectives. Her research has been funded by several federal funding agencies, including NSF, AFRL, ONR, DARPA, NASA, and ARO, as well as numerous industrial organizations. She serves on the editorial boards of ACM Transactions on Autonomous and Adaptive Systems and Software and Systems Modeling; she has served as Co-Associate Editor-in-Chief and two terms as an Associate Editor for IEEE Transactions on Software Engineering and the Requirements Engineering Journal. She was the Technical Program Co-Chair for the IEEE International Conference on Software Engineering (ICSE-2013), the premier and flagship conference for software engineering.
She received her Bachelor of Science degree from Northwestern University and her MS and PhD degrees from the University of Illinois at Urbana-Champaign, all in computer science.
Please click this URL to start or join: https://iastate.zoom.us/j/96810972944?pwd=SVVLWlY2cVdZYXhxWWg4ZHF1cVdSZz09
Or go to https://iastate.zoom.us/join and enter meeting ID 968 1097 2944 and password 334840.
Join from dial-in phone line:
Dial: +1 309 205 3325 or +1 312 626 6799
Meeting ID: 968 1097 2944
Participant ID: Shown after joining the meeting
International numbers available: https://iastate.zoom.us/u/aqUgrVklM