Autonomous cyber-physical and robotic systems are increasingly deployed in complex, safety-critical environments. To perform well in these environments, such systems rely on learning-enabled components, typically neural networks, for critical perception and control functions. Unfortunately, the complex interactions between open environments and deep learning components lead to behaviors that are hard to analyze or predict, posing a major obstacle to safe and trustworthy autonomy. In this context, a key supervisory function is computing a predictive online probability of safety that accounts for the diverse uncertainties.
This talk will present a recent assurance approach that combines formal verification and run-time monitoring for stochastic dynamical systems. The central contribution is a guarantee that the online monitor produces a conservative probability estimate, one that never exceeds the true probability of safety. To this end, we combine Bayesian filtering with probabilistic model checking of Markov decision processes and evaluate the approach in a simulated case study of critical infrastructure.
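To give a flavor of how such a monitor can be structured, the sketch below combines a discrete Bayesian filter with per-state safety values. All numbers, the three-state model, and the `safety_lb` values are illustrative assumptions, not the talk's actual method: the idea is that if each state carries a verified lower bound on its safety probability (e.g., from offline probabilistic model checking), then weighting those bounds by the filtered belief yields a conservative online estimate.

```python
import numpy as np

# Hypothetical 3-state system. Per-state safety lower bounds, assumed
# to come from offline probabilistic model checking (values made up).
safety_lb = np.array([0.99, 0.80, 0.10])

# Illustrative transition and observation models for a discrete
# Bayesian filter (all matrices are invented for this sketch).
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])   # T[i, j] = P(next = j | current = i)
O = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])  # O[i, k] = P(obs = k | state = i)

def filter_step(belief, obs):
    """One discrete Bayes filter update: predict, then correct on obs."""
    predicted = belief @ T                 # propagate through dynamics
    corrected = predicted * O[:, obs]      # weight by observation likelihood
    return corrected / corrected.sum()     # renormalize

def conservative_safety(belief):
    """Weight each state's verified safety lower bound by the belief,
    giving an estimate that never exceeds the true safety probability
    (under the assumption that safety_lb holds state-wise)."""
    return float(belief @ safety_lb)

belief = np.array([1.0, 0.0, 0.0])  # start in the nominal state
for obs in [0, 1, 1, 2]:            # a made-up observation sequence
    belief = filter_step(belief, obs)
    print(round(conservative_safety(belief), 3))
```

The estimate degrades gracefully as observations suggest drift toward the unsafe state, which is the qualitative behavior an online safety monitor should exhibit.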
Bio: Dr. Ivan Ruchkin is an assistant professor in the Department of Electrical and Computer Engineering at the University of Florida, where he leads the Trustworthy Engineered Autonomy (TEA) Lab. His research makes autonomous systems safer and more trustworthy by combining techniques from formal methods and artificial intelligence. Ivan received his Ph.D. in Software Engineering from Carnegie Mellon University and completed his postdoctoral training at the University of Pennsylvania. His contributions have been recognized with multiple Best Paper awards, a Gold Medal in the ACM Student Research Competition, and the Frank Anger Memorial Award for the crossover of ideas between the software engineering and embedded systems communities. More information can be found at https://ivan.ece.ufl.edu.
Date
Location: Troy 2012
Speaker: Ivan Ruchkin, University of Florida