
Safe Aviation Autonomy with Learning-Enabled Components in the Loop: From Formal Assurances to Trusted Recovery Methods

 

Future autonomous aviation systems, such as cyber-taxis, are expected to complete millions of flights per day. These systems stand to benefit significantly from machine-learning-enabled components for perception, decision-making, and control that outperform their traditional, non-learning-based counterparts. Despite the promise of deploying machine learning (ML) in future aviation systems, today's ML methods remain poor at generalizing to unseen conditions and lack formal safety guarantees. Our goal is to develop safe, trustworthy, and robust ML methods that will usher in a new level of autonomy in the national airspace.

We will apply the concept of redundancy—a hallmark of aviation safety—to algorithmic systems by running multiple simultaneous pipelines that operate on different algorithmic principles and produce independent outputs. Algorithmic redundancy will be particularly important for fault detection, isolation, and recovery (FDIR) in perception systems. Current perception pipelines in autonomous systems largely fall into two categories: deep-learning-based and model-based. Deep-learning methods tend to be faster, richer, and perform better on average, yet they are known to be sensitive to slight parameter variations, and their performance is notoriously difficult to verify over the domain of possible input-output pairs. Conversely, model-based techniques are often slower, rely on hand-tuned features, and perform worse on average than deep-learning models, but they can be analytically characterized and often carry provable mathematical guarantees on performance. We will study architectures in which multiple simultaneous deep-learning and model-based pipelines are continually fused (e.g., by augmenting data-driven processing pipelines with features generated by physical models) and evaluated in a supervisor module, as illustrated in the sketch below.
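As a concrete illustration, the following minimal sketch shows one way such a redundant architecture could be wired together: a deep-learning estimate and a model-based estimate of the same quantity are compared in a supervisor module, which fuses them when they agree and falls back to the analytically verifiable pipeline when they disagree. The class, threshold, and weights here are illustrative assumptions, not components of the project.

```python
import numpy as np

# Sketch of a redundant perception architecture: a deep-learning pipeline
# and a model-based pipeline each estimate the same quantity (e.g., the
# relative pose of a landing marker), and a supervisor module fuses them,
# falling back to the model-based output when the two disagree.
# All names and numbers below are illustrative placeholders.

class Supervisor:
    def __init__(self, disagreement_threshold: float):
        self.threshold = disagreement_threshold

    def fuse(self, dl_estimate: np.ndarray, mb_estimate: np.ndarray,
             dl_weight: float = 0.7) -> np.ndarray:
        """Fuse the two estimates, or fall back to the verifiable one."""
        disagreement = np.linalg.norm(dl_estimate - mb_estimate)
        if disagreement > self.threshold:
            # Pipelines disagree: distrust the hard-to-verify deep-learning
            # output and fall back to the analytically characterized one.
            return mb_estimate
        # Pipelines agree: weighted average favors the (on-average better)
        # deep-learning estimate.
        return dl_weight * dl_estimate + (1.0 - dl_weight) * mb_estimate


if __name__ == "__main__":
    supervisor = Supervisor(disagreement_threshold=0.5)
    dl_out = np.array([1.02, 0.48])   # deep-learning pipeline output
    mb_out = np.array([1.00, 0.50])   # model-based pipeline output
    print(supervisor.fuse(dl_out, mb_out))
```

The key design choice in this sketch is that disagreement is resolved in favor of the pipeline whose behavior can be formally characterized, which is the role the supervisor module plays in the redundancy argument above.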

Environmental disturbances (e.g., winds) can have a detrimental effect on landing accuracy. One way to account for uncertainty in dynamical systems is to consider the closed-loop evolution of the state density and control its evolution. In this case, one needs to abandon the deterministic point of view of the world in favor of a stochastic/probabilistic one. For the simple scenario of a linear, discrete-time stochastic system affected by Gaussian disturbances, the problem of directly controlling the state distribution reduces to the problem of controlling just the mean and the covariance. Covariance steering theory is exact, in the sense that no Monte Carlo simulations are needed to ensure that all state trajectories remain within a given set with high probability. Generalizing this insight, we will devise a framework to address the covariance steering of dynamical systems under sensory imperfections/faults (for both linear and nonlinear dynamical models). We will also use trajectory desensitization ideas to generate nominal trajectories that are insensitive to model variations (due, e.g., to component failures). Finally, we will pursue the use of data-based approaches within the covariance steering framework to develop data-based covariance steering controllers for general stochastic systems that can handle the uncertainties stemming from data-based RL models.
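To make the reduction concrete, the short sketch below propagates the mean and covariance of a linear, discrete-time Gaussian system under an affine state-feedback policy; because the state distribution remains Gaussian in this setting, these two moments fully describe the closed-loop distribution, which is what makes the covariance steering formulation exact. The dynamics matrices, noise level, and gains are illustrative assumptions, not values from the project.

```python
import numpy as np

# Minimal sketch: for x_{k+1} = A x_k + B u_k + w_k with w_k ~ N(0, W)
# and the affine feedback policy u_k = v_k + K_k (x_k - mean_k), the state
# stays Gaussian, so steering the full distribution reduces to steering
# its mean and covariance. All numerical values are illustrative.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator-like dynamics
B = np.array([[0.005],
              [0.1]])
W = 0.01 * np.eye(2)                # process-noise covariance

def propagate(mean, cov, v, K):
    """Propagate mean and covariance one step under the feedback policy."""
    Acl = A + B @ K                 # closed-loop dynamics matrix
    new_mean = A @ mean + B @ v     # feedback term has zero mean
    new_cov = Acl @ cov @ Acl.T + W
    return new_mean, new_cov

mean = np.array([0.0, 1.0])
cov = 0.2 * np.eye(2)
K = np.array([[-0.5, -1.0]])        # example stabilizing feedback gain
v = np.array([0.0])                 # example feedforward input

for _ in range(50):
    mean, cov = propagate(mean, cov, v, K)

print("final mean:", mean)
print("final covariance:\n", cov)
```

In a covariance steering problem, the feedforward inputs and feedback gains at each step would be chosen (e.g., via convex optimization) so that the terminal mean and covariance hit prescribed targets; the recursion above is simply the propagation model that such a design works with.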

This project is part of the NASA University Leadership Initiative on Aviation Autonomy and involves several researchers from Stanford, UC Berkeley, MIT, Georgia Tech, University of New Mexico, Raytheon, Hampton University, and MIT Lincoln Laboratory.
 

Sponsors

This project is funded by NASA.

Selected Publications

External Links

