Research
Decision Theory and Motion Planning
Motion planning and decision making are at the core of Robotics and A.I. In theory, these problems can be solved using optimal control or dynamic programming. However, the computational cost of solving many real-world problems is prohibitively high and is exacerbated by the “curse of dimensionality”. Randomized sampling-based methods (such as RRT*) ameliorate this problem by avoiding an a priori gridding of the environment and instead incrementally building a graph online. On the other hand, deterministic search algorithms (such as A*) can be augmented with intelligent abstractions to speed up their performance, and decision theory can borrow ideas from information theory to model agents that are resource-aware. Current research in this area lies at the intersection of A.I., machine learning, optimal control, and information theory.
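To illustrate the incremental graph-building idea behind sampling-based planners, the following is a minimal sketch of a basic RRT in 2D. It is not the RRT* algorithm studied at the DCSL (it omits rewiring and cost-to-come optimization), and all names, parameters, and the example obstacle are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Grow a rapidly-exploring random tree from start toward goal.

    start, goal: (x, y) tuples; is_free: predicate that rejects points
    inside obstacles; bounds: ((xmin, xmax), (ymin, ymax)) sampling box.
    Returns a list of waypoints from start to goal, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}              # tree stored as a parent map
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(max_iters):
        # Sample a random point, with a small goal bias to speed convergence.
        q = goal if rng.random() < 0.05 else (
            rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # Extend from the nearest existing node one step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, q))
        d = math.dist(near, q)
        if d == 0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):
            continue                    # reject points inside obstacles
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Reconstruct the path by walking parents back to the root.
            path, n = [goal], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

# Example: plan around a circular obstacle of radius 1.5 centered at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free, ((0.0, 10.0), (0.0, 10.0)))
```

Note how no grid is ever constructed: the tree only ever contains states that were actually sampled and found collision-free, which is what keeps the approach tractable in high-dimensional spaces.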
The following is a list of current and prior research projects at the DCSL. Click on each one to see more details.
Current Projects
- LEAD: Learning-Enabled Assistive Driving: Formal Assurances during Operation and Training
- AstroSLAM: A Robust and Reliable Visual Localization and Pose Estimation for Space Robots in Orbit
- Spacecraft-Mounted Robotics
- Safe Aviation Autonomy with Learning-Enabled Components in the Loop: From Formal Assurances to Trusted Recovery Methods
- Advanced Planning for Autonomous Vehicles Using Reinforcement Learning and Deep Inverse Reinforcement Learning
- Incremental Sampling-Based Algorithms and Stochastic Optimal Control on Random Graphs
- Decision-Making for Autonomous Systems with Limited Resources
- ARCHES: Autonomous Resilient Cognitive Heterogeneous Swarms
Selected Previous Projects
- Optimal Strategies for Uncertain Differential Games with Applications
- Bounded-Rational Decision-Making Hierarchical Models for Autonomous Agents
- RAIDER: Resilient Actionable Intelligence for Distributed Environment Understanding and Reasoning
- Virtual Driver Path Planning/Following Using Reinforcement Learning and Learning from Demonstration
- Adaptive Intelligence for Cyber-Physical Automotive Active Safety - System Design and Evaluation
- Autonomous Multi-Spectral Relative Navigation, Active Localization, and Motion Planning in the Vicinity of an Asteroid
- Learning Optimal Control using Forward-Backward SDEs
- Stochastic Optimal Control for Powered Descent Guidance
- Statistical Mechanics for Learning Algorithmic-Based Controllers: The Role of Physics in New Computational Models for Real-Time Control
- Information-Theoretic Trajectory Optimization for Motion Planning and Control with Applications to Space Proximity Operations
- A Framework for Bounded Rationality Autonomy Using Neuromorphic Decision and Action Models
- Advanced Driver Assistance and Active Safety Systems through Driver’s Controllability Augmentation and Adaptation
- Environment-Agent Interaction in Autonomous Networked Teams with Applications to Minimum-Time Coordinated Control of Multi-Agent Systems
- NASCAR: Neuro-inspired Adaptive Sensing and Control for Agile Response
- Autonomous, Vision Based Satellite Proximity Operations for Inspection, Health Monitoring and Surveillance in Orbit
- Motion Coordination and Adaptation Using Deception and Human Interactions
- Advanced Methods for Intelligent Flight Guidance and Planning in Support of Pilot Decision Making
- Control of High-Speed Autonomous Wheeled Vehicles
- Multiresolution Path Planning for Autonomous Agents
- Optimal and Nonlinear Control using Multiscale Methods
- Next Generation Active Safety Control Systems for Crash-Avoidance of Passenger Vehicles Using Expert Driver Knowledge
- Experimental Validation of Spacecraft Attitude Control Laws
- Vehicle Dynamics and Control During Abnormal Driving Conditions and Loss-of-Control Recovery
- Zero and Low-Bias Control of Active Magnetic Bearings
- Coordinated Attitude Control and Energy Storage On-Board Spacecraft
- Multi-Agent Optimization with Applications to Satellite Servicing in LEO
- Design and Construction of Autopilot for Unmanned Aerial Vehicles
- Robust and Optimal Control of Nonlinear Mechanical Systems with Rotating Components
- Advanced Control Techniques for Energy Storage Flywheel Magnetic Bearings
- Nonsmooth Feedback Control of Nonholonomic Systems with Applications to Mobile Robots
- High Performance Satellite Pointing Algorithm Development and Testing
- Flight Algorithms for Nocturnal Atmospheric Wind Energy Extraction
- Rapid Reconnaissance and Response (R3) Mission