forked from YzyLmc/brown-lab-talks
Commit
Showing 5 changed files with 13 additions and 4 deletions.
@@ -0,0 +1 @@
A fundamental obstacle to developing autonomous, goal-directed robots lies in grounding high-level knowledge (from sources such as language models, recipe books, and internet videos) in perception, action, and reasoning. Approaches like task and motion planning (TAMP) promise to reduce the complexity of robotic manipulation for complex tasks by integrating higher-level symbolic planning (task planning) with lower-level trajectory planning (motion planning). However, task-level representations must encode a substantial amount of information about the robot's own constraints, mixing logical object-level requirements (e.g., a bottle must be open to be poured) with robot constraints (e.g., a robot's gripper must be empty before it can pick up an object). I propose an additional level of planning that naturally sits above the current TAMP pipeline, which I call object-level planning (OLP). OLP exploits rich, object-level knowledge to bootstrap task-level planning by generating informative plan sketches. I will show how object-level plan sketches can initialize TAMP processes via bootstrapping, in which PDDL domain and problem definitions are created for manipulation planning and execution.
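As a rough illustration of the kind of bootstrapping described above, the sketch below shows how object-level facts from a plan sketch could be emitted as a PDDL problem definition for a downstream task planner. This is a minimal, hypothetical example, not taken from the talk: the `kitchen` domain name, the predicates, and the `sketch_to_pddl_problem` helper are invented placeholders.

```python
# Hypothetical sketch: compile object-level facts into a PDDL problem string
# that a downstream TAMP/task planner could consume. All names are invented.

def sketch_to_pddl_problem(name, objects, init_facts, goal_facts):
    """Emit a PDDL problem definition from object-level facts."""
    objs = " ".join(objects)
    init = "\n    ".join(f"({fact})" for fact in init_facts)
    goal = " ".join(f"({fact})" for fact in goal_facts)
    return (
        f"(define (problem {name})\n"
        f"  (:domain kitchen)\n"
        f"  (:objects {objs})\n"
        f"  (:init\n    {init})\n"
        f"  (:goal (and {goal})))\n"
    )


if __name__ == "__main__":
    # Object-level facts mirror the abstract's example: the bottle starts
    # closed and must be opened before pouring. Robot-specific constraints
    # (empty gripper, reachability) are left to the TAMP layer.
    print(sketch_to_pddl_problem(
        name="pour-drink",
        objects=["bottle", "cup"],
        init_facts=["closed bottle", "empty cup"],
        goal_facts=["filled cup"],
    ))
```

A full OLP-to-TAMP bridge would presumably also emit a domain definition carrying the robot-level preconditions; this sketch only shows the problem side.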
@@ -0,0 +1,3 @@
Robots must behave safely and reliably if we are to confidently deploy them in the real world around humans. To complete tasks, robots must manage a complex, interconnected autonomy stack of perception, planning, and control modules. While machine learning has unlocked the potential for holistic, full-stack control in the real world, these methods can be catastrophically unreliable. In contrast, model-based safety-critical control provides rigorous guarantees, but struggles to scale to real systems, where common assumptions on the stack, e.g., perfect task specification and perception, break down.

In this talk, I will argue that we need not choose between real-world utility and safety: by taking a full-stack approach to safety-critical control that leverages learned components where they can be trusted, we can build practical yet rigorous algorithms that can make real robots more reliable. I will first discuss how to make task specification easier and safer by learning hard constraints from human task demonstrations, and how we can plan safely with these learned specifications despite uncertainty. Then, given a task specification, I will discuss how we can reliably leverage learned dynamics and perception for planning and control by estimating where these learned models are accurate, enabling probabilistic guarantees for full-stack vision-based control. Finally, I will provide perspectives on open challenges and future opportunities, including robust perception-based hybrid control algorithms for reliable robotic manipulation and human-robot collaboration.
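As a loose illustration of the general idea of estimating where a learned model is accurate, the sketch below bounds a learned dynamics model's one-step prediction error using a held-out calibration set (a split conformal-style quantile). It is a generic example under stated assumptions, not the speaker's algorithm; `model`, the toy linear system, and all names are placeholders.

```python
# Generic sketch: estimate a high-confidence bound on a learned dynamics
# model's one-step prediction error from held-out calibration data.

import numpy as np


def calibrated_error_bound(model, states, controls, next_states, alpha=0.1):
    """Return an error bound covering ~(1 - alpha) of calibration points."""
    preds = np.array([model(s, u) for s, u in zip(states, controls)])
    errors = np.linalg.norm(preds - next_states, axis=1)
    n = len(errors)
    # Finite-sample-adjusted quantile level for a (1 - alpha) guarantee.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(errors, level))


if __name__ == "__main__":
    # Toy linear system with a slightly wrong "learned" model.
    rng = np.random.default_rng(0)
    A_true = np.array([[1.0, 0.10], [0.0, 1.0]])
    A_hat = np.array([[1.0, 0.12], [0.0, 1.0]])  # imperfect learned dynamics
    B = np.array([[0.0], [0.1]])

    states = rng.normal(size=(200, 2))
    controls = rng.normal(size=(200, 1))
    next_states = states @ A_true.T + controls @ B.T

    def model(s, u):
        return A_hat @ s + B @ u

    print("90% one-step error bound:",
          calibrated_error_bound(model, states, controls, next_states))
```

A planner could then treat the returned bound as a disturbance radius when checking reachability or constraint satisfaction under the learned model.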
@@ -0,0 +1 @@
Our group specialises in developing machine learning algorithms for autonomous systems control, with a particular focus on deep reinforcement learning and multi-agent reinforcement learning. Our work centres on problems of optimal decision making, prediction, and coordination in multi-agent systems. In this talk, I will give an overview of our research agenda along with some recently published papers in these areas, including our ongoing R&D work with Dematic to develop multi-agent RL solutions for large-scale multi-robot warehouse applications. I will also present some of our research done at UK-based self-driving company Five AI (acquired by Bosch in 2022) on robust and interpretable motion planning and prediction for autonomous driving.
Binary file not shown.