Update meeting 2/23
YzyLmc committed Feb 25, 2024
1 parent 1a46223 commit c998deb
Showing 5 changed files with 13 additions and 4 deletions.
12 changes: 8 additions & 4 deletions pasttalks.md
@@ -15,19 +15,23 @@ permalink: /pasttalks/
 </thead>
 <tbody>
 <tr>
-<td><b>Toward Full-Stack Reliable Robot Learning for Autonomy and Interaction</b> [<a href='assets/abstracts/glenchou.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=f89a50f8-b208-4c16-9d03-b103012dbd92' target="_blank">recording</a>]</td>
+<td><b>Toward Full-Stack Reliable Robot Learning for Autonomy and Interaction</b> [<a href='abstracts/glenchou.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=f89a50f8-b208-4c16-9d03-b103012dbd92' target="_blank">recording</a>]</td>
 <td>Glen Chou</td>
 </tr>
 <tr>
-<td><b>Deep Reinforcement Learning for Multi-Agent Interaction</b> [<a href='assets/abstracts/stefanoalbrecht.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=a32a4d42-eac8-45a0-8ad3-b10a0135683e' target='_blank'>recording</a>]</td>
+<td><b>Deep Reinforcement Learning for Multi-Agent Interaction</b> [<a href='abstracts/stefanoalbrecht.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=a32a4d42-eac8-45a0-8ad3-b10a0135683e' target='_blank'>recording</a>]</td>
 <td>Stefano V. Albrecht</td>
 </tr>
 <tr>
-<td><b>Object-level Planning: Bridging the Gap between Human Knowledge and Task and Motion Planning</b> [<a href='assets/abstracts/davidpaulius.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=4d4d8cf2-965b-4f60-8c0e-b111013014d8' target='_blank'>recording</a>]</td>
+<td><b>Object-level Planning: Bridging the Gap between Human Knowledge and Task and Motion Planning</b> [<a href='abstracts/davidpaulius.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=4d4d8cf2-965b-4f60-8c0e-b111013014d8' target='_blank'>recording</a>]</td>
 <td>David Paulius</td>
 </tr>
 <tr>
-<td><b>[internal] <a href="https://www.panlishuo.com/" target="_blank">Lishuo</a>, <a href="https://taodav.cc/" target="_blank">David</a>, <a href="https://sparr.io/" target="_blank">Shane</a>, <a href="https://cs.brown.edu/people/grad/xhe71/" target="_blank">Ivy</a>, <a href="https://www.linkedin.com/in/lakshita-dodeja-15399321b/" target="_blank">Lakshita</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=770f2727-f138-4f05-b058-b1180133cd30' target='_blank'>recording</a>]</td>
+<td><b>[internal] <a href="https://www.panlishuo.com/" target="_blank">Lishuo</a>, <a href="https://taodav.cc/" target="_blank">David</a>, <a href="https://sparr.io/" target="_blank">Shane</a>, <a href="https://cs.brown.edu/people/grad/xhe71/" target="_blank">Ivy</a>, <a href="https://www.linkedin.com/in/lakshita-dodeja-15399321b/" target="_blank">Lakshita</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=770f2727-f138-4f05-b058-b1180133cd30' target='_blank'>recording</a>]</td><td></td>
 
 </tr>
+<tr>
+<td><b>[internal] <a href='https://benedictquartey.com/home-page' target="_blank">Benedict Quartey</a>, <a href='https://thao-nguyen-ai.github.io/' target="_blank">Thao Nguyen</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=e51d18f6-0bee-4acf-84ef-b11f012ec1fa' target='_blank'>recording</a>][slides: <a href='pdf/LIMP_Talk_Benedict.pdf' target="_blank">1</a> <a href="https://docs.google.com/presentation/d/1Mfv624cONP7E16hLCEBHJXxadQCO6m7-ziCPur-Fmjw/edit?usp=sharing" target="_blank">2</a>]</td><td></td>
+
+</tr>
 </tbody>
1 change: 1 addition & 0 deletions pasttalks/abstracts/davidpaulius copy.txt
@@ -0,0 +1 @@
A fundamental obstacle to developing autonomous, goal-directed robots lies in grounding high-level knowledge (from various sources like language models, recipe books, and internet videos) in perception, action, and reasoning. Approaches like task and motion planning (TAMP) promise to reduce the complexity of robotic manipulation for complex tasks by integrating higher-level symbolic planning (task planning) with lower-level trajectory planning (motion planning). However, task-level representations must include a substantial amount of information expressing the robot's own constraints, mixing logical object-level requirements (e.g., a bottle must be open to be poured) with robot constraints (e.g., a robot's gripper must be empty before it can pick up an object). I propose an additional level of planning that naturally exists above the current TAMP pipeline, which I call object-level planning (OLP). OLP exploits rich, object-level knowledge to bootstrap task-level planning by generating informative plan sketches. I will show how object-level plan sketches can initialize TAMP processes via bootstrapping, where PDDL domain and problem definitions are created for manipulation planning and execution.
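As a concrete (and purely hypothetical) illustration of the distinction the abstract draws, the short Python sketch below compiles an invented object-level plan sketch into PDDL-style action strings, keeping object-level requirements (a bottle must be open to be poured) separate from robot constraints (the gripper must be empty before a pick) until compilation. This is not the speaker's OLP system; every action and predicate name is made up for the example.

```python
# Toy illustration only, not the speaker's OLP pipeline. All action and
# predicate names below are invented for the example.

# Object-level requirements: facts about the objects themselves.
OBJECT_PRECONDS = {
    "pour": ["(is-open ?obj)"],
}

# Robot constraints: facts about the robot's own state.
ROBOT_PRECONDS = {
    "pick": ["(gripper-empty)"],
    "pour": ["(holding ?obj)"],
}

def step_to_pddl(action: str) -> str:
    """Render one sketch step as a PDDL-style action definition,
    merging both kinds of preconditions for the task planner."""
    preconds = OBJECT_PRECONDS.get(action, []) + ROBOT_PRECONDS.get(action, [])
    precond = f"\n  :precondition (and {' '.join(preconds)})" if preconds else ""
    return f"(:action {action}\n  :parameters (?obj){precond})"

# An object-level plan sketch: which actions touch which object, with no
# robot-specific detail yet. Compiling it yields task-planner input.
for action in ["pick", "open", "pour"]:
    print(step_to_pddl(action))
```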
3 changes: 3 additions & 0 deletions pasttalks/abstracts/glenchou.txt
@@ -0,0 +1,3 @@
Robots must behave safely and reliably if we are to confidently deploy them in the real world around humans. To complete tasks, robots must manage a complex, interconnected autonomy stack of perception, planning, and control modules. While machine learning has unlocked the potential for holistic, full-stack control in the real world, these methods can be catastrophically unreliable. In contrast, model-based safety-critical control provides rigorous guarantees, but struggles to scale to real systems, where common assumptions on the stack, e.g., perfect task specification and perception, break down.

In this talk, I will argue that we need not choose between real-world utility and safety: by taking a full-stack approach to safety-critical control that leverages learned components where they can be trusted, we can build practical yet rigorous algorithms that can make real robots more reliable. I will first discuss how to make task specification easier and safer by learning hard constraints from human task demonstrations, and how we can plan safely with these learned specifications despite uncertainty. Then, given a task specification, I will discuss how we can reliably leverage learned dynamics and perception for planning and control by estimating where these learned models are accurate, enabling probabilistic guarantees for full-stack vision-based control. Finally, I will provide perspectives on open challenges and future opportunities, including robust perception-based hybrid control algorithms for reliable robotic manipulation and human-robot collaboration.
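To build intuition for what "learning hard constraints from human task demonstrations" can mean, here is a toy Python sketch (not the speaker's algorithm; the grid, start, goal, and demonstration are all invented). It rests on one assumption: the demonstrator is near-optimal, so if a demonstrated path pays extra cost, the cells that strictly cheaper routes pass through, yet no demonstration visits, are evidence of a hidden keep-out constraint.

```python
# Toy illustration, not the speaker's method: infer "keep-out" cells from
# a near-optimal demonstration in an invented grid world.

from itertools import product

START, GOAL, GRID = (0, 0), (4, 0), 5

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def on_a_shortest_path(cell):
    # A cell lies on some minimum-cost path iff visiting it adds no detour.
    return manhattan(START, cell) + manhattan(cell, GOAL) == manhattan(START, GOAL)

# One demonstration that detours through row y=1 instead of going straight.
demo = [(0, 0), (0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (4, 0)]
assert len(demo) - 1 > manhattan(START, GOAL)  # demonstrator paid extra cost

# Cells a strictly cheaper route would use, never visited by the demo,
# are candidate hidden constraints.
visited = set(demo)
candidates = [c for c in product(range(GRID), repeat=2)
              if on_a_shortest_path(c) and c not in visited]
print("inferred keep-out cells:", candidates)  # -> [(1, 0), (2, 0), (3, 0)]
```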
1 change: 1 addition & 0 deletions pasttalks/abstracts/stefanoalbrecht.txt
@@ -0,0 +1 @@
Our group specialises in developing machine learning algorithms for autonomous systems control, with a particular focus on deep reinforcement learning and multi-agent reinforcement learning. Our work centres on problems of optimal decision making, prediction, and coordination in multi-agent systems. In this talk, I will give an overview of our research agenda along with some recently published papers in these areas, including our ongoing R&D work with Dematic to develop multi-agent RL solutions for large-scale multi-robot warehouse applications. I will also present some of our research done at the UK-based self-driving company Five AI (acquired by Bosch in 2022) on robust and interpretable motion planning and prediction for autonomous driving.
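For readers new to the setting, the minimal Python sketch below shows independent Q-learning in a repeated two-agent coordination game, a textbook construction rather than anything from the talk or from the group's warehouse work; the "shelf" interpretation is invented. Each agent updates its own value estimates and treats the other agent as part of the environment.

```python
# Generic textbook sketch, not the group's system: independent Q-learning
# in a repeated two-agent coordination game. The "shelf" story is invented.

import random

ACTIONS = [0, 1]                 # e.g., two shelves a warehouse robot could serve
ALPHA, EPS, STEPS = 0.1, 0.2, 5000

def reward(a0, a1):
    # Coordination payoff: reward only when the robots split the work.
    return 1.0 if a0 != a1 else 0.0

random.seed(0)
q = [[0.0, 0.0], [0.0, 0.0]]     # q[agent][action]

for _ in range(STEPS):
    acts = []
    for agent in range(2):
        if random.random() < EPS:                       # epsilon-greedy exploration
            acts.append(random.choice(ACTIONS))
        else:
            acts.append(max(ACTIONS, key=lambda a: q[agent][a]))
    r = reward(*acts)
    for agent in range(2):
        # Each agent updates independently from the shared reward; with no
        # next state in a repeated matrix game, this is the stateless
        # special case of the Q-learning update.
        q[agent][acts[agent]] += ALPHA * (r - q[agent][acts[agent]])

print("agent 0:", q[0])
print("agent 1:", q[1])
```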
Binary file added pasttalks/pdf/LIMP_Talk_Benedict.pdf
Binary file not shown.
