
Commit

update 3/15 meeting
YzyLmc committed Mar 17, 2024
1 parent a1cf498 commit 4cfb2bf
Showing 3 changed files with 17 additions and 2 deletions.
13 changes: 11 additions & 2 deletions pasttalks.md
@@ -28,11 +28,20 @@ permalink: /pasttalks/
</tr>
<tr>
<td><b>[internal] <a href="https://www.panlishuo.com/" target="_blank">Lishuo</a>, <a href="https://taodav.cc/" target="_blank">David</a>, <a href="https://sparr.io/" target="_blank">Shane</a>, <a href="https://cs.brown.edu/people/grad/xhe71/" target="_blank">Ivy</a>, <a href="https://www.linkedin.com/in/lakshita-dodeja-15399321b/" target="_blank">Lakshita</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=770f2727-f138-4f05-b058-b1180133cd30' target='_blank'>recording</a>]</td><td></td>

</tr>
<tr>
<td><b>[internal] <a href='https://benedictquartey.com/home-page' target="_blank">Benedict Quartey</a>, <a href='https://thao-nguyen-ai.github.io/' target="_blank">Thao Nguyen</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=e51d18f6-0bee-4acf-84ef-b11f012ec1fa' target='_blank'>recording</a>][slides: <a href='pdf/LIMP_Talk_Benedict.pdf' target="_blank">1</a> <a href="https://docs.google.com/presentation/d/1Mfv624cONP7E16hLCEBHJXxadQCO6m7-ziCPur-Fmjw/edit?usp=sharing" target="_blank">2</a>]</td><td></td>

</tr>
<tr>
<td><b>Towards Interactive Task and Motion Imitation</b> [<a href='abstracts/felixwang.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=08651f60-fdba-4160-952e-b1260133f8d1' target='_blank'>recording</a>]</td>
<td>Felix Yanwei Wang</td>
</tr>
<tr>
<td><b>Towards Composable Scene Representations in Robotics and Vision</b> [<a href='abstracts/ondrejbiza.txt' target="_blank">abstract</a>][<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=b0a1556d-bb90-4fca-a239-b12d0130a9bb' target='_blank'>recording</a>]</td>
<td>Ondrej Biza</td>
</tr>
<tr>
<td><b>[internal] <a href="https://yzylmc.github.io/" target="_blank">Ziyi</a>, <a href="https://cs.brown.edu/people/grad/bhedegaa/" target="_blank">Benned</a>, <a href="https://benjaminaspiegel.com/" target="_blank">Ben</a>, <a href="https://arjun-prakash.github.io/" target="_blank">Arjun</a>, <a href="https://saulbatman.github.io/" target="_blank">Mingxi</a></b> [<a href='https://brown.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=12df9810-de8f-4a71-9fe9-b1340121ea76' target='_blank'>recording</a>][slides: <a href="https://docs.google.com/presentation/d/17o0kTTD0Fr9F7g_PQ_E2FNKCUdqcbreB39IQXoFTxv8/edit?usp=sharing" target="_blank">1</a> <a href="https://drive.google.com/file/d/1wYVRpC5O9ZBeqz_EB1mhaK6waUyKWIGk/view?usp=sharing" target="_blank">2</a> <a href="https://drive.google.com/file/d/1cVG4jKJ9fxiLVHNbincsx9Y1Du6S7WA_/view?usp=sharing" target="_blank">3</a> <a href="https://docs.google.com/presentation/d/1rvFEgDjE6V-wnYNMVAKXf3sYZdaXn2kAwwLREaZODfg/edit?usp=sharing" target="_blank">4</a> <a href="https://docs.google.com/presentation/d/1YUL8QKFx9XB4dv6EXQRU-zxy7CzXIoBqXYVbN8oBD8k/edit?usp=sharing" target="_blank">5</a>]</td><td></td>
</tr>
</tbody>
</table>
1 change: 1 addition & 0 deletions pasttalks/abstracts/felixwang.txt
@@ -0,0 +1 @@
Learning from demonstration (LfD) methods have shown impressive success in solving long-horizon manipulation tasks. However, few of them are designed to be interactive, as physical interactions from humans might exacerbate the problem of covariate shift. Consequently, these policies are often executed open-loop and restarted when they err. To learn imitation policies that support real-time physical human-robot interaction, we draw inspiration from task and motion planning and temporal logic planning to formulate task and motion imitation: continuous motion imitation that satisfies discrete task constraints implicit in the demonstrations. We introduce algorithms that use either linear temporal logic specifications or priors from large language models to robustify few-shot imitation, handling out-of-distribution scenarios that are typical during human-robot interactions.
5 changes: 5 additions & 0 deletions pasttalks/abstracts/ondrejbiza.txt
@@ -0,0 +1,5 @@
Robotics researchers have made great strides in learning general manipulation policies, but many challenges remain. In particular, we need policies that generalize across object instances, scene re-arrangements, and viewpoint changes, as well as the ability to learn skills from a few demonstrations.

In the first part of my talk, I will describe how we can use a representation based on the warping of object shapes to generalize skills across unseen object instances. I will show that this representation enables one-shot learning of object re-arrangement policies with a high success rate on a physical robot. In the second part, I will discuss my previous work at Google Research called Invariant Slot Attention. Inspired by theories of human cognition, I will show that learning to represent objects invariant to their pose and size improves fully unsupervised object discovery.

Finally, I will talk about my interests for future research, especially scaling up pre-training in robotics, autonomous data collection and policy learning, and object-oriented 3D scene representations.
