Showing 3 changed files with 17 additions and 2 deletions.
@@ -0,0 +1 @@
Learning from demonstration (LfD) methods have shown impressive success in solving long-horizon manipulation tasks. However, few of them are designed to be interactive, since physical interaction from humans can exacerbate the problem of covariate shift. Consequently, these policies are often executed open-loop and restarted when they err. To learn imitation policies that support real-time physical human-robot interaction, we draw inspiration from task and motion planning and temporal logic planning to formulate task and motion imitation: continuous motion imitation that satisfies the discrete task constraints implicit in the demonstrations. We introduce algorithms that use either linear temporal logic specifications or priors from large language models to robustify few-shot imitation, handling the out-of-distribution scenarios that are typical during human-robot interactions.
@@ -0,0 +1,5 @@
Robotics researchers have made great strides in learning general manipulation policies, but many challenges remain. In particular, we need policies that generalize across object instances, scene re-arrangements, and viewpoint changes, as well as the ability to learn skills from only a few demonstrations.

In the first part of my talk, I will describe how we can use a representation based on the warping of object shapes to generalize skills across unseen object instances. I will show that this representation enables one-shot learning of object re-arrangement policies with a high success rate on a physical robot. In the second part, I will discuss my previous work at Google Research, Invariant Slot Attention. Inspired by theories of human cognition, I will show that learning to represent objects invariant to their pose and size improves fully unsupervised object discovery.

Finally, I will talk about my interests in future research, especially scaling up pre-training in robotics, autonomous data collection and policy learning, and object-oriented 3D scene representations.