update felix's talk detail
YzyLmc committed Feb 27, 2024
1 parent c998deb commit e834af3
Showing 3 changed files with 3 additions and 2 deletions.
2 changes: 1 addition & 1 deletion _speakers/3_felixyanweiwang.md
Original file line number Diff line number Diff line change
@@ -21,4 +21,4 @@ img: felixyanweiwang.png

<!-- Whatever you write below will show up as the speaker's bio -->

__TBD__
Felix Yanwei Wang is a fifth-year PhD candidate at MIT EECS, advised by Prof. Julie Shah, and a Work of the Future Fellow in Generative AI at MIT. He has also spent time working with Prof. Dieter Fox at the NVIDIA Robotics Lab. Before his PhD, he earned a master's degree in robotics at Northwestern University, working with Prof. Todd Murphey and Prof. Mitra Hartmann. His research goal is to design learning-from-demonstration (LfD) methods that are inherently interactive, i.e., amenable to real-time human feedback while remaining close to the demonstration manifold in some measure, so that humans can easily modify a pre-trained policy without reteaching the robot for new tasks.
1 change: 1 addition & 0 deletions assets/abstracts/felixwang.txt
@@ -0,0 +1 @@
Learning from demonstration (LfD) methods have shown impressive success in solving long-horizon manipulation tasks. However, few of them are designed to be interactive, as physical interaction from humans can exacerbate the problem of covariate shift. Consequently, these policies are often executed open-loop and restarted when they err. To learn imitation policies that support real-time physical human-robot interaction, we draw inspiration from task and motion planning and temporal logic planning to formulate task and motion imitation: continuous motion imitation that satisfies discrete task constraints implicit in the demonstrations. We introduce algorithms that use either linear temporal logic specifications or priors from large language models to robustify few-shot imitation, handling out-of-distribution scenarios that are typical during human-robot interactions.
2 changes: 1 addition & 1 deletion index.md
@@ -38,7 +38,7 @@ Brown Robotics Talks consists of BigAI talks and lab talks ([CIT](https://www.go
</tr>
<tr>
<td>03/01</td>
<td><b>TBD</b></td>
<td><b>Towards Interactive Task and Motion Imitation</b> [<a href='assets/abstracts/felixwang.txt' target="_blank">abstract</a>]</td>
<td>Felix Yanwei Wang</td>
</tr>
<tr>
