Commit
update 2022 talks
YzyLmc committed Jan 3, 2024
1 parent 65be9ac commit c11ee72
Showing 26 changed files with 150 additions and 0 deletions.
108 changes: 108 additions & 0 deletions pasttalks.md
@@ -92,3 +92,111 @@ permalink: /pasttalks/
</tr>
</tbody>
</table>

<h2>2022</h2>
<table>
<thead>
<tr>
<th>Talk</th>
<th>Speaker</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>Handling Distribution Shifts by training RL agents to be adaptive</b> [<a href='abstracts/anuragajay.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1o4zlo8A3SVL1xviRpt9Egnsm-7t4AkKP/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://anuragajay.github.io/' target="_blank">Anurag Ajay</a></td>
</tr>
<tr>
<td><b>Resource Optimization for Learning in Robotics</b> [<a href='abstracts/shivamvats.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1i29PGRUrMAoBEzZxHHUji3YlWkStYZBq/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://shivamvats.com/' target="_blank">Shivam Vats</a></td>
</tr>
<tr>
<td><b>Towards understanding self-supervised representation learning</b> [<a href='abstracts/nikunjsaunshi.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1thkbS-n1qLuYgkEb127gD8wO0KylT7Or/view?usp=share_link' target="_blank">recording</a>]</td>
<td><a href='https://www.nikunjsaunshi.com/' target="_blank">Nikunj Saunshi</a></td>
</tr>
<tr>
<td><b>Integrating Psychophysiological Measurements with Robotics in Dynamic Environments</b> [<a href='abstracts/poojaandcourtney.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1q6Zk58wsbOXMr7tZmU2_ltulIvD80oal/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.linkedin.com/in/pooja-bovard' target="_blank">Pooja Bovard</a> & <a href='https://cs.brown.edu/people/grad/cctse/' target="_blank">Courtney Tse</a></td>
</tr>
<tr>
<td><b>Learning and Memory in General Decision Processes</b> [<a href='abstracts/camallen.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/14Hf9fncRsRjmjtmJUL6dZSRcdCKAmOtv/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://camallen.net/' target="_blank">Cam Allen</a></td>
</tr>
<tr>
<td><b>Creating Versatile Learning Agents Via Lifelong Compositionality</b> [<a href='abstracts/jorgemendez.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1RkszNHiyE-VpVowQ7YODMIc17o07Yhev/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.csail.mit.edu/person/jorge-mendez' target="_blank">Jorge Mendez</a></td>
</tr>
<tr>
<td><b>Learning Scalable Strategies for Swarm Robotic Systems</b> [<a href='abstracts/lishuopan.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1PNpglCIQiuNtRKRIiM4jXHoXZpeOmTXT/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.panlishuo.com/' target="_blank">Lishuo Pan</a></td>
</tr>
<tr>
<td><b>Dynamic probabilistic logic models for effective task-specific abstractions in RL</b> [<a href='abstracts/harshakokel.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/13x6WzRi8nVKsZ4xQY_PZSw2E9JNNtpRc/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://harshakokel.com/' target="_blank">Harsha Kokel</a></td>
</tr>
<tr>
<td><b>Why is this Taking so Dang Long? The Performance Characteristics of Multi-agent Path Finding Algorithms</b> [<a href='abstracts/ericewing.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1-QDX9yhz1pxELbrhHqy1iGDzjDrFkx9G/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://ewinge.me/' target="_blank">Eric Ewing</a></td>
</tr>
<tr>
<td><b>Learning-Augmented Anticipatory Planning: designing capable and trustworthy robots that plan despite missing knowledge</b> [<a href='abstracts/gregorystein.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/15mAt3DFq900IfYodqJ5Oxx0pp7xiEvyZ/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://gjstein.com/' target="_blank">Gregory Stein</a></td>
</tr>
<tr>
<td><b>Towards Lifelong Reinforcement Learning through Zero-Shot Logical Composition</b> [<a href='abstracts/geraudtasse.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1ucIdkQAh9y4eRfSdD_JdgiPPotVYhM0y/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://geraudnt.github.io/' target="_blank">Geraud Nangue Tasse</a></td>
</tr>
<tr>
<td><b>What Is ARiSE and Its Purpose, How Does It Benefit Hampton University, Local Military/Government Agencies, UAS Industries, and the Communities of Hampton Roads</b> [<a href='abstracts/johnpmurray.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1xda8ZtTLCWz1GU24G2HhbhEb5cC_eWNe/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.linkedin.com/in/john-murray-749893292' target="_blank">John P Murray</a></td>
</tr>
<tr>
<td><b>Learning and Using Hierarchical Abstractions for Efficient Taskable Robots</b> [<a href='abstracts/geraudtasse.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1FEXJEr6J8C-hddAObTzET6sBKTU_6ePU/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.namanshah.net/' target="_blank">Naman Shah</a></td>
</tr>
<tr>
<td><b>Representation in Robotics</b> [<a href='abstracts/kaiyuzheng.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/13OO9SUWuNwSKZeoA1fma4hNz64tNo9PE/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://kaiyuzheng.me/' target="_blank">Kaiyu Zheng</a></td>
</tr>
<tr>
<td><b>Toward More Robust Hyperparameter Optimization</b> [<a href='abstracts/afedercooper.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1zayhgJWOvlwqsTYnFrTYkiyFqZQuwCsR/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://afedercooper.info/' target="_blank">A. Feder Cooper</a></td>
</tr>
<tr>
<td><b>Statistical and Computational Issues in Reinforcement Learning (with Linear Function Approximation)</b> [<a href='abstracts/gauravmahajan.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1OSRSyupwbLG-PG-7wHGKUDJyEnZnwnq_/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://gomahajan.github.io/' target="_blank">Gaurav Mahajan</a></td>
</tr>
<tr>
<td><b>On the Expressivity of Markov Reward</b> [<a href='abstracts/daveabel.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1nyOO4HQYVH3To0b-i9AI2abQQjlqdlgh/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://david-abel.github.io/' target="_blank">Dave Abel</a></td>
</tr>
<tr>
<td><b>Robot Skill Learning via Representation Sharing and Reward Conditioning</b> [<a href='abstracts/tuluhanakbulut.txt' target="_blank">abstract</a>] & <b>Shape-Based Transfer of Generic Skills</b> [<a href='abstracts/skyethompson.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1YNw2egsp7ArblMY53kHVp0j1Jt6RjkQN/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://www.linkedin.com/in/mete-tuluhan-akbulut-90b21076/' target="_blank">Tuluhan Akbulut</a> & <a href='https://scholar.google.com/citations?user=KdcjezcAAAAJ&hl=en' target="_blank">Skye Thompson</a></td>
</tr>
<tr>
<td><b>Hardware Architecture for LiDAR Point Cloud Processing in Autonomous Driving</b> [<a href='abstracts/xinminghuang.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1Myh42HheSwGCgfRzi39OtLvm1N1H1CQD/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://users.wpi.edu/~xhuang/' target="_blank">Xinming Huang</a></td>
</tr>
<tr>
<td><b>Working with Spot</b> [<a href='abstracts/max&kaiyu.txt' target="_blank">abstract</a>] & <b>Count based exploration</b> [<a href='abstracts/samlobel.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/10KPiTRUYaBkCxvhZCj_MqBzi1sUjonSm/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://kaiyuzheng.me/' target="_blank">Kaiyu Zheng</a> & <a href='https://www.linkedin.com/in/maxmerlin/' target="_blank">Max Merlin</a> & <a href='https://samlobel.github.io/' target="_blank">Sam Lobel</a></td>
</tr>
<tr>
<td><b>MICo: Improved representations via sampling-based state similarity for Markov decision processes</b> [<a href='abstracts/pablocastro.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1rgotoyfdwTu4GhNDXcxepyXZuzfPtPJM/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://psc-g.github.io/' target="_blank">Pablo Samuel Castro</a></td>
</tr>
<tr>
<td><b>Weak inductive biases for composable primitive representations</b> [<a href='abstracts/wilkacarvalho.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1uoP3n5wRdgzqUhjLH27JT2wtCPKNrskk/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://cogscikid.com/' target="_blank">Wilka Carvalho</a></td>
</tr>
<tr>
<td><b>Mirror Descent Policy Optimization</b> [<a href='abstracts/manantomar.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1umosW9BO5ixMJmJfTa5pbGPmYXeN6FJf/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://manantomar.github.io/' target="_blank">Manan Tomar</a></td>
</tr>
<tr>
<td><b>Joint Task and Motion Planning with the Functional Object-Oriented Network</b> [<a href='abstracts/davidpaulius.txt' target="_blank">abstract</a>][<a href='https://drive.google.com/file/d/1BXKoq5OLQs-RqBfxu1KQQUExx0BkvIj_/view?usp=sharing' target="_blank">recording</a>]</td>
<td><a href='https://davidpaulius.github.io/' target="_blank">David Paulius</a></td>
</tr>
</tbody>
</table>
1 change: 1 addition & 0 deletions pasttalks/abstracts/afedercooper.txt
@@ -0,0 +1 @@
Recent empirical work shows that inconsistent results arising from the choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research. When comparing two algorithms J and K, searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. In this talk, I will discuss work from NeurIPS 2020 in which we provide a theoretical complement to this prior empirical work, arguing that, to avoid such deception, the process of drawing conclusions from HPO should be made more rigorous. We name this process epistemic hyperparameter optimization (EHPO), and put forth a logical framework to capture its semantics and how it can lead to inconsistent conclusions about performance. Our framework enables us to prove that certain EHPO methods are guaranteed to be defended against deception, given a bounded compute-time budget t. I will show how our framework is useful for proving and empirically validating a defended variant of random search, and close with broader takeaways concerning the future of robust HPO research.
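
As a concrete, purely synthetic illustration of the deception described above, the sketch below compares two hypothetical algorithms J and K under two different hyperparameter search subspaces; the accuracy curves are invented for illustration and are not results from the paper.

```python
# Toy illustration of HPO deception: the "validation accuracy" curves below are
# synthetic stand-ins, not results from the paper. Searching a small-learning-rate
# subspace favours J; searching a large-learning-rate subspace favours K.
import numpy as np

def val_acc_J(lr):
    return 0.91 - 0.02 * (np.log10(lr) + 3.0) ** 2   # peaks near lr = 1e-3

def val_acc_K(lr):
    return 0.92 - 0.02 * (np.log10(lr) + 1.0) ** 2   # peaks near lr = 1e-1

def best_over(subspace, acc_fn):
    # Best performance found by exhaustively searching the given subspace.
    return max(acc_fn(lr) for lr in subspace)

subspaces = {
    "small-lr subspace [1e-4, 1e-2]": np.logspace(-4, -2, 20),
    "large-lr subspace [1e-2, 1e0]": np.logspace(-2, 0, 20),
}
for name, lrs in subspaces.items():
    j, k = best_over(lrs, val_acc_J), best_over(lrs, val_acc_K)
    print(f"{name}: best J = {j:.3f}, best K = {k:.3f} -> conclude {'J' if j > k else 'K'} wins")
```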
7 changes: 7 additions & 0 deletions pasttalks/abstracts/anuragajay.txt
@@ -0,0 +1,7 @@
While sequential decision-making algorithms like reinforcement learning have been seeing much broader applicability and success, they are still often deployed under the assumption that the train and test sets of MDPs are drawn IID from the same distribution. But in most real-world production systems, distribution shift is ubiquitous, and any system designed for real-world deployment must be able to handle it robustly. An RL agent deployed in the wild must be robust to data distribution shifts arising from the diversity and dynamism of the real world. In this talk, I will describe two scenarios where such data distribution shifts can occur: (i) offline reinforcement learning and (ii) meta reinforcement learning. In both scenarios, I will discuss how dealing with distribution shift requires careful training of dynamic, adaptive policies that can infer and adapt to varying levels of distribution shift. This allows agents to go beyond the standard requirement of matching train and test distributions and show improvement in scenarios with significant distribution shifts. I will discuss how this framework will allow us to build adaptive and robust simulated robotics systems.

Relevant papers:

(1) Offline RL policies should be trained to be adaptive (ICML 2022)
(2) Distributionally Adaptive Meta RL (NeurIPS 2022)
(3) Is conditional generative modeling all you need for decision making? (FMDM Workshop NeurIPS 2022)
1 change: 1 addition & 0 deletions pasttalks/abstracts/camallen.txt
@@ -0,0 +1 @@
The Markov assumption is pervasive in reinforcement learning. By modeling problems as Markov decision processes, agents act as though they can always observe the complete state of the world. While this assumption is sometimes a useful fiction, in general decision processes, agents must find ways to cope with only partial information. Classical techniques for partial observability typically require access to unobservable or hard-to-acquire information (like the complete set of possible world states, or knowledge of mutually exclusive potential futures). Meanwhile, modern recurrent neural networks, which rely only on observables and simple forms of memory, have proven remarkably effective in practice, but lack a principled theoretical framework for understanding when and what agents should remember. And yet---despite its flaws---the Markov assumption may offer a path towards precisely this type of understanding. We show that estimating the value of the agent's policy both with and without the Markov assumption leads to a value discrepancy in non-Markov environments that appears to reliably indicate when memory is useful. We present initial progress towards a theory of such value discrepancies, and sketch an algorithm for automatically learning memory functions by uncovering and subsequently minimizing those discrepancies. Our approach suggests that agents can make effective decisions in general decision processes as long as they remember whatever information is necessary for them to trust their value function estimates.
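
One way to make the "value discrepancy" above concrete, assuming the two estimates are the Monte Carlo and one-step TD fixed points over observations (an illustrative formalization, not necessarily the authors' exact definition):

```latex
% Illustrative formalization (an assumption, not necessarily the authors' exact
% definition): take the "with / without Markov assumption" estimates to be the
% Monte Carlo and one-step TD value estimates over observations \omega.
\Delta^{\pi}(\omega) \;=\; \bigl|\, V^{\pi}_{\mathrm{MC}}(\omega) - V^{\pi}_{\mathrm{TD}}(\omega) \,\bigr|
% \Delta^{\pi} vanishes everywhere when the process is Markov in \omega, and a
% nonzero value flags observations where memory would change the value estimate.
```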
1 change: 1 addition & 0 deletions pasttalks/abstracts/daveabel.txt
@@ -0,0 +1 @@
Reward is the driving force for reinforcement-learning agents. In this talk, I will present our recent NeurIPS paper that explores the expressivity of reward as a way to capture tasks that we would want an agent to perform. We frame this study around three new abstract notions of “task” that might be of interest: (1) a set of acceptable behaviors, (2) a partial ordering over behaviors, or (3) a partial ordering over trajectories. Our main results prove that while Markov reward can express many of these tasks, there exist instances of each task type that no Markov reward function can capture. We then provide a set of polynomial-time algorithms that construct a Markov reward function that allows an agent to optimize tasks of each of these three types, and correctly determine when no such reward function exists. I conclude by summarizing recent follow-up work that studies alternatives for enriching the expressivity of reward.
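
A minimal instance of the kind of inexpressible task mentioned above (a simplified construction written here for illustration; not necessarily an example used in the paper):

```latex
% Simplified illustration (this specific construction is an assumption, not
% necessarily the paper's example). Single state s, two actions a_1, a_2; the
% task is the set of acceptable behaviors
\Pi_{\mathrm{good}} = \{\pi_{a_1}, \pi_{a_2}\}, \qquad \pi_{a_i}(a_i \mid s) = 1 .
% Any Markov reward making both deterministic policies optimal must set
% R(s, a_1) = R(s, a_2), but then every stochastic mixture of a_1 and a_2 is
% optimal as well, so no Markov reward function isolates exactly \Pi_{\mathrm{good}}.
```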
1 change: 1 addition & 0 deletions pasttalks/abstracts/davidpaulius.txt
@@ -0,0 +1 @@
Following work on joint object-action representations, the functional object-oriented network (FOON) was introduced as a knowledge graph representation for robots. Taking the form of a bipartite graph, a FOON contains symbolic (high-level) concepts pertinent to a robot's understanding of its environment and tasks in a way that mirrors human understanding of actions. However, little work has been done to demonstrate how task plans acquired from FOON can be used for task execution by a robot, as the concepts typically found in a FOON are too abstract for immediate execution. To address this, we incorporate a hierarchical task planning approach to translate a FOON graph into a PDDL-based representation of domain knowledge for manipulation planning. As a result of this process, a task plan can be acquired that a robot can execute from start to end, leveraging the use of action contexts and motion primitives in the form of dynamic movement primitives (DMP). Learned action contexts can then be extended to never-before-seen scenarios.
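
A hypothetical sketch of the FOON-to-PDDL translation step described above; the functional-unit fields and predicate names are illustrative assumptions, not the representation used in the actual system.

```python
# Hypothetical sketch of translating one FOON functional unit into a PDDL-style
# operator. The field names and predicates below are illustrative assumptions,
# not the representation used in the actual work.
from dataclasses import dataclass
from typing import List

@dataclass
class FunctionalUnit:
    motion: str               # manipulation motion, e.g. "pour"
    input_objects: List[str]  # object-state predicates required before the motion
    output_objects: List[str] # object-state predicates holding after the motion

def to_pddl_action(unit: FunctionalUnit) -> str:
    pre = " ".join(f"({p})" for p in unit.input_objects)
    eff = " ".join(f"({p})" for p in unit.output_objects)
    return (f"(:action {unit.motion}\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

pour = FunctionalUnit(
    motion="pour",
    input_objects=["in water cup", "empty bowl"],
    output_objects=["in water bowl", "empty cup"],
)
print(to_pddl_action(pour))  # each symbolic action would then be grounded, e.g. by a DMP
```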
1 change: 1 addition & 0 deletions pasttalks/abstracts/ericewing.txt
@@ -0,0 +1 @@
Multi-agent Path Finding (MAPF) is the problem of finding paths for a set of agents to their goals without collisions between agents. It is an important problem for many multi-robot applications, especially in automated warehouses. MAPF has been well studied, with many algorithms developed to solve MAPF instances optimally. However, the characteristics of these algorithms are poorly understood. No single algorithm dominates the others, and it is hard to determine which algorithm should be used for a given instance, or on which instances algorithms will struggle to find a solution. In this talk, I will present results from two papers that seek to better understand the performance of MAPF algorithms. The first part of the talk will cover our MAPF Algorithm SelecTor (MAPFAST), a deep learning approach to predicting which algorithm will perform best on a given instance. The second part of the talk will cover the role the betweenness centrality of the environment plays in the empirical difficulty of MAPF instances.
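
A small sketch of the betweenness-centrality feature discussed in the second part of the talk; the grid, obstacle layout, and mean/max aggregation are assumptions made for illustration.

```python
# Sketch of a betweenness-centrality feature for a MAPF environment (the grid,
# obstacle layout, and aggregation below are illustrative assumptions).
import networkx as nx

def grid_with_obstacles(width, height, obstacles):
    g = nx.grid_2d_graph(width, height)   # 4-connected grid
    g.remove_nodes_from(obstacles)
    return g

# A mostly blocked column leaves a narrow corridor through (3, 0) and (3, 7),
# concentrating shortest paths on a few cells.
env = grid_with_obstacles(8, 8, obstacles={(3, y) for y in range(1, 7)})
centrality = nx.betweenness_centrality(env)  # fraction of shortest paths through each cell

print("mean betweenness:", sum(centrality.values()) / len(centrality))
print("max betweenness: ", max(centrality.values()))
```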
4 changes: 4 additions & 0 deletions pasttalks/abstracts/gauravmahajan.txt
@@ -0,0 +1,4 @@
What properties of MDPs allow for generalization in reinforcement learning? How does representation learning help with this? Even though we understand the analogous questions in supervised learning, providing a theory that answers these fundamental questions in reinforcement learning is challenging.


In this talk, we will discuss a number of recent works on the statistical and computational views of these questions. We will start from the statistical point of view, where we will see algorithmic ideas for sample-efficient reinforcement learning. Then, we will move to the computational side and give evidence that the computational and statistical views of RL are fundamentally different, by showing a surprising computational-statistical gap in reinforcement learning. Along the way, we will make progress on one of the most fundamental questions in reinforcement learning with linear function approximation: if the optimal value function (Q* or V*) is linear in a given d-dimensional feature mapping, is efficient reinforcement learning possible?
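
For concreteness, the linear-realizability assumption referenced in the closing question can be written as follows (a standard formulation):

```latex
% Linear realizability of the optimal value function: a known feature map \phi
% and an unknown weight vector \theta^{\ast} \in \mathbb{R}^{d} such that
Q^{\ast}(s, a) \;=\; \langle \phi(s, a), \theta^{\ast} \rangle
\quad \text{for all } (s, a), \qquad \phi(s, a) \in \mathbb{R}^{d}.
% The question is whether this assumption alone admits sample- and
% computation-efficient reinforcement learning.
```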
1 change: 1 addition & 0 deletions pasttalks/abstracts/geraudtasse.txt
@@ -0,0 +1 @@
Reinforcement learning has achieved recent success in a number of difficult, high-dimensional environments. However, these methods generally require millions of samples from the environment to learn optimal behaviors, limiting their real-world applicability. Hence, this work aims to create a principled framework in which lifelong agents learn essential skills and can combine them to solve new compositional tasks without further learning. To achieve this, we design useful representations of skills for each task and construct a Boolean algebra over the set of tasks and skills. This enables us to compose learned skills to immediately solve new tasks that are expressible as a logical composition of past tasks. We present theoretical guarantees for our framework and demonstrate its usefulness for lifelong learning in a number of experiments.
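
A simplified sketch of the flavor of zero-shot logical composition (the actual framework composes extended value functions with additional structure; the max/min/negation rules below only illustrate the idea):

```python
# Simplified sketch of zero-shot logical composition of learned value functions
# (OR via element-wise max, AND via min, NOT relative to bounding value functions).
# This only illustrates the flavor on plain NumPy arrays indexed by (state, action);
# the actual framework uses "extended" value functions with more structure.
import numpy as np

def q_or(q1, q2):
    return np.maximum(q1, q2)

def q_and(q1, q2):
    return np.minimum(q1, q2)

def q_not(q, q_max, q_min):
    # Negation relative to the best- and worst-case value functions.
    return (q_max + q_min) - q

# Toy Q-tables for two previously learned tasks over 4 states x 2 actions.
rng = np.random.default_rng(0)
q_task_a = rng.uniform(0.0, 1.0, size=(4, 2))
q_task_b = rng.uniform(0.0, 1.0, size=(4, 2))
q_upper, q_lower = np.ones((4, 2)), np.zeros((4, 2))

# "Solve A but not B" without further learning, then act greedily.
q_new = q_and(q_task_a, q_not(q_task_b, q_upper, q_lower))
print(q_new.argmax(axis=1))
```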
