When robots are deployed in the real world, the predictions from a robot's models often deviate from what actually occurs. In this talk, we will explore ways to use models more effectively for planning, regardless of their structure, by using data. First, we will examine model preconditions, which specify the conditions under which a model should be used, and one method for defining them by predicting model deviation. Then, we will review results evaluating how characterizing model deviation can improve planning efficiency, reliability during execution, and data-efficient capability expansion when current models are insufficient. Finally, we will discuss ongoing collaborative research on adapting state representation fidelity to the planning problem. We test these methods on real-world tasks where models are rarely accurate, such as robot plant watering.
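As a rough illustration of the idea (not the speaker's actual method), a model precondition can be read as a predicate over transitions: a learned regressor estimates how far the model's prediction will deviate from reality for a given state-action pair, and the planner only trusts the model when that predicted deviation is below a tolerance. The deviation model, threshold, and action set below are all hypothetical stand-ins.

```python
# Hypothetical sketch: gate use of a dynamics model behind a predicted-
# deviation check. In practice predicted_deviation would be a learned
# regressor; here it is a toy heuristic that assumes the model degrades
# as the commanded action grows in magnitude.

def predicted_deviation(state, action):
    # Stand-in for a learned deviation predictor (illustrative only).
    return 0.1 * abs(action)

def model_precondition_holds(state, action, tolerance=0.5):
    """Return True if the model is trusted for this (state, action)."""
    return predicted_deviation(state, action) <= tolerance

# A planner could filter candidate actions through the precondition,
# falling back to more conservative behavior when none pass.
candidates = [-8.0, -2.0, 1.0, 4.0, 9.0]
usable = [a for a in candidates if model_precondition_holds(0.0, a)]
```

The design choice being illustrated is that the precondition wraps the model rather than replacing it: planning proceeds as usual wherever the model is predicted to be reliable.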
For robots to be effectively deployed in homes and other settings where they interact with novice users, they must empower those users with accessible and understandable control. This control may take the form of a user directly teleoperating the robot, directing the means of collaboration, or appropriately leveraging the robot's autonomy to perform novel tasks. To enable this, our work focuses on the design and evaluation of user-centered algorithms and approaches that give users greater control over an already autonomous and partially capable robot. We consider how to leverage pretraining prior to deployment, user-robot interaction histories, and, in our ongoing work, generalist robot foundation models to empower novices to use a robot in the ways, and for the purposes, they want.
Recent work has demonstrated that a promising strategy for teaching robots a wide range of complex skills is to train them on a curriculum of progressively more challenging environments. However, developing an effective curriculum of environment distributions currently requires significant expertise, which must be repeated for every new domain. Our key insight is that environments are often naturally represented as code. Thus, we probe whether effective environment curriculum design can be achieved and automated via code generation by large language models (LLMs). In this paper, we introduce Eurekaverse, an unsupervised environment design algorithm that uses LLMs to sample progressively more challenging, diverse, and learnable environments for skill training. We validate Eurekaverse's effectiveness in the domain of quadrupedal parkour learning, in which a quadruped robot must traverse a variety of obstacle courses. The automatic curriculum designed by Eurekaverse enables gradual learning of complex parkour skills in simulation and transfers successfully to the real world, outperforming manual training courses designed by humans.
Enabling decision-making agents, such as robots and AI systems, to operate effectively in complex open-world environments poses significant challenges. This talk presents an approach that integrates structured learning into planning and world modeling to improve data efficiency and generalization. By incorporating structure into end-to-end learning, agents can jointly learn representations and plan actions, allowing them to build world models on the fly and adapt to new situations. I explore two primary paradigms of world representation: lossless abstractions, which retain full environmental complexity through methods such as symmetric and compositional representations; and lossy abstractions, such as symbolic abstraction, which simplify planning for computational efficiency but require grounding abstract plans for real-world execution. By combining these structured learning approaches, I aim to overcome the limitations of traditional planning methods and end-to-end learning, leading to more scalable and adaptable decision-making agents in complex environments.
Efforts to understand our world drive the development of autonomous systems capable of effectively planning exploration tasks across applications such as scientific sampling, environmental monitoring, surveillance, and search and rescue. When deployed in real-world settings, these robotic systems face dynamic changes and environmental uncertainties that dramatically increase decision complexity, making planning challenging. By leveraging the geometric properties of target areas, planning can be reframed as a combinatorial optimization problem, reducing complexity and enabling tasks to be broken down into manageable subproblems. This talk presents a hierarchical approach to creating robust exploration and coverage plans, focusing on (i) generating global trajectory plans and (ii) adapting these trajectories in response to dynamic changes. We will discuss single- and multi-robot coverage strategies that account for obstacles, environmental features, and sensor-specific data-collection optimization, along with approaches for incorporating uncertainty into these plans. Real-world applications, such as automated scientific sampling in marine environments, will demonstrate the feasibility and impact of these techniques.
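To give a flavor of the combinatorial reframing (a generic illustration, not the speaker's algorithm): once target areas are reduced to a finite set of region centroids, global coverage planning becomes a tour-ordering problem over those points. The nearest-neighbor heuristic and the coordinates below are assumptions for the sketch; real systems would use stronger solvers and replan as conditions change.

```python
# Illustrative sketch: order a finite set of region centroids into a
# visiting tour with a greedy nearest-neighbor heuristic, a simple
# stand-in for the combinatorial optimization step.
import math

def nearest_neighbor_tour(start, regions):
    """Greedy visiting order over region centroids given as (x, y) pairs."""
    tour, current = [], start
    remaining = list(regions)
    while remaining:
        # Always move to the closest unvisited region next.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

regions = [(4.0, 0.0), (1.0, 1.0), (2.0, 3.0)]
tour = nearest_neighbor_tour((0.0, 0.0), regions)
```

In a hierarchical scheme like the one described, a global tour of this kind would form the top layer, with each leg refined and locally adapted as obstacles or uncertainties are encountered.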
Autonomous robots need to decide on actions and follow motions to achieve those actions, yet symbolic planning and motion planning have long been addressed separately. This talk describes our work on integrated task and motion planning, which leads to a key challenge, and to our novel results, on deciding the existence of motion plans. This line of work offers new capabilities for robots to plan for difficult and complex manipulation scenarios.
Modern machine learning often faces scenarios where models cannot fully utilize the vast amounts of available data, and agents operate in environments so complex that they cannot feasibly visit every possible state. Deciding which data to train on, or how to explore effectively, is crucial to overcoming these challenges. In the first part of this talk, I will discuss generalization challenges in deep reinforcement learning and demonstrate how effective exploration strategies can improve generalization, along with the implications this has for scaling RL algorithms. In the second part, I will show how similar principles can be applied to dynamically select high-quality data for language model pretraining, improving performance on a wide range of downstream tasks.