diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000..111d876
Binary files /dev/null and b/.DS_Store differ
diff --git a/assets/.DS_Store b/assets/.DS_Store
new file mode 100644
index 0000000..20a9c14
Binary files /dev/null and b/assets/.DS_Store differ
diff --git a/assets/Adaptive_Costmaps_ICRA_2025__arxiv_.pdf b/assets/Adaptive_Costmaps_ICRA_2025__arxiv_.pdf
new file mode 100644
index 0000000..e8f356b
Binary files /dev/null and b/assets/Adaptive_Costmaps_ICRA_2025__arxiv_.pdf differ
diff --git a/assets/flowchart.png b/assets/flowchart.png
new file mode 100644
index 0000000..e6ba86d
Binary files /dev/null and b/assets/flowchart.png differ
diff --git a/assets/results/adaptation_fig8.png b/assets/results/adaptation_fig8.png
new file mode 100644
index 0000000..f31359a
Binary files /dev/null and b/assets/results/adaptation_fig8.png differ
diff --git a/assets/results/turnpike_hill.png b/assets/results/turnpike_hill.png
new file mode 100644
index 0000000..871028f
Binary files /dev/null and b/assets/results/turnpike_hill.png differ
diff --git a/assets/results/turnpike_speedmap.png b/assets/results/turnpike_speedmap.png
new file mode 100644
index 0000000..ddc6868
Binary files /dev/null and b/assets/results/turnpike_speedmap.png differ
diff --git a/assets/results/wheelchair_v1-5.png b/assets/results/wheelchair_v1-5.png
new file mode 100644
index 0000000..11ab235
Binary files /dev/null and b/assets/results/wheelchair_v1-5.png differ
diff --git a/assets/results/wvn_compare_v2.png b/assets/results/wvn_compare_v2.png
new file mode 100644
index 0000000..f3054b3
Binary files /dev/null and b/assets/results/wvn_compare_v2.png differ
diff --git a/index.html b/index.html
index 24f3503..4f56e4f 100644
--- a/index.html
+++ b/index.html
@@ -43,6 +43,10 @@

SALON: Self-supervised Adaptive Learning for Off-road Navigation
+
+
+
+
+
+
+
+

How It Works

+

- Autonomous robot navigation in off-road environments presents a number of challenges due to its lack of structure, making it difficult to handcraft robust heuristics for diverse scenarios. While learned methods using hand labels or self-supervised data improve generalizability, they often require a tremendous amount of data and can be vulnerable to domain shifts. To improve generalization in novel environments, recent works have incorporated adaptation and self-supervision to develop autonomous systems that can learn from their own experiences online. However, current works often rely on significant prior data, for example minutes of human teleoperation data for each terrain type, which is difficult to scale with more environments and robots. To address these limitations, we propose SALON, a perception-action framework for fast adaptation of traversability estimates with minimal human input. SALON rapidly learns online from experience while avoiding out of distribution terrains to produce adaptive and risk-aware cost and speed maps. Within seconds of collected experience, our results demonstrate comparable navigation performance over kilometer-scale courses in diverse off-road terrain as methods trained on 100-1000x more data. We additionally show promising results on significantly different robots in different environments. + Using visual foundation models (such as DINOv2) as feature extractors is key to our approach. By grounding their generalizable features with proprioceptive feedback, robots can quickly adapt their understanding of the world through their own experiences without a human in the loop.
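As a rough, non-authoritative illustration of this idea (the paper defines the actual pipeline), the sketch below pairs a visual feature vector for a traversed map cell with a proprioceptive cost target and updates a linear traversability model online with recursive least squares. The feature dimension, the RLS formulation, and the toy cost target are assumptions for illustration, not details taken from the project.

```python
import numpy as np

class OnlineTraversabilityModel:
    """Minimal recursive-least-squares regressor: visual features -> cost.

    Hypothetical sketch; SALON's actual model and update rule may differ.
    """

    def __init__(self, feat_dim: int, ridge: float = 1.0):
        self.w = np.zeros(feat_dim)          # linear weights
        self.P = np.eye(feat_dim) / ridge    # inverse covariance (RLS state)

    def update(self, feat: np.ndarray, cost_target: float) -> None:
        # One RLS step: a visual feature vector paired with one
        # proprioceptive cost label from the robot's own experience.
        Pf = self.P @ feat
        gain = Pf / (1.0 + feat @ Pf)
        self.w += gain * (cost_target - feat @ self.w)
        self.P -= np.outer(gain, Pf)

    def predict(self, feats: np.ndarray) -> np.ndarray:
        # Predict a cost for every cell feature in an (N, feat_dim) array.
        return feats @ self.w


# Toy usage: features would come from a frozen backbone such as DINOv2
# projected into the map; the cost target from IMU/odometry-derived feedback.
model = OnlineTraversabilityModel(feat_dim=64)
rng = np.random.default_rng(0)
for _ in range(100):                               # a few "seconds" of driving
    f = rng.normal(size=64)                        # visual feature of traversed cell
    cost = float(np.clip(f[0] * 0.5 + 0.5, 0, 1))  # fake proprioceptive cost
    model.update(f, cost)
print(model.predict(rng.normal(size=(3, 64))))
```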

@@ -310,6 +354,81 @@

Explore the Dataset!

+
+
+
+
+
+

Autonomy Results

+
+ + + + +
+
+
+ +
+

We run our autonomy experiments on a Yamaha Viking All-Terrain Vehicle, with two courses shown below. Course 1 consists of waypoints spaced 50m apart, and Course 2 consists of waypoints with spacing varying up to 200m. The VLAD clusters used for feature generation were computed from sample images collected in the "training data" zone (a rough sketch of this clustering step follows below). For each run, the system is initialized with no prior environment interaction data and a single high-cost tree label from the training data area.

+ Experiment Overview +
+
+
+
+
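As a hedged sketch of the VLAD step mentioned above (not the authors' code), the snippet below clusters backbone patch features from sample images into a small vocabulary with k-means and then VLAD-encodes new patch features against it. The number of clusters, the feature dimension, and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_vlad_vocabulary(patch_feats: np.ndarray, k: int = 8) -> np.ndarray:
    """Cluster backbone patch features from sample images into k VLAD centers."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(patch_feats).cluster_centers_

def vlad_encode(patch_feats: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Aggregate residuals to the nearest center (standard VLAD encoding)."""
    k, d = centers.shape
    assign = np.argmin(
        ((patch_feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    vlad = np.zeros((k, d))
    for c in range(k):
        if np.any(assign == c):
            vlad[c] = (patch_feats[assign == c] - centers[c]).sum(0)
    vlad = vlad.reshape(-1)
    return vlad / (np.linalg.norm(vlad) + 1e-8)   # L2-normalize the descriptor

# Toy usage with random "patch features" standing in for backbone outputs.
rng = np.random.default_rng(0)
centers = fit_vlad_vocabulary(rng.normal(size=(2000, 32)), k=8)
print(vlad_encode(rng.normal(size=(150, 32)), centers).shape)   # (256,)
```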

SALON is able to not only avoid lethal vegetation but also distinguish fine-grained terrain properties. The rough gravel in the middle of the trail below is assigned a higher cost than the smoother areas around it.

+ Costmaps
+
+
+
+

Prediction of speedmaps allows the system to go faster where appropriate. As seen below, the system predicts that the robot can drive faster on the trail than in the grass; a small sketch of how a speedmap could cap commanded speed follows below.

+ Speedmaps +
+
+
+
+
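One plausible way a predicted speedmap can modulate behavior (assumed here for illustration, not confirmed by the page) is to cap the planner's commanded speed by the predicted speed of the cell being traversed. The function name, grid layout, and floor value below are hypothetical.

```python
import numpy as np

def clamp_to_speedmap(cmd_speed: float, speedmap: np.ndarray,
                      cell_xy: tuple[int, int], floor: float = 0.5) -> float:
    """Limit the planner's commanded speed by the predicted speed of the
    grid cell the robot is about to traverse (hypothetical interface)."""
    predicted = float(speedmap[cell_xy])
    return float(np.clip(cmd_speed, floor, max(predicted, floor)))

# Toy usage: trail cells predicted at ~6 m/s, grass cells at ~2 m/s.
speedmap = np.full((10, 10), 2.0)
speedmap[:, 4:6] = 6.0                            # a "trail" through the grid
print(clamp_to_speedmap(8.0, speedmap, (3, 5)))   # on trail -> 6.0
print(clamp_to_speedmap(8.0, speedmap, (3, 0)))   # in grass -> 2.0
```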

An example of SALON's fast adaptation: within 10 seconds of experiencing grass for the first time, SALON quickly differentiates key terrains such as ideal short grass, riskier vegetation, and lethal trees.

+ Adaptation +
+
+
+
+
+
+
+
+ +
+
+
+
+
+

Robot+Sensor Generalizability

+
+ + +
+
+
+
+

Evaluation on a wheelchair in an urban environment: after driving over rough cobblestone, the system recognizes within 5 seconds that the cobblestone is much rougher than the smooth sidewalk (a sketch of one possible proprioceptive roughness signal follows below).

+ Adaptation +
+
+
+
+
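A minimal sketch of one possible proprioceptive roughness signal, assuming the variance of vertical acceleration from the IMU over a short window serves as the self-supervision target; the actual signal used in SALON may differ.

```python
import numpy as np

def roughness_score(accel_z: np.ndarray, gravity: float = 9.81) -> float:
    """Proxy for terrain roughness: variance of vertical acceleration over a
    short window of IMU samples (one plausible self-supervision signal)."""
    return float(np.var(accel_z - gravity))

# Toy usage: smooth sidewalk vs. cobblestone-like vibration.
rng = np.random.default_rng(0)
sidewalk = 9.81 + 0.05 * rng.standard_normal(200)
cobbles = 9.81 + 1.50 * rng.standard_normal(200)
print(roughness_score(sidewalk), roughness_score(cobbles))
```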

With the same amount of data as Wild Visual Navigation (WVN), our method correctly assigns high cost to lethal objects like trees and walls without incorrectly costing short grass. Like WVN, we use only visual features; geometric information is used only to place them in the map (sketched below).

+ Adaptation +
+
+
+
+
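To illustrate the "geometry only for placement" point, the sketch below splats per-pixel costs into a robot-centered 2D grid, using 3D point locations purely to decide which cell each value lands in. The grid size, resolution, and max-aggregation rule are illustrative assumptions, not the project's actual mapping code.

```python
import numpy as np

def splat_costs_to_grid(points_xyz: np.ndarray, costs: np.ndarray,
                        grid_size: int = 100, resolution: float = 0.5) -> np.ndarray:
    """Place per-pixel costs into a robot-centered 2D grid; geometry only
    decides *where* each visually derived value lands (illustrative sketch)."""
    grid = np.full((grid_size, grid_size), np.nan)
    half = grid_size * resolution / 2.0
    ix = ((points_xyz[:, 0] + half) / resolution).astype(int)
    iy = ((points_xyz[:, 1] + half) / resolution).astype(int)
    ok = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    for x, y, c in zip(ix[ok], iy[ok], costs[ok]):
        # Keep the max cost seen in a cell (a conservative aggregation choice).
        grid[y, x] = c if np.isnan(grid[y, x]) else max(grid[y, x], c)
    return grid

# Toy usage: 3D points in the robot frame paired with per-pixel costs.
rng = np.random.default_rng(0)
pts = rng.uniform(-20, 20, size=(500, 3))
print(np.nanmean(splat_costs_to_grid(pts, rng.uniform(0, 1, 500))))
```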
+
+
+
+
@@ -327,20 +446,6 @@

Video

-
-
-
- -
-
-

Experiment Overview

-
- -
-
-
-
- +