
Commit

Converted publication stuff to jekyll style
andreykurenkov committed Aug 28, 2018
1 parent 6ed18d2 commit a339e55
Showing 21 changed files with 263 additions and 238 deletions.
64 changes: 64 additions & 0 deletions Gemfile.lock
@@ -0,0 +1,64 @@
GEM
remote: https://rubygems.org/
specs:
addressable (2.5.2)
public_suffix (>= 2.0.2, < 4.0)
colorator (1.1.0)
ffi (1.9.25)
forwardable-extended (2.6.0)
jekyll (3.6.2)
addressable (~> 2.4)
colorator (~> 1.0)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 1.1)
kramdown (~> 1.14)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 3)
safe_yaml (~> 1.0)
jekyll-feed (0.10.0)
jekyll (~> 3.3)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.5.0)
jekyll (~> 3.3)
jekyll-watch (1.5.1)
listen (~> 3.0)
kramdown (1.17.0)
liquid (4.0.0)
listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
ruby_dep (~> 1.2)
mercenary (0.3.6)
minima (2.5.0)
jekyll (~> 3.5)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
pathutil (0.16.1)
forwardable-extended (~> 2.6)
public_suffix (3.0.3)
rb-fsevent (0.10.3)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
rouge (2.2.1)
ruby_dep (1.5.0)
safe_yaml (1.0.4)
sass (3.5.7)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)

PLATFORMS
ruby

DEPENDENCIES
jekyll (~> 3.6.2)
jekyll-feed (~> 0.6)
minima (~> 2.0)
tzinfo-data

BUNDLED WITH
1.16.1
39 changes: 39 additions & 0 deletions _includes/_publication_list_entry.html
@@ -0,0 +1,39 @@
<div class="row mar-bot-30">
<div class="col-md-3">
<img class="img-responsive" src="/publications/images/{{ publication.images.thumb }}">
</div>
<div class="col-md-9">
<p class="pub-title"><a href="{{ site.url }}{{ publication.url }}" title="{{ publication.title }}">{{ publication.title }}</a></p>
<p class="pub-authors">{{ publication.authors }}</p>
<p class="pub-info">{{ publication.pub_info_name }}<br/>
{{ publication.pub_info_date }}
</p>
<p class="pub-description">
{% if publication.paper_link %}
<a class="btn btn-default btn-xs" href="{{ publication.paper_link }}" target="_blank" role="button">
<i class="fa fa-book" aria-hidden="true"></i> Paper
</a>
{% endif %}

{% if publication.webpage_link %}
<a class="btn btn-default btn-xs" href="{{ publication.webpage_link }}" target="_blank" role="button">
<i class="fa fa-users" aria-hidden="true"></i> Project Webpage
</a>
{% endif %}

{% if publication.video_link %}
<a class="btn btn-default btn-xs" href="{{ publication.video_link }}" target="_blank" role="button">
<i class="fa fa-video-camera" aria-hidden="true"></i> Talk Video
</a>
{% endif %}


{% if publication.code_link %}
<a class="btn btn-default btn-xs" href="{{ publication.code_link }}" target="_blank" role="button">
<i class="fa fa-file-code" aria-hidden="true"></i> Code
</a>
{% endif %}

</p>
</div>
</div>
5 changes: 5 additions & 0 deletions _includes/_publication_list_year.html
@@ -0,0 +1,5 @@
<div class="row">
<div class="col-lg-12">
<h2 class="section-header-first">{{ year }} <small>Publications</small></h2>
</div>
</div>
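The page that consumes these two includes is not part of this diff. A publications index could wire them together roughly as sketched below — this is an assumption, not code from the commit; only the variable names `year` and `publication` are taken from the includes above (Jekyll includes share the enclosing template's scope, so a plain `assign` is enough to pass them in):

```liquid
{% assign pubs = site.categories.publications | sort: "date" | reverse %}
{% assign by_year = pubs | group_by_exp: "pub", "pub.date | date: '%Y'" %}
{% for group in by_year %}
  {% assign year = group.name %}
  {% include _publication_list_year.html %}
  {% for publication in group.items %}
    {% include _publication_list_entry.html %}
  {% endfor %}
{% endfor %}
```

`group_by_exp` is available in the Jekyll 3.6 pinned by the new Gemfile.lock; the `site.categories.publications` collection name assumes the posts under `publications/_posts/` pick up the `publications` category from their directory, which is Jekyll's default behavior.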
8 changes: 1 addition & 7 deletions _layouts/base.html
@@ -1,15 +1,9 @@
<!DOCTYPE html>
<html lang="en-US">
{% include head.html %}

{% assign body_id = 'page' %}
{% if page.layout == 'post' %}
{% assign body_id = 'post' %}
{% endif %}

<body id={{ body_id }}>
<body>
{% include navigation.html %}
{% include header.html %}

{{ content }}

6 changes: 6 additions & 0 deletions _layouts/publication.html
@@ -0,0 +1,6 @@
---
layout: base
---
<img src="/publications/images/{{ page.images.main }}" style="max-width:80%" alt="main">

{{ content }}
102 changes: 38 additions & 64 deletions index.md
@@ -3,6 +3,8 @@ layout: base
title: Home
about: "PAIR website"
---
{% include header.html %}

<!-- Page Content -->
<div class="container-fluid">
<div class="container">
@@ -56,75 +58,47 @@ about: "PAIR website"
</div>
</div>

<div class="container-fluid container-colored">
<div class="container">
<div class="row press-mention">
<div class="col-md-12">
<h4 class="text-center press-mention">Press Mentions</h4>
<div class="container">
<!-- Portfolio Section -->
<div class="row">
<div class="col-lg-12">
<h2 class="page-header">Recent Projects</h2>
</div>
<div class="col-md-4 col-sm-6">
<a href="./projects/hand_hygiene/">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/hand_hygiene.png" alt="">
</a>
</div>
<div class="col-md-4 col-sm-6">
<a href="./projects/senior_care/">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/senior_wellbeing.png" alt="">
</a>
</div>
<div class="col-md-4 col-sm-6">
<a href="#">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/conversational_agents.png" alt="">
</a>
</div>
<div class="press-box">
<a href="http://www.npr.org/sections/health-shots/2016/03/14/470404174/siri-and-other-phone-assistants-dont-always-help-in-a-crisis" target="_blank">
<img class="press-logo" src="img/logos/npr.png"/>
</a>
<a href="http://www.cnn.com/2016/03/14/health/smartphone-responses-rape-violence/" target="_blank">
<img class="press-logo" src="img/logos/cnn.png"/>
</a>
<a href="http://abcnews.go.com/Health/siri-digital-assistants-best-idea-health-safety-emergency/story?id=37635171" target="_blank">
<img class="press-logo" src="img/logos/abc.png"/>
</a>
<a href="https://www.washingtonpost.com/national/health-science/heres-what-happens-when-you-ask-siri-about-rape-or-depression/2016/03/18/c8283852-ebb1-11e5-a6f3-21ccdbc5f74e_story.html" target="_blank">
<img class="press-logo" src="img/logos/washington_post.png"/>
</a>
<a href="http://www.cbsnews.com/news/health-crisis-siri-and-cortana-may-not-have-your-back/" target="_blank">
<img class="press-logo" src="img/logos/cbs.png">
</a>
<span class="stretch"></span>
</div>
</div>
</div>
</div>

<div class="container">
<!-- Portfolio Section -->
<div class="row">
<div class="col-lg-12">
<h2 class="page-header">Recent Projects</h2>
</div>
<div class="col-md-4 col-sm-6">
<a href="./projects/hand_hygiene/">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/hand_hygiene.png" alt="">
</a>
</div>
<div class="col-md-4 col-sm-6">
<a href="./projects/senior_care/">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/senior_wellbeing.png" alt="">
</a>
</div>
<div class="col-md-4 col-sm-6">
<a href="#">
<img class="img-responsive img-portfolio img-hover" src="./img/project_thumbs/conversational_agents.png" alt="">
</a>
</div>
</div>
<!-- /.row -->
<!-- /.row -->


<hr>
<hr>

<!-- Call to Action Section -->
<div class="well">
<div class="row">
<div class="col-md-8">
<p>We are actively pursuing several clinical and artificial intelligence projects across the entire healthcare system.
We focus on clinical outcomes, health improvements, and academic insights.</p>
</div>
<div class="col-md-4">
<a class="btn btn-lg btn-default btn-block" href="projects/index.php">See our projects &nbsp;<i class="fa fa-caret-right" aria-hidden="true"></i></a>
</div>
</div>
</div>
<!-- Call to Action Section -->
<div class="well">
<div class="row">
<div class="col-md-8">
<p>We are actively pursuing several clinical and artificial intelligence projects across the entire healthcare system.
We focus on clinical outcomes, health improvements, and academic insights.</p>
</div>
<div class="col-md-4">
<a class="btn btn-lg btn-default btn-block" href="projects/index.php">See our projects &nbsp;<i class="fa fa-caret-right" aria-hidden="true"></i></a>
</div>
</div>
</div>

<hr>
<br/><br/>
<hr>
<br/><br/>
</div>
<!-- /.container -->
15 changes: 15 additions & 0 deletions publications/_posts/2017-10-01-mandlekar-iros17-arpl.md
@@ -0,0 +1,15 @@
---
layout: publication
title: "Adversarially Robust Policy Learning through Active Construction of Physically-Plausible Perturbations"
authors: "Ajay Mandlekar*, Yuke Zhu*, Animesh Garg*, Li Fei-Fei, Silvio Savarese"
pub_info_name: "Int’l Conf. on Intelligent Robots and Systems (IROS)"
pub_info_date: October 2017
excerpt: text text text
images:
thumb: mandlekar-arpl-iros17.png
main: mandlekar-arpl-iros17.png
paper_link: "http://vision.stanford.edu/pdf/mandlekar2017iros.pdf"
webpage_link: "https://stanfordvl.github.io/ARPL/"
video_link: "https://www.youtube.com/watch?v=yZ-gSsbbzh0"
---
Policy search methods in reinforcement learning have demonstrated success in scaling up to larger problems beyond toy examples. However, deploying these methods on real robots remains challenging due to the large sample complexity required during learning and their vulnerability to malicious intervention. We introduce Adversarially Robust Policy Learning (ARPL), an algorithm that leverages active computation of physically-plausible adversarial examples during training to enable robust policy learning in the source domain and robust performance under both random and adversarial input perturbations. We evaluate ARPL on four continuous control tasks and show superior resilience to changes in physical environment dynamics parameters and environment state as compared to state-of-the-art robust policy learning methods.
15 changes: 15 additions & 0 deletions publications/_posts/2017-10-01-zhu-iccv17-vsp.md
@@ -0,0 +1,15 @@
---
layout: publication
title: "Visual Semantic Planning using Deep Successor Representations"
authors: "Yuke Zhu*, Daniel Gordon*, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi"
pub_info_name: "Int’l Conf. on Computer Vision (ICCV)"
pub_info_date: October 2017
excerpt: text text text
images:
thumb: zhu-iccv17-vsp.png
main: zhu-iccv17-vsp.png
paper_link: "https://web.stanford.edu/~yukez/papers/iccv2017.pdf"
code_link: "https://github.com/allenai/ai2thor"
video_link: "https://www.youtube.com/watch?v=yZ-gSsbbzh0"
---
A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interacting with a visual and dynamic environment. Our proposed solution involves bootstrapping reinforcement learning with imitation learning. To ensure cross task generalization, we develop a deep predictive model based on successor representations. Our experimental results show near optimal results across a wide range of tasks in the challenging THOR environment.
13 changes: 13 additions & 0 deletions publications/_posts/2017-12-01-harrison-isrr17-adapt.md
@@ -0,0 +1,13 @@
---
layout: publication
title: "AdaPT: Zero-Shot Adaptive Policy Transfer for Stochastic Dynamical Systems"
authors: "James Harrison*, Animesh Garg*, Boris Ivanovic, Yuke Zhu, Silvio Savarese, Li Fei-Fei, Marco Pavone"
pub_info_name: "Int’l Symposium on Robotics Research (ISRR)"
pub_info_date: Dec 2017
excerpt: text text text
images:
thumb: harrison-isrr17-adapt.png
main: harrison-isrr17-adapt.png
paper_link: "https://arxiv.org/abs/1707.04674"
---
Model-free policy learning has enabled robust performance of complex tasks with relatively simple algorithms. However, this simplicity comes at the cost of requiring an Oracle and arguably very poor sample complexity. This renders such methods unsuitable for physical systems. Variants of model-based methods address this problem through the use of simulators, however, this gives rise to the problem of policy transfer from simulated to the physical system. Model mismatch due to systematic parameter shift and unmodelled dynamics error may cause sub-optimal or unsafe behavior upon direct transfer. We introduce the Adaptive Policy Transfer for Stochastic Dynamics (ADAPT) algorithm that achieves provably safe and robust, dynamically-feasible zero-shot transfer of RL-policies to new domains with dynamics error. ADAPT combines the strengths of offline policy learning in a black-box source simulator with online tube-based MPC to attenuate bounded model mismatch between the source and target dynamics. ADAPT allows online transfer of policy, trained solely in a simulation offline, to a family of unknown targets without fine-tuning. We also formally show that (i) ADAPT guarantees state and control safety through state-action tubes under the assumption of Lipschitz continuity of the divergence in dynamics and, (ii) ADAPT results in a bounded loss of reward accumulation relative to a policy trained and evaluated in the source environment. We evaluate ADAPT on 2 continuous, non-holonomic simulated dynamical systems with 4 different disturbance models, and find that ADAPT performs between 50%-300% better on mean reward accrual than direct policy transfer.
14 changes: 14 additions & 0 deletions publications/_posts/2018-05-21-xu-icra18-ntp.md
@@ -0,0 +1,14 @@
---
layout: publication
title: "Neural Task Programming: Learning to Generalize Across Hierarchical Tasks"
authors: "Danfei Xu*, Suraj Nair*, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, Silvio Savarese"
pub_info_name: "IEEE Int’l Conf. on Robotics and Automation (ICRA)"
pub_info_date: May 2018
excerpt: text text text
images:
thumb: xu-icra18-ntp.jpg
main: xu-icra18-ntp.jpg
paper_link: "https://arxiv.org/abs/1710.01813"
video_link: "https://www.youtube.com/watch?v=THq7I7C5rkk"
---
In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment. We validate our method in three robot manipulation tasks. NTP achieves strong generalization across sequential tasks that exhibit hierarchical and compositional structures. The experimental results show that NTP learns to generalize well towards unseen tasks with increasing lengths, variable topologies, and changing objectives.
14 changes: 14 additions & 0 deletions publications/_posts/2018-06-01-huang-cvpr18-findingit.md
@@ -0,0 +1,14 @@
---
layout: publication
title: "Finding “It”: Weakly-Supervised Reference-Aware Visual Grounding in Instructional Video"
authors: "De-An Huang, Shyamal Buch, Lucio Dery, Animesh Garg, Li Fei-Fei, Juan Carlos Niebles"
pub_info_name: "IEEE Conf. on Computer Vision & Pattern Recognition (CVPR)"
pub_info_date: June 2018
excerpt: text text text
images:
thumb: huang-cvpr18-findingit.jpg
main: huang-cvpr18-findingit.jpg
paper_link: "http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Finding_It_Weakly-Supervised_CVPR_2018_paper.pdf"
video_link: "https://youtu.be/GBo4sFNzhtU?t=23m30s"
---
Grounding textual phrases in visual content with standalone image-sentence pairs is a challenging task. When we consider grounding in instructional videos, this problem becomes profoundly more complex: the latent temporal structure of instructional videos breaks independence assumptions and necessitates contextual understanding for resolving ambiguous visual-linguistic cues. Furthermore, dense annotations and video data scale mean supervised approaches are prohibitively costly. In this work, we propose to tackle this new task with a weakly-supervised framework for reference-aware visual grounding in instructional videos, where only the temporal alignment between the transcription and the video segment is available for supervision. We introduce the visually grounded action graph, a structured representation capturing the latent dependency between grounding and references in video. For optimization, we propose a new reference-aware multiple instance learning (RA-MIL) objective for weak supervision of grounding in videos. We evaluate our approach over unconstrained videos from YouCookII and RoboWatch, augmented with new reference-grounding test set annotations. We demonstrate that our jointly optimized, reference-aware approach simultaneously improves visual grounding, reference-resolution, and generalization to unseen instructional video categories.
15 changes: 15 additions & 0 deletions publications/_posts/2018-06-26-fang-rss18-tog.md
@@ -0,0 +1,15 @@
---
layout: publication
title: "Learning Task-Oriented Grasping for Tool Manipulation with Simulated Self-Supervision"
authors: "Kuan Fang, Yuke Zhu, Animesh Garg, Viraj Mehta, Andrey Kurenkov, Li Fei-Fei, Silvio Savarese"
pub_info_name: "Robotics: Science and Systems (RSS)"
pub_info_date: August 2018
excerpt: text text text
images:
thumb: fang-rss18-tog.png
main: fang-rss18-tog.png
paper_link: "https://arxiv.org/abs/1806.09266"
webpage_link: "https://sites.google.com/view/task-oriented-grasp"
video_link: "https://youtu.be/v0ErAR8Dwy8?t=43s"
---
Tool manipulation is vital for facilitating robots to complete challenging task goals. It requires reasoning about the desired effect of the task and thus properly grasping and manipulating the tool to achieve the task. Task-agnostic grasping optimizes for grasp robustness while ignoring crucial task-specific constraints. In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool. The training process of the model is based on large-scale simulated self-supervision with procedurally generated tool objects. We perform both simulated and real-world experiments on two tool-based manipulation tasks: sweeping and hammering. Our model achieves overall 71.1% task success rate for sweeping and 80.0% task success rate for hammering. Supplementary material is available at: bit.ly/task-oriented-grasp
13 changes: 13 additions & 0 deletions publications/_posts/2018-08-01-huang-arxiv18-ntg.md
@@ -0,0 +1,13 @@
---
layout: publication
title: "Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration"
authors: "De-an Huang, Suraj Nair, Danfei Xu, Yuke Zhu, Animesh Garg, Li Fei-Fei, Silvio Savarese, Juan Carlos Niebles"
pub_info_name: "Pre-print"
pub_info_date: August 2018
excerpt: text text text
images:
thumb: huang-arxiv18-ntg.jpg
main: huang-arxiv18-ntg.jpg
paper_link: "https://arxiv.org/abs/1807.03480"
---
Our goal is for a robot to execute a previously unseen task based on a single video demonstration of the task. The success of our approach relies on the principle of transferring knowledge from seen tasks to unseen ones with similar semantics. More importantly, we hypothesize that to successfully execute a complex task from a single video demonstration, it is necessary to explicitly incorporate compositionality into the model. To test our hypothesis, we propose Neural Task Graph (NTG) Networks, which use a task graph as the intermediate representation to modularize the representations of both the video demonstration and the derived policy. We show this formulation achieves strong inter-task generalization on two complex tasks: Block Stacking in BulletPhysics and Object Collection in AI2-THOR. We further show that the same principle is applicable to real-world videos. We show that NTG can improve data efficiency of few-shot activity understanding in the Breakfast Dataset.
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
