<!DOCTYPE HTML>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Archana Swaminathan</title>
<meta name="author" content="Archana Swaminathan">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="shortcut icon" href="images/favicon/favicon.ico" type="image/x-icon">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
</head>
<body>
<table style="width:100%;max-width:900px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0%;width:60%;vertical-align:middle;">
<p class="name" style="text-align: center;">
Archana Swaminathan
</p>
<p>I'm a third-year PhD student in the Department of Computer Science at the <a href="https://www.umiacs.umd.edu/">University of Maryland</a>, College Park, advised by <a href="https://www.cs.umd.edu/~abhinav/">Abhinav Shrivastava</a>. My research lies at the intersection of computer vision and deep learning.
I have interned at Amazon Science (Summer '24) and Bosch Research (Summer '21).</p>
<p>
Before starting my PhD, I had the pleasure of completing my undergraduate thesis in collaboration with <a href="https://v-sense.scss.tcd.ie/">V-SENSE, Trinity College Dublin</a>, under the guidance of <a href="https://scholar.google.ch/citations?user=HZRejX4AAAAJ&hl=de">Prof. Aljosa Smolic</a>. I graduated from <a href="https://www.bits-pilani.ac.in/">BITS Pilani, India</a>, with a double major in Electrical Engineering and Mathematics.
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a> /
<a href="data/cv.pdf">CV</a> /
<a href="https://scholar.google.com/citations?user=xUjGypwAAAAJ&hl=en">Scholar</a> /
<a href="https://x.com/TweetsArchana">Twitter</a> /
<a href="https://github.com/archana1998/">Github</a>
</p>
</td>
<td style="padding:1%;width:30%;max-width:50%;vertical-align:middle;padding-top:60px;">
<a href="images/avatar.jpg"><img style="width:90%;max-width:100%;object-fit: cover; border-radius: 50%;" alt="profile photo" src="images/avatar.jpg" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px;width:90%;vertical-align:middle">
<h2>Research</h2>
<p>
I specialize in 3D computer vision and scene understanding, developing algorithms that interpret complex, dynamic environments. My work focuses on decoding the physical properties of objects and their interactions in diverse 3D scenes, with a particular emphasis on applications in robotics. My research aims to bridge the gap between visual perception and practical implementation, pushing the boundaries of how machines interpret and interact with the world around them.
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px;width:30%;vertical-align:middle">
<div class="one">
<div class="two" id='leia_image'>
<img src='images/leia_teaser.jpg' width=100%>
</div>
</div>
</td>
<td style="padding:10px;width:70%;vertical-align:middle">
<a href="https://archana1998.github.io/leia/">
<span class="papertitle">LEIA: Latent View-invariant Embeddings for Implicit 3D Articulation</span>
</a>
<br>
<strong>Archana Swaminathan</strong>,
<a href="https://learn2phoenix.github.io/">Anubhav Gupta</a>,
<a href="https://kampta.github.io/">Kamal Gupta</a>,
<a href="https://www.cs.umd.edu/~shishira/">Shishira R Maiya</a>,
<a href="https://vatsalag99.github.io/">Vatsal Agarwal</a>,
<a href="https://www.cs.umd.edu/~abhinav/">Abhinav Shrivastava</a>
<br>
<em>Proceedings of the European Conference on Computer Vision (ECCV)</em>, 2024
<br>
<p></p>
<p>
Modeling unseen 3D articulation states by interpolating across a learnable, view-invariant latent embedding space.
</p>
<a href="https://archana1998.github.io/leia/">Project</a>
/
<a href="https://archana1998.github.io/leia/">Paper</a>
</td>
</tr>
<tr>
<td style="padding:0px;width:30%;vertical-align:middle">
<div class="one">
<div class="two" id='text_free_diffusion_image'>
<img src='images/text2diff_teaser.png' width=100%>
</div>
</div>
</td>
<td style="padding:10px;width:70%;vertical-align:middle">
<a href="https://mgwillia.github.io/diffssl/">
<span class="papertitle">Do text-free diffusion models learn discriminative visual representations?</span>
</a>
<br>
<a href="https://soumik-kanad.github.io/">Soumik Mukhopadhyay*</a>,
<a href="https://mgwillia.github.io/">Matthew Gwilliam*</a>,
<a href="https://vatsalag99.github.io/">Vatsal Agarwal</a>,
<a href="https://scholar.google.com/citations?user=uwmKc4wAAAAJ&hl=en">Namitha Padmanabhan</a>,
<strong>Archana Swaminathan</strong>,
<a href="https://tianyizhou.github.io/">Tianyi Zhou</a>,
<a href="https://www.cs.umd.edu/~abhinav/">Abhinav Shrivastava</a>
<br>
<em>Proceedings of the European Conference on Computer Vision (ECCV)</em>, 2024
<br>
<p></p>
<p>
Explores diffusion models as unified, unsupervised image representation learners for a variety of recognition tasks. Proposes DifFormer and DifFeed, novel mechanisms for fusing diffusion features for image classification.
</p>
<a href="https://mgwillia.github.io/diffssl/">Project</a>
/
<a href="https://arxiv.org/abs/2311.17921">Paper</a>
</td>
</tr>
<tr>
<td style="padding:0px;width:30%;vertical-align:middle">
<div class="one">
<div class="two" id='chop_and_learn_image'>
<img src='images/chopnlearn_teaser.png' width=100%>
</div>
</div>
</td>
<td style="padding:10px;width:70%;vertical-align:middle">
<a href="https://chopnlearn.github.io/">
<span class="papertitle">Chop & Learn: Recognizing and Generating Object-State Compositions</span>
</a>
<br>
<a href="https://www.cs.umd.edu/~nirat/">Nirat Saini*</a>,
<a href="https://hywang66.github.io/">Hanyu Wang*</a>,
<strong>Archana Swaminathan</strong>,
Vinoj Jayasundara,
<a href="https://boheumd.github.io/">Bo He</a>,
<a href="https://kampta.github.io/"> Kamal Gupta</a>,
<a href="https://www.cs.umd.edu/~abhinav/">Abhinav Shrivastava</a>
<br>
<em>Proceedings of the IEEE International Conference on Computer Vision (ICCV)</em>, 2023
<br>
<p></p>
<p>
A benchmark suite of fruits and vegetables in various cutting styles, captured from multiple views. Compositional image generation supports generating unseen cutting styles of different objects.
</p>
<a href="https://chopnlearn.github.io/">Project</a>
/
<a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Saini_Chop__Learn_Recognizing_and_Generating_Object-State_Compositions_ICCV_2023_paper.pdf">Paper</a>
</td>
</tr>
<tr>
<td style="padding:0px;width:30%;vertical-align:middle">
<div class="one">
<div class="two" id='texture_improvement_image'>
<img src='images/texture_teaser.png' width=100%>
</div>
</div>
</td>
<td style="padding:10px;width:70%;vertical-align:middle">
<a href="https://www.researchgate.net/publication/362578954_Texture_improvement_for_human_shape_estimation_from_a_single_image">
<span class="papertitle">Texture improvement for human shape estimation from a single image</span>
</a>
<br>
Jorge González Escribano,
<a href="https://www.researchgate.net/profile/Susana-Ruano-2">Susana Ruano Sainz</a>,
<strong>Archana Swaminathan</strong>,
David Smith,
<a href="https://scholar.google.ch/citations?user=HZRejX4AAAAJ&hl=de">Aljosa Smolic</a>
<br>
<em>Proceedings of the 24th Irish Machine Vision and Image Processing conference (IMVIP)</em>, 2022
<br>
<p></p>
<p>
A novel way to predict the back view of a person by incorporating semantic and positional information, outperforming state-of-the-art techniques.
</p>
<a href="https://www.researchgate.net/publication/362578954_Texture_improvement_for_human_shape_estimation_from_a_single_image">Paper</a>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:0px">
<br>
<p style="text-align:right;">
Awesome website credits go to <a href="https://github.com/jonbarron/jonbarron_website">this guy</a>.
</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</body>
</html>