<!DOCTYPE html>
<!-- <style>
body {
filter: grayscale(1);
}
</style> -->
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="keywords" content="Yang Li">
<meta name="description" content="Academic homepage of Yang Li">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Yang Li's personal homepage</title>
<link href="css/main.css" rel="stylesheet">
<link href="css/bootstrap.min.css" rel="stylesheet">
<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,300' rel='stylesheet'>
<base target="_blank">
</head>
<body>
<div class="container">
<header class="row">
<div class="myinfo col-8 col-sm-6 col-xs-12">
<h1>Yang Li </h1>
<h1>李 洋</h1>
<h4>
Associate Professor
</h4>
<p>
<a href="http://www.cs.ecnu.edu.cn">School of Computer Science & Technology</a>
</p>
<p>
<a href="http://english.ecnu.edu.cn/">East China Normal University</a>
</p>
<p>
Address: 3663 North Zhongshan Road, Shanghai 200062, China
</p>
<p>
Email: yli AT cs DOT ecnu DOT edu DOT cn
</p>
<p>[<a href="https://www.semanticscholar.org/author/Y.-Li/50024044">Semantic Scholar</a>][<a href="https://scholar.google.com/citations?user=N1ZDSHYAAAAJ">Google Scholar</a>][<a href="https://faculty.ecnu.edu.cn/_s16/ly2_19214/main.psp">中文</a>][<a href="https://space.bilibili.com/487404760">Bilibili</a>]</p>
</div>
<div class="col-4 col-sm-6 col-xs-12">
<img class="portrait img-responsive" src="files/photo-new.jpg">
</div>
</header>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<main>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h3>Bio</h3>
<p>
Dr. Yang Li is an associate professor at the <a href="http://www.cs.ecnu.edu.cn/">School of Computer Science and Technology</a> in <a href="http://english.ecnu.edu.cn/">East China Normal University</a>.
He leads the <a href="https://github.com/vpx-ecnu">Visual Perception + X group</a>, which focuses on the intersection of computer vision, computer graphics, and robotics.
Dr. Li completed his PhD in the College of Computer Science at Zhejiang University with Prof. <a href="https://person.zju.edu.cn/en/jkzhu">Jianke Zhu</a>.
In 2018, he visited the <a href="https://ucsd.edu">University of California, San Diego</a> and worked with Prof. <a href="https://yip.eng.ucsd.edu/">Michael Yip</a> in the ECE department.
He also worked as a part-time researcher at the <a href="https://azft.alibaba.com/">Alibaba-Zhejiang University Joint Institute of Frontier Technologies</a> during his PhD.
Prior to that, he spent several years in the video game industry at <a href="https://virtuosgames.com/en">Virtuos</a>.
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h3>Research Group</h3>
<div class="custom-container container">
<div class="col-4 col-sm-4 col-xs-12"><a href="https://github.com/vpx-ecnu"><img class="portrait img-responsive" src="files/VPX-log.jpg"></a></div>
<div class="desc col-8 col-sm-8 col-xs-12">
<p>
<strong>Visual Perception + X (VPX) group</strong>'s mission is to develop visual perception methods for cross-disciplinary research,
particularly methods that extract meaningful information and structural data from videos and raw streaming sources.
This visual information in turn enables AI-based downstream applications, including the metaverse, AIGC, and embodied intelligence.
Currently, VPX focuses on enabling vision-based AI technologies in video analysis, controllable video &amp; image generation, and non-rigid object manipulation in robotics.
</p>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h4>Research Areas</h4>
<ul class="mine">
<li><strong>Computer Vision</strong></li>
<li><strong>Machine Learning</strong></li>
<li><strong>Computer Graphics</strong> (Neural Rendering)</li>
<li><strong>Robotics</strong> (Visual Perception and Manipulation)</li>
</ul>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h3>Selected Research Projects</h3>
<div class="custom-container container">
<div class="col-4 col-sm-4 col-xs-12"><img class="portrait img-responsive" src="files/example.gif"></div>
<div class="desc col-8 col-sm-8 col-xs-12">
<h4><b>Visual Object Tracking</b></h4>
<p>Given a video sequence and a target selected in the first frame, visual object tracking aims to track the target object robustly and accurately throughout the entire sequence. We are interested in different geometric representations within tracking algorithms. With various geometric representations, visual object tracking methods can serve as building blocks for many high-level <i>AI</i> applications, such as surveillance and video editing/analysis. </p>
</div>
</div>
<div class="custom-container container">
<div class="col-4 col-sm-4 col-xs-12"><img class="portrait img-responsive" src="files/3dreconstruction.gif"></div>
<div class="desc col-8 col-sm-8 col-xs-12">
<h4><b>3D Motion Capture</b></h4>
<p>Taking a step further, estimating the geometric representation of a visual object in an RGB-D sequence leads us to 3D motion capture. This topic is closely related to 3D dynamic reconstruction, RGB-D fusion, and 3D visual tracking. The algorithm outputs all geometric properties for every pixel in the sequence, making it a fundamental 3D perception method that is very useful in <i>AR/VR</i> and <i>Robotics</i>. The 3D reconstruction technique can also be applied to game/film production, geography, and so on. </p>
</div>
</div>
<div class="custom-container container">
<div class="col-4 col-sm-4 col-xs-12"><img class="portrait img-responsive" src="files/nerf.gif"></div>
<div class="desc col-8 col-sm-8 col-xs-12">
<h4><b>Scene Reconstruction & Neural Rendering</b></h4>
<p>With all geometric properties in hand, the next step is to visualize them in a form that humans can understand. To this end, reconstruction- and rendering-related topics become our research directions. Based on deep learning, we can bring the real world back onto the screen without a traditional computer graphics pipeline. We view these topics as next-generation technology for <i>Games</i> and the <i>Metaverse</i>. </p>
</div>
</div>
<div class="custom-container container">
<div class="col-4 col-sm-4 col-xs-12"><img class="portrait img-responsive" src="files/superfull.gif"></div>
<div class="desc col-8 col-sm-8 col-xs-12">
<h4><b>Visual Perception for Robots</b></h4>
<p>With the capability to perceive visual information in a video stream, we have successfully applied computer vision algorithms to surgical robots that automatically manipulate bio-tissue. Going forward, we plan to continue exploring interdisciplinary projects related to <i>AGI</i> and <i>Embodied Intelligence</i>. </p>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h3>Selected Publications</h3>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Warped convolutional neural networks for large homography transformation with psl(3) algebra</b> <br>
Xinrui Zhan, Wenyu Liu, Risheng Yu, Jianke Zhu and Yang Li<br>
<i>Neurocomputing</i>, 2025<br>
[<a href="https://www.sciencedirect.com/science/article/abs/pii/S0925231224020836">PAPER</a>]
[<a href="https://arxiv.org/abs/2206.11657">Early Arxiv Version</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Open-World Reinforcement Learning over Long Short-Term Imagination</b> <br>
Jiajian Li, Qi Wang, Yunbo Wang, Xin Jin, Yang Li, Wenjun Zeng, Xiaokang Yang<br>
<i>ICLR</i>, 2025<br>
[<a href="https://arxiv.org/pdf/2410.03618">PAPER</a>]
[<a href="https://qiwang067.github.io/ls-imagine">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Motion-Zero: A Zero-Shot Trajectory Control Framework of Moving Object for Diffusion-Based Video Generation</b> <br>
Changgu Chen, Junwei Shu, Gaoqi He, Changbo Wang, Yang Li<br>
<i>AAAI</i>, 2025<br>
[<a href="https://arxiv.org/abs/2401.10150">PAPER</a>]
[<a href="https://vpx-ecnu.github.io/MotionZero-website">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model</b> <br>
Yiming Sun, Fan Yu, Shaoxiang Chen, Yu Zhang, Junwei Huang, Yang Li, Chenhui Li, Changbo Wang<br>
<i>NeurIPS</i>, 2024<br>
[<a href="https://arxiv.org/abs/2411.01756">PAPER</a>]
[<a href="https://vpx-ecnu.github.io/ChatTracker_website">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>FIND: Fine-tuning Initial Noise Distribution with Policy Optimization for Diffusion Models</b> <br>
Changgu Chen, Libing Yang, Xiaoyan Yang, Lianggangxu Chen, Gaoqi He, Changbo Wang, Yang Li<br>
<i>ACM Multimedia</i>, 2024<br>
[<a href="https://arxiv.org/abs/2407.19453">PAPER</a>]
[<a href="https://vpx-ecnu.github.io/FIND-website/">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>ClothPPO: A Proximal Policy Optimization Enhancing Framework for Robotic Cloth Manipulation with Observation-Aligned Action Spaces</b> <br>
Libing Yang, Yang Li, Long Chen<br>
<i>International Joint Conference on Artificial Intelligence</i> (IJCAI), 2024<br>
[<a href="https://arxiv.org/abs/2405.04549">PAPER</a>]
[<a href="https://vpx-ecnu.github.io/ClothPPO-website">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Multi-Prototype Space Learning for Commonsense-Based Scene Graph Generation</b> <br>
Lianggangxu Chen, Youqi Song, Yiqing Cai, Jiale Lu, Yang Li, Yuan Xie, Changbo Wang, Gaoqi He<br>
<i>The Conference on Association for the Advancement of Artificial Intelligence</i> (AAAI), 2024<br>
[<a href="https://ojs.aaai.org/index.php/AAAI/article/view/27874/27773">PAPER</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>RAGT: Learning Robust Features for Occluded Human Pose and Shape Estimation with Attention-Guided Transformer</b> <br>
Ziqing Li, Yang Li, Shaohui Lin<br>
<i>CAD&Graphics</i>, 2023<br>
[<a href="https://link.springer.com/chapter/10.1007/978-981-99-9666-7_22">PAPER</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>AdaptMVSNet: Efficient Multi-View Stereo with Adaptive Convolution and Attention Fusion</b> <br>
Pengfei Jiang, Xiaoyan Yang, Yuanjie Chen, Wenjie Song, Yang Li<br>
<i>Computers & Graphics</i>, 2023<br>
[<a href="https://www.sciencedirect.com/science/article/pii/S0097849323001838">PAPER</a>]
[<a href="https://github.com/HDjpf/AdaptMVSNet">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Contact-conditioned Hand-held Object Reconstruction from Single-View Images</b> <br>
Xiaoyuan Wang, Yang Li, Adnane Boukhayma, Changbo Wang, Marc Christie<br>
<i>Computers & Graphics</i>, 2023<br>
[<a href="https://www.sciencedirect.com/science/article/pii/S009784932300078X">PAPER</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>InvVis: Large-Scale Data Embedding for Invertible Visualization</b> <br>
Huayuan Ye, Chenhui Li, Yang Li, Changbo Wang<br>
<i>IEEE Transactions on Visualization and Computer Graphics</i>, 2023<br>
[<a href="https://arxiv.org/pdf/2307.16176.pdf">PDF</a>]
[<a href="https://github.com/huayuan4396/InvVis">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Multi-Source Templates Learning for Real-Time Aerial Tracking</b> <br>
Yiming Sun, Yang Li, Changbo Wang<br>
<i>IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</i>, 2023<br>
[<a href="https://ieeexplore.ieee.org/document/10094642">PAPER</a>]
[<a href="https://github.com/vpx-ecnu/MSTL">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Homography Decomposition Networks for Planar Object Tracking</b> <br>
Xinrui Zhan, Yueran Liu, Jianke Zhu and Yang Li<br>
<i>The Conference on Association for the Advancement of Artificial Intelligence (AAAI)</i>, 2022<br>
[<a href="https://arxiv.org/abs/2112.07909">PDF</a>]
[<a href="https://github.com/zhanxinrui/HDN">CODE</a>]
[<a href="https://zhanxinrui.github.io/HDN-homepage/">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction</b> <br>
Jingpei Lu, Ambareesh Jayakumari, Florian Richter, Yang Li and Michael C. Yip<br>
<i>IEEE Conference on Robotics and Automation (ICRA)</i>, 2021<br>
[<a href="https://arxiv.org/abs/2003.03472">PDF</a>]
[<a href="https://sites.google.com/ucsd.edu/super-framework">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Attribute-aware Pedestrian Detection in a Crowd</b> <br>
Jialiang Zhang, Lixiang Lin, Jianke Zhu, Yang Li, Yun-chen Chen, Yao Hu, Steven C.H. Hoi<br>
<i>IEEE Trans. on Multimedia</i>, 2020<br>
[<a href="https://arxiv.org/abs/1910.09188">PDF</a>]
[<a href="https://github.com/kalyo-zjl/APD">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics</b> <br>
Yang Li, Florian Richter, Jingpei Lu, Emily K. Funk, Ryan K. Orosco, Jianke Zhu and Michael C. Yip<br>
<i>IEEE Robotics and Automation Letters</i>, 2020<br>
[<a href="https://arxiv.org/abs/1909.05405">PDF</a>]
[<a href="https://sites.google.com/ucsd.edu/super-framework">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>DeepFacade: A deep learning approach to facade parsing with symmetric loss</b> <br>
Hantang Liu, Yinghao Xu, Jialiang Zhang, Jianke Zhu, Yang Li, Steve C.H. Hoi<br>
<i>IEEE Trans. on Multimedia</i>, 2020<br>
[<a href="https://ieeexplore.ieee.org/document/8979370">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Robust Estimation of Similarity Transformation for Visual Object Tracking</b> <br>
Yang Li, Jianke Zhu, Steven C.H. Hoi, Wenjie Song, Zhefeng Wang, Hantang Liu<br>
<i>The Conference on Association for the Advancement of Artificial Intelligence (AAAI)</i>, 2019<br>
[<a href="https://arxiv.org/abs/1712.05231">PDF</a>]
[<a href="https://github.com/ihpdep/LDES">CODE</a>]
[<a href="https://sites.google.com/view/ldestracker">Project Webpage</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Temporally-Adjusted Correlation Filter-based Tracking</b> <br>
Wenjie Song, Yang Li, Jianke Zhu, Chun Chen<br>
<i>Neurocomputing</i>, 2018<br>
[<a href="https://www.sciencedirect.com/science/article/pii/S0925231218301000">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>CFNN: Correlation Filter Neural Network for Visual Object Tracking</b> <br>
Yang Li, Zhan Xu and Jianke Zhu<br>
<i>International Joint Conference on Artificial Intelligence</i> (IJCAI), 2017<br>
[<a href="https://github.com/ihpdep/ihpdep.github.io/raw/master/papers/ijcai17_cfnn.pdf">PDF</a>]
[<a href="https://github.com/enderhsu/CFNN">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Reliable Patch Trackers: Robust Visual Tracking by Exploiting Reliable Patches</b> <br>
Yang Li, Jianke Zhu, Steven C.H. Hoi<br>
<i>Computer Vision and Pattern Recognition</i> (CVPR), 2015<br>
[<a href="https://github.com/ihpdep/ihpdep.github.io/raw/master/papers/cvpr15_rpt.pdf">PDF</a>]
[<a href="https://github.com/ihpdep/ihpdep.github.io/raw/master/papers/cvpr15_rpt_ext.pdf">ABSTRACT</a>]
[<a href="https://github.com/ihpdep/rpt">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Image Alignment by Online Robust PCA via Stochastic Gradient Descent</b><br>
Wenjie Song, Jianke Zhu, Yang Li, Chun Chen. <br>
<i>IEEE Transactions on Circuits and Systems for Video Technology</i> (TCSVT), 2015<br>
[<a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155543">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>A Scale Adaptive Kernel Correlation Filter Tracker with Feature Integration </b><br>
Yang Li, Jianke Zhu<br>
<i>European Conference on Computer Vision, Workshop VOT2014</i> (ECCVW), 2014.<br>
[<a href="https://github.com/ihpdep/ihpdep.github.io/raw/master/papers/eccvw14_samf.pdf">PDF</a>]
[<a href="https://github.com/ihpdep/samf">CODE</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>The Visual Object Tracking VOT2014 challenge results</b><br>
M. Kristan, R. Pflugfelder, et al. (Co-author)<br>
<i>In ECCV 2014 Workshops, Workshop on Visual Object Tracking Challenge</i>, 2014 <br>
[<a href="http://www.epics-project.eu/publications/2014_kristan_iccvw.pdf">PDF</a>]
We won <i>second place</i> in the <a href="http://votchallenge.net/vot2014/index.html">VOT2014 challenge</a>.
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>The Visual Object Tracking VOT2013 challenge results</b><br>
M. Kristan, R. Pflugfelder, et al. (Co-author)<br>
<i>In ICCV 2013 Workshops, Workshop on Visual Object Tracking Challenge</i>, 2013 <br>
[<a href="http://personal.ee.surrey.ac.uk/Personal/R.Bowden/publications/2013/Kristan_VOT_2013_ICCV_paper.pdf">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Adaptive lattice-based light rendering of participating media</b><br>
Changbo Wang, Chenhui Li, Jinqiu Dai, Yang Li <br>
<i>Computer Animation and Virtual Worlds</i> 22(6): 487-498, 2011 <br>
[<a href="http://onlinelibrary.wiley.com/doi/10.1002/cav.426/pdf">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<p class="pub">
<b>Real-time realistic rendering of under seawater scene. </b><br>
Chenhui Li, Changbo Wang, Yang Li, Min Zhao, et al.<br>
<i>Journal of Image and Graphics.</i> 2011.16(8):1497-1502.<br>
[<a href="http://en.cnki.com.cn/Article_en/CJFDTotal-ZGTB201108022.htm">PDF</a>]
</p>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<h3>Professional Services</h3>
<ul class="mine">
<li>Program Committee Member of <strong>AAAI (2018-2022)</strong></li>
<li>Program Committee Member of <strong>IJCAI (2018-2020)</strong></li>
<li>Program Committee Member of <strong>ICONIP (2020)</strong></li>
<li>Invited Reviewer for <strong>International Journal of Computer Vision </strong></li>
<li>Invited Reviewer for <strong>IEEE Transactions on Image Processing </strong></li>
<li>Invited Reviewer for <strong>IEEE Transactions on Multimedia</strong></li>
<li>Invited Reviewer for <strong>IEEE Robotics and Automation Letters</strong></li>
<li>Invited Reviewer for <strong>IEEE Transactions on Circuits and Systems for Video Technology</strong></li>
<li>Invited Reviewer for <strong>Neurocomputing</strong></li>
<li>Invited Reviewer for <strong>IEEE Signal Processing Letters</strong></li>
<li>Invited Reviewer for <strong>International Journal of Advanced Robotic Systems</strong></li>
<li>Reviewer of <strong>CVPR, ECCV, MM, NeurIPS</strong></li>
</ul>
</div>
</div>
<div class="row">
<div class="col-12 col-sm-12 col-xs-12">
<hr>
</div>
</div>
<center>
<script type='text/javascript' id='clustrmaps' src='https://cdn.clustrmaps.com/map_v2.js?cl=ffffff&w=300&t=tt&d=U_jew6EMXsGBevNcBBxwLKEy57jIMni4L31hdsHL3Yg'></script>
</center>
</main>
</div>
</body></html>