Robust Vision Challenge 2020

Introduction to Robust Vision Challenge 2020

First Live Session: 12h-14h UTC+1

Youtube link for session 1
  • 12h00-12h15: Introduction / Announcement of RVC Winners
  • 12h15-12h30: CFNet_RVC (Stereo)
  • 12h30-12h45: NLCA_NET_v2_RVC (Stereo)
  • 12h45-13h00: PRAFlow_RVC (Flow)
  • 13h00-13h15: RMDP_RVC (Depth)
  • 13h15-13h30: wisedet_RVC (Object Det.)
  • 13h30-13h45: EffPS_b1bs4_RVC (Panoptic)
  • 13h45-14h00: Closing

Second Live Session: 22h-24h UTC+1

Youtube link for session 2
  • 22h00-22h15: Introduction / Announcement of RVC Winners
  • 22h15-22h30: RAFT-TF_RVC (Flow)
  • 22h30-22h45: UniDet_RVC (Object Det.)
  • 22h45-23h00: UniDet_RVC (Instance)
  • 23h00-23h15: SN_RN152pyrx8_RVC (Semantic)
  • 23h15-23h30: MSeg1080_RVC (Semantic)
  • 23h30-24h00: Closing

Keynote Speakers

Keynote: Robustness Across the Data Abundance Spectrum

Ross Girshick is a research scientist at Facebook AI Research (FAIR), working on computer vision and machine learning. He received a PhD in computer science from the University of Chicago under the supervision of Pedro Felzenszwalb in 2012. Prior to joining FAIR, Ross was a researcher at Microsoft Research, Redmond and a postdoc at the University of California, Berkeley, where he was advised by Jitendra Malik and Trevor Darrell. His interests include instance-level object understanding and visual reasoning challenges that combine natural language processing with computer vision. He received the 2017 PAMI Young Researcher Award and is well-known for developing the R-CNN approach to object detection. In 2017, Ross also received the Marr Prize at ICCV for Mask R-CNN.


Keynote: Noisy Student Training for Robust Vision

Quoc Le is a Principal Scientist at Google Brain, where he works on large scale brain simulation using unsupervised feature learning and deep learning. His work focuses on object recognition, speech recognition and language understanding. Quoc obtained his PhD at Stanford, earned his undergraduate degree with First Class Honours as a Distinguished Scholar at the Australian National University, and was a researcher at National ICT Australia, Microsoft Research and the Max Planck Institute for Biological Cybernetics. Quoc won a best paper award at ECML 2007.


Keynote: What Do Our Models Learn?

Aleksander Mądry is Professor of Computer Science in the MIT EECS Department. He is a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the Director of the MIT Center for Deployable Machine Learning, and the Faculty Lead of the CSAIL-MSR Trustworthy and Robust AI Collaboration. Aleksander received his PhD from MIT in 2011. Prior to joining MIT, he spent time at Microsoft Research New England and on the faculty of EPFL. Aleksander's research interests span algorithms, continuous optimization, science of deep learning and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and 2018 Presburger Award.


Epilogue of Robust Vision Challenge 2020


The increasing availability of large annotated datasets such as Middlebury, PASCAL VOC, ImageNet, MS COCO, KITTI and Cityscapes has led to tremendous progress in computer vision and machine learning over the last decade. Public leaderboards make it easy to track the state-of-the-art in the field by comparing the results of dozens of methods side-by-side. While steady progress is made on each individual dataset, many of them are limited to specific domains. KITTI, for example, focuses on real-world urban driving scenarios, while Middlebury considers indoor scenes. Consequently, methods that are state-of-the-art on one dataset often perform worse on a different one or require substantial adaptation of the model parameters.

The goal of this challenge is to foster the development of vision systems that are robust and consequently perform well on a variety of datasets with different characteristics. Towards this goal, we propose the Robust Vision Challenge, where performance on several tasks (e.g., reconstruction, optical flow, semantic/instance segmentation, single image depth prediction) is measured across a number of challenging benchmarks with different characteristics, e.g., indoors vs. outdoors, real vs. synthetic, sunny vs. bad weather, different sensors. We encourage submissions of novel algorithms, techniques currently under review, and methods that have already been published.

Questions? Please see the Submit Results page for answers regarding the RVC rules. Note: Submission deadline extended to August 14, 2020

Prizes for the winner and the runner-up of each of the seven challenges:

1st Place: $1200

2nd Place: $600

Presentation at our ECCV 2020 Workshop

Challenges

RVC 2020 features seven challenges: stereo, optical flow, single image depth prediction, object detection, semantic segmentation, instance segmentation, and panoptic segmentation. Participants are free to submit to a single challenge or to multiple challenges. For each challenge, the results of a single model must be submitted to all benchmarks (indicated with an x below).

Challenge tracks: Stereo · Flow · Depth · Obj. Det. · Semantic · Instance · Panoptic

Important Dates

April 24, 2020 Training data and development kit released:
https://github.com/ozendelait/rvc_devkit/tree/release
July 1, 2020 RVC Submission system online
August 14, 2020 (extended from July 31, 2020) Submission deadline (6pm CEST)
August 23, 2020 (extended from August 21, 2020) Report deadline (23h59 UTC+1)
August 28, 2020 Robust Vision Challenge 2020 Workshop at ECCV 2020 (Virtual Conference)

Sponsors

Gold Sponsors


Silver Sponsors


