Robust Vision Challenge 2018

In conjunction with CVPR 2018

Submission Guidelines

Please carefully read the following guidelines before submitting. If you do not comply with our guidelines, your method will be removed.

The goal of ROB is to foster the development of algorithms that are robust across various datasets. Each participating method must therefore be tested on all datasets involved in the respective challenge. It is not allowed to use different methods or to alter the parameters of a method for individual benchmarks of a challenge. By submitting to ROB you agree to hand your source code to the organizing team for inspection upon request. Moreover, a paper, arXiv paper, or short report (at least 1/2 to 1 page) must be provided which covers the algorithmic details of the approach. Your method needs to be public (non-anonymous) to compete for the workshop prizes. Synthetic training data may be used for pre-training models. External training data with annotated real images may be used as long as the dataset is public.

The winner and the runner-up of each category will receive prize money, will be invited to present their method at the ROB 2018 workshop and to participate in a joint dinner, and will co-author a joint TPAMI submission. Participants may submit to a single challenge or to multiple challenges. Note that the prize money and the invitation go to the authors of the method, not to the submitter, in case the two are not the same. In other words, if a method published by team A is submitted by team B, the prize and the invitation for the talk/dinner go to team A.

Invalid Submission Examples:

  • A method which is trained separately on each benchmark, resulting in different parameters/weights for each benchmark of the challenge.
  • A method which trains a classifier to detect which benchmark a file comes from (either from image content or metadata, e.g., image dimensions)
  • A method which executes separate program paths specifically designed for individual datasets (this excludes pre-processing such as image resizing, which is allowed)
  • A method which explicitly makes use of the knowledge that some output labels (e.g., semantic labels) correspond to one dataset and some output labels correspond to another dataset. However, learned co-occurrence statistics between labels can be used.
  • A method which is trained on non-publicly available datasets

Valid Submission Examples:

  • A method that is trained on all benchmarks, resulting in a single parameter set/model. This model is then applied to all test sets of the challenge.
  • A method that is trained on all benchmarks, drawing training samples in equal proportion from each dataset to balance the number of samples during training (see the sampling sketch after this list). The resulting model is then applied to all test sets of the challenge.
  • A method that was trained with all training/validation data available from the individual benchmarks and additional public data (e.g., pre-trained on ImageNet or Mapillary Vistas) but does not contain dataset-specific instructions or training
  • A method fulfilling the above criteria, but which is trained with the ROB training set in a supervised manner and with the ROB test set (or any other dataset) in an unsupervised manner
  • A method not using explicit training data
  • A method trained on only some of the benchmarks while omitting one or more benchmarks completely. The same method must still participate in all individual benchmarks (i.e., results for the respective benchmark data must be submitted to each benchmark)
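
The balanced sampling mentioned above can be implemented in many ways; the following is a minimal PyTorch-style sketch (dataset names and sizes are placeholder stand-ins, not part of the official devkit) that draws samples in equal proportion from differently sized training sets:

    import torch
    from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                                  WeightedRandomSampler)

    # Placeholder stand-ins for two per-benchmark training sets of different size;
    # in practice these would be datasets produced from the unified ROB format.
    dataset_a = TensorDataset(torch.randn(200, 3, 64, 64), torch.zeros(200, dtype=torch.long))
    dataset_b = TensorDataset(torch.randn(50, 3, 64, 64), torch.ones(50, dtype=torch.long))

    datasets = [dataset_a, dataset_b]
    combined = ConcatDataset(datasets)

    # Weight each sample inversely to the size of its source dataset so that
    # every dataset contributes roughly the same number of samples per epoch.
    weights = torch.cat([torch.full((len(d),), 1.0 / len(d)) for d in datasets])
    sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)

    loader = DataLoader(combined, batch_size=8, sampler=sampler)
    for images, labels in loader:
        pass  # train a single model on the mixed stream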

Submission Process

We are very much looking forward to your submission. To submit to ROB 2018, follow these steps:

  1. Download the development kit using the link below
  2. Download and convert all datasets into a common format using the devkit
  3. Train your model on the unified training set and run your trained model on the unified test set
  4. Use the devkit to convert your results back to the original dataset format(s)
  5. Choose a short name for your method (between 4 and 16 characters, excluding the suffix; allowed characters: + - _ A..Z a..z 0..9); see the naming sketch below
  6. Append the suffix _ROB to your method name
  7. Upload your results to each individual benchmark
  8. Register your method to the ROB challenge using the form below
Note that the ROB crawler is executed only once a day. Thus, to see your results listed on the ROB leaderboard, you need to submit to all benchmarks, make sure that your results appear under the same name on all benchmark leaderboards, register your method to ROB, and wait until the next day.
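
As an illustration of the naming rules in steps 5 and 6, here is a small, hypothetical Python sketch (the function name is not part of any official tooling) that checks a method name and appends the required suffix:

    import re

    # Allowed characters per step 5: + - _ A..Z a..z 0..9; 4-16 characters excluding suffix.
    NAME_PATTERN = re.compile(r'^[+\-_A-Za-z0-9]{4,16}$')

    def rob_method_name(base_name: str) -> str:
        """Validate a short method name and append the required _ROB suffix."""
        if not NAME_PATTERN.fullmatch(base_name):
            raise ValueError("Method name must be 4-16 characters using only + - _ A..Z a..z 0..9")
        return base_name + "_ROB"

    print(rob_method_name("MyStereoNet"))  # -> MyStereoNet_ROB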

Challenges

ROB 2018 features 6 challenges: stereo, multi-view stereo (MVS), optical flow, single image depth prediction, semantic segmentation and instance segmentation. Participants are free to submit to a single challenge or to multiple challenges. For each challenge, the results of a single model must be submitted to all benchmarks (indicated with an x in the table below). For each challenge we will award a prize to the winner and the runner-up.

[Table of participating benchmarks per challenge; columns: Stereo, MVS, Flow, Depth, Semantic, Instance; an x marks the benchmarks included in each challenge.]

ROB Development Kit

To download the ROB devkit, please provide your email address and we will send you download instructions. We will use your email address solely for sending you updates about the status of the challenge and the associated workshop. We will not share your email address with any third party or use it for any other purpose.

Register Method to ROB

New methods can be submitted again after CVPR 2018!


Gold Sponsors

Silver Sponsors


