Please read the following guidelines carefully before submitting. If you do not comply with our guidelines, your method will be removed.
The goal of ROB is to foster the development of algorithms that are robust across various datasets. Thus, each participating method must be tested on all datasets involved in the respective challenge. It is not allowed to use different methods or to alter the parameters of a method for each individual benchmark of a challenge. By submitting to ROB, you agree to hand over your source code to the organizing team for inspection upon request. Moreover, a paper, arXiv paper, or short report (at least 1/2 to 1 page) must be provided that covers the algorithmic details of the approach. Your method must be public (non-anonymous) to compete for the workshop prizes.

Synthetic training data may be used for pre-training models. External training data with annotated real images may be used as long as the dataset is public.

The winner and the runner-up of each category will receive prize money, are invited to present their method at the ROB 2018 workshop, participate in a joint dinner, and co-author a joint TPAMI submission. Participants may submit to a single challenge or to multiple challenges. Note that, if the author and the submitter of a method are not the same, the prize money and invitation go to the author, not the submitter. In other words, if a method published by team A is submitted by team B, the prize and the invitation for the talk/dinner go to team A.
Invalid Submission Examples:
- A method which is trained separately on each benchmark, resulting in different parameters/weights for each benchmark of the challenge.
- A method which trains a classifier to detect which benchmark a file comes from (either by image content or metadata, e.g., image dimensions)
- A method which executes separate program paths specifically designed for individual datasets (pre-processing such as image resizing is still allowed)
- A method which explicitly makes use of the knowledge that some output labels (e.g., semantic labels) correspond to one dataset and some output labels correspond to another dataset. However, learned co-occurrence statistics between labels can be used.
- A method which is trained on non-publicly available datasets
Valid Submission Examples:
- A method that is trained on all benchmarks, resulting in a single parameter set/model. This model is then applied to all test sets of the challenge.
- A method that is trained on all benchmarks. Training samples are drawn in equal proportion from each dataset to balance the number of samples during training. The resulting model is then applied to all test sets of the challenge.
- A method that was trained with all training/validation data available from the individual benchmarks and additional public data (e.g., pre-trained on ImageNet or Mapillary Vistas) but does not contain dataset-specific instructions or training
- A method fulfilling the above criteria, but which is trained with the ROB training set in a supervised manner and with the ROB test set (or any other dataset) in an unsupervised manner
- A method not using explicit training data
- A method trained on only some of the benchmarks while omitting one or more benchmarks completely. The same method must still participate in all individual benchmarks (i.e., submit results for the respective benchmark data to each benchmark)
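The balanced-sampling recipe mentioned in the valid examples above can be sketched as follows. This is a minimal illustration, not part of any official ROB tooling; the dataset names and sizes are hypothetical placeholders for the benchmarks of a challenge:

```python
import random

# Hypothetical training sets standing in for the individual benchmark
# datasets of a challenge (names and sizes are illustrative only).
datasets = {
    "benchmark_a": list(range(10_000)),  # large dataset
    "benchmark_b": list(range(500)),     # small dataset
    "benchmark_c": list(range(2_000)),   # medium dataset
}

def balanced_batch(datasets, batch_size):
    """Draw a batch with (approximately) equal proportion from every
    dataset, so a single model sees all benchmarks equally often,
    regardless of how many samples each benchmark provides."""
    names = list(datasets)
    per_set, remainder = divmod(batch_size, len(names))
    batch = []
    for i, name in enumerate(names):
        k = per_set + (1 if i < remainder else 0)
        # Sample with replacement so small datasets are not exhausted.
        batch.extend((name, random.choice(datasets[name])) for _ in range(k))
    random.shuffle(batch)
    return batch

batch = balanced_batch(datasets, batch_size=12)
# Each dataset contributes 4 of the 12 samples, despite their
# very different sizes; one model is trained on the mixed stream.
```

The key point is that balancing happens at sampling time while a single parameter set is learned; this stays within the rules, whereas training a separate model per benchmark would not.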