Scene Understanding Datasets
stanford background dataset (14.0MB) [.tar.gz]
The Stanford Background Dataset is a new dataset introduced in Gould et al. (ICCV 2009) for evaluating methods for geometric and semantic scene understanding. The dataset contains 715 images chosen from existing public datasets: LabelMe, MSRC, PASCAL VOC, and Geometric Context. Our selection criteria were that the images be of outdoor scenes, be approximately 320-by-240 pixels in size, contain at least one foreground object, and have the horizon position within the image (it need not be visible). Semantic and geometric labels were obtained using Amazon's Mechanical Turk (AMT). The labels are: eight semantic classes (sky, tree, road, grass, water, building, mountain, and foreground object) and three geometric classes (sky, horizontal, and vertical).
Figure 1. Example Images, Semantic Labels, and Regions from the Stanford Background Dataset.
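As a rough illustration only, here is a minimal Python sketch of how one image and its per-pixel semantic labels might be read. The directory layout (images/, labels/), the file naming (*.jpg, *.regions.txt containing one row of space-separated integers per image row), and the example name used below are assumptions about the unpacked archive, not documented parts of the dataset.

# Minimal sketch for reading one image/label pair from the Stanford
# Background Dataset. The directory layout and file names below are
# assumptions about the unpacked archive, not a documented API.
import numpy as np
from PIL import Image

def load_example(name, root="stanfordbg"):
    """Return (image, semantic_labels) for one example as NumPy arrays."""
    image = np.asarray(Image.open(f"{root}/images/{name}.jpg"))
    # Per-pixel semantic labels; -1 marks unknown pixels.
    labels = np.loadtxt(f"{root}/labels/{name}.regions.txt", dtype=int)
    assert labels.shape == image.shape[:2]
    return image, labels

if __name__ == "__main__":
    img, sem = load_example("0000047")  # hypothetical example name
    print(img.shape, np.unique(sem))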
cascaded classification models DS1 dataset (23.0MB) [.tar.gz]
This dataset is designed for evaluating holistic scene understanding algorithms and is composed of 422 images of outdoor scenes drawn from various existing datasets. Each image is annotated with object bounding boxes, pixel-level semantic classes, and a high-level scene category (e.g., urban, rural, harbor). The dataset includes six object categories (boat, car, cow, motorbike, person, and sheep) and the same eight pixel-level semantic classes as the Stanford Background Dataset.
If you use this dataset in your work, please cite the corresponding publication.
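To make the annotation structure concrete, the sketch below defines a simple Python container mirroring the three annotation types just described (object bounding boxes, per-pixel semantic classes, and a scene category). The class name, field layout, and box coordinate convention are illustrative assumptions and do not describe the files in the archive.

# Illustrative container for one CCM DS1 image annotation; this mirrors
# the annotation types described above but is not the on-disk format.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

OBJECT_CATEGORIES = ["boat", "car", "cow", "motorbike", "person", "sheep"]

@dataclass
class SceneAnnotation:
    image_name: str
    scene_category: str                      # e.g., "urban", "rural", "harbor"
    # Each box: (category, x_min, y_min, x_max, y_max) in pixel coordinates.
    boxes: List[Tuple[str, int, int, int, int]] = field(default_factory=list)
    semantic_labels: np.ndarray = None       # H x W array of class indices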
semantically-augmented make3d dataset (29.0MB) [.tar.gz]
This dataset contains the images and labels used in Liu et al. (CVPR 2010). The original images and depthmaps were obtained from the Make3d project; however, the images were resized to 240-by-320 for our work. Images were also labeled with semantic classes: -1 unknown, 0 sky, 1 tree/bush, 2 road/path, 3 grass, 4 water, 5 building, 6 mountain, 7 foreground object. The data is divided into 400 training images and 134 evaluation images. If you use this dataset in your work, please cite the corresponding publication.
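For convenience, the label encoding and split sizes above can be captured in a small Python lookup, as in the sketch below; the values are copied verbatim from this description, while the variable and function names are arbitrary.

# Semantic class indices for the semantically-augmented Make3d dataset,
# taken directly from the description above; -1 marks unlabeled pixels.
MAKE3D_CLASSES = {
    -1: "unknown",
    0: "sky",
    1: "tree/bush",
    2: "road/path",
    3: "grass",
    4: "water",
    5: "building",
    6: "mountain",
    7: "foreground object",
}

# Split sizes as stated above: 400 training images, 134 evaluation images.
NUM_TRAIN, NUM_EVAL = 400, 134

def class_name(label_id: int) -> str:
    """Map a per-pixel label id to its class name."""
    return MAKE3D_CLASSES.get(label_id, "unknown")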
acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. RI-0917151. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). We are also grateful for support from the DARPA Transfer Learning program, the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) program, and The Boeing Company.
Copyright © 2002-2006 DAGS - Daphne's Approximate Group of Students. All rights reserved.