Saliency Database

ImgSal: A benchmark for saliency detection v1.0

Features of the Database
1: A collection of 235 color images, divided into six different categories;
2: Both human fixation records (saccade data) and human-labeled results are provided;
3: Easy to use: ROC and PoDSC evaluation codes are available here.
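The linked codes are the reference evaluation; purely as an illustration of what an ROC evaluation computes, here is a minimal sketch on toy data (the saliency values and ground truth below are made up, not from the database):

```python
# Sketch of an ROC evaluation: binarize a saliency map at several
# thresholds and compare against a binary ground-truth mask.
# Toy 1-D data only; the official evaluation codes are linked above.

def roc_points(saliency, gt, thresholds):
    """For each threshold, return (false-positive rate, true-positive rate)
    of the thresholded saliency map against the ground truth."""
    pos = sum(gt)                 # number of salient pixels
    neg = len(gt) - pos           # number of background pixels
    points = []
    for t in thresholds:
        tp = sum(1 for s, g in zip(saliency, gt) if s >= t and g == 1)
        fp = sum(1 for s, g in zip(saliency, gt) if s >= t and g == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy example: 4 salient pixels followed by 4 background pixels.
saliency = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.0]
gt       = [1,   1,   1,   1,   0,   0,   0,   0]
print(roc_points(saliency, gt, [0.0, 0.5, 1.0]))
# [(1.0, 1.0), (0.25, 0.75), (0.0, 0.0)]
```

Sweeping the threshold from 1.0 down to 0.0 traces the ROC curve from (0, 0) to (1, 1); the area under it is the usual summary score.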

Dataset Description
We simultaneously consider the detection of salient regions of different sizes: an acceptable saliency detector should detect both large and small salient regions. Moreover, a saliency detector should also locate salient regions in cluttered backgrounds and among repeating distractors. We also note that different images present different levels of difficulty for any saliency detector. However, the existing saliency benchmarks (e.g. Bruce's dataset, Hou's dataset, Harel's dataset and so on) are collections of images with no attempt to categorize the difficulty of analysis required. Hence, we have created a new benchmark for validating saliency models. The database provides both REGION ground truth (human labeled) and FIXATION ground truth (recorded by eye tracker).

Image Collection
A database containing 235 images was collected using Google as well as by consulting the recent literature. The images in this database are 480 x 640 pixels and are divided into 6 categories:
1) 50 images with large salient regions;
2) 80 images with intermediate salient regions;
3) 60 images with small salient regions;
4) 15 images with cluttered backgrounds;
5) 15 images with repeating distractors;
6) 15 images with both large and small salient regions.
Download them here: [C1][C2][C3][C4][C5][C6].

Human Labeling Process (Region Ground Truth)
Part of the labeling process is inspired by LabelMe and Hou's method.
The database was labeled by 19 naive subjects. The images were shown one by one, in random order, on a 24'' LCD display. Each subject was asked to sit in front of the screen at a distance of six times the image width (a viewing angle of about 11 degrees). First, each image was presented for a short duration, sufficient for actually seeing the image. Second, we added a recall stage rather than having subjects label the image immediately.

Ground Truth Generation: In our experiment, we set a pixel to 1 if more than half the subjects agreed that it belonged to a salient region, and to 0 otherwise.

Download the labeling results here. Two kinds of ground truth (GT) are provided: 1) GT1 consists of binary maps, in which a pixel is set to 1 if more than half of the subjects agree that it is salient; 2) GT2 consists of probability maps, in which the value of a pixel reflects the number of subjects who agree that it is salient.
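Both kinds of ground truth can be reproduced from the per-subject binary labels; a minimal sketch (the 2x2 masks below are toy stand-ins for the 19 subjects' real annotations):

```python
# Build GT1 (majority-vote binary map) and GT2 (probability map) from
# per-subject binary label maps, following the rule described above.
# Toy 2x2 masks from three "subjects" only.

def make_ground_truth(subject_masks):
    """subject_masks: list of same-sized 2-D binary maps (nested lists).
    Returns (gt1, gt2): gt2[i][j] is the fraction of subjects who marked
    pixel (i, j) salient; gt1[i][j] = 1 iff that fraction exceeds 1/2."""
    n = len(subject_masks)
    rows, cols = len(subject_masks[0]), len(subject_masks[0][0])
    gt2 = [[sum(m[i][j] for m in subject_masks) / n for j in range(cols)]
           for i in range(rows)]
    gt1 = [[1 if p > 0.5 else 0 for p in row] for row in gt2]
    return gt1, gt2

masks = [
    [[1, 1], [0, 0]],
    [[1, 0], [0, 0]],
    [[1, 1], [1, 0]],
]
gt1, gt2 = make_ground_truth(masks)
print(gt1)   # [[1, 1], [0, 0]]
print(gt2)   # fractions of agreeing subjects per pixel
```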

Please feel free to contact me if you have any questions about this benchmark. If you use this database and/or the evaluation codes, please cite our PAMI paper 06243147.pdf and/or our BMVC paper [bibtex].

Eye Tracking Data (Fixation Ground Truth)
The fixation data were recorded with a Tobii T60 eye tracker (provided by Mr. Xianmai Wang (王贤买), Tobii Inc.). 21 participants joined this experiment.

[Figure: the setup of the eye tracking experiment]

The original fixation data (raw data) will be released later.
[Large salient Regions]
[Medium salient Regions]
[Small salient Regions]
(1-15: images with repeating distractors; 16-30: images with cluttered backgrounds; 31-45: images with both large and small salient regions.)
Note that the images within each category are in random order.

Download the fixation maps here. You can use them to evaluate algorithms directly.
1) We provide both the fixation maps (.mat) and density maps (.jpg) at the same resolution as the original images (480x640);
2) The density map is a blurred version of the fixation map (see the blur parameter below); these density maps are for visualization only;
3) The fixation maps (.mat) can be opened in MATLAB. In each fixation map, the value at each location indicates the number of fixations over all participants;
4) Please email me if you need the code for plotting ROC curves based on the fixation ground truth.
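The relation between a fixation map and its density map can be sketched as a Gaussian blur. The kernel width and the 5x5 toy map below are arbitrary illustrations; they are not the blur parameter actually used for the released density maps:

```python
import math

# Turn a fixation count map into a density map by Gaussian blurring,
# mirroring the description above. Toy 5x5 map and sigma; the real
# maps are 480x640 and use the database's own blur parameter.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]            # normalized 1-D kernel

def blur_1d(row, kernel, radius):
    n, out = len(row), []
    for i in range(n):
        acc = 0.0
        for d in range(-radius, radius + 1):
            j = min(max(i + d, 0), n - 1)   # replicate at the borders
            acc += row[j] * kernel[d + radius]
        out.append(acc)
    return out

def density_map(fixations, sigma=1.0, radius=2):
    k = gaussian_kernel(sigma, radius)
    rows = [blur_1d(r, k, radius) for r in fixations]          # blur rows
    cols = [blur_1d(c, k, radius) for c in zip(*rows)]         # blur columns
    return [list(r) for r in zip(*cols)]                       # transpose back

fix = [[0] * 5 for _ in range(5)]
fix[2][2] = 3                            # three subjects fixated the centre pixel
dens = density_map(fix)
print(dens[2][2])                        # peak of the smoothed density
```

The separable blur preserves the total fixation count (here 3) while spreading it over neighbouring pixels, which is why the density maps are suitable for viewing but the .mat fixation maps remain the quantitative ground truth.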

Fixation Maps                                          Density Maps
[C1: Large salient Regions]                            [Large salient Regions]
[C2: Medium salient Regions]                           [Medium salient Regions]
[C3: Small salient Regions]                            [Small salient Regions]
[C4: Repeating Distractors]                            [Repeating Distractors]
[C5: Cluttered Background]                             [Cluttered Background]
[C6: Large & Small Salient Regions]                    [Large & Small Salient Regions]

Download the saccade data here. New!
As suggested by Ali Borji, we provide the saccade data online.
[saccades data]
All the data are in MATLAB format, so they are easy to use.
1: There are four folders provided:
  (1) Large salient Regions
  (2) Medium salient Regions
  (3) Small salient Regions
  (4) Other (in this folder, images 1-15 have repeating distractors; 16-30 have cluttered backgrounds; 31-45 have both large and small salient regions; image 46 is for testing, please just ignore it);
2: In each folder, there are 21 *.mat files (MATLAB data format), and each file corresponds to a subject.
3: File format of '*.mat' in each folder (taking 'Rec 1.mat' in the first folder as an example):

load('Rec 1.mat');
size(myStore)  % 'myStore' is the data matrix
ans =
  298     4    50

From this we know that there are 50 images in the first folder and that, for each image, a 298×4 record matrix is provided.
In the record matrix there are 298 fixations, and
   (1) column 1 indicates the image number;
   (2) column 2 indicates the fixation order;
   (3) columns 3 and 4 are the coordinates (x, y) of the corresponding fixation point. Note that a fixation is valid only when 0 < x < 640 and 0 < y < 480.
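Under that column layout, extracting one image's valid fixations for one subject might look like the sketch below. The `records` list is a toy stand-in for one page of `myStore`; with SciPy installed, the real matrix could be read via `scipy.io.loadmat('Rec 1.mat')['myStore']` (an assumption based on the variable name in the MATLAB example above):

```python
# Filter the valid fixations for one image out of a subject's record
# matrix, following the column layout described above. 'records' holds
# toy rows of (image number, fixation order, x, y), not real data.

W, H = 640, 480   # image resolution of the database

def valid_fixations(records, image_no):
    """Return (order, x, y) tuples for image `image_no`, keeping only
    fixations inside the image, i.e. 0 < x < W and 0 < y < H."""
    out = [(order, x, y)
           for img, order, x, y in records
           if img == image_no and 0 < x < W and 0 < y < H]
    return sorted(out)             # sort by fixation order

records = [
    (1, 1, 320.5, 240.0),          # valid fixation on image 1
    (1, 2, -5.0, 100.0),           # off-screen: x out of range, dropped
    (2, 1, 100.0, 100.0),          # belongs to image 2
    (1, 3, 639.0, 479.0),          # valid fixation on image 1
]
print(valid_fixations(records, 1))
# [(1, 320.5, 240.0), (3, 639.0, 479.0)]
```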
Please contact me if you have any questions.


People who have contributed to this database:

Xiangjing An, Martin Levine, Hangen He, Bin Dai, Heng Xie, Erke Shang, Zhenping Sun, Xin Xu, Jun Tan, Shengdong Pan, Wenchao Zhang, Yan Wang, Haiyang Zhao, the 21 participants, and so on...

