This web page is an excerpt from this original page, which contains additional data resources from other experiments.

Experimental Procedure

Participants were seated in a dimly lit room, 110 cm from a computer screen driven by a PC. Two tasks alternated: a categorization task and a recognition task. In both tasks, target and non-target images were presented with equal probability. Participants were tested in two recording phases. The first day was composed of 13 series and the second day of 12 series, with 100 images per series (see details of the series below). To start a series, subjects had to press a touch-sensitive button. A small fixation point (smaller than 0.1° of visual angle) was drawn in the middle of a black screen. Then, an 8-bit color vertical photograph (256 pixels wide by 384 pixels high, corresponding roughly to 4.5° of visual angle in width and 6.5° in height) was flashed for 20 ms (2 frames of a 100 Hz SVGA screen) using a programmable graphics board (VSG 2.1, Cambridge Research Systems). This short presentation time prevented subjects from using exploratory eye movements to respond. Participants gave their responses following a go/nogo paradigm. For each target, they had to lift their finger from the button as quickly and accurately as possible (releasing the button restored a focused light beam between an optic-fiber LED and its receiver; the response latency of this apparatus was under 1 ms). Participants were given 1000 ms to respond, after which any response was considered a nogo response. The stimulus onset asynchrony (SOA) was 2000 ms plus or minus a random delay of 200 ms. For each distractor, participants had to keep pressing the button for at least 1000 ms (nogo response).
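For illustration, the trial timing and go/nogo response rule described above can be sketched as follows (a minimal Matlab sketch; the uniform jitter distribution and all variable names are assumptions, and the original experiment ran on a VSG 2.1 graphics board, not on this code):

    % Sketch of the trial timing and response classification described above.
    % Assumption: the +/- 200 ms SOA jitter is drawn from a uniform distribution.
    n_trials = 100;                                       % 100 images per series
    flash_ms = 20;                                        % 2 frames at 100 Hz
    soa_ms   = 2000 + (2 * rand(n_trials, 1) - 1) * 200;  % SOA: 2000 ms +/- 200 ms

    % A button release counts as a "go" response only if it occurs within
    % 1000 ms of stimulus onset; anything later is treated as a nogo.
    rt_ms = 450;                                          % example release latency
    is_go = ~isnan(rt_ms) && rt_ms <= 1000;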
More specifically, in the animal categorization task, participants had to respond whenever there was an animal in the picture. In the recognition task, the session started with a learning phase: a probe image was flashed 15 times for 20 ms each, with an inter-stimulus interval of 1000 ms, intermixed with two 1000 ms presentations after the fifth and tenth flashes that allowed ocular exploration of the image. Participants were instructed to carefully examine and learn the probe image in order to recognize it in the following series. The test phase started immediately after the learning phase, and the probe image constituted the unique target of the series. Both tasks were organized in series of 100 images: 50 target images were mixed with 50 non-targets in the animal categorization task, and 50 copies of a unique photograph were mixed at random with 50 non-targets in the recognition task.

Stimuli

The pictures were photographs of natural scenes (Corel CD-ROM library; images available for viewing on the download page). The images of each category were chosen to be as varied as possible. The animal category included pictures of mammals, birds, fish, arthropods, and reptiles. There was no a priori information about the size, position or number of the targets in a single photograph. There was also a wide range of non-target images, including outdoor and indoor scenes, natural landscapes and city scenes, and pictures of food, fruits, vegetables, trees and flowers. In the categorization task, the same 500 distractors and 500 targets were seen by every subject but were randomly distributed among the 10 categorization series. In the recognition task, 750 distractors and 210 target photographs were used (15 target photographs per subject). Target photographs were chosen according to the results of a previous study (Fabre-Thorpe et al., 2001). The first group of 70 images (5 per subject) was composed of the animal images (out of 1000) that were correctly categorized by all subjects and were associated with the fastest RTs. The second group of 70 images was composed of the animal images that had the lowest accuracy and were associated with the longest RTs. The pictures of the last group contained no animal, i.e. they were distractors of the categorization task.

EEG Recording

Electric brain potentials were recorded from 32 electrodes mounted on an elastic cap (Oxford Instruments). Electrode Cz was used as the reference and a mastoid electrode was used as the ground (details of electrode positions are available on the download page). Data were acquired at 1000 Hz (corresponding to a sample bin of 1 ms) using a SynAmps recording system coupled with a PC. Impedances were kept below 5 kOhms.

Data organization

25 groups of files, each group corresponding to 100 trials (13 groups for the day 1 session and 12 groups for the day 2 session), were recorded for each subject. "xxxDffNN" indicates the base file name for each group of files: "xxx" indicates the initials of the subject, "D" represents the day of recording (1 or 2), "NN" represents the base file number, and "ff" is meaningless. For instance, "cba1ff04" is the base file name for file number 4 of subject "cba" on day 1. Each base file name corresponds to the presentation of 100 images (50 targets and 50 distractors) and is associated with 3 files in the archive: a file with the extension ".CNT" containing the raw data, and files with the extensions ".DAT" and ".EXP" containing additional information about the data trials (see below). For day 1, 13 base file names were generated for each subject "xxx"; for day 2, 12.
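For reference, the full set of base file names for one subject can be generated from this naming scheme as follows (a minimal Matlab sketch; it assumes base file numbers run from 01 to 13 on day 1 and from 01 to 12 on day 2, which is not stated explicitly above):

    % Generate the expected base file names for one subject.
    % Assumption: base file numbers start at 01 and are zero-padded to two digits.
    subj = 'cba';                       % subject initials
    n_series = [13 12];                 % series per day (day 1, day 2)
    names = {};
    for day = 1:2
        for num = 1:n_series(day)
            names{end+1} = sprintf('%s%dff%02d', subj, day, num);  % e.g. 'cba1ff04'
        end
    end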

Reading/processing the data

The publicly available EEGLAB software allows you to import these data into Matlab. To import the raw Neuroscan CNT files, first use menu "File > Import data > From Neuroscan CNT file" and simply press Enter in the dialog. Then extract data epochs using menu "Edit > Extract epochs" (simply press OK). Then use menu "File > Import epoch > From Neuroscan .DAT file" to import the epoch information. You are now ready to analyse the data (you might want to start by concatenating the files from each subject using menu "Edit > Merge datasets"). Electrode locations and electrode names (as stored in the original .CNT raw data files along with the 10-20 system correspondence) are available as an Excel file here. A channel location file compatible with the EEGLAB software is also available (delorme_locfile.loc); it may be read in EEGLAB using menu "Edit > Channel locations".
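For batch processing, the menu steps above can also be scripted. The sketch below uses standard EEGLAB commands; the file names follow the naming scheme described earlier, and the epoch limits are illustrative assumptions rather than values taken from this page:

    % Minimal EEGLAB import sketch (assumes EEGLAB is on the Matlab path).
    [ALLEEG, EEG, CURRENTSET] = eeglab;                        % start EEGLAB

    % Import one raw Neuroscan .CNT file
    EEG = pop_loadcnt('cba1ff04.CNT', 'dataformat', 'auto');

    % Extract epochs around all stimulus events
    % (the [-0.1 0.9] second window is an assumption, not a prescribed value)
    EEG = pop_epoch(EEG, {}, [-0.1 0.9]);

    % Import the matching Neuroscan .DAT epoch information file
    EEG = pop_loaddat(EEG, 'cba1ff04.DAT');

    % Load the EEGLAB-compatible channel location file mentioned above
    EEG = pop_chanedit(EEG, 'load', {'delorme_locfile.loc', 'filetype', 'loc'});

    % Datasets from the same subject can then be concatenated, e.g.:
    % EEG = pop_mergeset(EEG1, EEG2);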

Preprocessed data

The EEGLAB study tutorial also contains a "STUDY" grouping some of the target and distractor trials for 10 subjects in this task. The study is available here (380 Mb). The EEGLAB script that was used to process these subjects and generate the datasets for the study is available here. Note that these data have already been pre-processed and the raw Neuroscan .CNT files have been removed.
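Once downloaded, the STUDY can be loaded in Matlab along these lines (a sketch; the study file name and folder are placeholders, since the actual archive contents are not listed here):

    % Load an EEGLAB STUDY and its datasets (file name and path are placeholders).
    [STUDY, ALLEEG] = pop_loadstudy('filename', 'animal.study', ...
                                    'filepath', './study_folder');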

Images

Images presented during the experiment are available here for target images and here for non-target images. The Corel images on this site are for viewing only and may not be downloaded, copied or saved: they were purchased by the CERCO CNRS laboratory for psychophysics research, and under the terms of our licensing agreement we cannot sell or give away these images. All of these images are also available on a University of Berkeley web site (enter an image number on this site to look up the image presented in each trial).

Cite

ERP analyses of these data have been published in:
Delorme, A., Rousselet, G., Macé, M., Fabre-Thorpe, M. (2004). Interaction of bottom-up and top-down processing in the fast visual analysis of natural scenes. Cognitive Brain Research, 19, 103-113. Author's PDF, Science Direct
Note that these data have also been used to generate brain dynamics animations in
Delorme, A., Makeig, S., Fabre-Thorpe, M., Sejnowski, T. (2002). From single-trial EEG to brain area dynamics. Neurocomputing, 44-46, 1057-1064. Author's PDF, Science Direct
which is the first paper to compute synchronization between brain source activities separated using ICA.

