Contact
Secretary, Medical Physics Group
Department of Medical Physics and Acoustics
School of Medicine and Health Sciences
University of Oldenburg
D-26111 Oldenburg
phone +49 (441) 798-5470
fax +49 (441) 798-3902
Parcel address:
Carl von Ossietzky Str. 9-11
D-26111 Oldenburg
Contact for OLACS CORPUS Downloads
Experiments
To fully understand the complex ways in which speech recognition, understanding, and production work in difficult situations (e.g. under the influence of noise), several experiments are run to deliver empirical data.
We are frequently in need of participants for the various experiments.
Participants are required to be native speakers of German. Additional requirements may apply.
If you are interested in participating, and thus in gaining insight into the fascinating world of research, please contact us by email:
Here you will find an overview of the various experimental methods we use:
Audiometry
An audiogram is a standard way of representing a person's hearing loss or hearing ability. Most audiograms cover the limited range from 100 Hz to 8000 Hz (8 kHz), which is most important for the clear understanding of speech, and they plot the threshold of hearing relative to a standardized curve that represents 'normal' hearing, in dB HL. Audiograms are produced using a piece of test equipment called an audiometer, which allows different frequencies to be presented to the subject, usually over calibrated headphones, at any specified level.
The test involves tones being presented at specific frequencies (pitch) and intensities (loudness). When the person hears the sound, they raise their hand or press a button so that the tester knows they have heard it. The lowest intensity sound they can hear is recorded. Typically, audiometric tests determine a subject's hearing levels with the help of an audiometer, but they may also measure the ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. The results of audiometric tests are used to diagnose hearing loss or diseases of the ear, and they are often presented as an audiogram.
We use the audiogram as a pretest to determine each participant's hearing ability.
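The threshold search itself usually follows a simple adaptive rule: lower the tone level after each response the listener hears, raise it after each miss. The Python sketch below simulates such a "down 10, up 5" staircase against a toy listener model; the step sizes, stopping rule, and logistic response model are illustrative assumptions, not the exact clinical protocol.

```python
import math
import random

def simulated_listener(level_db, true_threshold_db=35.0):
    """Toy response model (an assumption, not real data): the probability
    of hearing the tone rises steeply around the listener's true threshold."""
    p_heard = 1.0 / (1.0 + math.exp(-(level_db - true_threshold_db) / 2.0))
    return random.random() < p_heard

def staircase_threshold(respond, start_db=60.0, down=10.0, up=5.0, n_reversals=6):
    """Simplified 'down 10, up 5' staircase: lower the level after each
    'heard' response, raise it after each miss, stop after a fixed number
    of direction reversals, and estimate the threshold as the mean level
    at the reversal points."""
    level = start_db
    last_heard = None
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        heard = respond(level)
        if last_heard is not None and heard != last_heard:
            reversal_levels.append(level)  # direction of the track reversed here
        last_heard = heard
        level += -down if heard else up
    return sum(reversal_levels) / len(reversal_levels)

if __name__ == "__main__":
    print(f"Estimated threshold: {staircase_threshold(simulated_listener):.1f} dB HL")
```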
EEG/ERP
An electroencephalogram (EEG) is a test that measures and records the electrical activity of the brain. Special sensors (electrodes) are attached to the head and connected by wires to a computer. The computer records your brain's electrical activity on the screen as wavy lines, which represent the changes in the pattern of the brain's electrical activity.
Event-Related Potentials (ERPs) are obtained by averaging the EEG activity time-locked to the presentation of stimuli (visual or auditory); brain activity that is not time-locked to the stimulus averages out, leaving the response evoked by stimulus processing.
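In essence, the averaging step cuts a short segment of EEG around each stimulus onset, subtracts the pre-stimulus baseline, and averages the segments. A minimal numpy sketch of that core step, assuming a single continuous channel and known onset samples (the variable names and the epoch window are illustrative, and real pipelines add filtering and artifact rejection):

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.2, tmax=0.8):
    """Average EEG segments time-locked to stimulus onsets.

    eeg    : 1-D array, continuous signal from one electrode
    onsets : sample indices of stimulus presentations
    sfreq  : sampling rate in Hz
    Returns the time axis (s, relative to onset) and the averaged epoch.
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for o in onsets:
        if o - pre < 0 or o + post > len(eeg):
            continue  # skip events too close to the recording edges
        seg = eeg[o - pre : o + post]
        seg = seg - seg[:pre].mean()  # baseline-correct on the pre-stimulus interval
        epochs.append(seg)
    avg = np.mean(epochs, axis=0)  # non-time-locked activity cancels out here
    times = np.arange(-pre, post) / sfreq
    return times, avg
```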
As a participant, you will be invited to our Linguistic Lab (A6, 2-202), where you will sit in front of a monitor and push buttons every now and then while working on an experimental task. The electrodes are attached to a cap that is placed on your head; an abrasive gel is used to connect the electrodes on the cap with your scalp. Our lab is fully equipped, so you have the opportunity to wash and blow-dry your hair after the experiment.
Chasing Gazes
The eyes have often been described as a window into the human mind. When we hear spoken language about things present in our immediate surroundings, our gaze (largely automatically) searches for items or persons that speakers refer to. When we speak to describe something that we see, our eyes fixate upon parts of what we perceive as our description unfolds. Since where we look is tightly linked to processing linguistic information about what we see, tracking eye movements has become a well-established tool for psycholinguists to learn more about the time course of listening and speaking.
When looking at a scene, the eyes regularly perform rapid jumps, so-called saccades, between fixations, the periods during which the gaze rests on a particular point. During a saccade, we perceive virtually nothing. What we perceive as a stable image of our surroundings is essentially an image computed by the brain from these multiple rests, or fixations.
Our eye-tracking lab is located at the "Haus des Hörens" audiological research center. We use an SR Research EyeLink CL Remote eye tracker that can record eye movements and fixations at sampling rates of up to 1000 Hz. The device can be used both with a headrest and in remote mode. Tracking a subject's gaze with this system works through high-speed image analysis of an infrared video stream. The device consists of three parts: an infrared camera pointed at the subject, an infrared light source, and the computer running the analysis software. To estimate the gaze position, the system keeps track of the elliptical shape of the pupil. The infrared light illuminating the subject creates a characteristic reflection on the cornea, which is also recorded. After an initial calibration, the pupil position and shape together with the position of the corneal reflection allow for a precise mapping of gaze onto the stimulus picture or scene shown to the subject.
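Once gaze samples have been mapped onto the screen, they still have to be split into fixations and saccades. A common offline approach is velocity-threshold classification (I-VT): samples whose angular velocity exceeds a threshold are treated as saccadic. A minimal numpy sketch of this idea follows; the velocity threshold and the pixels-per-degree conversion are setup-specific assumptions, and the EyeLink software ships its own, more sophisticated online parser.

```python
import numpy as np

def classify_ivt(x, y, sfreq=1000.0, vel_threshold=30.0, px_per_deg=35.0):
    """Velocity-threshold (I-VT) classification of gaze samples.

    x, y          : gaze position per sample, in screen pixels
    sfreq         : tracker sampling rate in Hz (here: 1000 Hz)
    vel_threshold : samples faster than this (deg/s) count as saccadic
    px_per_deg    : screen pixels per degree of visual angle (setup-specific)
    Returns a boolean array, True where the sample belongs to a fixation.
    """
    dx, dy = np.diff(x), np.diff(y)
    vel = np.hypot(dx, dy) * sfreq / px_per_deg  # angular velocity in deg/s
    # The first sample has no velocity estimate; treat it as a fixation sample.
    return np.concatenate([[True], vel < vel_threshold])
```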
The setup is currently used for language perception experiments in which subjects listen to speech of varying complexity under different noise conditions. We show the subjects black-and-white scene drawings from our OLACS corpus that either match or do not match the sentence they hear, and we track the course of fixations on the visual display (a typical analysis of such data is sketched below). In a different experiment, we examine the distribution of gazes across depicted scenes while the subjects describe the scenes verbally.
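A typical analysis of such data computes, for each point in time after sentence onset, the proportion of trials in which subjects fixate each picture region. A minimal sketch of that binning step, assuming the gaze samples have already been coded by region (the region coding scheme is hypothetical):

```python
import numpy as np

def fixation_proportions(fix_regions, n_regions, sfreq=1000.0, bin_ms=50):
    """Proportion of trials fixating each interest area, per time bin.

    fix_regions : array of shape (n_trials, n_samples); each entry is the
                  index of the picture region fixated at that sample
                  (e.g. 0 = target, 1 = competitor, 2 = background)
    Returns an array of shape (n_regions, n_bins) of fixation proportions.
    """
    samples_per_bin = int(bin_ms * sfreq / 1000)
    n_trials, n_samples = fix_regions.shape
    n_bins = n_samples // samples_per_bin
    props = np.zeros((n_regions, n_bins))
    for b in range(n_bins):
        chunk = fix_regions[:, b * samples_per_bin : (b + 1) * samples_per_bin]
        for r in range(n_regions):
            props[r, b] = np.mean(chunk == r)  # share of samples on region r
    return props
```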
Production Studies
On closer inspection, speaking is quite a complex task. We have to think of what we want to talk about, plan an appropriate way of saying what we want to communicate, select the right words, build a syntactic structure and put the words into the right places, and finally coordinate the articulatory muscles to produce the planned words and sentences in the correct order without spluttering. Most of these processes happen automatically and within a few hundred milliseconds.
Speaking under noisy circumstances is more challenging still. Speakers normally tend to counteract concurrent noise by changing the pitch and intensity of their voice (the so-called Lombard effect). We are interested in the possible effects (automatic or strategic) that adverse communication settings and the sensorineural perception problems of speakers with hearing impairment might have on the central cognitive processes involved in planning and formulating sentences. Do speakers perhaps "simplify" their language in noise?
To test this question empirically, we have developed different experimental methods for eliciting spontaneous and controlled speech under different acoustic background conditions. In our language production experiments, speakers are presented with a variety of materials, mostly pictures, which we ask them to describe in different ways.
For example, we recently conducted a study in which subjects with and without hearing impairment were asked to describe simple drawings from the OLACS corpus in one sentence. The acoustic background was manipulated using the communication and acoustics simulator (CAS) facility at the Oldenburg "Haus des Hörens" audiological research center.
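To quantify how speakers change the intensity and pitch of their voice in noise, both measures can be estimated directly from the recordings. The numpy sketch below computes an overall RMS level and a crude autocorrelation-based fundamental-frequency estimate for a single voiced frame; real analyses typically rely on dedicated tools such as Praat, so this is only a rough illustration under simplified assumptions:

```python
import numpy as np

def rms_db(signal):
    """Overall intensity of a speech signal as RMS level in dB
    (relative to full scale; the small constant avoids log(0))."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)) + 1e-12)

def f0_autocorr(frame, sfreq, fmin=75.0, fmax=400.0):
    """Rough F0 estimate for one voiced frame: find the autocorrelation
    peak within the plausible pitch-period range. The frame must be
    longer than the longest period considered (sfreq / fmin samples)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lag_min, lag_max = int(sfreq / fmax), int(sfreq / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sfreq / lag  # pitch period in samples -> frequency in Hz
```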
Reaction Time Studies
Simple reaction time (RT) is the time required for an observer to respond to the presence of a stimulus. For example, a subject might be asked to press a button as soon as a light appears or a sound is played. The mean RT of young adults is approximately 190 milliseconds for detecting a visual stimulus and approximately 160 milliseconds for detecting an auditory stimulus.
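The logic of a simple RT trial is easy to sketch: wait a random foreperiod so the response cannot be anticipated, present the stimulus, and measure the time until the button press. The toy terminal version below illustrates the idea in Python; laboratory experiments use dedicated presentation software with far more precise timing, so this is illustrative only:

```python
import random
import time

def simple_rt_trial():
    """One simple-reaction-time trial in the terminal. Terminal I/O adds
    latency, so the measured times are only approximate."""
    input("Press Enter, then respond to 'GO!' as fast as you can.")
    time.sleep(random.uniform(1.0, 3.0))  # random foreperiod prevents anticipation
    t0 = time.perf_counter()
    input("GO! ")  # respond by pressing Enter
    rt_ms = (time.perf_counter() - t0) * 1000
    print(f"Reaction time: {rt_ms:.0f} ms")

if __name__ == "__main__":
    simple_rt_trial()
```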
As a participant, you will be invited to our Linguistic Lab (A6, 2-202), where you will sit in front of a monitor and push buttons every now and then while working on an experimental task.