Received Feb. 18, 1997; revised July 9, 1997; accepted July 10, 1997.
1 Section of Neurobiology and Behavior, Cornell University, Ithaca, New York 14853, and 2 University of California Bodega Marine Laboratory, Bodega Bay, California 94923
A fundamental problem faced by the auditory system of humans and other vertebrates is the segregation of concurrent vocal signals. To discriminate between individual vocalizations, the auditory system must extract information about each signal from the single temporal waveform that results from the summation of the simultaneous acoustic signals. Here, we present the first report of midbrain coding of simultaneous acoustic signals in a vocal species, the plainfin midshipman fish, a species that routinely encounters concurrent vocalizations. During the breeding season, nesting males congregate and produce long-duration, multiharmonic mate calls that overlap, producing beat waveforms. Neurophysiological responses to two simultaneous tones with frequencies near the fundamental frequencies of natural calls reveal that midbrain units temporally code the difference frequency (dF). Many neurons are tuned to a specific dF; their selectivity overlaps the range of dFs for naturally occurring acoustic beats. Beats and amplitude-modulated (AM) signals are also coded differently by most units. Although some neurons exhibit differential tuning for beat dFs and the modulation frequencies (modFs) of AM signals, others exhibit similar temporal selectivity but differ in their degree of synchronization to dFs and modFs. The extraction of dF information, together with other auditory cues, could enable the detection and segregation of concurrent vocalizations, whereas differential responses to beats and AM signals could permit discrimination of beats from other AM-like signals produced by midshipman. A central code of beat dFs may be a general vertebrate mechanism for coding concurrent acoustic signals, including human vowels.
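For readers unfamiliar with the beat terminology used above, the following standard trigonometric identity (not taken from either abstract) shows why two concurrent tones at nearby fundamental frequencies F1 and F2 sum to a waveform whose envelope is modulated at the difference frequency dF = |F1 - F2|:

\[
\sin(2\pi F_1 t) + \sin(2\pi F_2 t) \;=\; 2\,\cos\!\left(\pi\,\mathrm{dF}\,t\right)\,\sin\!\left(2\pi \bar{F}\,t\right),
\qquad \bar{F} = \frac{F_1 + F_2}{2}, \quad \mathrm{dF} = F_1 - F_2 .
\]

The envelope, |2 cos(pi dF t)|, repeats once every 1/dF seconds, so two overlapping hums wax and wane at dF even though neither caller produces energy at that frequency. By contrast, a sinusoidally amplitude-modulated tone of the generic form s(t) = [1 + m cos(2 pi modF t)] sin(2 pi Fc t) (Fc, m, and modF are generic symbols here, not stimulus parameters from the study) contains three spectral components (Fc and Fc +/- modF) and no envelope-rate phase reversals, a physical difference that may underlie the differential midbrain responses to beats and AM signals described above.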
The Journal of Neurophysiology, Vol. 81, No. 2, February 1999, pp. 552-563
Copyright ©1999 by the American Physiological Society
Midbrain combinatorial code for temporal and spectral information in concurrent acoustic signals

All vocal species, including humans, often encounter simultaneous (concurrent) vocal signals from conspecifics. To segregate concurrent signals, the auditory system must extract information regarding the individual signals from their summed waveforms. During the breeding season, nesting male midshipman fish (Porichthys notatus) congregate in localized regions of the intertidal zone and produce long-duration (>1 min), multiharmonic signals ("hums") during courtship of females. The hums of neighboring males often overlap, resulting in acoustic beats with amplitude and phase modulations at the difference frequencies (dFs) between their fundamental frequencies (F0s) and harmonic components. Behavioral studies also show that midshipman can localize a single hum-like tone when presented with a choice between two concurrent tones that originate from separate speakers. A previous study of the neural mechanisms underlying the segregation of concurrent signals demonstrated that midbrain neurons temporally encode a beat's dF through spike synchronization; however, spectral information about at least one of the beat's components is also required for signal segregation. Here we examine the encoding of spectral differences in beat signals by midbrain neurons. The results show that, although the spike rate responses of many neurons are sensitive to the spectral composition of a beat, virtually all midbrain units can encode information about differences in the spectral composition of beat stimuli via their interspike intervals (ISIs), with an equal distribution of ISI spectral sensitivity across the behaviorally relevant dFs. Together, temporal encoding in the midbrain of dF information through spike synchronization and of spectral information through ISIs could permit the segregation of concurrent vocal signals.
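As a concrete illustration of the two temporal measures named in these abstracts, the short Python sketch below computes (1) the synchronization of a spike train to a beat's dF, using the standard vector-strength (synchronization coefficient) measure, and (2) an interspike-interval histogram. This is an illustrative sketch only: the function names, bin widths, and the synthetic spike train are assumptions made for this example, not the recording or analysis methods of the studies.

import numpy as np


def vector_strength(spike_times_s, freq_hz):
    # Synchronization coefficient of spike times to one cycle of freq_hz:
    # 1.0 means perfect phase locking, 0.0 means no phase locking.
    phases = 2.0 * np.pi * freq_hz * np.asarray(spike_times_s, dtype=float)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / phases.size


def isi_histogram_ms(spike_times_s, bin_ms, max_ms):
    # Interspike-interval (ISI) histogram; the ISI distribution is the
    # quantity the 1999 abstract proposes as a carrier of spectral information.
    isis_ms = np.diff(np.sort(spike_times_s)) * 1000.0
    edges = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, edges = np.histogram(isis_ms, bins=edges)
    return counts, edges


if __name__ == "__main__":
    # Hypothetical unit that fires roughly once per beat cycle of a
    # two-tone stimulus with dF = 2 Hz (e.g., 100 Hz + 102 Hz), with jitter.
    rng = np.random.default_rng(seed=1)
    d_f = 2.0                                  # difference frequency in Hz
    spikes = np.arange(0.0, 10.0, 1.0 / d_f) + rng.normal(0.0, 0.02, 20)
    print("vector strength at dF:", round(vector_strength(spikes, d_f), 3))
    counts, edges = isi_histogram_ms(spikes, bin_ms=10.0, max_ms=1000.0)
    print("modal ISI (ms):", edges[np.argmax(counts)])

In the framework of the abstracts, a downstream comparison of synchronization (which tracks dF) and ISI structure (which tracks spectral composition) is what could support segregation of concurrent hums; the stimulus values and analysis parameters above are placeholders chosen only to keep the example self-contained.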