Making sound features disappear
N Rabinowitz, M Schemitsch, O Brimijoin and E P Simoncelli
Published in 37th MidWinter Meeting, Association for Research in Otolaryngology (ARO), vol. 37, pp. 87-88, Feb 2014.
Methods
To test whether listeners learn to discount sound features that covary with their own head movements, we built a novel psychoacoustics chamber in which features of sound sources were systematically varied according to human subjects' head positions. We predicted that subjects would passively learn these head-coupled features and, interpreting them as head-angle-related listening conditions, cease to perceive them as source features.
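The coupling can be pictured as a simple closed loop: the chamber tracks head yaw and continuously remaps a task-irrelevant sound feature as a function of that angle. The following Python sketch illustrates only this loop structure; the tracker and audio interfaces, the linear yaw-to-frequency mapping, and the update rate are illustrative assumptions, not the authors' implementation.

import time

def read_head_yaw_deg():
    # Placeholder for a head-tracker query; assumed to return yaw in degrees.
    return 0.0

def yaw_to_carrier_hz(yaw_deg, base_hz=1000.0, hz_per_deg=5.0):
    # Map head yaw to a task-irrelevant feature value (here, a carrier frequency).
    return base_hz + hz_per_deg * yaw_deg

def set_source_feature(value_hz):
    # Placeholder for updating the free-field sound source in real time.
    pass

start = time.time()
while time.time() - start < 60.0:          # run the coupling for one minute
    yaw = read_head_yaw_deg()
    set_source_feature(yaw_to_carrier_hz(yaw))
    time.sleep(0.01)                        # ~100 Hz update rate (assumed)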
Results
In the first experiment, subjects listened to a regular pulse train in the free field while performing a visual search task. Subjects had to occasionally report changes in the pulse rate, but were unaware that the (task-irrelevant) pure-tone carrier frequency of the pulses was systematically coupled to their head angle (yaw). The magnitude of these carrier-frequency changes was adjusted to be substantially larger than any that real-world room acoustics could produce. After 30-45 minutes of exposure, we found that subjects' carrier-frequency judgements were systematically biased according to their head angles, typically by ~15% of their pre-exposure thresholds.
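A concrete way to picture the experiment-1 stimulus is a regular train of tone pips whose carrier frequency is a function of head yaw. The NumPy sketch below is one such construction; the pulse rate, pip duration, base frequency, and yaw-to-frequency mapping are assumed values for illustration only.

import numpy as np

FS = 44100  # sample rate (Hz), assumed

def pulse_train(duration_s, pulse_rate_hz, carrier_hz, pulse_dur_s=0.02):
    # Regular train of tone pips: a pure-tone carrier gated on at the pulse rate.
    t = np.arange(int(duration_s * FS)) / FS
    gate = (t % (1.0 / pulse_rate_hz)) < pulse_dur_s   # rectangular on/off envelope
    return gate * np.sin(2 * np.pi * carrier_hz * t)

def carrier_for_yaw(yaw_deg, base_hz=1000.0, octaves_per_90deg=0.5):
    # Couple carrier frequency to head yaw on a log-frequency axis (assumed form).
    return base_hz * 2.0 ** (octaves_per_90deg * yaw_deg / 90.0)

# Example: 1 s of a 4 Hz pulse train for a listener facing 30 degrees to the right.
snippet = pulse_train(1.0, pulse_rate_hz=4.0, carrier_hz=carrier_for_yaw(30.0))

The reversed coupling of the second experiment would correspond to making the pulse rate, rather than the carrier frequency, a function of yaw.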
In the second experiment, we reversed the roles of pulse rate and carrier frequency, coupling the (task-irrelevant) pulse rate to the subjects' head angle while they reported on carrier frequency. Unlike in the first experiment, we found that exposure did not induce systematic head-angle biases in pulse-rate judgements.
Conclusion
These results suggest that the brain can learn to recognize and discount listening conditions, but only for a limited set of sound features for which it possesses the infrastructure needed to compensate for head-angle-related changes.