Interfacing Sound Stream Segregation to Automatic Speech Recognition -- Preliminary Results on Listening to Several Sounds Simultaneously

Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawabata

This paper reports the preliminary results of experiments on listening to several sounds at once. Two issues are addressed: segregating speech streams from a mixture of sounds, and interfacing speech stream segregation with automatic speech recognition (ASR). Speech stream segregation (SSS) is modeled as a process of extracting harmonic fragments, grouping these extracted harmonic fragments, and substituting some sounds for non-harmonic parts of groups. This system is implemented by extending the harmonic-based stream segregation system reported at AAAI-94 and IJCAI-95. The main problem in interfacing SSS with HMM-based ASR is how to improve the recognition performance, which is degraded by spectral distortion of segregated sounds caused mainly by the binaural input, grouping, and residue substitution. Our solution is to re-train the parameters of the HMM with training data binauralized for four directions, to group harmonic fragments according to their directions, and to substitute the residue of harmonic fragments for non-harmonic parts of each group. Experiments with 500 mixtures of two women's utterances of a word showed that the cumulative accuracy of word recognition up to the 10th candidate of each woman's utterance is, on average, 75%.
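The grouping and residue-substitution steps described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): harmonic fragments carry an estimated direction from the binaural input, each fragment is assigned to the nearest of a fixed set of candidate source directions, and frames of a group not covered by any harmonic fragment are the gaps that would be filled with the residue of harmonic extraction. All names, tolerances, and data structures here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class HarmonicFragment:
    start: int          # first frame of the fragment
    end: int            # one past the last frame
    direction: float    # estimated azimuth in degrees (from binaural cues)

def group_by_direction(fragments, directions, tolerance=15.0):
    """Assign each harmonic fragment to the nearest candidate source
    direction (the paper re-trains HMMs for four directions)."""
    groups = {d: [] for d in directions}
    for frag in fragments:
        nearest = min(directions, key=lambda d: abs(d - frag.direction))
        if abs(nearest - frag.direction) <= tolerance:
            groups[nearest].append(frag)
    return groups

def uncovered_frames(group, total_frames):
    """Return the frames of a group not covered by any harmonic
    fragment; these non-harmonic parts would be filled with the
    residue of harmonic extraction."""
    covered = [False] * total_frames
    for frag in group:
        for t in range(frag.start, frag.end):
            covered[t] = True
    return [t for t in range(total_frames) if not covered[t]]
```

For example, fragments estimated near +30 and -30 degrees would be separated into two groups, and the gaps in each group identified for residue substitution before the segregated stream is passed to the HMM-based recognizer.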
