US20140241546A1 - Microphone sensitivity difference correction device, method, and noise suppression device - Google Patents
- Publication number
- US20140241546A1 (application US14/155,731)
- Authority
- US
- United States
- Prior art keywords
- correction
- frequency domain
- signals
- phase difference
- frequency
- Prior art date
- 2013-02-28
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
- H04R29/004—Monitoring arrangements; Testing arrangements for microphones
- H04R29/005—Microphone arrays
- H04R29/006—Microphone matching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
- H04R3/10—Circuits for transducers, loudspeakers or microphones for correcting frequency response of variable resistance microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2410/00—Microphones
- H04R2410/01—Noise reduction using microphones having different directional characteristics
- a microphone sensitivity difference correction device includes: a detection section that detects a frequency domain signal expressing a stationary noise, based on frequency domain signals of input sound signals respectively input from plural microphones contained in a microphone array that have been converted into signals in a frequency domain for each frame; a first correction section that employs the frequency domain signal expressing the stationary noise to compute a first correction coefficient for correcting the sensitivity difference between the plural microphones by a frame unit, and that employs the first correction coefficient to correct the frequency domain signals by frame unit; and a second correction section that employs the frequency domain signals that have been corrected by the first correction section to compute a second correction coefficient for correcting by frequency unit the sensitivity difference between the plural microphones for each of the frames, and that employs the second correction coefficient to correct for each of the frames by frequency unit the frequency domain signals that have been corrected by the first correction section.
- FIG. 1 is a block diagram illustrating an example of a configuration of a noise suppression device according to a first exemplary embodiment
- FIG. 2 is a block diagram illustrating an example of a functional configuration of a noise suppression device according to the first exemplary embodiment
- FIG. 3 is a schematic diagram to explain a sound source position with respect to a microphone array
- FIG. 4 is a schematic block diagram illustrating an example of a computer that functions as a noise suppression device
- FIG. 5 is a flow chart illustrating noise suppression processing according to the first exemplary embodiment
- FIG. 6 is a block diagram illustrating an example of a functional configuration of a noise suppression device according to a second exemplary embodiment
- FIG. 7 is a graph illustrating an example of phase difference when an inter-microphone distance is short
- FIG. 8 is a graph illustrating an example of phase difference when an inter-microphone distance is long
- FIG. 9 is a schematic diagram to explain a phase difference determination region
- FIG. 10 is a flow chart illustrating noise suppression processing of a second exemplary embodiment
- FIG. 11 is a graph illustrating an example of input sound signals
- FIG. 12 is a graph illustrating example results of noise suppression by a conventional method
- FIG. 13 is a graph illustrating example results of noise suppression by the technology disclosed herein
- FIG. 1 illustrates a noise suppression device 10 according to a first exemplary embodiment.
- a microphone array 11 of plural microphones at a specific separation d is connected to the noise suppression device 10 .
- the microphones 11 A and 11 B collect peripheral sound and convert the collected sound into an analogue signal and output the signal.
- the signal output from the microphone 11 A is input sound signal 1 and the signal output from the microphone 11 B is input sound signal 2 .
- Noise other than the target voice (sound from the target voice source, for example voices of people talking) is mixed into the input sound signal 1 and the input sound signal 2 .
- the input sound signal 1 and the input sound signal 2 that have been output from the microphone array 11 are input to the noise suppression device 10 .
- After correcting for the sensitivity difference between the microphone 11A and the microphone 11B, the noise suppression device 10 generates and outputs a noise-suppressed output sound signal.
- the noise suppression device 10 includes analogue-to-digital (A/D) converters 12 A, 12 B, time-frequency converters 14 A, 14 B, a detection section 16 , a frame unit correction section 18 , a frequency unit correction section 20 , and an amplitude ratio computation section 22 .
- the noise suppression device 10 also includes a suppression coefficient computation section 24 , suppression signal generation section 26 , and a frequency-time converter 28 .
- the frame unit correction section 18 is an example of a first correction section of technology disclosed herein.
- the frequency unit correction section 20 is an example of a second correction section of technology disclosed herein.
- the amplitude ratio computation section 22 , the suppression coefficient computation section 24 , and the suppression signal generation section 26 are examples of a suppression section of technology disclosed herein.
- Portions of the A/D converters 12 A, 12 B, the time-frequency converters 14 A, 14 B, the detection section 16 , the frame unit correction section 18 , the frequency unit correction section 20 and the frequency-time converter 28 are examples of a microphone sensitivity difference correction device of technology disclosed herein.
- the A/D converters 12 A, 12 B respectively take the input sound signal 1 and the input sound signal 2 that are input analogue signals and convert them at a sampling frequency Fs into a signal M 1 (t) and a signal M 2 (t) that are digital signals.
- t is a sampling time stamp.
- the time-frequency converters 14 A, 14 B respectively take the signal M 1 (t) and the signal M 2 (t) that are time domain signals converted by the A/D converters 12 A, 12 B, and convert them into signals M 1 (f, i) and signals M 2 (f, i) that are frequency domain signals for each of the frames.
- The conversion into the frequency domain may be performed by, for example, a Fast Fourier Transform (FFT).
- i denotes frame number
- f denotes frequency
- M(f, i) is a signal representing the frequency f of frame i, and is an example of a frequency domain signal of technology disclosed herein.
- One frame may be set at, for example, several tens of msec.
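- As a concrete illustration of the framing and time-frequency conversion above, the following Python sketch (an illustrative assumption; the embodiments do not prescribe an implementation, a window, or an exact frame length) produces the per-frame frequency domain signals M1(f, i) and M2(f, i):

```python
import numpy as np

def time_frequency(signal, frame_len=256, hop=128):
    """Convert a time domain signal into per-frame spectra M(f, i).

    frame_len and hop are illustrative choices (e.g. 32 ms / 16 ms at
    Fs = 8 kHz); the embodiments only state "several tens of msec".
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    # rows: frame index i, columns: frequency bin f
    return np.stack([
        np.fft.rfft(window * signal[i * hop:i * hop + frame_len])
        for i in range(n_frames)
    ])

# M1[i, f] and M2[i, f] correspond to the signals M1(f, i) and M2(f, i);
# m1_t and m2_t are the digital signals from the A/D converters 12A, 12B:
# M1 = time_frequency(m1_t); M2 = time_frequency(m2_t)
```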
- The detection section 16 employs the signals M1(f, i) and the signals M2(f, i) converted by the time-frequency converters 14A, 14B to determine, for each of the frequencies f in each of the frames, whether there is stationary noise or a nonstationary sound containing a voice. Signals M1(f, i) and signals M2(f, i) expressing stationary noise are thereby detected.
- For example, a ratio r(f, i) between a stationary noise model Nst(f, i) and the signals M1(f, i) may be computed, and the signals M1(f, i) and the signals M2(f, i) determined to be signals representing stationary noise when the value of r(f, i) is near to 1.0. Note that the determination may instead be made based on the ratio r(f, i) between the stationary noise model Nst(f, i) and the signals M2(f, i).
- determination may be made as to whether or not the spectral profile of the signals M 1 (f, i) has a peak and trough structure with the characteristics of voice data. Determination may be made that there is stationary noise when there is a poorly defined peak and trough structure. Determination of the peak and trough structure may be performed by comparison of peak values of the signal. Note that determination may be made as to whether or not there is stationary noise based on the spectral profile of the signals M 2 (f, i).
- a correlation coefficient is computed between a spectral profile of signals M 1 (f, i) of the current frame and spectral profiles of signals M 1 (f, i) of the previous frame.
- When the correlation coefficient is near to 0, determination may be made that the signals M1(f, i) and the signals M2(f, i) are signals representing stationary noise.
- stationary noise detection may be made based on the correlation between the spectral profile of the signals M 2 (f, i) of the current frame and the spectral profile of the signals M 2 (f, i) of the previous frame.
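- The ratio-based criterion above can be sketched as follows; the function name, the construction of the noise model, and the tolerance are assumptions for illustration, since the embodiments state only that r(f, i) should be near to 1.0:

```python
import numpy as np

def detect_stationary_bins(M1, N_st, tol=0.2):
    """Per-bin stationary noise detection via the ratio r(f, i).

    N_st is a stationary noise model spectrum (e.g. a slowly updated
    average of past noise frames - an assumption; the embodiments do not
    specify how the model is built). A ratio near 1.0 marks the bin as
    stationary noise; tol is an assumed tolerance.
    """
    r = N_st / np.maximum(np.abs(M1), 1e-12)
    return np.abs(r - 1.0) < tol   # boolean mask over frequency bins
```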
- the frame unit correction section 18 employs the signals M 1 (f, i) and the signals M 2 (f, i) detected by the detection section 16 as signals representing stationary noise and computes a sensitivity difference correction coefficient at frame unit level, and corrects the signals M 2 (f, i) at the frame unit level.
- a sensitivity difference correction coefficient C 1 (i) may be computed at the frame unit level as expressed by the following Equation (1). Note that the sensitivity difference correction coefficient C 1 (i) at the frame unit level is an example of a first correction coefficient of technology disclosed herein.
- α is an update coefficient expressing the extent to reflect the frame unit sensitivity difference correction coefficient C1(i−1) computed for the previous frame in the frame unit sensitivity difference correction coefficient C1(i) of the current frame, and is a value such that 0 < α < 1.
- α is an example of a first update coefficient of technology disclosed herein. Namely, the sensitivity difference correction coefficient C1(i−1) of the previous frame is updated by computing the sensitivity difference correction coefficient C1(i) of the current frame.
- fmax is a value that is ½ the sampling frequency Fs.
- The term ΣM1(f, i) of Equation (1) takes a value that is the sum of the signals M1(f, i) detected as signals expressing stationary noise by the detection section 16 over the range from frequency 0 to fmax. The same applies to the term ΣM2(f, i).
- the frame unit correction section 18 generates signals M 2 ′(f, i) that are the signals M 2 (f, i) corrected as expressed by following Equation (2) based on the computed sensitivity difference correction coefficient C 1 (i) by frame unit.
- the frame unit sensitivity difference correction coefficient C 1 (i) expresses the sensitivity difference at the frame unit level between the signals M 1 (f, i) and the signals M 2 (f, i). Multiplying the frame unit sensitivity difference correction coefficient C 1 (i) by the signals M 2 (f, i) enables the sensitivity difference between the signals M 1 (f, i) and signals M 2 (f, i) to be corrected at the frame unit level.
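- A minimal sketch of the frame unit correction, assuming Equation (1) has the exponentially smoothed form C1(i) = α·C1(i−1) + (1 − α)·(ΣM1(f, i)/ΣM2(f, i)) implied by the surrounding description (the variable names, the use of amplitude sums, and the example α are assumptions):

```python
import numpy as np

def frame_unit_correction(M1, M2, noise_mask, C1_prev, alpha=0.9):
    """Equations (1) and (2): frame unit sensitivity difference correction.

    noise_mask marks the frequency bins of the current frame detected as
    stationary noise; alpha (0 < alpha < 1) is the first update
    coefficient (0.9 is an assumed example value).
    """
    num = np.sum(np.abs(M1[noise_mask]))   # sum of M1(f, i) over detected bins
    den = np.sum(np.abs(M2[noise_mask]))
    if den > 0.0:
        C1 = alpha * C1_prev + (1.0 - alpha) * num / den   # Equation (1)
    else:
        C1 = C1_prev           # no stationary bins detected: keep coefficient
    M2_corrected = C1 * M2     # Equation (2)
    return C1, M2_corrected
```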
- the frequency unit correction section 20 employs the signals M 1 (f, i) and the signals M 2 ′(f, i) corrected at the frame unit level by the frame unit correction section 18 to compute a sensitivity difference correction coefficient at the frequency unit level, and to correct the signals M 2 ′(f, i) by frequency unit.
- a frequency unit sensitivity difference correction coefficient C P (f, i) may be computed as expressed in following Equation (3). Note that the frequency unit sensitivity difference correction coefficient C P (f, i) is an example of a second correction coefficient of technology disclosed herein.
- β is an update coefficient representing the extent to reflect the frequency unit sensitivity difference correction coefficient CP(f, i−1) computed at the same frequency f for the previous frame in the frequency unit sensitivity difference correction coefficient CP(f, i) of the current frame, and is a value such that 0 < β < 1.
- β is an example of a second update coefficient of technology disclosed herein. Namely, the frequency unit sensitivity difference correction coefficient CP(f, i−1) of the previous frame is updated by computing the frequency unit sensitivity difference correction coefficient CP(f, i) of the current frame.
- The frequency unit correction section 20 generates signals M2″(f, i) that are the signals M2′(f, i) corrected as expressed by the following Equation (4), based on the computed frequency unit sensitivity difference correction coefficient CP(f, i).
- the frequency unit sensitivity difference correction coefficient C P (f, i) expresses the sensitivity difference at the frequency unit level between the M 1 (f, i) and the M 2 ′(f, i). Multiplying the frequency unit sensitivity difference correction coefficient C P (f, i) by the M 2 ′(f, i) enables correction to be performed by frequency unit of the sensitivity difference between the signals M 1 (f, i) and the signals M 2 ′(f, i). Note that the signals M 2 ′(f, i) are signals on which correction has already been performed at the frame unit level, and correction at the frequency unit level is correction that performs fine correction for each of the frequencies.
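- A corresponding sketch of the frequency unit correction, assuming Equation (3) has the per-bin form CP(f, i) = β·CP(f, i−1) + (1 − β)·(M1(f, i)/M2′(f, i)) implied by the surrounding description:

```python
import numpy as np

def frequency_unit_correction(M1, M2p, CP_prev, beta=0.9):
    """Equations (3) and (4): per-frequency sensitivity difference correction.

    M2p is the frame-unit-corrected signal M2'(f, i); CP_prev holds
    CP(f, i-1) for every bin; beta (0 < beta < 1) is the second update
    coefficient (0.9 is an assumed example value).
    """
    ratio = np.abs(M1) / np.maximum(np.abs(M2p), 1e-12)   # per-bin sensitivity ratio
    CP = beta * CP_prev + (1.0 - beta) * ratio            # Equation (3)
    M2pp = CP * M2p                                       # Equation (4)
    return CP, M2pp
```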
- The amplitude ratio computation section 22 computes the respective amplitude spectra of each of the signals M1(f, i) and the signals M2″(f, i). Amplitude ratios R(f, i) are then respectively computed between amplitude spectra of the same frequency for each of the frequencies in each of the frames.
- The suppression coefficient computation section 24 determines whether the input sound signal is the target voice or noise, and computes a suppression coefficient.
- A case is now considered in which, as illustrated in FIG. 3, a separation between the microphone 11A and the microphone 11B (inter-microphone distance) is d, a sound source direction is θ, and a distance from the sound source to the microphone 11A is ds.
- The sound source direction θ is the direction in which the sound source is present with respect to the microphone array 11, as illustrated in FIG. 3.
- A theoretical value RT of the amplitude ratio between the input sound signal 1 and the input sound signal 2 (the amplitude ratio when there is no sensitivity difference occurring between the microphones) is expressed by the following Equation (5).
- The theoretical value RT of the amplitude ratio takes a value from Rmin to Rmax as expressed by the following Equation (6) and Equation (7).
- The suppression coefficient computation section 24 accordingly first determines a range Rmin to Rmax based on the inter-microphone distance d, the sound source direction θ, and the distance ds from the sound source of the target voice to the microphone 11A. Then, when the computed amplitude ratios R(f, i) are within the range Rmin to Rmax, the input sound signal is determined to be the target voice, and a suppression coefficient γ(f, i) is computed as set out below.
- γmin is a value such that 0 < γmin < 1; when, for example, a suppression amount of −3 dB is desired, γmin is about 0.7, and when a suppression amount of −6 dB is desired, γmin is about 0.5.
- The suppression coefficient γ may be computed so as to gradually change from 1.0 to γmin as the amplitude ratio R(f, i) progresses away from the range Rmin to Rmax, as expressed by the following.
- γ(f, i) = −10(1.0 − γmin)·R(f, i) + 10·Rmax·(1.0 − γmin) + 1.0 (for R(f, i) > Rmax)
- The suppression coefficient γ(f, i) described above is a value from 0.0 to 1.0 that becomes nearer to 0.0 the greater the degree of suppression.
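- A sketch of the suppression coefficient computation under the ramp reconstructed above; the mirrored ramp below Rmin and the clamping to [γmin, 1.0] are assumptions:

```python
import numpy as np

def suppression_coefficient(R, R_min, R_max, gamma_min=0.7):
    """Compute gamma(f, i) from the amplitude ratio R(f, i).

    Inside [R_min, R_max] the bin is treated as target voice (gamma = 1.0);
    away from the range gamma ramps linearly toward gamma_min, following
    the reconstructed ramp for the R > R_max side:
        gamma = -10 (1 - gamma_min) R + 10 R_max (1 - gamma_min) + 1.0
    gamma_min = 0.7 corresponds to roughly -3 dB of suppression.
    """
    if R_min <= R <= R_max:
        return 1.0
    if R > R_max:
        g = -10.0 * (1.0 - gamma_min) * R + 10.0 * R_max * (1.0 - gamma_min) + 1.0
    else:
        g = 10.0 * (1.0 - gamma_min) * R - 10.0 * R_min * (1.0 - gamma_min) + 1.0
    return float(np.clip(g, gamma_min, 1.0))
```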
- By multiplying the suppression coefficient γ(f, i) computed by the suppression coefficient computation section 24 by the signals M1(f, i), the suppression signal generation section 26 generates a suppression signal in which noise has been suppressed for each of the frequencies of each frame.
- the frequency-time converter 28 takes the suppression signal that is a frequency domain signal generated by the suppression signal generation section 26 and converts it into an output sound signal that is a time domain signal by using for example an inverse Fourier transform, and outputs the converted signal.
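- These last two stages can be sketched per frame as follows (overlap-add of successive frames is omitted for brevity):

```python
import numpy as np

def synthesize(M1, gamma, frame_len=256):
    """Multiply M1(f, i) by gamma(f, i) and return the time domain frame.

    A minimal per-frame inverse transform; a real implementation would
    overlap-add successive frames using the hop size of the analysis stage.
    """
    suppressed = gamma * M1                     # suppression signal for one frame
    return np.fft.irfft(suppressed, n=frame_len)
```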
- the noise suppression device 10 may, for example, be implemented by a computer 40 such as that illustrated in FIG. 4 .
- the computer 40 includes a CPU 42 , a memory 44 and a nonvolatile storage section 46 .
- the CPU 42 , the memory 44 and the storage section 46 are connected together through a bus 48 .
- the microphone array 11 (the microphones 11 A and 11 B) are connected to the computer 40 .
- the storage section 46 may be implemented for example by a Hard Disk Drive (HDD) or a flash memory.
- the storage section 46 serving as a storage medium is stored with a noise suppression program 50 for making the computer 40 function as the noise suppression device 10 .
- the CPU 42 reads the noise suppression program 50 from the storage section 46 , expands the noise suppression program 50 in the memory 44 and sequentially executes the processes of the noise suppression program 50 .
- the noise suppression program 50 includes an A/D conversion process 52 , time-frequency conversion process 54 , a detection process 56 , a frame unit correction process 58 , a frequency unit correction process 60 , and an amplitude ratio computation process 62 .
- the noise suppression program 50 also includes a suppression coefficient computation process 64 , a suppression signal generation process 66 , and a frequency-time conversion process 68 .
- the CPU 42 operates as the A/D converters 12 A, 12 B illustrated in FIG. 2 by executing the A/D conversion process 52 .
- the CPU 42 operates as the time-frequency converters 14 A, 14 B illustrated in FIG. 2 by executing the time-frequency conversion process 54 .
- the CPU 42 operates as the detection section 16 illustrated in FIG. 2 by executing the detection process 56 .
- the CPU 42 operates as the frame unit correction section 18 illustrated in FIG. 2 by executing the frame unit correction process 58 .
- the CPU 42 operates as the frequency unit correction section 20 illustrated in FIG. 2 by executing the frequency unit correction process 60 .
- the CPU 42 operates as the amplitude ratio computation section 22 illustrated in FIG. 2 by executing the amplitude ratio computation process 62 .
- the CPU 42 operates as the suppression coefficient computation section 24 illustrated in FIG. 2 by executing the suppression coefficient computation process 64 .
- the CPU 42 operates as the suppression signal generation section 26 illustrated in FIG. 2 by executing the suppression signal generation process 66 .
- the CPU 42 operates as the frequency-time converter 28 illustrated in FIG. 2 by executing the frequency-time conversion process 68 .
- the computer 40 executing the noise suppression program 50 accordingly functions as the noise suppression device 10 .
- It is also possible to implement the noise suppression device 10 with, for example, a semiconductor integrated circuit, more specifically an Application Specific Integrated Circuit (ASIC) or a Digital Signal Processor (DSP).
- Explanation next follows regarding operation of the noise suppression device 10. When the input sound signal 1 and the input sound signal 2 are output from the microphone array 11, the CPU 42 expands the noise suppression program 50 stored on the storage section 46 into the memory 44, and executes the noise suppression processing illustrated in FIG. 5.
- the A/D converters 12 A, 12 B respectively convert, with the sampling frequency Fs, the input sound signal 1 and the input sound signal 2 that are input analogue signals into the signal M 1 (t) and the signal M 2 (t) that are digital signals.
- the time-frequency converters 14 A, 14 B respectively convert the signal M 1 (t) and the signal M 2 (t) that are time domain signals into the signals M 1 (f, i) and the signals M 2 (f, i) that are frequency domain signals for each of the frames.
- The detection section 16 employs the signals M1(f, i) and the signals M2(f, i) to determine, for each of the frequencies f of the frame i, whether the input sound signal is stationary noise or a nonstationary sound, and to detect the signals M1(f, i) and the signals M2(f, i) expressing stationary noise.
- the frame unit correction section 18 employs the signals M 1 (f, i) and the signals M 2 (f, i) detected as signals expressing stationary noise to compute the frame unit sensitivity difference correction coefficient C 1 (i) such as for example expressed by Equation (1).
- the frame unit correction section 18 multiplies the frame unit sensitivity difference correction coefficient C 1 (i) by the signals M 2 (f, i), and generates signals M 2 ′(f, i) with the sensitivity difference between the signals M 1 (f, i) and the signals M 2 (f, i) corrected by frame unit.
- the frequency unit correction section 20 employs the signals M 1 (f, i) and the signals M 2 ′(f, i) to compute the sensitivity difference correction coefficient C P (f, i) at frequency unit level as for example expressed by Equation (3).
- the frequency unit correction section 20 multiplies the sensitivity difference correction coefficient C P (f, i) by frequency unit by the signals M 2 ′(f, i), and generates the signals M 2 ′′(f, i) with the sensitivity difference between the signals M 1 (f, i) and the signals M 2 ′(f, i) corrected by frequency unit.
- the amplitude ratio computation section 22 computes amplitude spectra for each of the signals M 1 (f, i) and signals M 2 ′′(f, i). The amplitude ratio computation section 22 then compares amplitude spectra against each other for the same frequency for each of the frequencies and each of the frames, and computes amplitude ratios R(f, i).
- The suppression coefficient computation section 24 determines whether the input sound signal is the target voice or stationary noise based on the amplitude ratios R(f, i), and computes the suppression coefficient γ(f, i).
- The suppression signal generation section 26 multiplies the suppression coefficient γ(f, i) by the signals M1(f, i) to generate suppression signals with suppressed noise for each of the frequencies of each of the frames.
- the frequency-time converter 28 converts the suppression signal that is a frequency domain signal into an output sound signal that is a time domain signal by employing for example an inverse Fourier transform.
- The A/D converters 12A, 12B determine whether or not there is a following input sound signal. When an input sound signal has been input, processing returns to step 100, and the processing of steps 100 to 120 is repeated. The noise suppression processing is ended when it is determined that no subsequent input sound signal has been input.
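- Tying the sketches above together, a hypothetical per-frame driver for the processing of FIG. 5 might look as follows (R_MIN, R_MAX, the noise model N_st, and the frame arrays are assumed to be available):

```python
import numpy as np

# Hypothetical driver reusing the sketches above; not the patent's own code.
C1 = 1.0
CP = np.ones(M1_frames.shape[1])
outputs = []
for M1, M2 in zip(M1_frames, M2_frames):
    mask = detect_stationary_bins(M1, N_st)             # detection section 16
    C1, M2p = frame_unit_correction(M1, M2, mask, C1)   # frame unit correction section 18
    CP, M2pp = frequency_unit_correction(M1, M2p, CP)   # frequency unit correction section 20
    R = np.abs(M1) / np.maximum(np.abs(M2pp), 1e-12)    # amplitude ratios R(f, i)
    gamma = np.array([suppression_coefficient(r, R_MIN, R_MAX) for r in R])
    outputs.append(synthesize(M1, gamma))               # suppression signal + inverse FFT
```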
- In the noise suppression device 10 of the first exemplary embodiment, the fact that the amplitude ratio between input sound signals is close to 1.0 for stationary noise is employed to detect stationary noise in the input sound signals, and to correct for the sensitivity difference between the microphones.
- Utilizing stationary noise enables sensitivity difference correction to draw on sound detected from a wider range than in cases in which the correction is performed based on a voice arriving from a specific direction detected using phase difference.
- Moreover, correction by frequency unit is performed on signals in which at least one of the input sound signals, converted into frequency domain signals, has first been corrected by frame unit. Sensitivity difference correction is thereby enabled to be performed rapidly even in cases in which the sensitivity difference is different for each of the frequencies.
- the time until a stable correction coefficient for sensitivity difference correction is achieved is shortened even in cases in which the sensitivity difference between microphones is large. Namely, rapid correction of inter-microphone sensitivity difference is enabled. A decrease is thereby enabled in audio distortion caused by noise suppression in which sensitivity difference correction is delayed.
- In the exemplary embodiment described above, the signals M2(f, i) are corrected for sensitivity difference based on the inter-microphone sensitivity differences, and the noise suppression coefficient is then multiplied by the signals M1(f, i) to generate a suppression signal.
- However, the signals M1(f, i) may instead be corrected for sensitivity difference, and the noise suppression coefficient then multiplied by the signals M2(f, i) to generate a suppression signal. Either of these methods may be employed when there is no large difference between the respective distances from the target sound source to the microphone 11A and the microphone 11B.
- In the exemplary embodiment described above, the frame unit sensitivity difference correction coefficient C1(i) and the frequency unit sensitivity difference correction coefficient CP(f, i) are updated for each of the frames. Accordingly, the update coefficient α in Equation (1) and the update coefficient β in Equation (3) may be set so as to be larger the longer the execution duration of the above noise suppression processing. Note that updates of the update coefficients α and β may both be performed using the same method, or may be performed using separate methods.
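- One possible schedule, given as an assumption since the embodiments do not specify one, ramps an update coefficient with execution duration so that early frames adapt quickly and long-running operation is stable:

```python
def ramp_coefficient(frame_index, start=0.5, end=0.99, ramp_frames=500):
    """Grow an update coefficient (alpha or beta) with execution duration.

    Ramps linearly from start to end over ramp_frames frames; all values
    here are assumed examples, not prescribed by the embodiments.
    """
    t = min(frame_index / ramp_frames, 1.0)
    return start + (end - start) * t
```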
- FIG. 6 illustrates a noise suppression device 210 according to a second exemplary embodiment. Note that the same reference numerals are allocated in the noise suppression device 210 to similar parts to those of the noise suppression device 10 of the first exemplary embodiment, and further explanation is omitted thereof.
- the noise suppression device 210 includes A/D converters 12 A, 12 B, time-frequency converters 14 A, 14 B, a detection section 216 , a frame unit correction section 218 , a frequency unit correction section 20 , and an amplitude ratio computation section 22 .
- the noise suppression device 210 also includes a suppression coefficient computation section 224 , suppression signal generation section 26 , a frequency-time converter 28 , a phase difference utilization range setting section 30 , a phase difference computation section 32 and an accuracy computation section 34 .
- the frame unit correction section 218 is an example of a first correction section of technology disclosed herein.
- the frequency unit correction section 20 is an example of a second correction section of technology disclosed herein.
- the amplitude ratio computation section 22 , the suppression coefficient computation section 224 , and the suppression signal generation section 26 are examples of a suppression section of technology disclosed herein. Portions of the A/D converters 12 A, 12 B, the time-frequency converters 14 A, 14 B, the detection section 216 , the frame unit correction section 218 , the frequency unit correction section 20 and the frequency-time converter 28 are examples of a microphone sensitivity difference correction device of technology disclosed herein.
- the phase difference utilization range setting section 30 receives setting values for inter-microphone distance and sampling frequency, and sets a frequency band capable of utilizing phase difference to determine a sound arrival direction based on the inter-microphone distance and the sampling frequency.
- FIG. 7 is a graph illustrating the phase difference between the input sound signal 1 and the input sound signal 2 for each sound source direction when the inter-microphone distance d between the microphone 11A and the microphone 11B is smaller than the speed of sound c divided by the sampling frequency Fs (c/Fs).
- FIG. 8 is a graph illustrating the phase difference between the input sound signal 1 and the input sound signal 2 for each sound source direction when the inter-microphone distance d is larger than c/Fs. Sound source directions of 10°, 30°, 50°, 70° and 90° are illustrated in FIG. 7 and FIG. 8.
- As illustrated in FIG. 7, since phase rotation does not occur in any sound source direction when the inter-microphone distance d is smaller than c/Fs, there is no impediment to utilizing the phase difference to determine the arrival direction of the sound.
- As illustrated in FIG. 8, when the inter-microphone distance d is larger than c/Fs, phase rotation occurs in a frequency band higher than a given frequency (in the vicinity of 1 kHz in the example of FIG. 8). It becomes difficult to utilize the phase difference to determine the arrival direction of sound when phase rotation occurs. Namely, an issue arises in that there are constraints on the inter-microphone distance when phase difference is utilized to correct for sensitivity difference between microphones and for noise suppression.
- the phase difference utilization range setting section 30 accordingly computes a frequency band such that phase rotation in the phase difference between the input sound signal 1 and the input sound signal 2 does not arise, based on the inter-microphone distance d and the sampling frequency Fs. Then the computed frequency band is set as a phase difference utilization range for determining the arrival direction of sound by utilizing phase difference.
- The phase difference utilization range setting section 30 uses the inter-microphone distance d, the sampling frequency Fs, and the speed of sound c to compute an upper limit frequency fmax of the phase difference utilization range according to the following Equations (8) and (9).
- the phase difference utilization range setting section 30 sets as the phase difference utilization range a frequency band of computed f max or lower. Setting of the phase difference utilization range may be executed only once on operation startup of the device, and the computed upper limit frequency f max then stored for example in a memory.
- FIG. 9 illustrates phase differences when the sampling frequency Fs is 8 kHz, the inter-microphone distance d is 135 mm, and the sound source direction θ is 30°. In such cases, fmax is about 1.2 kHz according to Equation (9).
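- Since Equations (8) and (9) are not reproduced here, the following sketch assumes the spatial-aliasing limit fmax = c/(2d), capped at the Nyquist frequency; under that assumption it reproduces the figure quoted above:

```python
def phase_difference_upper_limit(d, c=340.0, fs=8000.0):
    """Upper limit frequency of the phase difference utilization range.

    Assumed reconstruction of Equation (9): below c / (2 d) the
    inter-microphone phase difference stays within +/- pi for every
    arrival direction, so no phase rotation occurs; the result is also
    capped at the Nyquist frequency Fs / 2.
    """
    return min(c / (2.0 * d), fs / 2.0)

# d = 135 mm with Fs = 8 kHz gives about 1.26 kHz, matching the
# "about 1.2 kHz" figure quoted for FIG. 9.
print(phase_difference_upper_limit(0.135))
```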
- the phase difference computation section 32 computes each phase spectrum of the signals M 1 (f, i) and the signals M 2 (f, i) in the phase difference utilization range (frequency band of frequency f max or lower) that has been set by the phase difference utilization range setting section 30 .
- the phase difference computation section 32 then computes the phase difference between each of the phase spectra of the same frequency.
- The detection section 216 detects sound arriving from directions other than the sound source direction of the target voice (referred to below as the "target sound direction") by determining the arrival direction of the input sound signals for each of the frequencies f in each of the frames. Sounds arriving from outside of the target sound direction are treated as sounds arriving from far away, enabling the amplitude ratio between the input sound signals to be given a value in the vicinity of 1.0, similarly to the treatment of stationary noise.
- the detection section 216 determines from the phase difference computed by the phase difference computation section 32 whether or not sound of the current frame is sound that has arrived from the target sound direction.
- When applied to, for example, a mobile phone, the target sound direction is the direction of the mouth of the person who is holding the mobile phone and speaking. Explanation next follows regarding a case, as illustrated in FIG. 3, in which the target sound source is placed at a position nearer to the microphone 11A than to the microphone 11B.
- The detection section 216 sets a determination region, for example as illustrated by the diagonal lines in FIG. 9, and determines that the input sound signal is sound that has arrived from the target sound direction when the computed phase difference is contained therein.
- When the computed phase difference is contained in the determination region within the phase difference utilization range that has been set by the phase difference utilization range setting section 30, the sound of the frequency f component of the current frame of the input sound signal may be treated as being sound that has arrived from the target sound direction.
- When the phase difference is outside of the determination region, the sound of the frequency f component of the current frame of the input sound signal may be treated as being sound that has arrived from outside the target sound direction.
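- A sketch of the per-bin determination; a tolerance band around the theoretical phase difference for the target direction stands in for the hatched determination region of FIG. 9 and is an assumption:

```python
import numpy as np

def arrived_from_target(M1, M2, freqs, f_max, d, theta_deg, c=340.0, tol=0.5):
    """Per-bin test of whether sound arrived from the target sound direction.

    Compares the measured phase difference against the theoretical value
    2 pi f d sin(theta) / c for the target direction, within an assumed
    tolerance tol (radians). Only bins at or below f_max are evaluated;
    all others return False.
    freqs: bin centre frequencies, e.g. np.fft.rfftfreq(frame_len, 1.0 / fs).
    """
    measured = np.angle(M1) - np.angle(M2)
    theoretical = 2.0 * np.pi * freqs * d * np.sin(np.radians(theta_deg)) / c
    in_range = freqs <= f_max
    # wrap the difference to (-pi, pi] before comparing against tol
    wrapped = np.angle(np.exp(1j * (measured - theoretical)))
    return in_range & (np.abs(wrapped) < tol)
```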
- the frame unit correction section 218 employs the signals M 1 (f, i) and the signals M 2 (f, i) detected as sound that has arrived from outside of the target sound direction by the detection section 216 to compute the sensitivity difference correction coefficient by frame unit, and corrects the signals M 2 (f, i) by frame unit.
- the f max of Equation (1) is an upper limit frequency that has been set by the phase difference utilization range setting section 30 .
- The term ΣM1(f, i) of Equation (1) takes a value that is the sum of the signals M1(f, i) detected by the detection section 216 as being sound arriving from outside the target sound direction over the range from frequency 0 to fmax. The same applies to the term ΣM2(f, i).
- The frame unit correction section 218, similarly to the frame unit correction section 18 of the first exemplary embodiment, generates signals M2′(f, i) that are the signals M2(f, i) corrected as expressed for example by Equation (2), based on the computed frame unit sensitivity difference correction coefficient C1(i).
- the accuracy computation section 34 computes a degree of accuracy of the sensitivity difference correction.
- the second exemplary embodiment utilizes the fact that the sound that has arrived from outside the target sound direction has a value of amplitude ratio between input sound signals that is close to 1.0, similarly to with stationary noise.
- However, sometimes the amplitude ratio between input sound signals detected as sound that has arrived from outside of the target sound direction is a value that is not close to 1.0.
- When a value of the amplitude ratio that deviates greatly from 1.0 is employed, accurate sensitivity difference correction sometimes cannot be performed, and audio distortion occurs when noise suppression is performed.
- A similar issue arises when sufficient coefficient updating has not been performed. The configuration is accordingly made such that noise suppression is only performed when there is a high degree of accuracy to the sensitivity difference correction.
- the accuracy computation section 34 updates the degree of accuracy when there is a high probability that the sound is from the target sound direction.
- The probability that the sound is from the target sound direction is a value from 0.0 to 1.0; hence a degree of accuracy EP(f, i) is computed as expressed by the following Equation (10) when, for example, the probability that the sound comes from the target sound direction exceeds a threshold value of, for example, 0.8.
- EP(f, i) = δ·EP(f, i−1) + (1 − δ)·1.0 (10)
- δ here is an update coefficient representing the extent to reflect the degree of accuracy EP(f, i−1) computed for the previous frame in the degree of accuracy EP(f, i) computed for the current frame, and is a value such that 0 < δ < 1.
- δ is an example of a third update coefficient of technology disclosed herein. Namely, the degree of accuracy EP(f, i−1) for each of the frequencies of the previous frame is updated by computing the degree of accuracy EP(f, i) for each of the frequencies of the current frame.
- The suppression coefficient computation section 224 computes the suppression coefficient γ(f, i) in a similar manner to the suppression coefficient computation section 24 of the first exemplary embodiment. However, for frequencies for which the degree of accuracy EP(f, i) is less than a specific threshold value (for example 1.0), the sensitivity difference correction coefficient is treated as not yet updated to the point that accurate sensitivity difference correction may be performed, and the suppression coefficient γ(f, i) is taken as 1.0 (a value for which no suppression is performed).
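- A sketch of the accuracy update and the gating of the suppression coefficient; the second term of the reconstructed Equation (10) and the example δ are assumptions consistent with the surrounding description:

```python
import numpy as np

def update_accuracy(EP_prev, in_region_prob, delta=0.9, prob_threshold=0.8):
    """Equation (10): update the degree of accuracy EP(f, i).

    The second term is taken as 1.0 (the value EP approaches under
    repeated updates); this and delta = 0.9 are assumed values.
    """
    if in_region_prob > prob_threshold:
        return delta * EP_prev + (1.0 - delta) * 1.0   # Equation (10)
    return EP_prev                                     # below threshold: no update

def gate_suppression(gamma, EP, ep_threshold=1.0):
    """Force gamma = 1.0 (no suppression) for bins whose accuracy is low."""
    return np.where(EP < ep_threshold, 1.0, gamma)
```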
- the noise suppression device 210 may, for example, be implemented by a computer 240 such as that illustrated in FIG. 4 .
- the computer 240 includes a CPU 42 , a memory 44 and a nonvolatile storage section 46 .
- the CPU 42 , the memory 44 and the storage section 46 are connected together through a bus 48 .
- the microphone array 11 (the microphones 11 A and 11 B) are connected to the computer 240 .
- the storage section 46 may be implemented for example by a HDD or a flash memory.
- the storage section 46 serving as a storage medium is stored with a noise suppression program 250 for making the computer 240 function as the noise suppression device 210 .
- the CPU 42 reads the noise suppression program 250 from the storage section 46 , expands the noise suppression program 250 in the memory 44 and sequentially executes the processes of the noise suppression program 250 .
- the noise suppression program 250 includes an A/D conversion process 52 , time-frequency conversion process 54 , a detection process 256 , a frame unit correction process 258 , a frequency unit correction process 60 , and an amplitude ratio computation process 62 .
- the noise suppression program 250 also includes a suppression coefficient computation process 264 , a suppression signal generation process 66 , a frequency-time conversion process 68 , a phase difference utilization range setting process 70 , a phase difference computation process 72 , and an accuracy computation process 74 .
- the CPU 42 operates as the detection section 216 illustrated in FIG. 6 by executing the detection process 256 .
- the CPU 42 operates as the frame unit correction section 218 illustrated in FIG. 6 by executing the frame unit correction process 258 .
- the CPU 42 operates as the suppression coefficient computation section 224 illustrated in FIG. 6 by executing the suppression coefficient computation process 264 .
- the CPU 42 operates as the phase difference utilization range setting section 30 illustrated in FIG. 6 by executing the phase difference utilization range setting process 70 .
- the CPU 42 operates as the phase difference computation section 32 illustrated in FIG. 6 by executing the phase difference computation process 72 .
- the CPU 42 operates as the accuracy computation section 34 illustrated in FIG. 6 by executing the accuracy computation process 74 .
- Other processes are similar to those of the noise suppression program 50 of the first exemplary embodiment.
- the computer 240 executing the noise suppression program 250 accordingly functions as the noise suppression device 210 .
- It is also possible to implement the noise suppression device 210 with, for example, a semiconductor integrated circuit, more specifically an ASIC or a DSP.
- Explanation next follows regarding operation of the noise suppression device 210. When the input sound signal 1 and the input sound signal 2 are output from the microphone array 11, the CPU 42 expands the noise suppression program 250 stored on the storage section 46 into the memory 44, and executes the noise suppression processing illustrated in FIG. 10. Note that processing in the noise suppression processing of the second exemplary embodiment that is similar to the noise suppression processing of the first exemplary embodiment is allocated the same reference numerals and detailed explanation thereof is omitted.
- The phase difference utilization range setting section 30 receives setting values for the inter-microphone distance d and the sampling frequency Fs, computes the frequency band capable of utilizing the phase difference to determine the arrival direction of sound, and sets the phase difference utilization range.
- the input sound signal 1 and the input sound signal 2 that are analogue signals are converted into the signal M 1 (t) and the signal M 2 (t) that are digital signals, and then further converted into the signals M 1 (f, i) and the signals M 2 (f, i) that are frequency domain signals.
- the phase difference computation section 32 computes the respective phase spectra of the signals M 1 (f, i) and the signals M 2 (f, i) in the phase difference utilization range set by the phase difference utilization range setting section 30 (the frequency band of frequency f max or lower). The phase difference computation section 32 then computes as a phase difference the difference between respective phase spectra of the same frequency.
- The detection section 216 detects the signals M1(f, i) and the signals M2(f, i) expressing sound arriving from directions other than the target sound direction by determining the arrival direction for each of the frequencies f of each of the frames based on the phase difference computed at step 202.
- the frame unit correction section 218 employs the signals M 1 (f, i) and the signals M 2 (f, i) detected as sound arriving from directions other than the target sound direction to compute the frame unit sensitivity difference correction coefficient C 1 (i) such as for example expressed by Equation (1).
- the f max of Equation (1) is the upper limit frequency set by the phase difference utilization range setting section 30 .
- The term ΣM1(f, i) of Equation (1) is the sum of the signals M1(f, i) detected as sound arriving from directions other than the target sound direction over the range of frequencies from 0 to fmax. The same applies to the term ΣM2(f, i).
- the signals M 2 ′′(f, i) subjected to sensitivity difference correction by frequency unit are then generated from the signals M 2 (f, i) to which sensitivity difference correction by frame unit has been performed by steps 108 to 112 .
- The accuracy computation section 34 computes, as the probability that the input sound signal of the frame is sound from the target sound direction, the proportion of the frequencies in the phase difference utilization range whose phase difference is contained in the determination region (for example the region illustrated by the diagonal lines in FIG. 9).
- The accuracy computation section 34 determines whether or not the probability computed at step 208 exceeds a specific threshold value (for example 0.8). Processing proceeds to step 212 when the probability that the sound is from the target sound direction exceeds the threshold value.
- the accuracy computation section 34 updates the degree of accuracy E P (f, i ⁇ 1) up to the previous frame by computation of the degree of accuracy E P (f, i) for example as expressed by Equation (10).
- When the probability does not exceed the threshold value, the processing skips step 212 and proceeds to step 114.
- the amplitude ratio computation section 22 computes the amplitude ratios R(f, i).
- The suppression coefficient computation section 224 computes the suppression coefficient γ(f, i) similarly to at step 116 of the first exemplary embodiment. However, for frequencies where the degree of accuracy EP(f, i) updated at step 212 is less than a specific threshold value (for example 1.0), the suppression coefficient γ(f, i) is made 1.0 (a value for not performing suppression).
- At steps 118 to 122, the output sound signal is output by processing similar to that of the first exemplary embodiment, and the noise suppression processing is ended.
- In the noise suppression device 210 of the second exemplary embodiment, sound arriving from directions other than the target sound direction is detected based on the phase difference computed in the frequency band capable of utilizing phase difference.
- For such sound, the amplitude ratios between the input sound signals are values close to 1.0, and the sensitivity difference between the microphones is corrected based thereon.
- This, similarly to the first exemplary embodiment, enables the inter-microphone sensitivity difference to be rapidly corrected for, even in cases in which there are limitations on microphone array placement.
- a decrease is thereby enabled in audio distortion caused by noise suppression in which sensitivity difference correction is delayed.
- Moreover, noise suppression processing is performed only in cases in which there is a high degree of accuracy to the sensitivity difference correction, enabling audio distortion to be prevented from occurring due to noise suppression being performed when accurate sensitivity difference correction is unable to be performed.
- In the second exemplary embodiment too, the frame unit sensitivity difference correction coefficient C1(i), the frequency unit sensitivity difference correction coefficient CP(f, i), and the degree of accuracy EP(f, i) are updated for each of the frames. Accordingly, the update coefficient α in Equation (1), the update coefficient β in Equation (3), and the update coefficient δ in Equation (10) may be set so as to be larger the longer the execution duration of the above noise suppression processing.
- For example, the values of α, β and δ may be updated as expressed by the following Equations (11) to (13). In such cases, α, β and δ adopt different values for each of the frequencies.
- The update coefficients α, β and δ may all be updated using the same method, or may be updated using separate methods.
- A microphone sensitivity difference correction device of technology disclosed herein may be implemented as a stand-alone device, or in combination with another device.
- The configuration may be such that a corrected signal is output as it is, or a corrected signal may be input to a device that performs audio processing other than noise suppression processing.
- FIG. 11 is a graph illustrating an example of amplitude spectra of the input sound signal 1 and the input sound signal 2 .
- The input sound signal 1 output from the microphone 11A, which is placed nearer to the sound source, should have a larger amplitude than the input sound signal 2.
- However, in this example the sensitivity of the microphone 11B is greater than that of the microphone 11A, and so the amplitude of the input sound signal 2 is greater than the amplitude of the input sound signal 1.
- Results of performing noise suppression on the input sound signal 1 and the input sound signal 2 illustrated in FIG. 11 by employing a conventional method are illustrated in FIG. 12.
- the conventional method here is a method in which noise suppression processing is performed by sensitivity difference correction between each of the microphones based on sound arriving from orthogonal directions detected by employing phase difference.
- In this conventional method, it is only possible to perform accurate sensitivity difference correction in the low frequency regions within the phase difference utilization range when the inter-microphone distance is larger than the speed of sound/sampling frequency.
- As a result, as illustrated in FIG. 12, a voice is suppressed in the intermediate to high frequency regions (the peak portions).
- results of performing noise suppression on the input sound signal 1 and the input sound signal 2 illustrated in FIG. 11 utilizing the technology disclosed herein are illustrated in FIG. 13 .
- In the noise suppression results by the technology disclosed herein illustrated in FIG. 13, a voice is not suppressed in any of the frequency bands, and only the noise (the valley portions) is suppressed.
- According to the technology disclosed herein, the degrees of freedom are raised for the placement positions of each of the microphones, enabling installation of a microphone array in various devices that are getting thinner and thinner, such as smartphones. Moreover, it is also possible to rapidly correct sensitivity differences between microphones, and to execute noise suppression without audio distortion.
- In the explanation above, the noise suppression programs 50 and 250 serving as examples of a noise suppression program of technology disclosed herein are pre-stored (pre-installed) in the storage section 46.
- However, the noise suppression program of technology disclosed herein may be supplied in a format stored on a storage medium such as a CD-ROM or DVD-ROM.
- An aspect of technology disclosed herein has the advantageous effect of enabling rapid correction to be performed for sensitivity differences between microphones even when there are limitations to the placement positions of the microphone arrays.
Abstract
Description
- This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2013-039695, filed on Feb. 28, 2013, the entire contents of which are incorporated herein by reference.
- The embodiments discussed herein are related to a microphone sensitivity difference correction device, a microphone sensitivity difference correction method, a microphone sensitivity difference correction program and a noise suppression device.
- In, for example, a vehicle mounted car navigation system, a hands-free phone, or a telephone conference system, noise suppression is conventionally performed to suppress noise contained in a speech signal that has mixed-in noise other than a target voice (for example voices of people talking). Technology employing a microphone array including plural microphones is known as such noise suppression technology.
- In such conventional noise suppression technology using a microphone array, there is a method for noise suppression based on an amplitude ratio between signals received from plural microphones. The amplitude ratio becomes 1.0 when the distance between each of the microphones and the sound source is the same distance or when far away, and the amplitude ratio is a value that deviates from 1.0 when the distance between each of the microphones and the sound source is a different distance. Noise suppression based on the amplitude ratio is a method that employs the amplitude ratio, and so, for example, when a target sound source is present at a position that has different distances to each of the microphones, the method suppresses noise that has a value of amplitude ratio of close to 1.0 in the received signals from the plural microphones.
- However, even when the distances between each of the microphones and a sound source are the same, sometimes the value of the amplitude ratio deviates from 1.0 due to sensitivity differences that arise between the microphones. Since accurate noise suppression based on the amplitude ratio cannot be performed in such cases, there is accordingly a need for technology to correct for such sensitivity differences between the microphones.
- As technology to correct sensitivity differences between microphones, there is, for example, a proposal for a device that derives a correction coefficient and corrects the level of at least one sound signal when performing audio processing based on sound signals respectively generated from sound input to plural sound input sections. In such a device, for the sounds input to the plural sound input sections, frequency components are detected of sound arriving from a direction substantially orthogonal to the straight line defined by the placement positions of a first sound input section and a second sound input section among the plural sound input sections. The direction from which the sound arrives is detected based on the phase difference between the sounds arriving at the first sound input section and the second sound input section. Correction coefficients are then derived, based on the sound of the detected frequency components, for correcting the level of at least one of the sound signals generated by the first sound input section and the second sound input section so that the levels of the two sound signals match.
- International Publication Pamphlet No. WO2009/069184
- However, in conventional technology to correct for sensitivity differences between microphones, the direction of arriving sound is detected based on the phase difference of the sound respectively arriving at two input sections. When the microphones are placed in positions that enable the phase difference to be used across all frequency bands, correction of the sensitivity difference is possible provided there is not too large a sensitivity difference between the microphones. However, when the separation between the two microphones is wider than (speed of sound)/(sampling frequency), phase rotation of the phase difference sometimes occurs in high frequency bands due to the sampling processing. In such cases the direction of arriving sound is no longer accurately detectable based on the phase difference, making it impossible to perform sensitivity difference correction over all frequency bands.
- Moreover, when the separation between the two microphones is narrower than (speed of sound)/(sampling frequency), the following issue arises even though it is possible to detect the direction of the arriving sound based on the phase difference over all the frequency bands. In the conventional technology, detecting sound arriving from an orthogonal direction imposes the limited condition that a sound source be present in a direction in which the amplitudes of the signals received from each of the microphones are the same as each other. The probability of detecting sound that matches this condition is accordingly low, time is required until the correction coefficient is updated to a point where appropriate sensitivity difference correction can be performed, and sometimes sensitivity difference correction is performed based on correction coefficients that do not match the actual sensitivity difference. In particular, when the sensitivity difference is large, this leads to audio distortion when sensitivity difference correction is not completed in time immediately after sound emission.
- According to an aspect of the embodiments, a microphone sensitivity difference correction device includes: a detection section that detects a frequency domain signal expressing a stationary noise, based on frequency domain signals of input sound signals respectively input from plural microphones contained in a microphone array that have been converted into signals in a frequency domain for each frame; a first correction section that employs the frequency domain signal expressing the stationary noise to compute a first correction coefficient for correcting the sensitivity difference between the plural microphones by a frame unit, and that employs the first correction coefficient to correct the frequency domain signals by frame unit; and a second correction section that employs the frequency domain signals that have been corrected by the first correction section to compute a second correction coefficient for correcting by frequency unit the sensitivity difference between the plural microphones for each of the frames, and that employs the second correction coefficient to correct for each of the frames by frequency unit the frequency domain signals that have been corrected by the first correction section.
- The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
- It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
- FIG. 1 is a block diagram illustrating an example of a configuration of a noise suppression device according to a first exemplary embodiment;
- FIG. 2 is a block diagram illustrating an example of a functional configuration of a noise suppression device according to the first exemplary embodiment;
- FIG. 3 is a schematic diagram to explain a sound source position with respect to a microphone array;
- FIG. 4 is a schematic block diagram illustrating an example of a computer that functions as a noise suppression device;
- FIG. 5 is a flow chart illustrating noise suppression processing according to the first exemplary embodiment;
- FIG. 6 is a block diagram illustrating an example of a functional configuration of a noise suppression device according to a second exemplary embodiment;
- FIG. 7 is a graph illustrating an example of phase difference when an inter-microphone distance is short;
- FIG. 8 is a graph illustrating an example of phase difference when an inter-microphone distance is long;
- FIG. 9 is a schematic diagram to explain a phase difference determination region;
- FIG. 10 is a flow chart illustrating noise suppression processing of a second exemplary embodiment;
- FIG. 11 is a graph illustrating an example of input sound signals;
- FIG. 12 is a graph illustrating example results of noise suppression by a conventional method; and
- FIG. 13 is a graph illustrating example results of noise suppression by technology disclosed herein.
- Detailed explanation follows regarding an example of an exemplary embodiment of technology disclosed herein, with reference to the drawings.
- FIG. 1 illustrates a noise suppression device 10 according to a first exemplary embodiment. A microphone array 11 of plural microphones at a specific separation d is connected to the noise suppression device 10. There are at least two microphones included in the microphone array 11. Explanation follows regarding an example in which two microphones are included, a microphone 11A and a microphone 11B.
- The microphones 11A and 11B each output an input sound signal: the signal output from the microphone 11A is input sound signal 1, and the signal output from the microphone 11B is input sound signal 2. Noise other than the target voice (sound from the target voice source, for example voices of people talking) is mixed into the input sound signal 1 and the input sound signal 2. The input sound signal 1 and the input sound signal 2 that have been output from the microphone array 11 are input to the noise suppression device 10. In the noise suppression device 10, after correcting for the sensitivity difference between the microphone 11A and the microphone 11B, a noise suppressed output sound signal is generated and output.
- As illustrated in FIG. 2, the noise suppression device 10 includes analogue-to-digital (A/D) converters, time-frequency converters, a detection section 16, a frame unit correction section 18, a frequency unit correction section 20, and an amplitude ratio computation section 22. The noise suppression device 10 also includes a suppression coefficient computation section 24, a suppression signal generation section 26, and a frequency-time converter 28. Note that the frame unit correction section 18 is an example of a first correction section of technology disclosed herein. The frequency unit correction section 20 is an example of a second correction section of technology disclosed herein. The amplitude ratio computation section 22, the suppression coefficient computation section 24, and the suppression signal generation section 26 are examples of a suppression section of technology disclosed herein. The portion including the A/D converters, the time-frequency converters, the detection section 16, the frame unit correction section 18, the frequency unit correction section 20 and the frequency-time converter 28 is an example of a microphone sensitivity difference correction device of technology disclosed herein.
- The A/D converters take the input sound signal 1 and the input sound signal 2, which are input analogue signals, and convert them at a sampling frequency Fs into a signal M1(t) and a signal M2(t) that are digital signals, where t is a sampling time stamp.
- The time-frequency converters convert the signal M1(t) and the signal M2(t) output from the A/D converters into signals M1(f, i) and signals M2(f, i) that are frequency domain signals, for each frame i and frequency f.
- The detection section 16 employs the signals M1(f, i) and the signals M2(f, i) converted by the time-frequency converters to determine, for each of the frequencies f in each of the frames i, whether the input sound signal is stationary noise or a nonstationary sound, and to detect the signals M1(f, i) and the signals M2(f, i) expressing stationary noise.
- Determination as to whether a sound is stationary noise or a nonstationary sound may utilize a method described, for example, in Japanese Laid-open Patent Publication No. 2011-186384. More specifically, a stationary noise model Nst(f, i) is estimated based on the signals M1(f, i) and the signals M2(f, i), and a ratio r(f, i) is derived between the stationary noise model Nst(f, i) and the signals M1(f, i). The ratio r(f, i) is expressed as r(f, i) = M1(f, i)/Nst(f, i). From the fact that sound containing a nonstationary sound generally has a large r(f, i), while stationary noise has an r(f, i) value close to 1.0, the signals M1(f, i) and the signals M2(f, i) are determined to be signals representing stationary noise when the value of r(f, i) is near to 1.0. Note that the determination as to whether or not a sound is stationary noise may also be made based on the ratio r(f, i) between the stationary noise model Nst(f, i) and the signals M2(f, i).
- As another method for determining whether or not a sound is stationary noise or a nonstationary sound, determination may be made as to whether or not the spectral profile of the signals M1(f, i) has a peak and trough structure with the characteristics of voice data. Determination may be made that there is stationary noise when there is a poorly defined peak and trough structure. Determination of the peak and trough structure may be performed by comparison of peak values of the signal. Note that determination may be made as to whether or not there is stationary noise based on the spectral profile of the signals M2(f, i).
- Moreover, as another method for determining whether or not a sound is stationary noise or nonstationary sound, there is a method in which a correlation coefficient is computed between a spectral profile of signals M1(f, i) of the current frame and spectral profiles of signals M1(f, i) of the previous frame. When the correlation coefficient is near to 0, then determination may be made that the signals M1(f, i) and the signals M2(f, i) are signals representing stationary noise. Note that stationary noise detection may be made based on the correlation between the spectral profile of the signals M2(f, i) of the current frame and the spectral profile of the signals M2(f, i) of the previous frame.
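- As a concrete illustration of the ratio-based determination described above, the following is a minimal Python sketch of stationary noise detection. The smoothing constant of the noise model and the thresholds on r(f, i) are assumptions for illustration, not values from this disclosure.

```python
import numpy as np

def update_noise_model(Nst_prev, M1_mag, smoothing=0.95):
    # Track the stationary noise model Nst(f, i) by recursively
    # smoothing the magnitude spectrum (smoothing value is assumed).
    return smoothing * Nst_prev + (1.0 - smoothing) * M1_mag

def stationary_noise_mask(M1_mag, Nst, lo=0.8, hi=1.25):
    # r(f, i) = |M1(f, i)| / Nst(f, i): values near 1.0 indicate
    # stationary noise, large values indicate a nonstationary sound.
    r = M1_mag / np.maximum(Nst, 1e-12)
    return (r > lo) & (r < hi)  # per-frequency stationary-noise flags
```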
- The frame unit correction section 18 employs the signals M1(f, i) and the signals M2(f, i) detected by the detection section 16 as signals representing stationary noise to compute a sensitivity difference correction coefficient at the frame unit level, and corrects the signals M2(f, i) at the frame unit level. For example, the sensitivity difference correction coefficient C1(i) may be computed at the frame unit level as expressed by the following Equation (1). Note that the sensitivity difference correction coefficient C1(i) at the frame unit level is an example of a first correction coefficient of technology disclosed herein.
- C1(i) = α × C1(i−1) + (1−α) × (Σ|M1(f, i)| / Σ|M2(f, i)|)   (1)
- Wherein: α is an update coefficient expressing the extent to which the frame unit sensitivity difference correction coefficient C1(i−1) computed for the previous frame is reflected in the frame unit sensitivity difference correction coefficient C1(i) of the current frame, and is a value such that 0≦α<1. Note that α is an example of a first update coefficient of technology disclosed herein. Namely, the sensitivity difference correction coefficient C1(i−1) of the previous frame is updated by computing the sensitivity difference correction coefficient C1(i) of the current frame. Moreover, fmax is a value that is ½ the sampling frequency Fs. The term Σ|M1(f, i)| of Equation (1) takes a value that is the sum, over the range from frequency 0 to fmax, of the signals M1(f, i) detected as signals expressing stationary noise by the detection section 16; similar applies to Σ|M2(f, i)|.
unit correction section 18 generates signals M2′(f, i) that are the signals M2(f, i) corrected as expressed by following Equation (2) based on the computed sensitivity difference correction coefficient C1(i) by frame unit. -
M 2′(f,i)=C 1(i)×M 2(f,i) (2) - The frame unit sensitivity difference correction coefficient C1(i) expresses the sensitivity difference at the frame unit level between the signals M1(f, i) and the signals M2(f, i). Multiplying the frame unit sensitivity difference correction coefficient C1(i) by the signals M2(f, i) enables the sensitivity difference between the signals M1(f, i) and signals M2(f, i) to be corrected at the frame unit level.
- The frequency
unit correction section 20 employs the signals M1(f, i) and the signals M2′(f, i) corrected at the frame unit level by the frameunit correction section 18 to compute a sensitivity difference correction coefficient at the frequency unit level, and to correct the signals M2′(f, i) by frequency unit. For example, a frequency unit sensitivity difference correction coefficient CP(f, i) may be computed as expressed in following Equation (3). Note that the frequency unit sensitivity difference correction coefficient CP(f, i) is an example of a second correction coefficient of technology disclosed herein. -
C P(f,i)=β×C P(f,i−1)+(1−β)×(|M 1(f,i)|/M 2′(f,i)|) (3) - Wherein: β is an update coefficient representing the extent to reflect the frequency unit sensitivity difference correction coefficient CP(f,i−1) computed at the same frequency f for the previous frame in the frequency unit sensitivity difference correction coefficient CP(f, i) of the current frame, and is a value such that 0≦β<1. Note that β is an example of a second update coefficient of technology disclosed herein. Namely, the frequency unit sensitivity difference correction coefficient CP(f, i−1) of the previous frame is updated by computing the frequency unit sensitivity difference correction coefficient CP(f, i) of the current frame.
- Moreover, the frequency
unit correction section 20 generates signals M2″(f, i) of the signals M2′(f, i) corrected as expressed by the following Equation (4) based on the computed frequency unit sensitivity difference correction coefficient CP(f, i). -
M 2″(f,i)=C P(f,i)×M 2′(f,i) (4) - The frequency unit sensitivity difference correction coefficient CP(f, i) expresses the sensitivity difference at the frequency unit level between the M1(f, i) and the M2′(f, i). Multiplying the frequency unit sensitivity difference correction coefficient CP(f, i) by the M2′(f, i) enables correction to be performed by frequency unit of the sensitivity difference between the signals M1(f, i) and the signals M2′(f, i). Note that the signals M2′(f, i) are signals on which correction has already been performed at the frame unit level, and correction at the frequency unit level is correction that performs fine correction for each of the frequencies.
- The amplitude
ratio computation section 22 computes the respective amplitude spectra each of the signals M1(f, i) and signals M2″(f, i). Amplitude ratios R(f, i) are then respectively computed between amplitude spectra of the same frequency for each of the frequencies in each of the frames. - Based on the amplitude ratios R(f, i) computed by the amplitude
ratio computation section 22, the suppressioncoefficient computation section 24 then determines whether or not the input sound signal is a target voice or noise and computes a suppression coefficient. A case is now considered in which, as illustrated inFIG. 3 , a separation between themicrophone 11A and themicrophone 11B (inter-microphone distance) is d, a sound source direction is θ, and a distance from the sound source to themicrophone 11A is ds. Note that sound direction θ is a direction in which a sound source is present with respect to themicrophone array 11, and as illustrated inFIG. 3 , is expressed by an angle formed between a straight line passing through the centers of two microphones and a line segment that has one end at a central point P at the center of the two microphones and the other end at the sound source. In such a case a theoretical value RT of the amplitude ratio between theinput sound signal 1 and the input sound signal 2 (the amplitude ratio when there is no sensitivity difference occurring between the microphones) is expressed by the following Equation (5). -
R T ={ds/(ds+d×cos θ)}(0≦θ≦180) (5) - When the sound source direction of the target voice desired to be left without suppression is from θmin to θmax, then a theoretical value RT of the amplitude ratio is a value from Rmin to Rmax as expressed by the following Equation (6) and Equation (7).
-
R min =ds/(ds+d×cos θmin) (6) -
R max =ds/(ds+d×cos θmax) (7) - The suppression
coefficient computation section 24 accordingly first determines a range Rmin to Rmax based on the inter-microphone distance d, the sound source direction θ, and the distance ds from the sound source of the target voice to themicrophone 11A. Then when the computed amplitude ratios R(f, i) are within the range Rmin to Rmax, the input sound signal is determined to be the target voice, and a suppression coefficient ε(f, i) is computed as set out below. -
ε(f,i)=1.0 - when Rmin≦R(f, i)≦Rmax
-
ε(f,i)=εmin - when R(f, i)<Rmin or R(f, i)>Rmax
- Note that εmin is a value such that 0<εmin<1, and when for example a suppression amount of −3 dB is desired εmin is about 0.7, and when a suppression amount of −6 dB is desired εmin is about 0.5. Moreover, when the computed amplitude ratio R(f, i) ε falls outside of the range Rmin to Rmax, then suppression coefficient ε may be computed so as to gradually change from 1.0 to εmin as the amplitude ratio R(f, i) progresses away from the range Rmin to Rmax as expressed by the following.
-
ε(f,i)=1.0 - when Rmin≦R(f, i)≦Rmax
-
ε(f,i)=10(1.0−εmin)R(f,i)−10R min(1.0−εmin)+1.0 - when Rmin−0.1≦R(f, i)≦Rmin
-
ε(f,i)=−10(1.0−εmin)R(f,i)+10R max(1.0−εmin)+1.0 - when Rmax≦R(f, i)≦Rmax+0.1
-
ε(f,i)=εmin - when R(f, i)<Rmin−0.1, or R(f, i)>Rmax+0.1
- The suppression coefficient ε(f, i) described above is a value from 0.0 to 1.0 that becomes nearer to 0.0 the greater to degree of suppression.
- By multiplying the suppression coefficient ε(f, i) computed by the suppression
coefficient computation section 24 by the signals M1(f, i), the suppressionsignal generation section 26 generates a suppression signal in which noise has been suppressed for each of the frequencies and each frame. - The frequency-
time converter 28 takes the suppression signal that is a frequency domain signal generated by the suppressionsignal generation section 26 and converts it into an output sound signal that is a time domain signal by using for example an inverse Fourier transform, and outputs the converted signal. - The
noise suppression device 10 may, for example, be implemented by a computer 40 such as that illustrated inFIG. 4 . The computer 40 includes aCPU 42, amemory 44 and anonvolatile storage section 46. TheCPU 42, thememory 44 and thestorage section 46 are connected together through abus 48. The microphone array 11 (themicrophones - The
storage section 46 may be implemented for example by a Hard Disk Drive (HDD) or a flash memory. Thestorage section 46 serving as a storage medium is stored with anoise suppression program 50 for making the computer 40 function as thenoise suppression device 10. TheCPU 42 reads thenoise suppression program 50 from thestorage section 46, expands thenoise suppression program 50 in thememory 44 and sequentially executes the processes of thenoise suppression program 50. - The
noise suppression program 50 includes an A/D conversion process 52, time-frequency conversion process 54, adetection process 56, a frameunit correction process 58, a frequencyunit correction process 60, and an amplituderatio computation process 62. Thenoise suppression program 50 also includes a suppressioncoefficient computation process 64, a suppressionsignal generation process 66, and a frequency-time conversion process 68. - The
CPU 42 operates as the A/D converters FIG. 2 by executing the A/D conversion process 52. TheCPU 42 operates as the time-frequency converters FIG. 2 by executing the time-frequency conversion process 54. TheCPU 42 operates as thedetection section 16 illustrated inFIG. 2 by executing thedetection process 56. TheCPU 42 operates as the frameunit correction section 18 illustrated inFIG. 2 by executing the frameunit correction process 58. TheCPU 42 operates as the frequencyunit correction section 20 illustrated inFIG. 2 by executing the frequencyunit correction process 60. TheCPU 42 operates as the amplituderatio computation section 22 illustrated inFIG. 2 by executing the amplituderatio computation process 62. TheCPU 42 operates as the suppressioncoefficient computation section 24 illustrated inFIG. 2 by executing the suppressioncoefficient computation process 64. TheCPU 42 operates as the suppressionsignal generation section 26 illustrated inFIG. 2 by executing the suppressionsignal generation process 66. TheCPU 42 operates as the frequency-time converter 28 illustrated inFIG. 2 by executing the frequency-time conversion process 68. The computer 40 executing thenoise suppression program 50 accordingly functions as thenoise suppression device 10. - Note that it is possible to implement the
noise suppression device 10 with, for example, a semiconductor integrated circuit, and more particularly with an Application Specific Integrated Circuit (ASIC) and Digital Signal Processor (DSP). - Explanation next follows regarding operation of the
noise suppression device 10 according to the first exemplary embodiment. When theinput sound signal 1 and theinput sound signal 2 are output from themicrophone array 11, theCPU 42 expands thenoise suppression program 50 stored on thestorage section 46 into thememory 44, and executes the noise suppression processing illustrated inFIG. 5 . - At
step 100 of the noise suppression processing illustrated inFIG. 5 , the A/D converters input sound signal 1 and theinput sound signal 2 that are input analogue signals into the signal M1(t) and the signal M2(t) that are digital signals. - At the
next step 102, the time-frequency converters - At the
next step 104, thedetection section 16 employs the signals M2(f, i) and the signals M2(f, i) to determine, for each of the frequencies f of the frame i, whether or not the input sound signal is a stationary noise or a nonstationary sound, and to detect signals M1(f, i) and the signals M2(f, i) expressing stationary noise. - At the
next step 106, the frameunit correction section 18 employs the signals M1(f, i) and the signals M2(f, i) detected as signals expressing stationary noise to compute the frame unit sensitivity difference correction coefficient C1(i) such as for example expressed by Equation (1). - At the
next step 108, the frameunit correction section 18 multiplies the frame unit sensitivity difference correction coefficient C1(i) by the signals M2(f, i), and generates signals M2′(f, i) with the sensitivity difference between the signals M1(f, i) and the signals M2(f, i) corrected by frame unit. - At the
next step 110, the frequencyunit correction section 20 employs the signals M1(f, i) and the signals M2′(f, i) to compute the sensitivity difference correction coefficient CP(f, i) at frequency unit level as for example expressed by Equation (3). - At the
next step 112, the frequencyunit correction section 20 multiplies the sensitivity difference correction coefficient CP(f, i) by frequency unit by the signals M2′(f, i), and generates the signals M2″(f, i) with the sensitivity difference between the signals M1(f, i) and the signals M2′(f, i) corrected by frequency unit. - At the
next step 114, the amplituderatio computation section 22 computes amplitude spectra for each of the signals M1(f, i) and signals M2″(f, i). The amplituderatio computation section 22 then compares amplitude spectra against each other for the same frequency for each of the frequencies and each of the frames, and computes amplitude ratios R(f, i). - At the
next step 116, the suppressioncoefficient computation section 24 determines whether the input sound signal is the target voice or stationary noise based on the amplitude ratios R(f, i), and computes the suppression coefficient ε(f, i). - At the
next step 118, the suppressionsignal generation section 26 multiplies the suppression coefficient ε(f, i) by the signals M1(f, i) to generate suppression signals with suppressed noise for each of the frequencies of each of the frames. - At the
next step 120, the frequency-time converter 28 converts the suppression signal that is a frequency domain signal into an output sound signal that is a time domain signal by employing for example an inverse Fourier transform. - At the
next step 122, the A/D converters steps 100 to 120 is repeated. The noise suppression processing is ended when determined that no subsequent input sound signal has been input. - As explained above, according to the
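- Pulling the preceding steps together, one frame of the processing can be sketched as the minimal Python loop body below. It reuses the sketch functions given earlier in this section; the frame segmentation, the absence of windowing and overlap-add, and the geometry values passed to suppression_coefficient are all assumptions for illustration.

```python
import numpy as np

def process_frame(x1, x2, state):
    # Steps 100-120 for one frame of samples x1, x2 from the two mics.
    M1, M2 = np.fft.rfft(x1), np.fft.rfft(x2)               # steps 100-102
    mask = stationary_noise_mask(np.abs(M1), state['Nst'])  # step 104
    state['Nst'] = update_noise_model(state['Nst'], np.abs(M1))
    state['C1'], M2p = frame_unit_correction(M1, M2, state['C1'], mask)
    state['CP'], M2pp = frequency_unit_correction(M1, M2p, state['CP'])
    R = np.abs(M1) / np.maximum(np.abs(M2pp), 1e-12)        # step 114
    eps = suppression_coefficient(R, d=0.135, ds=0.05,      # step 116
                                  theta_min=0.0, theta_max=np.pi / 3)
    return np.fft.irfft(eps * M1), state                    # steps 118-120
```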
noise suppression device 10 of the first exemplary embodiment, the fact that the amplitude ratio between input sound signals is close to 1.0 for a stationary noise is employed to detect stationary noise in the input sound signals, and to correct for the sensitivity difference between the microphones. Utilizing the stationary noise enables a voice to be detected from a wider range by using sensitivity difference correction than in cases in which sensitivity difference correction is performed based on a voice arriving from a specific direction detected using phase difference. Moreover, in the sensitivity difference correction, correction is performed by frequency unit to signals in which at least one signal of the input sound signals converted into frequency domain signals has first been corrected by frame unit. Thereby sensitivity difference correction is enabled to be performed rapidly even in cases in which the sensitivity difference is different for each of the frequencies. Thus according to thenoise suppression device 10 of the first exemplary embodiment, the time until a stable correction coefficient for sensitivity difference correction is achieved is shortened even in cases in which the sensitivity difference between microphones is large. Namely, rapid correction of inter-microphone sensitivity difference is enabled. A decrease is thereby enabled in audio distortion caused by noise suppression in which sensitivity difference correction is delayed. - Note that in the first exemplary embodiment, explanation has been given of a case in which signals M2(f, i) are corrected for sensitivity difference based on inter-microphone sensitivity differences, and a noise suppression coefficient is then multiplied by the signals M1(f, i) to generate a suppression signal. This envisages a case in which the target sound source is positioned close to the
microphone 11A that collects sound of theinput sound signal 1. When the target sound source is positioned close to themicrophone 11B, signals M1(f, i) may be corrected for sensitivity difference, and a noise suppression coefficient then multiplied by the signals M2(f, i) to generate a suppression signal. Either of these methods may be employed when there is no large difference between the respective distances from the target sound source to themicrophone 11A and themicrophone 11B. - Moreover, although explanation has been given in the first exemplary embodiment of cases in which the frame unit sensitivity difference correction coefficient C1(i) and the frequency sensitivity difference correction coefficient CP(f, i) by frequency unit are updated for each of the frames, there is no limitation thereto. The above noise suppression processing may be executed for a fixed period of time T1 (for example T1=1 hour), and then the finally updated values of C1(i) and CP(f, i) saved in a memory, such that the saved values of C1(i) and CP(f, i) are subsequently employed. Moreover, configuration may be made such that the above noise suppression processing is executed every fixed period of time T2 (for example T2=1 hour), and the final updated values of C1(i) and CP(f, i) after executing the above noise suppression processing for a fixed period of time T3 (for example T3=10 minutes) utilized in the interval until the next fixed period of time T2.
- Moreover, an update coefficient α in Equation (1) and an update coefficient β in Equation (3) may be set so as to be larger the longer the execution duration of the above noise suppression processing. Note that updates of the update coefficients α and β may both be performed using the same method, or may be performed using separate methods.
-
FIG. 6 illustrates a noise suppression device 210 according to a second exemplary embodiment. Note that parts of the noise suppression device 210 similar to those of the noise suppression device 10 of the first exemplary embodiment are allocated the same reference numerals, and further explanation thereof is omitted.
- As illustrated in FIG. 6, the noise suppression device 210 includes A/D converters, time-frequency converters, a detection section 216, a frame unit correction section 218, a frequency unit correction section 20, and an amplitude ratio computation section 22. The noise suppression device 210 also includes a suppression coefficient computation section 224, a suppression signal generation section 26, a frequency-time converter 28, a phase difference utilization range setting section 30, a phase difference computation section 32 and an accuracy computation section 34. Note that the frame unit correction section 218 is an example of a first correction section of technology disclosed herein. The frequency unit correction section 20 is an example of a second correction section of technology disclosed herein. The amplitude ratio computation section 22, the suppression coefficient computation section 224, and the suppression signal generation section 26 are examples of a suppression section of technology disclosed herein. The portion including the A/D converters, the time-frequency converters, the detection section 216, the frame unit correction section 218, the frequency unit correction section 20 and the frequency-time converter 28 is an example of a microphone sensitivity difference correction device of technology disclosed herein.
- The phase difference utilization range setting section 30 receives setting values for the inter-microphone distance and the sampling frequency, and based on these sets the frequency band in which phase difference can be utilized to determine a sound arrival direction.
- Explanation next follows regarding the relationship between the inter-microphone distance, the sampling frequency, and the phase difference between the input sound signal 1 and the input sound signal 2 (the difference in phase spectra at the same frequency). FIG. 7 is a graph illustrating the phase difference between the input sound signal 1 and the input sound signal 2 for each sound source direction when the inter-microphone distance d between the microphone 11A and the microphone 11B is smaller than (speed of sound c)/(sampling frequency Fs). FIG. 8 is a graph illustrating the phase difference between the input sound signal 1 and the input sound signal 2 for each sound source direction when the inter-microphone distance d is larger than (speed of sound c)/(sampling frequency Fs). Sound source directions of 10°, 30°, 50°, 70° and 90° are illustrated in FIG. 7 and FIG. 8.
- As illustrated in FIG. 7, since phase rotation does not occur for any sound source direction when the inter-microphone distance d is smaller than c/Fs, there is no impediment to utilizing the phase difference to determine the arrival direction of sound. However, as illustrated in FIG. 8, when the inter-microphone distance d is larger than c/Fs, phase rotation occurs in the frequency band above a given frequency (in the vicinity of 1 kHz in the example of FIG. 8). It becomes difficult to utilize the phase difference to determine the arrival direction of sound when phase rotation occurs. Namely, an issue arises in that there are constraints on the inter-microphone distance when phase difference is utilized to correct for the sensitivity difference between microphones and for noise suppression.
- The phase difference utilization range setting section 30 accordingly computes, based on the inter-microphone distance d and the sampling frequency Fs, a frequency band in which phase rotation does not arise in the phase difference between the input sound signal 1 and the input sound signal 2. The computed frequency band is then set as the phase difference utilization range for determining the arrival direction of sound by utilizing phase difference.
- More specifically, the phase difference utilization range setting section 30 uses the inter-microphone distance d, the sampling frequency Fs and the speed of sound c to compute an upper limit frequency fmax of the phase difference utilization range according to the following Equations (8) and (9). -
fmax = Fs/2   (8)
-
fmax = c/(2 × d)   (9)
- The phase difference utilization
range setting section 30 sets as the phase difference utilization range a frequency band of computed fmax or lower. Setting of the phase difference utilization range may be executed only once on operation startup of the device, and the computed upper limit frequency fmax then stored for example in a memory.FIG. 9 illustrates phase differences when the sampling frequency Fs is 8 kHz, the inter-microphone distance d is 135 mm, and the sound source direction θ is 30°. In such cases, the fmax is about 1.2 kHz by Equation (9). - The phase
difference computation section 32 computes each phase spectrum of the signals M1(f, i) and the signals M2(f, i) in the phase difference utilization range (frequency band of frequency fmax or lower) that has been set by the phase difference utilizationrange setting section 30. The phasedifference computation section 32 then computes the phase difference between each of the phase spectra of the same frequency. - Then based on the phase difference computed by the phase
difference computation section 32, thedetection section 216 detects sound arrival directions other than the sound source direction of the target voice (referred to below as the “target sound direction”) by determining the arrival direction of input sound signals for each of the frequencies f in each of the frames. Sounds arriving from outside of the target sound direction are treated as being sounds arriving from far away, enabling a value in the vicinity of 1.0 to be given to the amplitude ratio between input sound signals, similarly to the treatment of stationary noise. - More specifically, the
detection section 216 determines from the phase difference computed by the phasedifference computation section 32 whether or not sound of the current frame is sound that has arrived from the target sound direction. For example, when thenoise suppression device 210 is installed in a mobile phone, the target sound direction is the direction of the mouth of the person who is holding the mobile phone and speaking. Explanation next follows regarding a case, as illustrated inFIG. 3 , in which the target sound source is placed at a position nearer to themicrophone 11A than to themicrophone 11B. - The
detection section 216, sets a determination region, for example as illustrated by diagonal lines inFIG. 9 , to determine whether or not the input sound signal is sound that has arrived from the target sound direction when the computed phase difference is contained therein. When the phase difference of the determination region is contained in the phase difference utilization range that has been set in the phase difference utilizationrange setting section 30, the sound of the frequency f component of the current frame of the input sound signal may be treated as being sound that has arrived from the target sound direction. However, when the phase difference is outside of the determination region, the sound of the frequency f component of the current frame of the input sound signal may be treated as being sound that has arrived from outside the sound source direction. - The frame
unit correction section 218 employs the signals M1(f, i) and the signals M2(f, i) detected as sound that has arrived from outside of the target sound direction by thedetection section 216 to compute the sensitivity difference correction coefficient by frame unit, and corrects the signals M2(f, i) by frame unit. For example, similarly to the frameunit correction section 18 of the first exemplary embodiment, it is possible to compute a sensitivity difference correction coefficient C1(i) by frame unit as expressed by Equation (1). Note that in the second exemplary embodiment, the fmax of Equation (1) is an upper limit frequency that has been set by the phase difference utilizationrange setting section 30. The term Σ|M1(f, i)| of Equation (1) takes a value that is the sum of the signals M1(f, i) detected by thedetection section 216 as being sound arriving from outside the target sound direction over the range fromfrequency 0 to fmax. Similar applies to the term Σ|M2(f, i)|. Moreover, the frameunit correction section 218, similarly to the frameunit correction section 18 of the first exemplary embodiment, generates signals M2′(f, i) that are the signals M2(f, i) corrected as expressed for example by Equation (2), based on the computed sensitivity difference correction coefficient C1(i) by frame unit. - The
accuracy computation section 34 computes a degree of accuracy of the sensitivity difference correction. The second exemplary embodiment, utilizes the fact that the sound that has arrived from outside the target sound direction has a value of amplitude ratio between input sound signals that is close to 1.0, similarly to with stationary noise. However, in practice sometimes the amplitude ratio between detected input sound signals as sound that has arrived from outside of the target sound direction is a value that is not close to 1.0. Suppose that a value of the amplitude ratio is employed that deviates greatly from 1.0, then sometimes this does not enable accurate sensitivity difference correction to be performed, and audio distortion occurs when noise suppression is performed. Moreover, a similar issue arises when sufficient coefficient updating is not performed. In such cases configuration is made such that noise suppression is only performed when there is a high degree of accuracy to the sensitivity difference correction. - More specifically, out of each of the frequencies in the phase difference utilization range, the
accuracy computation section 34 computes, as a probability that the input sound signal for that frame is sound from the target sound direction, a probability that a frequency with the phase difference is contained in the determination region (for example the region illustrated by diagonal lines inFIG. 9 ). Namely, the probability that a sound is from the target sound direction=the number of frequencies with phase difference contained in the determination region/the number of frequencies in the phase difference utilization range. Theaccuracy computation section 34 updates the degree of accuracy when there is a high probability that the sound is from the target sound direction. The probability that the sound is from the target sound direction is a value from 0.0 to 1.0, and hence a degree of accuracy EP(f, i) is computed such as that expressed by following Equation (10) when for example the probability that the sound comes from the target sound direction exceeds a threshold value, with a threshold value of for example 0.8. -
- EP(f, i) = γ × EP(f, i−1) + (1−γ) × (|M1(f, i)| / |M2″(f, i)|)   (10)
- Wherein γ is an update coefficient representing the extent to which the degree of accuracy EP(f, i−1) computed for the previous frame is reflected in the degree of accuracy EP(f, i) computed for the current frame, and is a value such that 0≦γ<1. Note that γ is an example of a third update coefficient of technology disclosed herein. Namely, the degree of accuracy EP(f, i−1) for each of the frequencies of the previous frame is updated by computing the degree of accuracy EP(f, i) for each of the frequencies of the current frame.
coefficient computation section 224 computes the suppression coefficient ε(f, i) in a similar manner to the suppressioncoefficient computation section 24 of the first exemplary embodiment. However, for frequencies for which the degree of accuracy EP(f, i) is less than a specific threshold value (for example 1.0), this is treated as being a sensitivity difference correction coefficient that is not updated until accurate sensitivity difference correction may be performed, and the suppression coefficient ε(f, i) is taken as a 1.0 (a value for which no suppression is performed). - The
noise suppression device 210 may, for example, be implemented by a computer 240 such as that illustrated inFIG. 4 . The computer 240 includes aCPU 42, amemory 44 and anonvolatile storage section 46. TheCPU 42, thememory 44 and thestorage section 46 are connected together through abus 48. The microphone array 11 (themicrophones - The
storage section 46 may be implemented for example by a HDD or a flash memory. Thestorage section 46 serving as a storage medium is stored with anoise suppression program 250 for making the computer 240 function as thenoise suppression device 210. TheCPU 42 reads thenoise suppression program 250 from thestorage section 46, expands thenoise suppression program 250 in thememory 44 and sequentially executes the processes of thenoise suppression program 250. - The
noise suppression program 250 includes an A/D conversion process 52, time-frequency conversion process 54, adetection process 256, a frameunit correction process 258, a frequencyunit correction process 60, and an amplituderatio computation process 62. Thenoise suppression program 250 also includes a suppressioncoefficient computation process 264, a suppressionsignal generation process 66, a frequency-time conversion process 68, a phase difference utilizationrange setting process 70, a phasedifference computation process 72, and anaccuracy computation process 74. - The
CPU 42 operates as thedetection section 216 illustrated inFIG. 6 by executing thedetection process 256. TheCPU 42 operates as the frameunit correction section 218 illustrated inFIG. 6 by executing the frameunit correction process 258. TheCPU 42 operates as the suppressioncoefficient computation section 224 illustrated inFIG. 6 by executing the suppressioncoefficient computation process 264. TheCPU 42 operates as the phase difference utilizationrange setting section 30 illustrated inFIG. 6 by executing the phase difference utilizationrange setting process 70. TheCPU 42 operates as the phasedifference computation section 32 illustrated inFIG. 6 by executing the phasedifference computation process 72. TheCPU 42 operates as theaccuracy computation section 34 illustrated inFIG. 6 by executing theaccuracy computation process 74. Other processes are similar to those of thenoise suppression program 50 of the first exemplary embodiment. The computer 240 executing thenoise suppression program 250 accordingly functions as thenoise suppression device 210. - Note that it is possible to implement the
noise suppression device 210 with, for example, a semiconductor integrated circuit, and more particularly with an ASIC and DSP. - Explanation next follows regarding operation of the
noise suppression device 210 according to the second exemplary embodiment. When theinput sound signal 1 and theinput sound signal 2 are output from themicrophone array 11, theCPU 42 expands thenoise suppression program 250 stored on thestorage section 46 into thememory 44, and executes the noise suppression processing illustrated inFIG. 10 . Note that processing in the noise suppression processing of the second exemplary embodiment that is similar to the noise suppression processing in the first exemplary embodiment is allocated the same reference numerals and detailed explanation is omitted thereof. - At
step 200 of the noise suppression processing illustrated inFIG. 10 , the phase difference utilizationrange setting section 30 receives setting vales for the inter-microphone distance d and the sampling frequency Fs, and computes the frequency band capable of utilizing the phase difference to determining the arrival direction of sound, and sets the phase difference utilization range. - Then at
steps input sound signal 1 and theinput sound signal 2 that are analogue signals are converted into the signal M1(t) and the signal M2 (t) that are digital signals, and then further converted into the signals M1(f, i) and the signals M2(f, i) that are frequency domain signals. - At the
next step 202, the phasedifference computation section 32 computes the respective phase spectra of the signals M1(f, i) and the signals M2(f, i) in the phase difference utilization range set by the phase difference utilization range setting section 30 (the frequency band of frequency fmax or lower). The phasedifference computation section 32 then computes as a phase difference the difference between respective phase spectra of the same frequency. - At the
next step 204, thedetection section 216 detects the signals M1(f, i) and the signals M2(f, i) expressing the arriving sound for directions other than the target sound direction by determining the arrival direction for each of the frequencies f of each of the frames based on the phase difference computed atstep 202. - At the
next step 206, the frameunit correction section 218 employs the signals M1(f, i) and the signals M2(f, i) detected as sound arriving from directions other than the target sound direction to compute the frame unit sensitivity difference correction coefficient C1(i) such as for example expressed by Equation (1). Note that the fmax of Equation (1) is the upper limit frequency set by the phase difference utilizationrange setting section 30. The term Σ|M1(f, i)| of Equation (1) is the sum of signals M1(f, i) detected as sound arriving from directions other than the target sound direction over the range of frequencies from 0 to fmax. Similar applies to the term Σ|M2(f, i)|. - The signals M2″(f, i) subjected to sensitivity difference correction by frequency unit are then generated from the signals M2(f, i) to which sensitivity difference correction by frame unit has been performed by
steps 108 to 112. - At the
next step 208, theaccuracy computation section 34 computes as a probability that the input sound signal for that frame is sound from the target sound direction, a probability that a frequency with the phase difference is contained in the determination region (for example the region illustrated by diagonal lines inFIG. 9 ) out of each of the frequencies in the phase difference utilization range. - At the
next step 211, theaccuracy computation section 34 determines whether or not the probability computed atstep 208 has exceeded a specific threshold value (for example 0.8). Processing proceeds to step 212 when the probability that that the sound is from the target sound direction exceeds the threshold value. Atstep 212, theaccuracy computation section 34 updates the degree of accuracy EP(f, i−1) up to the previous frame by computation of the degree of accuracy EP(f, i) for example as expressed by Equation (10). However, when the probability that that the sound is from the target sound direction is determined atstep 211 to be the threshold value or lower, the processing skipsstep 212 and proceeds to step 114. - At
step 114, the amplituderatio computation section 22 computes the amplitude ratios R(f, i). At thenext step 214, the suppressioncoefficient computation section 224 computes the suppression coefficient ε(f, i) similarly to atstep 116 in the first exemplary embodiment. However, for frequencies where the degree of accuracy EP(f, i) updated atstep 212 is less than a specific threshold value (for example 1.0), the suppression coefficient ε(f, i) is made 1.0 (a value for not performing suppression). - Subsequently, in
steps 118 to 122 the output sound signal is output by processing similar to that of the first exemplary embodiment, and the noise suppression processing is ended. - As explained above, according to the
noise suppression device 210 of the second exemplary embodiment, sound arriving from directions other than the target sound direction is detected based on the computed phase difference in the frequency band capable of utilizing phase difference. For sound arriving from directions other than the target sound direction, similarly to stationary noise, the amplitude ratio between the input sound signals are values close to 1.0, and the sensitivity difference between microphones is corrected. This thereby, similarly to with the first exemplary embodiment, enables the inter-microphone sensitivity difference to be rapidly corrected for, even for cases in which there are limitations to microphone array placement. A decrease is thereby enabled in audio distortion caused by noise suppression in which sensitivity difference correction is delayed. Moreover, noise suppression processing is performed only in cases in which there is a high degree of accuracy in the sensitivity difference correction, enabling audio distortion to be prevented from occurring due to noise suppression being performed when accurate sensitivity difference correction is unable to be performed. - Moreover, although explanation has been given in the second exemplary embodiment of cases in which the frame unit sensitivity difference correction coefficient C1(i), the frequency unit frequency sensitivity difference correction coefficient CP(f, i) and the degree of accuracy EP(f, i) are updated for each of the frames, there is no limitation thereto. The above noise suppression processing may be executed for a fixed period of time T1 (for example T1=1 hour), and then the finally updated values of C1(i), CP(f, i) and EP(f, i) saved for example in a memory. Then the saved values of C1(i), CP(f, i) and EP(f, i) may be subsequently employed. Moreover, configuration may be made such that the above noise suppression processing is executed every fixed period of time T2 (for example T2=1 hour), for a fixed period of time T3 (for example T3=10 minutes). Then the final updated values of C1(i), CP(f, i) and EP(f, i) may be employed in the interval until the next fixed period of time T2. Moreover, updating of the C1(i), the CP(f, i) and the EP(f, i) may be ended when EP(f, i) for all the frequencies f is already 1.0 or above.
- Moreover, an update coefficient α in Equation (1), an update coefficient β in Equation (3) and an update coefficient γ in Equation (10) may be set so as to be larger the longer the execution duration of the above noise suppression processing. In order to rapidly complete update of each of the coefficients for each of the frequencies, according to the value of EP(f, i), for example when EP(f, i)<1.0, the values of α, β and γ may be updated as expressed by the following Equations (11) to (13). In such cases α, β and γ adopt different values for each of the frequencies.
-
α(f, i) = 0.2 × EP(f, i) + 0.8   (11)
- β(f, i) = 0.2 × EP(f, i) + 0.8   (12)
- γ(f, i) = 0.2 × EP(f, i) + 0.8   (13)
- In each of the above exemplary embodiments, explanation has been given regarding a noise suppression device that contains a microphone sensitivity difference correction device of technology disclosed herein, however a microphone sensitivity difference correction device of technology disclosed herein may be implemented as a stand-alone, or in combination with another device. For example, the configuration may be made such that a corrected signal is output as it is, or a corrected signal may be input to a device that performs other audio processing that noise suppression processing.
- Explanation has been given here of an example of noise suppression processing results of technology disclosed herein for a case in which each of the microphones are placed as illustrated in
FIG. 1 , the sampling frequency is 8 kHz, and the inter-microphone distance is 135 mm.FIG. 11 is a graph illustrating an example of amplitude spectra of theinput sound signal 1 and theinput sound signal 2. As long as there is no sensitivity difference between each of the microphones, the output of theinput sound signal 1 output from themicrophone 11A that is placed nearer to the sound source should have larger amplitude than theinput sound signal 2. However, in the example ofFIG. 11 , the degree of suppression of themicrophone 11B is greater than that of themicrophone 11A, and so the amplitude of theinput sound signal 2 is greater than the amplitude of theinput sound signal 1. - As a comparison example to the technology disclosed herein, results of performing noise suppression on the
input sound signal 1 and theinput sound signal 2 illustrated inFIG. 11 by employing a conventional method are illustrated inFIG. 12 . The conventional method here is a method in which noise suppression processing is performed by sensitivity difference correction between each of the microphones based on sound arriving from orthogonal directions detected by employing phase difference. In this conventional method, it is only possible to perform accurate sensitivity difference correction in low frequency regions within the phase difference utilization range when the inter-microphone distance is larger than the speed of sound/sampling frequency. Thus, as illustrated inFIG. 12 , a voice is suppressed in the intermediate to high frequency regions (the peak portions). - However, results of performing noise suppression on the
input sound signal 1 and theinput sound signal 2 illustrated inFIG. 11 utilizing the technology disclosed herein are illustrated inFIG. 13 . In the noise suppression results by the technology disclosed herein illustrated inFIG. 13 , a voice is not suppressed across all the frequency bands, and only the noise (the valley portions) is suppressed. - Thus with the above method of technology disclosed herein, the degrees of freedom are raised for placing positions of each of the microphones, enabling installation to a microphone array of various devices that are getting thinner and thinner, such as smartphones. Moreover it is also possible to rapidly correct sensitivity differences between microphones, and to execute noise suppression without audio distortion.
- Note that explanation has been given above of a mode in which the
noise suppression programs storage section 46. However the noise suppression program of technology disclosed herein may be supplied in a format such as stored on a storage medium such as a CD-ROM or DVD-ROM. - An aspect of technology disclosed herein has the advantageous effect of enabling rapid correction to be performed for sensitivity differences between microphones even when there are limitations to the placement positions of the microphone arrays.
- All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Claims (20)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-039695 | 2013-02-28 | ||
JP2013039695A JP6020258B2 (en) | 2013-02-28 | 2013-02-28 | Microphone sensitivity difference correction apparatus, method, program, and noise suppression apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140241546A1 (en) | 2014-08-28 |
US9204218B2 US9204218B2 (en) | 2015-12-01 |
Family
ID=49911349
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/155,731 Active 2034-08-10 US9204218B2 (en) | 2013-02-28 | 2014-01-15 | Microphone sensitivity difference correction device, method, and noise suppression device |
Country Status (3)
Country | Link |
---|---|
US (1) | US9204218B2 (en) |
EP (1) | EP2773137B1 (en) |
JP (1) | JP6020258B2 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130230188A1 (en) * | 2012-03-02 | 2013-09-05 | Alberto CORNEJO LIZARRALDE | Sound suppression system and controlled generation of same at a distance |
US20150248895A1 (en) * | 2014-03-03 | 2015-09-03 | Fujitsu Limited | Voice processing device, noise suppression method, and computer-readable recording medium storing voice processing program |
US20150269954A1 (en) * | 2014-03-21 | 2015-09-24 | Joseph F. Ryan | Adaptive microphone sampling rate techniques |
US20160284338A1 (en) * | 2015-03-26 | 2016-09-29 | Kabushiki Kaisha Toshiba | Noise reduction system |
US20160284336A1 (en) * | 2015-03-24 | 2016-09-29 | Fujitsu Limited | Noise suppression device, noise suppression method, and non-transitory computer-readable recording medium storing program for noise suppression |
US20170098453A1 (en) * | 2015-06-24 | 2017-04-06 | Microsoft Technology Licensing, Llc | Filtering sounds for conferencing applications |
CN106910511A (en) * | 2016-06-28 | 2017-06-30 | 阿里巴巴集团控股有限公司 | A kind of speech de-noising method and apparatus |
CN107509155A (en) * | 2017-09-29 | 2017-12-22 | 广州视源电子科技股份有限公司 | Array microphone correction method, device, equipment and storage medium |
JP2018032931A (en) * | 2016-08-23 | 2018-03-01 | 沖電気工業株式会社 | Acoustic signal processing device, program and method |
JP2018142819A (en) * | 2017-02-27 | 2018-09-13 | 沖電気工業株式会社 | Non-target sound determination device, program and method |
US10708690B2 (en) * | 2015-09-10 | 2020-07-07 | Yayuma Audio Sp. Z.O.O. | Method of an audio signal correction |
CN111935541A (en) * | 2020-08-12 | 2020-11-13 | 北京字节跳动网络技术有限公司 | Video correction method and device, readable medium and electronic equipment |
US11227625B2 (en) * | 2019-05-31 | 2022-01-18 | Fujitsu Limited | Storage medium, speaker direction determination method, and speaker direction determination device |
CN118629383A (en) * | 2024-08-08 | 2024-09-10 | 宁波方太厨具有限公司 | Active noise reduction system, control method thereof, abnormal sound detection method and abnormal sound detection device |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016127502A (en) * | 2015-01-06 | 2016-07-11 | 富士通株式会社 | Communication device and program |
CN107197090B (en) * | 2017-05-18 | 2020-07-14 | 维沃移动通信有限公司 | Voice signal receiving method and mobile terminal |
CN110595612B (en) * | 2019-09-19 | 2021-11-19 | 三峡大学 | Method and system for automatically calibrating sensitivity of microphone of noise acquisition device of power equipment |
CN111050268B (en) * | 2020-01-16 | 2021-11-16 | 思必驰科技股份有限公司 | Phase testing system, method, device, equipment and medium of microphone array |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020034310A1 (en) * | 2000-03-14 | 2002-03-21 | Audia Technology, Inc. | Adaptive microphone matching in multi-microphone directional system |
US20080152154A1 (en) * | 2006-12-25 | 2008-06-26 | Sony Corporation | Audio signal processing apparatus, audio signal processing method and imaging apparatus |
US20090052696A1 (en) * | 2007-06-13 | 2009-02-26 | Yamaha Corporation | Electroacoustic transducer |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0779495A (en) * | 1993-09-07 | 1995-03-20 | Matsushita Electric Ind Co Ltd | Signal controller |
JP3146804B2 (en) | 1993-11-05 | 2001-03-19 | 松下電器産業株式会社 | Array microphone and its sensitivity correction device |
JP3940662B2 (en) | 2001-11-22 | 2007-07-04 | 株式会社東芝 | Acoustic signal processing method, acoustic signal processing apparatus, and speech recognition apparatus |
US7587056B2 (en) * | 2006-09-14 | 2009-09-08 | Fortemedia, Inc. | Small array microphone apparatus and noise suppression methods thereof |
JP5070993B2 (en) * | 2007-08-27 | 2012-11-14 | 富士通株式会社 | Sound processing apparatus, phase difference correction method, and computer program |
DE112007003716T5 (en) * | 2007-11-26 | 2011-01-13 | Fujitsu Ltd., Kawasaki | Sound processing device, correction device, correction method and computer program |
JP5197458B2 (en) * | 2009-03-25 | 2013-05-15 | 株式会社東芝 | Received signal processing apparatus, method and program |
JP5240026B2 (en) * | 2009-04-09 | 2013-07-17 | ヤマハ株式会社 | Device for correcting sensitivity of microphone in microphone array, microphone array system including the device, and program |
US8620672B2 (en) * | 2009-06-09 | 2013-12-31 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for phase-based processing of multichannel signal |
JP5772151B2 (en) * | 2011-03-31 | 2015-09-02 | 沖電気工業株式会社 | Sound source separation apparatus, program and method |
- 2013-02-28 JP JP2013039695A patent/JP6020258B2/en active Active
- 2013-12-30 EP EP13199764.5A patent/EP2773137B1/en active Active
- 2014-01-15 US US14/155,731 patent/US9204218B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020034310A1 (en) * | 2000-03-14 | 2002-03-21 | Audia Technology, Inc. | Adaptive microphone matching in multi-microphone directional system |
US20080152154A1 (en) * | 2006-12-25 | 2008-06-26 | Sony Corporation | Audio signal processing apparatus, audio signal processing method and imaging apparatus |
US20090052696A1 (en) * | 2007-06-13 | 2009-02-26 | Yamaha Corporation | Electroacoustic transducer |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9118405B2 (en) * | 2012-03-02 | 2015-08-25 | Alberto CORNEJO LIZARRALDE | Sound suppression system and controlled generation of same at a distance |
US20130230188A1 (en) * | 2012-03-02 | 2013-09-05 | Alberto CORNEJO LIZARRALDE | Sound suppression system and controlled generation of same at a distance |
US20150248895A1 (en) * | 2014-03-03 | 2015-09-03 | Fujitsu Limited | Voice processing device, noise suppression method, and computer-readable recording medium storing voice processing program |
US9761244B2 (en) * | 2014-03-03 | 2017-09-12 | Fujitsu Limited | Voice processing device, noise suppression method, and computer-readable recording medium storing voice processing program |
US20150269954A1 (en) * | 2014-03-21 | 2015-09-24 | Joseph F. Ryan | Adaptive microphone sampling rate techniques |
US9406313B2 (en) * | 2014-03-21 | 2016-08-02 | Intel Corporation | Adaptive microphone sampling rate techniques |
US9691372B2 (en) * | 2015-03-24 | 2017-06-27 | Fujitsu Limited | Noise suppression device, noise suppression method, and non-transitory computer-readable recording medium storing program for noise suppression |
US20160284336A1 (en) * | 2015-03-24 | 2016-09-29 | Fujitsu Limited | Noise suppression device, noise suppression method, and non-transitory computer-readable recording medium storing program for noise suppression |
US9747885B2 (en) * | 2015-03-26 | 2017-08-29 | Kabushiki Kaisha Toshiba | Noise reduction system |
US20160284338A1 (en) * | 2015-03-26 | 2016-09-29 | Kabushiki Kaisha Toshiba | Noise reduction system |
US20170098453A1 (en) * | 2015-06-24 | 2017-04-06 | Microsoft Technology Licensing, Llc | Filtering sounds for conferencing applications |
US10127917B2 (en) * | 2015-06-24 | 2018-11-13 | Microsoft Technology Licensing, Llc | Filtering sounds for conferencing applications |
US10708690B2 (en) * | 2015-09-10 | 2020-07-07 | Yayuma Audio Sp. Z.O.O. | Method of an audio signal correction |
CN106910511A (en) * | 2016-06-28 | 2017-06-30 | 阿里巴巴集团控股有限公司 | A kind of speech de-noising method and apparatus |
CN106910511B (en) * | 2016-06-28 | 2020-08-14 | 阿里巴巴集团控股有限公司 | Voice denoising method and device |
JP2018032931A (en) * | 2016-08-23 | 2018-03-01 | 沖電気工業株式会社 | Acoustic signal processing device, program and method |
JP2018142819A (en) * | 2017-02-27 | 2018-09-13 | 沖電気工業株式会社 | Non-target sound determination device, program and method |
CN107509155A (en) * | 2017-09-29 | 2017-12-22 | 广州视源电子科技股份有限公司 | Array microphone correction method, device, equipment and storage medium |
US11227625B2 (en) * | 2019-05-31 | 2022-01-18 | Fujitsu Limited | Storage medium, speaker direction determination method, and speaker direction determination device |
CN111935541A (en) * | 2020-08-12 | 2020-11-13 | 北京字节跳动网络技术有限公司 | Video correction method and device, readable medium and electronic equipment |
CN118629383A (en) * | 2024-08-08 | 2024-09-10 | 宁波方太厨具有限公司 | Active noise reduction system, control method thereof, abnormal sound detection method and abnormal sound detection device |
Also Published As
Publication number | Publication date |
---|---|
JP2014168188A (en) | 2014-09-11 |
EP2773137A2 (en) | 2014-09-03 |
EP2773137A3 (en) | 2017-05-24 |
EP2773137B1 (en) | 2019-10-16 |
US9204218B2 (en) | 2015-12-01 |
JP6020258B2 (en) | 2016-11-02 |
Similar Documents
Publication | Title |
---|---|
US9204218B2 (en) | Microphone sensitivity difference correction device, method, and noise suppression device |
US9236060B2 (en) | Noise suppression device and method |
KR100883712B1 (en) | Method of estimating sound arrival direction, and sound arrival direction estimating apparatus | |
US8886499B2 (en) | Voice processing apparatus and voice processing method | |
US9449594B2 (en) | Adaptive phase difference based noise reduction for automatic speech recognition (ASR) | |
US9113241B2 (en) | Noise removing apparatus and noise removing method | |
US8143620B1 (en) | System and method for adaptive classification of audio sources | |
US8249270B2 (en) | Sound signal correcting method, sound signal correcting apparatus and computer program | |
CN103109320B (en) | Noise suppression device | |
US20120057711A1 (en) | Noise suppression device, noise suppression method, and program | |
US20150030174A1 (en) | Microphone array device | |
US20180033448A1 (en) | Noise suppression device and noise suppressing method | |
US20110238417A1 (en) | Speech detection apparatus | |
US20150088494A1 (en) | Voice processing apparatus and voice processing method | |
WO2020110228A1 (en) | Information processing device, program and information processing method | |
US9330683B2 (en) | Apparatus and method for discriminating speech of acoustic signal with exclusion of disturbance sound, and non-transitory computer readable medium | |
US10951978B2 (en) | Output control of sounds from sources respectively positioned in priority and nonpriority directions | |
JP5459220B2 (en) | Speech detection device | |
US20180062597A1 (en) | Gain adjustment apparatus and gain adjustment method | |
US10706870B2 (en) | Sound processing method, apparatus for sound processing, and non-transitory computer-readable storage medium | |
JP6638248B2 (en) | Audio determination device, method and program, and audio signal processing device | |
JP6973652B2 (en) | Audio processing equipment, methods and programs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: FUJITSU LIMITED, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MATSUMOTO, CHIKAKO;REEL/FRAME:032228/0891 Effective date: 20131225 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |