JP4616529B2  Blind signal separation processing device  Google Patents
Blind signal separation processing device
Publication number: JP4616529B2 (application JP2001266095A)
Authority: JP (Japan)
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Description
[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a blind signal separation processing device that performs blind separation processing on signals to be separated, such as radio waves, light, and sound, mixed in an actual environment where background noise, reflection, multipath, and the like exist.
[0002]
[Prior art]
Conventionally, blind signal separation processing is performed according to the principle described below.
[0003]
Now, the signals s_1(t), s_2(t), ..., s_N(t) from N signal sources are represented by the vector s(t) described below.
[0004]
s(t) = (s_1(t), s_2(t), ..., s_N(t))^T (1)
However, it is assumed that the average value of s(t) is 0 and that the signals (vector components) are mutually independent. Here, the symbol (...)^T represents the transpose of a vector or matrix. The object of the blind signal separation process is a digitized signal, so time takes discrete values. For this reason, time t coincides with the number representing the sampling order, and it is supposed that
t = 0, 1, 2, ... (2)
In addition, the observation signal for the signal s(t) is expressed as
x(t) = (x_1(t), x_2(t), ..., x_N(t))^T (3)
[0005]
Here, each component of the observation signal x(t) corresponds to the signal observed by sensors 1, 2, ..., N. In general, the number of sensors and the number of signal sources do not necessarily match, but it is assumed here that they do.
[0006]
In blind signal separation processing, the following linear relationship is assumed between s(t) and x(t):
x(t) = A(t) * s(t) (4)
Here, A(t) is a matrix representing an N × N unknown transfer function, and the sign "*" represents convolution.
[0007]
Here, "convolution" refers to an operation in which an input signal is delayed in a propagation path, multiplied by a predetermined coefficient, and then added. FIG. 1 shows a simple example of convolutive mixing. The source signals s_1(t) and s_2(t) are transmitted through a propagation path whose transfer function is represented by a matrix A(t), and reach sensors 1 and 2 as x_1(t) and x_2(t). The elements of the transfer function A(t) of the channel are A_11, A_12, A_21, and A_22. In FIG. 1, the source signal s_1(t) is convolved with A_11 and A_21, and the source signal s_2(t) is convolved with A_12 and A_22; then s_1(t) * A_11 and s_2(t) * A_12 are added to generate the signal x_1(t) observed by sensor 1, and s_1(t) * A_21 and s_2(t) * A_22 are added to generate the signal x_2(t) observed by sensor 2.
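Assuming NumPy, the convolutive mixing of FIG. 1 can be sketched as follows. The source signals here are random stand-ins (not the patent's data), and the FIR coefficients are the ones used later in the simulation of Equation (17).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in source signals s_1(t), s_2(t) (illustrative only).
T = 1000
s1 = rng.standard_normal(T)
s2 = rng.standard_normal(T)

# FIR impulse responses of the four propagation paths A_mn
# (coefficients taken from Equation (17) in the embodiment).
A11 = np.array([1.0, -0.7, 0.1])
A12 = np.array([0.1, 0.8, 0.3])
A21 = np.array([0.0, -0.1, 0.6])
A22 = np.array([1.0, 0.5, -0.3])

# Equation (4): x_m(t) = A_m1 * s_1(t) + A_m2 * s_2(t), '*' = convolution.
x1 = np.convolve(s1, A11)[:T] + np.convolve(s2, A12)[:T]
x2 = np.convolve(s1, A21)[:T] + np.convolve(s2, A22)[:T]
```

Each sensor thus observes a sum of delayed, scaled copies of every source, which is exactly the mixing that the separation process must undo.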
[0008]
The process of separating the source signals from the observed signal x(t), using only the observed signal and without prior information on the matrix A(t) or the source signals s(t), is called blind signal separation.
[0009]
The separation process estimates, as a function represented by an N × N matrix, the influence of the propagation path received until the signal generated from a signal source is observed, obtains its inverse matrix B(t), and computes
u(t) = B(t) * x(t), (5)
thereby reconstructing mutually independent time-domain separated signals u(t).
[0010]
When each component of the matrix A(t) is a simple real number, the mixture of Equation (4) is an instantaneous linear mixture; otherwise it is a convolutive linear mixture. In addition, to represent the environment in which a signal propagates, each component of the matrix A(t) is often modeled as an FIR (Finite Impulse Response) filter. In this case, Equation (4) can be expressed as Equation (6).
[0011]
x_m(t) = Σ_{n=1}^{N} Σ_{k=0}^{K−1} A_{m,n}(k) s_n(t − k)  (m = 1, ..., N) (6)
(where K is the FIR filter length)
[0012]
A representative means for restoring the signals of the original signal sources from such observation data is to estimate the FIR coefficients blindly and form an inverse filter to separate them.
[0013]
In order to simplify the problem of estimating the FIR coefficients blindly, Equation (6) is subjected to a windowed Fourier transform. With frequency ω and window position t_s, the transforms of s(t), x(t), and A(t) are written s~(ω, t_s), x~(ω, t_s), and A~(ω), respectively, and Equation (8) holds.
[0014]
x~(ω, t_s) = A~(ω) s~(ω, t_s) (8)
[0015]
Here, the windowed Fourier transform f~(ω, t_s) of an arbitrary time function f(t) is defined as follows.
[0016]
f~(ω, t_s) = Σ_{t=0}^{M−1} w(t) f(t + t_s ΔT) e^{−iωt} (9)
[0017]
Here, M is the number of discrete Fourier transform points, w(t) is the window function, and ΔT is the shift interval of the window function.
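As a sketch of this definition, the windowed Fourier transform can be computed with NumPy as below. The Hanning window and the parameter values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def windowed_ft(f, M, dT):
    """Windowed Fourier transform f~(omega_k, t_s) of a time function f(t).

    M  : number of DFT points (here also the window length)
    dT : shift interval Delta T of the window function
    Returns F with F[k, s] = sum_t w(t) f(t + s*dT) exp(-i*omega_k*t),
    where omega_k = 2*pi*k/M.
    """
    w = np.hanning(M)  # window function w(t); the Hanning choice is an assumption
    n_win = (len(f) - M) // dT + 1
    F = np.empty((M, n_win), dtype=complex)
    for s in range(n_win):
        F[:, s] = np.fft.fft(w * f[s * dT : s * dT + M])
    return F

# A tone at bin 8 of a 256-point DFT concentrates in that frequency row.
t = np.arange(1024)
F = windowed_ft(np.cos(2 * np.pi * 8 * t / 256), M=256, dT=128)
```

Each column of F is the spectrum of one window position t_s, giving the time-frequency components x~(ω_k, t_s) that the separation works on.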
[0018]
The elements of the mixing matrix A~(ω) in the time-frequency domain are expressed by the following Equation (10).
[0019]
A~_{m,n}(ω) = H_{m,n}(ω) e^{−iωτ_{m,n}} (10)
[0020]
Here, H_{m,n}(ω) represents the amplitude of the transfer function, and τ_{m,n} represents the delay time until the n-th source signal reaches the m-th sensor.
[0021]
As described above, for an arbitrary frequency component after the windowed Fourier transform, the right side of Equation (8) is a simple product with a complex matrix, which shows that the blind separation process is simplified.
[0022]
Assume that the mixing matrix A~(ω) is estimated for a certain frequency ω, the separation matrix W(ω) is obtained, and the signal in the time-frequency domain is separated as described by the following Equation (11). Here, the separation matrix W(ω) is the inverse matrix of the mixing matrix A~(ω).
[0023]
u~(ω, t_s) = W(ω) x~(ω, t_s) (11)
[0024]
By collecting the resulting u~(ω, t_s) at all frequencies and returning them to a time-series signal, the signal separation is completed.
[0025]
Several algorithms can be applied to estimate A~(ω) (see References [1]–[7]), but they fall roughly into two classes.
[0026]
One is Infomax (see Reference [2]), a separation method based on the independence of probability distributions. The other is a class of separation methods based on time correlation, such as TDD (Time Delayed Decorrelation; see References [5, 6]).
[0027]
[Problems to be solved by the invention]
By the way, when the blind separation method is applied to an actual environment, there are two problems, described below. References [9] and [10] propose applications in real environments.
(1) Channel misplacement problem
When the signals are separated using Equation (11), the order of the components of the separated signals u~(ω, t_s) is ambiguous, because blind source separation (BSS) alone determines them only up to a permutation. For this reason, the order of the components of the separated signal u~(ω_1, t_s) at a certain frequency ω_1 does not always match the order of separation at another frequency ω_2. This phenomenon is called the misplacement (channel misplacement) problem at each frequency.
[0028]
In order to solve this misplacement problem, Reference [9] proposes a method that uses the similarity, over a long time scale, of the frequency components of the signal.
[0029]
On the other hand, Reference [10] proposes a method that uses the coherency of the transfer functions corresponding to the frequency components of the signal.
[0030]
However, in practice, neither method is sufficient to solve the misplacement problem, and frequency components in the wrong order remain. Further, in the conventional techniques, the calculation required to solve this misplacement problem is complicated and a large amount of signal processing must be performed, so the processing is delayed and real-time processing is impossible.
(2) Problems related to background noise
Furthermore, the prior art has not sufficiently studied processing in the presence of background noise. Regarding background noise, there is a method using the subspace method according to Reference [10], but it requires a large number of sensors and is not practical.
[0031]
The present invention has been made in view of the above circumstances, and provides a blind signal separation processing apparatus capable of blindly separating source signals from convolutively mixed signals (audio, image, radio wave, etc.) in a real environment in which background noise exists.
[0032]
[Means for Solving the Problems]
The blind signal separation processing apparatus according to claim 1 is an apparatus that observes signals to be separated, emitted from a plurality of signal sources, with a plurality of observation means and separates the source signals using only those observation signals, the apparatus comprising:
a plurality of sensors for detecting the signals to be separated;
a signal storage unit for storing the signals detected by each sensor;
a signal processing unit that extracts signals from the signal storage unit and performs processing to separate them into the signals of the respective signal sources; and
a separated signal storage unit for storing the signals separated by the signal processing unit,
wherein the signal processing unit comprises:
a DFFT processing unit for converting the signals to be separated into time-frequency domain components by a discrete Fourier transform with a window function (windowed Fourier transform);
a mixing matrix estimation processing unit that estimates a mixing matrix for each frequency component based on the signal after the DFFT processing;
a signal separation processing unit that calculates a separation matrix for each frequency component based on the mixing matrix estimated for each frequency component, and calculates the product of the separation matrix and the signal after the DFFT processing for each frequency component; and
an IDFFT processing unit for performing a discrete inverse Fourier transform based on the products, for all frequency components, of the separation matrices and the signals after the DFFT processing obtained by the signal separation processing unit, thereby separating and reproducing the signals,
and wherein the mixing matrix estimation processing unit uses, as the initial value when estimating the mixing matrix in each frequency component, the estimated value of the mixing matrix of the frequency component adjacent to the target frequency component.
[0033]
In the blind signal separation processing apparatus according to claim 2, each sensor is a microphone, the signals to be separated are sound signals from a plurality of sound sources, and the sound signals from the plurality of sound sources are separated into the signal of each sound source.
[0036]
DETAILED DESCRIPTION OF THE INVENTION
(Principle of the present invention)
The present invention applies relay processing of the separation matrix (W(ω) in Equation (11)) at adjacent frequencies as a countermeasure for the channel misplacement problem. In order to perform this relay processing, a complex-valued FastICA (see References [7] and [8]) is used.
[0037]
Hereinafter, these will be described.
(1) Relay processing of the separation matrix at adjacent frequencies
The relationship between the mixing matrices at any two frequencies can be expressed using the following Equation (12).
[0038]
A~(ω_2) = T(ω_2, ω_1) A~(ω_1) (12)
[0039]
Here, T(ω_2, ω_1) is a rotation matrix. Considering adjacent frequencies ω_1 and ω_2, the relationship ω_2 = ω_1 + Δω holds. When the number of samples of the windowed Fourier transform is sufficiently large, this frequency difference Δω is sufficiently small, and the coherency between the two is considered to be very high.
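This coherency claim can be checked numerically. The sketch below (assuming NumPy) computes A~(ω_k) from the FIR filters of Equation (17) in the embodiment and measures the largest elementwise change between adjacent bins; the change shrinks roughly in proportion to Δω = 2π/M.

```python
import numpy as np

# FIR mixing filters A_mn(t) (coefficients of Equation (17)).
h = np.array([[[1.0, -0.7, 0.1], [0.1, 0.8, 0.3]],
              [[0.0, -0.1, 0.6], [1.0, 0.5, -0.3]]])

def max_adjacent_gap(M):
    """Largest |A~(omega_{k+1}) - A~(omega_k)| over all elements and bins,
    for an M-point DFT of the mixing filters."""
    A = np.fft.fft(h, M, axis=2)  # A~(omega_k), shape (2, 2, M)
    return np.abs(np.diff(A, axis=2)).max()

# A 16x longer DFT makes adjacent bins about 16x closer,
# supporting the approximations of Equations (13)-(14).
gap_64, gap_1024 = max_adjacent_gap(64), max_adjacent_gap(1024)
```

The smoother A~(ω) is between neighbouring bins, the better the estimate at one bin serves as an initial value at the next.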
[0040]
That is, it is considered that Expression (13) or Expression (14) holds.
[0041]
T(ω_2, ω_1) ≈ I (13)
A~(ω_2) ≈ A~(ω_1) (14)
(where I denotes the identity matrix)
[0042]
Now, the estimation result of the mixing matrix at the frequency ω_1 is expressed as A~(ω_1), and the order of the signals separated by this estimated value is represented as P_1. Then, in order to estimate the mixing matrix at the frequency ω_2, the initial value of the estimation at ω_2 is set as A~_initial(ω_2) = A~(ω_1). If the accuracy of the estimation result A~(ω_1) is sufficiently high, then, as shown in Equations (13) and (14), this estimated value is sufficiently close to the true value of the mixing matrix that yields the same order P_1 at the frequency ω_2.
[0043]
That is, by using the estimation result of the mixing matrix at the adjacent frequency as the initial value of the estimation at the next frequency, an optimum estimate of the mixing matrix that yields the same order of the separated signals can be obtained. Here, this method is referred to as "relay processing of the separation matrix at adjacent frequencies".
(2) Complexification of FastICA
In order to apply the "relay processing of the separation matrix at adjacent frequencies", FastICA is used as the estimation algorithm. However, since the present invention applies FastICA in the time-frequency domain, FastICA must be extended to complex values.
[0044]
As an effect of using FastICA, even in the presence of large background noise, the computation in the estimation of the mixing matrix converges stably, and robustness against background noise is obtained. Further, the countermeasure against the misplacement problem by the above "relay processing of the separation matrix at adjacent frequencies" requires very few multiplications and additions, so the arithmetic processing unit can be greatly simplified.
[0045]
Further, since the final estimated value at the adjacent frequency is used as the initial value of the estimation process at the next frequency, convergence is extremely fast, and the adopted FastICA further improves the estimation speed of the mixing matrix.
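As an illustration of a complex-valued FastICA of the kind referred to above, the following sketch (assuming NumPy) implements a generic symmetric fixed-point iteration in the style of the patent's references [7] and [8], with the nonlinearity G(u) = (a + u)^{1/2} mentioned in the embodiment. It is a sketch under these assumptions, not a reproduction of the patent's exact algorithm, and the demo data (unit-modulus sources at a single frequency bin) is likewise an assumption.

```python
import numpy as np

def whiten(X):
    """Spatial whitening: return Z = V X with sample covariance I."""
    d, E = np.linalg.eigh(X @ np.conj(X).T / X.shape[1])
    V = np.diag(1.0 / np.sqrt(d)) @ np.conj(E).T
    return V @ X, V

def complex_fastica(Z, a=0.1, n_iter=100, seed=0):
    """Symmetric complex FastICA on whitened data Z (N x T), using the
    nonlinearity G(u) = (a + u)^{1/2} with u = |w^H z|^2 (g = G', dg = G'').
    Rows of the returned W act as w^H, so the separated outputs are W @ Z."""
    N, T = Z.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    for _ in range(n_iter):
        Y = W @ Z                          # y_n = w_n^H z
        U = np.abs(Y) ** 2
        g = 0.5 / np.sqrt(a + U)           # G'(u)
        dg = -0.25 / (a + U) ** 1.5        # G''(u)
        # fixed-point update for all rows at once
        W = (g * Y) @ np.conj(Z).T / T - np.mean(g + U * dg, axis=1)[:, None] * W
        # symmetric decorrelation: W <- (W W^H)^{-1/2} W
        d, E = np.linalg.eigh(W @ np.conj(W).T)
        W = E @ np.diag(1.0 / np.sqrt(d)) @ np.conj(E).T @ W
    return W

# Demo: two independent unit-modulus sources, instantaneously mixed (one bin).
rng = np.random.default_rng(42)
T = 5000
S = np.exp(2j * np.pi * rng.random((2, T)))
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Z, V = whiten(A @ S)
W = complex_fastica(Z)
Y = W @ Z   # separated time-frequency signals, up to order and phase
```

The symmetric decorrelation step keeps the outputs mutually uncorrelated with unit power at every iteration, which is what makes the relay initialization at the next frequency well behaved.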
(Example)
FIG. 2 is a block circuit diagram of the blind separation processing apparatus according to the present invention. In FIG. 2, reference numeral 1 denotes a sensor (also referred to as observation means) that detects a signal to be separated (also referred to as an observation signal). Here, the number of sensors 1 is N, and they are numbered in order. The separated signal detected by each sensor 1 is stored in the separated signal storage unit 2. The separated signal stored in the separated signal storage unit 2 is input to the signal processing unit 3.
[0046]
The signal processing unit 3 extracts the separated signal stored in the separated signal storage unit 2 and performs a process of separating the separated signal into signals from the signal sources.
[0047]
The signal processing unit 3 comprises: a DFFT processing unit 4 that performs a discrete Fourier transform on the signals by a windowed FFT (windowed Fourier transform) and converts them into time-frequency domain components; a mixing matrix estimation processing unit 5 that estimates a mixing matrix for each frequency component based on the signal after the DFFT processing; a signal separation processing unit 6 that calculates a separation matrix for each frequency component based on the mixing matrix estimated for each frequency component, and calculates the product of the separation matrix and the signal after the DFFT processing for each frequency component; and an IDFFT processing unit 7 that performs a discrete inverse Fourier transform based on the products, for all frequency components, of the separation matrices and the signals after the DFFT processing obtained by the signal separation processing unit 6, thereby separating and reproducing the signals.
[0048]
The mixing matrix estimation processing unit 5 uses, as the initial value when estimating the mixing matrix in each frequency component, the estimated value of the mixing matrix of the frequency component adjacent to the target frequency component. The mixing matrix estimation algorithm in the mixing matrix estimation processing unit 5 is a complex FastICA algorithm.
The signals separated by the signal processing unit 3 are output to the separated signal storage unit 8, and the signal of each signal source is output to the subsequent circuit.
[0049]
FIG. 3 is a flowchart for explaining the operation of the blind signal separation processing apparatus according to the present invention. First, the signals to be separated x(t) are subjected to a discrete Fourier transform by the windowed FFT to obtain the components x~(ω_k, t_s) (S.1, S.2). Then k is set to 1 (S.3).
[0050]
Next, the mixing matrix is estimated by FastICA that is complexized for each frequency component.
[0051]
Here, for the first frequency component, the initial value for estimating the mixing matrix can be chosen arbitrarily; here,
A~_initial(ω_k) = A_const
is used as the initial value of the mixing matrix (S.4).
[0052]
For subsequent frequency components, the "relay processing of the separation matrix at adjacent frequencies", which uses the value of the mixing matrix estimated at the previous frequency as the initial value, is applied.
[0053]
This embodiment shows the case of processing in order of increasing frequency; the relay relationship of the mixing matrix can be expressed using the following Equation (15).
[0054]
A~_initial(ω_{k+1}) = A~(ω_k) (15)
[0055]
Here, k denotes the order of the frequency components ω_k, which are expressed by the following Equation (16).
[0056]
ω_k = 2πk / M (k = 0, 1, ..., M − 1) (16)
[0057]
Using the mixing matrix A~(ω_k) estimated through the learning process using FastICA (S.5), the separation matrix B~(ω_k) is obtained as its inverse matrix A~(ω_k)^{−1} (S.6), and the original independent signals can be separated from the mixed signal. That is, the separated signals u~(ω_k) are obtained as the product of the components x~(ω_k, t_s) and the separation matrix B~(ω_k) (S.7). It is then judged whether k exceeds k_max (= M − 1) (S.8). When k is smaller than k_max, the estimated mixing matrix A~(ω_k) obtained this time is set as the initial value of the mixing matrix estimation at the next frequency, k is incremented by 1 (S.9), and steps S.4 to S.7 are repeated. When k exceeds k_max in S.8, the separated signals u~(ω_k) are subjected to a discrete inverse Fourier transform (windowed IFFT) (S.10), and the target signals, that is, the time-domain separated signals u(t), are obtained and stored in the separated signal storage unit (S.11).
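The flow of steps S.6 to S.10 can be sketched as follows (assuming NumPy). To keep the sketch short, the FastICA estimation of step S.5 is replaced by the true mixing matrix A~(ω_k) computed from the FIR filters of Equation (17), and a single rectangular window of length M with circular mixing is assumed so that the per-bin model of Equation (8) holds exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 256                                   # DFT points; one rectangular window
s = rng.standard_normal((2, M))           # stand-in source signals s_n(t)

# FIR mixing filters A_mn(t) (Equation (17)); circular mixing is assumed so
# that x~(omega_k) = A~(omega_k) s~(omega_k) holds exactly at every bin.
h = np.array([[[1.0, -0.7, 0.1], [0.1, 0.8, 0.3]],
              [[0.0, -0.1, 0.6], [1.0, 0.5, -0.3]]])
A = np.fft.fft(h, M, axis=2)              # A~(omega_k), shape (2, 2, M)
S = np.fft.fft(s, axis=1)                 # s~(omega_k)  (cf. S.1-S.2)
X = np.einsum('mnk,nk->mk', A, S)         # observed x~(omega_k)

U = np.empty_like(X)
for k in range(M):
    # S.5 would estimate A~(omega_k) by complex FastICA, initialized with
    # the estimate from bin k-1 (the relay of Equation (15)); here the true
    # value stands in for the estimate.
    B = np.linalg.inv(A[:, :, k])         # S.6: B~(omega_k) = A~(omega_k)^{-1}
    U[:, k] = B @ X[:, k]                 # S.7: u~(omega_k) = B~ x~
u = np.fft.ifft(U, axis=1).real           # S.10: back to the time domain u(t)
```

With exact per-bin unmixing, u(t) reproduces the sources; in the real device the quality instead depends on the per-bin FastICA estimates and the relay keeping their order consistent.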
[0058]
4 to 7 show simulations when the blind signal processing apparatus according to this embodiment is applied to separation of acoustic signals.
[0059]
Here, as shown in FIG. 4, it is assumed that the independent signals are the voices produced by two persons 20 and 21 plus background noise. The mixed signals input to the microphones 22 serving as the sensors 1 are created under the mixing condition of Equation (17), corresponding to the propagation paths shown in FIG. 4.
A_11(n) = 1.0 − 0.7n^{−1} + 0.1n^{−2}
A_12(n) = 0.1 + 0.8n^{−1} + 0.3n^{−2}
A_21(n) = 0.0 − 0.1n^{−1} + 0.6n^{−2}
A_22(n) = 1.0 + 0.5n^{−1} − 0.3n^{−2} (17)
Here, n denotes the sample index (n^{−1} represents a delay of one sample and n^{−2} a delay of two samples). The background noise is additive white noise, and the SNR of the signal to the background noise is 15.0 dB. The distance between the two assumed microphones is 10.0 cm. Furthermore, the nonlinearity of FastICA is G(y) = (a + y)^{1/2}.
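Additive white background noise at the stated SNR of 15.0 dB can be generated as in the following sketch (assuming NumPy; the clean signal here is a random stand-in for a mixed microphone signal):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 10000
x = rng.standard_normal(T)     # stand-in for one mixed microphone signal

snr_db = 15.0                  # SNR of signal to background noise
p_noise = np.mean(x ** 2) / 10 ** (snr_db / 10)   # required noise power
noise = rng.standard_normal(T) * np.sqrt(p_noise) # additive white noise
x_noisy = x + noise
```

The noise power is set from the measured signal power, so the realized SNR matches the target up to sampling fluctuations.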
[0060]
In FIG. 4, the mixed sounds collected by the two microphones 22 are input to the blind signal separation processing device.
[0061]
FIG. 5(a) shows the speech waveform of person 20 before mixing, and FIG. 5(b) shows the speech waveform of person 21 before mixing. FIG. 6(a) shows the sound waveform of the mixed signal input to one microphone 22, and FIG. 6(b) shows the sound waveform of the mixed signal input to the other microphone 22; both are created according to Equation (17).
[0062]
FIGS. 7(a) and 7(b) show the sound signal waveforms separated by the blind signal separation processing apparatus (by simulation): FIG. 7(a) shows the separated waveform of the mixed signal input to one microphone, and FIG. 7(b) shows the separated waveform of the mixed signal input to the other microphone. As is clear from a comparison of the waveforms before and after the blind separation processing shown in FIGS. 5 and 7, independent voices can be separated faithfully, without causing the misplacement problem, even in an environment where background noise exists.
[0063]
In addition, the symbols u~, x~, A~, B~, s~, and f~ used in this specification denote the quantities written with a tilde, as shown below.
[0064]
[Expression 10]
[0065]
References used for detailed description of the invention are shown in Table 1.
[0066]
[Table 1]
[0067]
[The invention's effect]
According to the present invention, it is possible to blindly separate a source signal from a signal (sound, image, radio wave, etc.) in a real environment in which background noise exists and is convolution mixed.
[0068]
In addition, because FastICA is used to estimate the mixing matrix that represents the influence of the propagation path on the signal, the estimation of the mixing matrix converges stably even in the presence of large background noise, and high robustness against background noise is obtained.
[0069]
Further, since the countermeasure for the misplacement problem by the "relay processing of the separation matrix at adjacent frequencies" requires hardly any multiplication or addition, the arithmetic processing unit can be greatly simplified. In addition, since the relay processing uses the final estimated value at one frequency as the initial value of the estimation process at the next frequency, convergence is extremely fast.
[Brief description of the drawings]
FIG. 1 is a schematic diagram showing an example of convolutional mixing.
FIG. 2 is a block circuit diagram of a blind signal separation processing apparatus according to the present invention.
FIG. 3 is a flowchart for explaining the operation of the blind signal separation processing device according to the present invention;
FIG. 4 is an explanatory diagram when audio is separated by a blind signal separation processing device.
FIG. 5 shows the speech waveforms of the speech sources shown in FIG. 4, where (a) shows the speech waveform of one person and (b) shows that of the other person.
FIG. 6 shows the voice waveforms convolutively mixed in the transmission paths up to the microphones shown in FIG. 4, where (a) shows the mixed signal waveform input to one microphone and (b) shows that input to the other microphone.
FIG. 7 shows the speech signal waveforms separated by the blind signal separation processing device shown in FIG. 4, where (a) corresponds to the speech waveform shown in FIG. 5(a) and (b) corresponds to that shown in FIG. 5(b); the separated speech waveforms are output separately to the subsequent speaker device.
[Explanation of symbols]
1 … sensor; 2 … separated signal storage unit; 3 … signal processing unit; 4 … DFFT processing unit; 5 … mixing matrix estimation processing unit; 6 … signal separation processing unit; 7 … IDFFT processing unit; 8 … signal storage unit
Claims (2)
1. In a blind signal separation processing apparatus that observes signals to be separated, emitted from a plurality of signal sources, with a plurality of observation means and separates the source signals using only those observation signals, the apparatus comprising:
a plurality of sensors for detecting the signals to be separated;
a signal storage unit for storing the signals detected by each sensor;
a signal processing unit that extracts signals from the signal storage unit and performs processing to separate them into the signals of the respective signal sources; and
a separated signal storage unit for storing the signals separated by the signal processing unit,
wherein the signal processing unit comprises:
a DFFT processing unit for converting the signals to be separated into time-frequency domain components by a discrete Fourier transform with a window function (windowed Fourier transform);
a mixing matrix estimation processing unit that estimates a mixing matrix for each frequency component based on the signal after the DFFT processing;
a signal separation processing unit that calculates a separation matrix for each frequency component based on the mixing matrix estimated for each frequency component, and calculates the product of the separation matrix and the signal after the DFFT processing for each frequency component; and
an IDFFT processing unit for performing a discrete inverse Fourier transform based on the products, for all frequency components, of the separation matrices and the signals after the DFFT processing obtained by the signal separation processing unit, thereby separating and reproducing the signals,
and wherein the mixing matrix estimation processing unit uses, as the initial value when estimating the mixing matrix in each frequency component, the estimated value of the mixing matrix of the frequency component adjacent to the target frequency component.
2. The blind signal separation processing apparatus according to claim 1, wherein each sensor is a microphone, the signals to be separated are sound signals from a plurality of sound sources, and the sound signals from the plurality of sound sources are separated into the signal of each sound source.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
JP2001266095A | 2001-09-03 | 2001-09-03 | Blind signal separation processing device
Publications (2)
Publication Number | Publication Date
JP2003078423A | 2003-03-14
JP4616529B2 (granted) | 2011-01-19
Families Citing this family (6)
Publication number  Priority date  Publication date  Assignee  Title 

JP4525071B2 (en) *  20031222  20100818  日本電気株式会社  Signal separation method, signal separation system, and signal separation program 
KR100600313B1 (en) *  20040226  20060714  남승현  Multipath is a method and an apparatus for the separation of a frequency domain blind channel mixed signal 
KR100653173B1 (en)  20051101  20061127  학교법인 포항공과대학교  Multichannel blind source separation mechanism for solving the permutation ambiguity 
JP4772627B2 (en) *  20060912  20110914  株式会社東芝  Mixed signal separation and extraction device 
CN105807135A (en) *  20160315  20160727  东南大学  Singlechannel conductedelectromagneticinterferencenoise separation method 
CN107607342A (en) *  20170922  20180119  沈阳工业大学  The healthy efficiency detection method of Air Conditioning Facilities device cluster 
Citations (4)
Publication number  Priority date  Publication date  Assignee  Title 

JPH09251299A (en) *  19960315  19970922  Toshiba Corp  Microphone array input type voice recognition device and its method 
JPH10313497A (en) *  19960918  19981124  Nippon Telegr & Teleph Corp <Ntt>  Sound source separation method, system and recording medium 
JP2000181499A (en) *  19981210  20000630  Nippon Hoso Kyokai <Nhk>  Sound source signal separation circuit and microphone device using the same 
JP2000242624A (en) *  19990218  20000908  Retsu Yamakawa  Signal separation device 

Legal Events
Date | Code | Title
2008-08-28 | A621 | Written request for application examination
2010-08-09 | A977 | Report on retrieval
2010-08-17 | A131 | Notification of reasons for refusal
2010-09-30 | A521 | Written amendment
| TRDD | Decision of grant or rejection written
2010-10-19 | A01 | Written decision to grant a patent or to grant a registration (utility model)
2010-10-22 | A61 | First payment of annual fees (during grant procedure)
| R150 | Certificate of patent or registration of utility model
| FPAY | Renewal fee payment (payment until 2013-10-29; year of fee payment: 3)
| R250 | Receipt of annual fees
| LAPS | Cancellation because of no payment of annual fees