WO2019080551A1 - Target voice detection method and apparatus - Google Patents

Target voice detection method and apparatus

Info

Publication number
WO2019080551A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
model
target speech
module
frame
Prior art date
Application number
PCT/CN2018/095758
Other languages
English (en)
French (fr)
Inventor
马峰
王海坤
王智国
胡国平
Original Assignee
科大讯飞股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 科大讯飞股份有限公司 filed Critical 科大讯飞股份有限公司
Priority to JP2020517383A priority Critical patent/JP7186769B2/ja
Priority to US16/757,892 priority patent/US11308974B2/en
Priority to ES18871326T priority patent/ES2964131T3/es
Priority to EP18871326.7A priority patent/EP3703054B1/en
Priority to KR1020207014261A priority patent/KR102401217B1/ko
Publication of WO2019080551A1 publication Critical patent/WO2019080551A1/zh

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of extracted parameters
    • G10L 25/15 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of extracted parameters, the extracted parameters being formant information
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification techniques
    • G10L 17/06 - Decision making techniques; Pattern matching strategies
    • G10L 17/08 - Use of distortion metrics or a particular distance between probe pattern and reference templates
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of extracted parameters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H 17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • G10L 21/0264 - Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the analysis technique
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/78 - Detection of presence or absence of voice signals
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02166 - Microphone arrays; Beamforming

Definitions

  • The present application relates to the field of speech signal processing, and in particular to a target speech detection method and apparatus.
  • Target speech detection is one of the most important steps in noise reduction, and the accuracy of the detection directly affects the noise-reduction result: if the target speech is not detected accurately, the desired speech will be severely distorted during noise reduction. Accurate detection of the target speech is therefore of great importance.
  • Existing target speech detection methods fall mainly into the following two categories:
  • Intensity-difference-based detection: the primary microphone signal is first denoised, and speech detection is then performed using the intensity difference between the denoised primary microphone signal and the secondary microphone signal; alternatively, target speech detection is performed based on the energy difference between a speech reference signal and a noise reference signal.
  • This type of method rests on the assumption that the target signal picked up by the primary microphone is stronger than that received by the secondary microphone, while the noise signal has the same intensity at both microphones. For example, when the signal-to-noise ratio is high, the primary-to-secondary microphone energy ratio is greater than 1; when the signal-to-noise ratio is low, the ratio is less than 1.
  • The usage scenarios of intensity-difference-based target speech detection are limited: the intensity difference of the target signal arriving at the primary and secondary microphones must reach a certain threshold (for example, 3 dB or more) to be effective. Moreover, when the noise is strong and the signal-to-noise ratio is low, the probability of detecting the target speech is low.
  • Machine-learning-based detection: a single-channel noisy signal is used as input and an ideal binary mask (IBM) or an ideal ratio mask (IRM) as output, the output value serving as evidence for the presence of the target speech; alternatively, with multi-channel data, the multiple channels are first combined into one channel that is used as input to obtain the mask.
  • Existing machine-learning-based target speech detection methods have the following problems: when only single-channel information is used, the available information is not fully exploited and target speech detection is poor; even when multi-channel information is used, each neural network still processes only one raw signal or one mixed signal, so the spatial information of the multiple channels is not well exploited, and if the noise contains interference from other directions the performance of such methods drops sharply.
  • The embodiments of the present application provide a target speech detection apparatus and method, to solve one or more of the problems of traditional target speech detection methods: limited application scenarios, poor detection at low signal-to-noise ratios, and insufficient use of information leading to poor detection performance.
  • A target speech detection method, comprising: receiving a sound signal collected by a microphone array; performing beamforming on the sound signal to obtain beams in different directions; extracting detection features frame by frame based on the sound signal and the beams in different directions; inputting the extracted detection feature of the current frame into a pre-built target speech detection model to obtain a model output result; and obtaining the detection result of the target speech corresponding to the current frame according to the model output result.
  • The target speech detection model is constructed in the following manner: the topology of the target speech detection model is determined; training data are generated from clean speech and simulated noise and annotated with target speech information; the detection features of the training data are extracted; and the parameters of the target speech detection model are trained based on the detection features and the annotation information.
  • The target speech detection model is a classification model or a regression model, and the output of the target speech detection model is the ideal binary mask or ideal ratio mask for each frequency point of the current frame.
  • The detection features comprise spatial-dimension information, frequency-dimension information, and time-dimension information.
  • Extracting the detection features frame by frame based on the sound signal and the beams in different directions comprises:
  • splicing, at each frequency point of each frame, the beam signals and the sound signals collected by the microphone array in sequence to obtain a multi-dimensional spatial vector;
  • taking the modulus of each element of the multi-dimensional spatial vector and splicing the moduli of all frequency points of each frame to obtain a multi-dimensional frequency vector containing the spatial information;
  • performing frame expansion on the multi-dimensional frequency vector containing the spatial information to obtain a multi-dimensional time vector containing the spatial and frequency information.
  • The method may further include: performing target speech detection based on the intensity difference to obtain an intensity-difference-based detection result.
  • Determining whether the current frame is a target speech frame according to the model output result then includes: fusing the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
  • Performing target speech detection based on the intensity difference to obtain an intensity-difference-based detection result includes: obtaining a speech reference signal and a noise reference signal from the beams in different directions; calculating the powers of the speech reference signal and the noise reference signal respectively; calculating their power ratio; and obtaining the intensity-difference-based detection result according to the power ratio.
  • A target speech detection apparatus comprising: a signal receiving module, a beamforming module, a detection feature extraction module, a first detection module, and a detection result output module, wherein:
  • the signal receiving module is configured to receive a sound signal collected by a microphone array and output the sound signal to the beamforming module;
  • the beamforming module is configured to perform beamforming on the input sound signal to obtain beams in different directions;
  • the detection feature extraction module has inputs connected to the outputs of the signal receiving module and the beamforming module respectively, and is configured to extract detection features frame by frame based on the sound signal and the beams in different directions and output the extracted detection features to the first detection module;
  • the first detection module is configured to input the detection feature of the current frame extracted by the detection feature extraction module into a pre-built target speech detection model, obtain a model output result, and send the model output result to the detection result output module;
  • the detection result output module is configured to obtain the detection result of the target speech corresponding to the current frame according to the model output result.
  • The apparatus further includes a model building module configured to construct the target speech detection model;
  • the model building module includes:
  • a structure design unit for determining the topology of the target speech detection model;
  • a training data processing unit configured to generate training data from clean speech and simulated noise, and annotate the training data with target speech information;
  • a feature extraction unit configured to extract the detection features of the training data;
  • a training unit configured to train the parameters of the target speech detection model based on the detection features and the annotation information.
  • The target speech detection model is a classification model or a regression model.
  • The apparatus may further comprise:
  • a second detection module, whose input is connected to an output of the beamforming module, configured to perform target speech detection based on the intensity difference, obtain an intensity-difference-based detection result, and send the intensity-difference-based detection result to the detection result output module;
  • the detection result output module fuses the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
  • The second detection module comprises:
  • a reference signal acquisition unit configured to obtain a speech reference signal and a noise reference signal from the beams in different directions;
  • a calculation unit configured to calculate the powers of the speech reference signal and the noise reference signal respectively, and calculate the power ratio of the speech reference signal to the noise reference signal;
  • a detection result unit configured to obtain the intensity-difference-based detection result according to the power ratio.
  • A computer-readable storage medium comprising computer program code which, when executed by a computer unit, causes the computer unit to perform the steps of any of the foregoing target speech detection methods.
  • A target speech detection apparatus includes a processor, a memory, and a system bus;
  • the processor and the memory are connected through the system bus;
  • the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of any of the foregoing target speech detection methods.
  • A computer program product, when run on a terminal device, causes the terminal device to perform the steps of any of the foregoing target speech detection methods.
  • With the target speech detection method and apparatus provided by the embodiments of the present application, a sound signal collected by a microphone array is received; beamforming is performed on the sound signal to obtain beams in different directions; detection features are extracted frame by frame based on the sound signal and the beams in different directions; and the target speech is detected using a pre-built target speech detection model together with the multi-channel information. This effectively improves the accuracy of target speech detection without restricting the application scenario, and accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
  • Further, by incorporating the intensity-difference-based detection result, i.e., fusing the intensity-difference-based detection result and the model-based detection result into the detection result of the target speech for the current frame, the accuracy of the detection result is improved further.
  • FIG. 1 is a flowchart of a target speech detection method in an embodiment of the present application.
  • FIG. 2 is a flowchart of constructing a target speech detection model in an embodiment of the present application.
  • FIG. 3 is another flowchart of a target speech detection method in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a target speech detection apparatus according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a model building module in an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of a target speech detection apparatus according to an embodiment of the present application.
  • As shown in FIG. 1, a flowchart of a target speech detection method in an embodiment of the present application includes the following steps:
  • Step 101: Receive a sound signal collected by a microphone array.
  • Taking a microphone array containing M microphones as an example, the collected signals are x_1(t), x_2(t), ..., x_M(t); after collection, the signals are preprocessed by transforming them from the time domain to the frequency domain, yielding the frequency-domain signal X(k,l) = [X_1(k,l), X_2(k,l), ..., X_M(k,l)]^T, where k is the frequency index and l the frame index.
  • Step 102: Perform beamforming on the sound signal to obtain beams in different directions.
  • The beamforming may use existing techniques, such as adaptive algorithms based on direction estimation or beamforming methods based on the signal structure.
  • A beamforming algorithm processes the signals collected by the microphone array so that the array has a large gain towards certain directions in the spatial domain and a small gain towards other directions, as if a directional beam were formed.
  • From the M microphones, beams whose main lobes point in N different directions are formed; the beam for the n-th direction is obtained through a beamformer, for example in the vector form Y_n(k,l) = W_n^H(k,l) X(k,l), n = 1, ..., N.
  • W_n(k,l) denotes the beamformer coefficients for the k-th frequency band pointing in the n-th direction, determined by the particular beamforming method.
  • Step 103: Extract detection features frame by frame based on the sound signal and the beams in different directions.
  • The detection feature is comprehensive information that takes spatial-dimension, frequency-dimension and time-dimension information into account; the specific extraction method is as follows.
  • Assume, without loss of generality, that Y_1(k,l) is the output signal of the preset target direction and that Y_n(k,l), n = 2, ..., N, are the output signals of the non-target directions.
  • Spatial dimension: at each frequency point of each frame, the obtained beam signals and the sound signals collected by the microphone array are spliced in sequence to obtain a multi-dimensional spatial vector.
  • With main lobes pointing in N different directions, the N beam signals and the M microphone signals are spliced at each frequency point of each frame into the (M+N)-dimensional spatial vector V_1(k,l) = [Y_1(k,l), ..., Y_N(k,l), X_1(k,l), ..., X_M(k,l)]^T.
  • Frequency dimension: the modulus of each element of the multi-dimensional spatial vector is taken, with MD(k,l) = f(V_1(k,l)) and f(x) = |x|^2, and the moduli of all frequency points of each frame are spliced to obtain a multi-dimensional frequency vector containing the spatial information.
  • Splicing the moduli MD(k,l) of all frequency points of frame l gives the (M+N)*K-dimensional frequency vector:
  • V_2(l) = [MD(1,l); MD(2,l); ...; MD(K,l)]    (3)
  • Time dimension: frame expansion is performed on the multi-dimensional frequency vector containing the spatial information to obtain a multi-dimensional time vector containing the spatial and frequency information.
  • Expanding V_2(l) by P frames forwards and P frames backwards gives the (M+N)*K*(2P+1)-dimensional time-dimension information:
  • V_3(l) = [V_2(l-P); V_2(l-P+1); ...; V_2(l+P)]    (4)
  • Step 104: Input the extracted detection feature of the current frame into the pre-built target speech detection model to obtain a model output result.
  • The detection feature V_3(l) of the current frame l, which takes the spatial, frequency and time dimensions into account, is input into the pre-built target speech detection model, and the output is the ideal binary mask (IBM) or the ideal ratio mask (IRM) for each frequency point k of the current frame l; taking an IRM output as an example, the model output may be denoted I_model(k,l).
  • The target speech detection model may be a classification model or a regression model; if the output is an IRM it is a regression model, otherwise a classification model.
  • The target speech detection model may specifically use a neural network model such as a deep neural network (DNN) or a recurrent neural network (RNN).
  • Step 105: Obtain the detection result of the target speech corresponding to the current frame according to the model output result.
  • The model output may be an IBM or an IRM. If the model outputs an IBM, whether the current frame is a target speech frame can be determined directly from the output; if the model outputs an IRM, a further judgment against a set threshold is needed: above the threshold the frame is a target speech frame, otherwise it is a non-target speech frame.
  • The IRM output by the model may also be used directly as the corresponding detection result.
  • The construction process of the above target speech detection model, shown in FIG. 2, includes the following steps:
  • Step 201: Determine the topology of the target speech detection model.
  • The target speech detection model may be a classification model or a regression model, which is not limited in this embodiment of the present application.
  • Step 202: Generate training data from clean speech and simulated noise, and annotate the training data with target speech information.
  • The clean speech contains the target speech.
  • Step 203: Extract the detection features of the training data.
  • The detection feature is comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account, extracted as described above.
  • Step 204: Train the parameters of the target speech detection model based on the detection features and the annotation information.
  • With the target speech detection method provided by the embodiments of the present application, a sound signal is collected by a microphone array; beamforming is performed on the sound signal to obtain beams in different directions; detection features are extracted frame by frame based on the sound signal and the beams in different directions; and the target speech is detected using the pre-built target speech detection model together with the multi-channel information. This effectively improves the accuracy of target speech detection without restricting the application scenario, and accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
  • To further improve the accuracy of the detection result, a target speech detection method that fuses the results of intensity-difference-based detection and detection-model-based detection is also provided.
  • As shown in FIG. 3, another flowchart of a target speech detection method in an embodiment of the present application includes the following steps:
  • Step 301: Receive a sound signal collected by a microphone array.
  • Step 302: Perform beamforming on the sound signal to obtain beams in different directions.
  • Step 303: Perform target speech detection based on the intensity difference to obtain an intensity-difference-based detection result.
  • A speech reference signal and a noise reference signal are first obtained from the beams in different directions; then the powers of the speech reference signal and the noise reference signal are calculated respectively, together with their power ratio; finally, the intensity-difference-based detection result is obtained according to the power ratio.
  • With the speech reference signal denoted F and the noise reference signal denoted U, the energy ratio is defined as R(k,l) = P_F(k,l) / P_U(k,l).
  • P_F(k,l) and P_U(k,l) are the power estimates of the speech reference signal and the noise reference signal respectively, which can be obtained with a first-order recursion: P_F(k,l) = α_1 P_F(k,l-1) + (1-α_1)|X_F(k,l)|^2 and P_U(k,l) = α_2 P_U(k,l-1) + (1-α_2)|X_U(k,l)|^2.
  • X_F(k,l) is the speech reference signal, i.e., the beamformed signal whose main lobe points at the target direction; it can be obtained with a fixed beamforming algorithm steered at the target speech, such as delay-and-sum beamforming, constant-beamwidth beamforming, or super-gain beamforming.
  • X_U(k,l) is the noise reference signal, i.e., the beamformed signal whose null points at the target direction; it can be obtained from an adaptive blocking matrix, for example with a frequency-domain normalized least mean squares (NLMS) adaptive filter update, giving X_U(k,l) = X_1(k,l) - W_N(k,l) X_2(k,l).
  • W_N(k,l) is the adaptive blocking matrix coefficient, α is a fixed learning step size (for example, 0.05), the superscript * denotes complex conjugation, and δ is a small positive number, e.g., δ = 0.001.
  • I_ratio(k,l), the target speech detection result at the current time-frequency point, is obtained from the ratio via experimentally determined thresholds th1 and th2 (e.g., th1 = 0.5, th2 = 2).
  • A single threshold th may also be set: if I_ratio(k,l) is greater than th, the current frame is considered a target speech frame; otherwise the current frame is a non-target speech frame.
  • Step 304: Perform target speech detection based on the detection model to obtain a model-based detection result.
  • For the detection-model-based target speech detection process, refer to steps 103 and 104 in FIG. 1 above; details are not repeated here.
  • Step 305: Fuse the intensity-difference-based detection result and the model-based detection result to obtain the detection result of the target speech corresponding to the current frame.
  • A joint decision can be made from I_model(k,l) and I_ratio(k,l); taking adaptive noise cancellation (ANC) in speech noise reduction as an example, whether target speech is present is decided with experimentally determined thresholds th3 to th6 (e.g., th3 = th4 = 0.5, th5 = th6 = 0.25).
  • When the target speech detection model is a classification model and the intensity-difference-based detection result is also binary (0 or 1), an AND or an OR fusion of the two detection results may be used; other fusion manners may also be adopted, which is not limited in this embodiment.
  • Steps 303 and 304 are target speech detection processes based on different methods; they are performed independently and have no temporal ordering. They may be executed in parallel, or either one may be executed first.
  • The target speech detection method of the embodiments of the present application not only obtains accurate detection results in environments with a low signal-to-noise ratio, but also further improves the accuracy of the detection result by incorporating the intensity-difference-based detection result.
  • The embodiments of the present application further provide a computer-readable storage medium comprising computer program code which, when executed by a computer unit, causes the computer unit to perform the steps in the target speech detection embodiments of the present application.
  • A target speech detection apparatus includes a processor, a memory, and a system bus;
  • the processor and the memory are connected through the system bus;
  • the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of the target speech detection embodiments of the present application.
  • A computer program product, when run on a terminal device, causes the terminal device to perform the steps in the target speech detection embodiments of the present application.
  • The embodiments of the present application further provide a target speech detection apparatus; FIG. 4 is a schematic structural diagram of the apparatus.
  • The apparatus includes the following modules: a signal receiving module 401, a beamforming module 402, a detection feature extraction module 403, a first detection module 404, and a detection result output module 405, wherein:
  • the signal receiving module 401 is configured to receive a sound signal collected by a microphone array and output the sound signal to the beamforming module 402;
  • the beamforming module 402 is configured to perform beamforming on the input sound signal to obtain beams in different directions;
  • the inputs of the detection feature extraction module 403 are connected to the outputs of the signal receiving module 401 and the beamforming module 402 respectively, and the module is configured to extract detection features frame by frame based on the sound signal and the beams in different directions and output the extracted detection features to the first detection module 404;
  • the first detection module 404 is configured to input the detection feature of the current frame extracted by the detection feature extraction module 403 into the pre-built target speech detection model 400, obtain a model output result, and send the model output result to the detection result output module 405;
  • the detection result output module 405 is configured to obtain the detection result of the target speech corresponding to the current frame according to the model output result.
  • The preprocessing mainly means transforming the received sound signal from the time domain to the frequency domain to obtain a frequency-domain signal.
  • The detection feature extracted by the detection feature extraction module 403 is comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account.
  • For the specific extraction method, refer to the description in the foregoing method embodiments of the present application; details are not repeated here.
  • The target speech detection model 400 may be a classification model or a regression model, and may be constructed in advance by a corresponding model building module.
  • The model building module may be part of the apparatus of the present application or independent of it; this embodiment does not limit it.
  • FIG. 5 shows a structure of the model building module in the embodiments of the present application, including the following units:
  • a structure design unit 51, configured to determine the topology of the target speech detection model;
  • a training data processing unit 52, configured to generate training data from clean speech and simulated noise, and annotate the training data with target speech information;
  • a feature extraction unit 53, configured to extract the detection features of the training data;
  • a training unit 54, configured to train the parameters of the target speech detection model based on the detection features and the annotation information.
  • The detection features extracted by the feature extraction unit 53 are likewise comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account;
  • for the specific extraction method, refer to the description in the method embodiments of the present application; details are not repeated here.
  • The target speech detection apparatus collects a sound signal with a microphone array, performs beamforming on the sound signal to obtain beams in different directions, and extracts detection features frame by frame based on the sound signal and the beams in different directions.
  • The target speech is detected using the pre-built target speech detection model together with the multi-channel information, which effectively improves the accuracy of target speech detection without restricting the application scenario; accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
  • FIG. 6 is another schematic structural diagram of a target speech detection apparatus according to an embodiment of the present application.
  • In this embodiment, the apparatus further includes:
  • a second detection module 406, whose input is connected to the output of the beamforming module 402, configured to perform target speech detection based on the intensity difference, obtain an intensity-difference-based detection result, and send the intensity-difference-based detection result to the detection result output module 405.
  • The second detection module 406 may specifically include the following units:
  • a reference signal acquisition unit, configured to obtain a speech reference signal and a noise reference signal from the beams in different directions;
  • a calculation unit, configured to calculate the powers of the speech reference signal and the noise reference signal respectively, and calculate the power ratio of the speech reference signal to the noise reference signal;
  • a detection result unit, configured to obtain the intensity-difference-based detection result according to the power ratio.
  • The detection result output module 405 fuses the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
  • The target speech detection apparatus of the embodiments of the present application detects the target speech both with the model-based approach and with the intensity-difference-based approach, and considers the detection results of the two different approaches together, so that the obtained detection result is more accurate.
  • The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others.
  • The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Game Theory and Decision Science (AREA)
  • Business, Economics & Management (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)

Abstract

A target speech detection method and apparatus. The method includes: receiving a sound signal collected by a microphone array (101); performing beamforming on the sound signal to obtain beams in different directions (102); extracting detection features frame by frame based on the sound signal and the beams in different directions (103); inputting the extracted detection feature of the current frame into a pre-built target speech detection model to obtain a model output result (104); and obtaining the detection result of the target speech corresponding to the current frame according to the model output result (105). The accuracy of the detection result can thereby be improved.

Description

Target voice detection method and apparatus
This application claims priority to Chinese Patent Application No. 201710994194.5, filed with the Chinese Patent Office on October 23, 2017 and entitled "Target voice detection method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of speech signal processing, and in particular to a target speech detection method and apparatus.
Background
As one of the most natural, convenient and efficient ways of interacting, speech is widely used in people's daily life and work, and speech signal processing, such as speech coding and noise reduction, has long been a research focus in the field. Taking speech noise reduction as an example, target speech detection is one of the most important steps in noise reduction, and its accuracy directly affects the noise-reduction result: if the target speech is not detected accurately, the desired speech will be severely distorted during noise reduction. Accurate detection of the target speech is therefore of great importance.
Existing target speech detection methods fall mainly into the following two categories:
1. Target speech detection based on intensity difference
For example, the primary microphone signal is first denoised, and speech detection is then performed using the intensity difference between the denoised primary microphone signal and the secondary microphone signal; or target speech detection is performed based on the energy difference between a speech reference signal and a noise reference signal. These methods rest on the assumption that the target signal picked up by the primary microphone is stronger than that received by the secondary microphone, while the noise signal has the same intensity at both microphones. For example, when the signal-to-noise ratio is high, the primary-to-secondary microphone energy ratio is greater than 1; when the signal-to-noise ratio is low, the ratio is less than 1.
The usage scenarios of such intensity-difference-based target speech detection methods are limited: the intensity difference of the target signal arriving at the primary and secondary microphones must reach a certain threshold (e.g., 3 dB or more) to be effective. Moreover, when the noise is strong and the signal-to-noise ratio is low, the probability of detecting the target speech is low.
2. Target speech detection based on machine learning
For example, a single-channel noisy signal is used as input and an ideal binary mask (Ideal Binary Mask, IBM) or ideal ratio mask (Ideal Ratio Mask, IRM) as output, the output value serving as evidence for the presence of target speech; or, with multi-channel data, the channels are first combined into a single channel that is used as input to obtain the mask.
Existing machine-learning-based target speech detection methods have the following problems: when only single-channel information is used, the available information is not fully exploited and target speech detection is poor; even when multi-channel information is used, each neural network still processes only one raw signal or one mixed signal, so the spatial information of the multiple channels is not well exploited, and if the noise contains interfering voices from other directions the performance of such methods drops sharply.
Summary
The embodiments of this application provide a target speech detection apparatus and method, to solve one or more of the problems of traditional target speech detection methods: limited application scenarios, poor detection in low signal-to-noise-ratio environments, and insufficient use of information leading to poor detection performance.
To this end, this application provides the following technical solutions:
A target speech detection method, the method comprising:
receiving a sound signal collected by a microphone array;
performing beamforming on the sound signal to obtain beams in different directions;
extracting detection features frame by frame based on the sound signal and the beams in different directions;
inputting the extracted detection feature of the current frame into a pre-built target speech detection model to obtain a model output result;
obtaining the detection result of the target speech corresponding to the current frame according to the model output result.
Preferably, the target speech detection model is constructed as follows:
determining the topology of the target speech detection model;
generating training data from clean speech and simulated noise, and annotating the training data with target speech information;
extracting the detection features of the training data;
training the parameters of the target speech detection model based on the detection features and the annotation information.
Preferably, the target speech detection model is a classification model or a regression model, and the output of the target speech detection model is the ideal binary mask or ideal ratio mask for each frequency point of the current frame.
Preferably, the detection features include spatial-dimension information, frequency-dimension information, and time-dimension information.
Preferably, extracting detection features frame by frame based on the sound signal and the beams in different directions comprises:
splicing, at each frequency point of each frame, the beam signals and the sound signals collected by the microphone array in sequence to obtain a multi-dimensional spatial vector;
taking the modulus of each element of the multi-dimensional spatial vector, and splicing the moduli of all frequency points of each frame to obtain a multi-dimensional frequency vector containing the spatial information;
performing frame expansion on the multi-dimensional frequency vector containing the spatial information to obtain a multi-dimensional time vector containing the spatial and frequency information.
Preferably, the method further comprises:
performing target speech detection based on the intensity difference to obtain an intensity-difference-based detection result;
determining whether the current frame is a target speech frame according to the model output result then comprises:
fusing the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
Preferably, performing target speech detection based on the intensity difference to obtain the intensity-difference-based detection result comprises:
obtaining a speech reference signal and a noise reference signal from the beams in different directions;
calculating the powers of the speech reference signal and the noise reference signal respectively;
calculating the power ratio of the speech reference signal to the noise reference signal;
obtaining the intensity-difference-based detection result according to the power ratio.
A target speech detection apparatus, comprising a signal receiving module, a beamforming module, a detection feature extraction module, a first detection module and a detection result output module, wherein:
the signal receiving module is configured to receive a sound signal collected by a microphone array and output the sound signal to the beamforming module;
the beamforming module is configured to perform beamforming on the input sound signal to obtain beams in different directions;
the detection feature extraction module, whose inputs are connected to the outputs of the signal receiving module and the beamforming module respectively, is configured to extract detection features frame by frame based on the sound signal and the beams in different directions, and output the extracted detection features to the first detection module;
the first detection module is configured to input the detection feature of the current frame extracted by the detection feature extraction module into a pre-built target speech detection model to obtain a model output result, and send the model output result to the detection result output module;
the detection result output module is configured to obtain the detection result of the target speech corresponding to the current frame according to the model output result.
Preferably, the apparatus further comprises a model building module configured to build the target speech detection model;
the model building module comprises:
a structure design unit configured to determine the topology of the target speech detection model;
a training data processing unit configured to generate training data from clean speech and simulated noise, and annotate the training data with target speech information;
a feature extraction unit configured to extract the detection features of the training data;
a training unit configured to train the parameters of the target speech detection model based on the detection features and the annotation information.
Preferably, the target speech detection model is a classification model or a regression model.
Preferably, the apparatus further comprises:
a second detection module, whose input is connected to the output of the beamforming module, configured to perform target speech detection based on the intensity difference, obtain an intensity-difference-based detection result, and send the intensity-difference-based detection result to the detection result output module;
the detection result output module fuses the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
Preferably, the second detection module comprises:
a reference signal acquisition unit configured to obtain a speech reference signal and a noise reference signal from the beams in different directions;
a calculation unit configured to calculate the powers of the speech reference signal and the noise reference signal respectively, and calculate the power ratio of the speech reference signal to the noise reference signal;
a detection result unit configured to obtain the intensity-difference-based detection result according to the power ratio.
A computer-readable storage medium comprising computer program code which, when executed by a computer unit, causes the computer unit to perform the steps of any of the target speech detection methods described above.
A target speech detection apparatus comprising a processor, a memory and a system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of any of the target speech detection methods described above.
A computer program product which, when run on a terminal device, causes the terminal device to perform the steps of any of the target speech detection methods described above.
With the target speech detection method and apparatus provided by the embodiments of this application, a sound signal collected by a microphone array is received; beamforming is performed on the sound signal to obtain beams in different directions; detection features are extracted frame by frame based on the sound signal and the beams in different directions; and the target speech is detected using the pre-built target speech detection model together with the multi-channel information. This effectively improves the accuracy of target speech detection without restricting the application scenario, and accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
Further, by incorporating the intensity-difference-based detection result, i.e., fusing the intensity-difference-based detection result and the model-based detection result to obtain the detection result of the target speech for the current frame, the accuracy of the detection result is improved further.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments recorded in this application, and a person of ordinary skill in the art could obtain further drawings from them.
FIG. 1 is a flowchart of a target speech detection method according to an embodiment of this application;
FIG. 2 is a flowchart of constructing a target speech detection model in an embodiment of this application;
FIG. 3 is another flowchart of a target speech detection method according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a target speech detection apparatus according to an embodiment of this application;
FIG. 5 is a schematic diagram of a model building module in an embodiment of this application;
FIG. 6 is another schematic structural diagram of a target speech detection apparatus according to an embodiment of this application.
Detailed Description
To give those skilled in the art a better understanding of the solutions of the embodiments of this application, the embodiments are described in further detail below with reference to the drawings and implementations.
As shown in FIG. 1, a flowchart of a target speech detection method according to an embodiment of this application includes the following steps:
Step 101: Receive a sound signal collected by a microphone array.
In a specific application, the collected sound signal also needs to be preprocessed.
Taking a microphone array containing M microphones as an example, the collected signals are x_1(t), x_2(t), ..., x_M(t).
The preprocessing mainly means transforming the received sound signal from the time domain to the frequency domain, yielding the frequency-domain signal X(k,l) = [X_1(k,l), X_2(k,l), ..., X_M(k,l)]^T, where k denotes the frequency index of the signal (0, 1, ..., K) and l denotes the frame index.
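For illustration, this preprocessing step can be realized with a short-time Fourier transform. The sketch below is a minimal example, not code from the patent; the function name stft_frames and the window/overlap choices are illustrative assumptions.

```python
import numpy as np

def stft_frames(x, frame_len=512, hop=256):
    """Transform one microphone channel to the frequency domain.

    Returns a complex array of shape (num_frames, frame_len // 2 + 1)
    whose entry [l, k] is X(k, l) for frame l and frequency index k.
    Window and hop length are illustrative choices, not from the patent.
    """
    window = np.hanning(frame_len)
    num_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[l * hop:l * hop + frame_len] * window
                       for l in range(num_frames)])
    return np.fft.rfft(frames, axis=1)

# Stacking the per-microphone spectrograms over m = 1..M gives the vector
# X(k, l) = [X_1(k, l), ..., X_M(k, l)]^T used in the text.
```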
Step 102: Perform beamforming on the sound signal to obtain beams in different directions.
The beamforming may use existing techniques, such as adaptive algorithms based on direction estimation or beamforming methods based on the signal structure, which this embodiment of the application does not limit. A beamforming algorithm processes the signals collected by the microphone array so that the array has a large gain towards certain directions in the spatial domain and a small gain towards other directions, as if a directional beam were formed.
From the M microphones, beams whose main lobes point in N different directions are formed; the beam for the n-th direction is obtained through a beamformer, for example in the vector form:
Y_n(k,l) = W_n^H(k,l) X(k,l), n = 1, ..., N    (1)
where W_n(k,l) denotes the beamformer coefficients for the k-th frequency band pointing in the n-th direction, determined by the particular beamforming method.
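As a concrete illustration of step 102, the sketch below applies a bank of fixed beamformers to the stacked microphone spectrograms. The weight design is left abstract because the patent allows any beamforming method; shapes and names are assumptions for the example.

```python
import numpy as np

def beamform(X, W):
    """Apply N fixed beamformers to an M-microphone spectrogram stack.

    X : complex array, shape (M, L, K) -- X_m(k, l) for frames l = 0..L-1
    W : complex array, shape (N, M, K) -- weights for the n-th direction;
        how W is designed depends on the chosen method (e.g. delay-and-sum)

    Returns Y of shape (N, L, K): Y_n(k, l) = sum_m conj(W_n,m(k)) X_m(k, l).
    """
    return np.einsum('nmk,mlk->nlk', np.conj(W), X)
```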
Step 103: Extract detection features frame by frame based on the sound signal and the beams in different directions.
The detection feature is comprehensive information that takes spatial-dimension, frequency-dimension and time-dimension information into account; the specific extraction method is as follows.
Assume, without loss of generality, that Y_1(k,l) is the output signal of the preset target direction and that Y_n(k,l), n = 2, ..., N, are the output signals of the non-target directions.
1. Spatial-dimension information V_1(k,l)
Specifically, at each frequency point of each frame the obtained beam signals and the sound signals collected by the microphone array are spliced in sequence to obtain a multi-dimensional spatial vector. For example, with main lobes pointing in N different directions formed from the M microphones, the N beam signals and the M microphone signals are spliced at each frequency point of each frame into the (M+N)-dimensional spatial vector V_1(k,l):
V_1(k,l) = [Y_1(k,l), ..., Y_N(k,l), X_1(k,l), ..., X_M(k,l)]^T    (2)
It should be noted that in practical applications there is no restriction on the splicing order of the target-direction signal, the other-direction signals and the sound signals collected by the microphones.
2. Frequency-dimension information
First the modulus of each element of the above multi-dimensional spatial vector is taken, and then the moduli of all frequency points of each frame are spliced to obtain a multi-dimensional frequency vector containing the spatial information. For example, for each element of V_1(k,l) compute the modulus MD(k,l) = f(V_1(k,l)) with f(x) = |x|^2, and splice the moduli MD(k,l) of all frequency points of frame l to obtain the (M+N)*K-dimensional frequency vector:
V_2(l) = [MD(1,l); MD(2,l); ...; MD(K,l)]    (3)
3. Time-dimension information
Frame expansion is performed on the multi-dimensional frequency vector containing the spatial information to obtain a multi-dimensional time vector containing the spatial and frequency information. For example, V_2(l) is expanded by P frames forwards and P frames backwards, giving the (M+N)*K*(2P+1)-dimensional time-dimension information:
V_3(l) = [V_2(l-P); V_2(l-P+1); ...; V_2(l+P)]    (4)
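Putting items 1 to 3 together, a compact sketch of the detection-feature extraction is given below. Clamping the frame expansion at the sequence edges is an implementation choice the patent does not specify.

```python
import numpy as np

def detection_features(Y, X, P=2):
    """Build the spatial/frequency/time detection feature V_3.

    Y : complex array (N, L, K) -- beam outputs, Y[0] the target direction
    X : complex array (M, L, K) -- microphone spectrograms
    Returns V3 of shape (L, (M + N) * K * (2 * P + 1)).
    """
    V1 = np.concatenate([Y, X], axis=0)        # (M+N, L, K), eq. (2)
    MD = np.abs(V1) ** 2                       # elementwise f(x) = |x|^2
    L = MD.shape[1]
    V2 = MD.transpose(1, 2, 0).reshape(L, -1)  # (L, (M+N)*K), eq. (3)
    # Frame expansion over l-P .. l+P, clamping at the sequence edges:
    idx = np.clip(np.arange(L)[:, None] + np.arange(-P, P + 1), 0, L - 1)
    return V2[idx].reshape(L, -1)              # eq. (4)
```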
Step 104: Input the extracted detection feature of the current frame into the pre-built target speech detection model to obtain the model output result.
That is, the detection feature V_3(l) of the current frame l, which takes the spatial, frequency and time dimensions into account, is input into the pre-built target speech detection model, and the output is the ideal binary mask (IBM, Ideal Binary Mask) or the ideal ratio mask (IRM, Ideal Ratio Mask) for each frequency point k of the current frame l. Taking an IRM output as an example, the model output may be denoted I_model(k,l).
The target speech detection model may be a classification model or a regression model; if the output is an IRM it is a regression model, otherwise a classification model.
The target speech detection model may specifically use a neural network model such as a deep neural network (DNN) or a recurrent neural network (RNN).
Step 105: Obtain the detection result of the target speech corresponding to the current frame according to the model output result.
The model output may be an IBM or an IRM. If the model outputs an IBM, whether the current frame is a target speech frame can be determined directly from the output; if the model outputs an IRM, a further judgment against a set threshold is needed: above the threshold the frame is a target speech frame, otherwise it is a non-target speech frame. Of course, the IRM output by the model may also be used directly as the corresponding detection result.
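A sketch of steps 104 and 105 for the IRM case follows: the model maps the feature of each frame to a per-frequency mask, which is then thresholded. The model object and the 0.5 threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_target_speech(model, V3, irm_threshold=0.5):
    """Run a trained detection model frame by frame and threshold its IRM.

    `model` is any regressor whose predict() maps V3 of shape (L, D) to
    an IRM estimate I_model(k, l) of shape (L, K) with values in [0, 1].
    """
    irm = model.predict(V3)          # I_model(k, l) per time-frequency bin
    is_target = irm > irm_threshold  # per-bin target speech decision
    return irm, is_target
```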
The construction procedure of the above target speech detection model, shown in FIG. 2, includes the following steps:
Step 201: Determine the topology of the target speech detection model.
As mentioned above, the target speech detection model may be a classification model or a regression model, which this embodiment of the application does not limit.
Step 202: Generate training data from clean speech and simulated noise, and annotate the training data with target speech information.
The clean speech contains the target speech.
Step 203: Extract the detection features of the training data.
The detection feature is comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account, extracted as described above.
Step 204: Train the parameters of the target speech detection model based on the detection features and the annotation information.
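The sketch below illustrates steps 202 to 204 for the IRM-regression case. The patent states only that training data are generated from clean speech and simulated noise and annotated with target speech information; the SNR-controlled mixing and the particular IRM definition |S|^2 / (|S|^2 + |N|^2) are common choices assumed here, and stft_frames is the helper from the earlier sketch.

```python
import numpy as np

def make_training_pair(clean, noise, snr_db=5.0, frame_len=512, hop=256):
    """Mix clean speech with simulated noise and compute an IRM label."""
    noise = noise[:len(clean)]
    # Scale the noise so the mixture has the requested signal-to-noise ratio.
    gain = np.sqrt(np.sum(clean ** 2) /
                   (np.sum(noise ** 2) * 10 ** (snr_db / 10)))
    noisy = clean + gain * noise
    S = stft_frames(clean, frame_len, hop)
    N = stft_frames(gain * noise, frame_len, hop)
    irm = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
    return noisy, irm   # detection features are then extracted from noisy
```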
With the target speech detection method provided by this embodiment of the application, a sound signal is collected by a microphone array; beamforming is performed on the sound signal to obtain beams in different directions; detection features are extracted frame by frame based on the sound signal and the beams in different directions; and the target speech is detected using the pre-built target speech detection model together with the multi-channel information. This effectively improves the accuracy of target speech detection without restricting the application scenario, and accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
To further improve the accuracy of the target speech detection result, another method embodiment of this application provides a target speech detection method based on the results of two detection approaches: intensity-difference-based and detection-model-based.
As shown in FIG. 3, another flowchart of a target speech detection method according to an embodiment of this application includes the following steps:
Step 301: Receive a sound signal collected by a microphone array.
Step 302: Perform beamforming on the sound signal to obtain beams in different directions.
Step 303: Perform target speech detection based on the intensity difference to obtain an intensity-difference-based detection result.
Specifically, a speech reference signal and a noise reference signal are first obtained from the beams in different directions; then the powers of the speech reference signal and the noise reference signal are calculated respectively, together with the power ratio of the speech reference signal to the noise reference signal; finally, the intensity-difference-based detection result is obtained according to the power ratio.
Let the speech reference signal be F and the noise reference signal be U; their energy ratio, denoted R(k,l) here, is defined as:
R(k,l) = P_F(k,l) / P_U(k,l)    (5)
where P_F(k,l) and P_U(k,l) are the power estimates of the speech reference signal and the noise reference signal respectively, which can be obtained with a first-order recursion:
P_F(k,l) = α_1 P_F(k,l-1) + (1-α_1) |X_F(k,l)|^2    (6)
P_U(k,l) = α_2 P_U(k,l-1) + (1-α_2) |X_U(k,l)|^2    (7)
Here X_F(k,l) is the speech reference signal, i.e., the beamformed signal whose main lobe points at the target direction; it can be obtained with a fixed beamforming algorithm steered at the target speech, such as delay-and-sum beamforming, constant-beamwidth beamforming or super-gain beamforming.
X_U(k,l) is the noise reference signal, i.e., the beamformed signal whose null points at the target direction; it can be obtained from an adaptive blocking matrix, for example by updating the filter with a frequency-domain normalized least mean squares (NLMS) adaptive method:
X_U(k,l) = X_1(k,l) - W_N(k,l) X_2(k,l)
W_N(k,l+1) = W_N(k,l) + α X_U(k,l) X_2^*(k,l) / (|X_2(k,l)|^2 + δ)    (8)
where W_N(k,l) is the adaptive blocking matrix coefficient, α is a fixed learning step size (for example, 0.05), the superscript * denotes complex conjugation, and δ is a small positive number, e.g., δ = 0.001; the update written as eq. (8) is a standard NLMS form, the original equation being given only as an image. I_ratio(k,l) is the target speech detection result at the current time-frequency point:
[eq. (9), given in the original as an equation image: a decision rule mapping R(k,l) to I_ratio(k,l) through thresholds th1 and th2]
where the thresholds th1 and th2 are obtained from extensive experiments and/or experience; for example, th2 = 2 and th1 = 0.5 may be used.
It should be noted that a single threshold th may also be set: if I_ratio(k,l) is greater than th, the current frame is considered a target speech frame; otherwise the current frame is a non-target speech frame.
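The following sketch combines the noise-reference computation and the intensity-difference decision of step 303. The NLMS update and the behaviour of the decision between th1 and th2 (holding the previous value) are reconstructions of the equation images, labelled as such in the comments; only the constants th1 = 0.5 and th2 = 2 come from the text.

```python
import numpy as np

def noise_reference(X1, X2, alpha=0.05, delta=1e-3):
    """Noise reference via an adaptive blocking matrix (cf. eq. (8)).

    X1, X2 : complex arrays (L, K), two beam/channel signals.
    The normalized update below is a standard frequency-domain NLMS
    form, reconstructed rather than quoted from the patent.
    """
    WN = np.zeros(X1.shape[1], dtype=complex)
    XU = np.zeros_like(X1)
    for l in range(X1.shape[0]):
        XU[l] = X1[l] - WN * X2[l]
        WN = WN + alpha * XU[l] * np.conj(X2[l]) / (np.abs(X2[l]) ** 2 + delta)
    return XU

def intensity_difference_detect(XF, XU, alpha1=0.9, alpha2=0.9,
                                th1=0.5, th2=2.0):
    """Intensity-difference detection following eqs. (5)-(7) and (9).

    alpha1/alpha2 and the hold-previous behaviour between th1 and th2
    are illustrative assumptions.
    """
    L, K = XF.shape
    PF = np.zeros(K)
    PU = np.zeros(K)
    I_ratio = np.zeros((L, K))
    prev = np.zeros(K)
    for l in range(L):
        PF = alpha1 * PF + (1 - alpha1) * np.abs(XF[l]) ** 2   # eq. (6)
        PU = alpha2 * PU + (1 - alpha2) * np.abs(XU[l]) ** 2   # eq. (7)
        ratio = PF / (PU + 1e-12)                              # eq. (5)
        prev = np.where(ratio > th2, 1.0,
                        np.where(ratio < th1, 0.0, prev))
        I_ratio[l] = prev
    return I_ratio
```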
Step 304: Perform target speech detection based on the detection model to obtain a model-based detection result.
For the detection-model-based target speech detection process, refer to steps 103 and 104 of FIG. 1 above; details are not repeated here.
Step 305: Fuse the intensity-difference-based detection result and the model-based detection result to obtain the detection result of the target speech corresponding to the current frame.
Specifically, a joint decision can be made from I_model(k,l) and I_ratio(k,l). Taking adaptive noise cancellation (ANC, Adaptive Noise Cancellation) in speech noise reduction as an example, whether target speech is present is decided as follows:
[eq. (10), given in the original as an equation image: a joint decision rule over I_model(k,l) and I_ratio(k,l) with thresholds th3 to th6]
where the thresholds th3, th4, th5 and th6 are obtained from extensive experiments and/or experience; for example, th3 = 0.5, th4 = 0.5, th5 = 0.25 and th6 = 0.25 may be used.
It should be noted that when the target speech detection model is a classification model and the intensity-difference-based detection result is also binary (0 or 1), an AND or an OR fusion of the two detection results may be used. Of course, other fusion manners may also be used in practical applications, which this embodiment of the application does not limit.
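For the binary case just mentioned, the fusion of the two detections reduces to a logical AND or OR per time-frequency point, as sketched below; the thresholded joint rule of eq. (10) is given in the original only as an image, so this sketch covers only the binary fusion.

```python
import numpy as np

def fuse_decisions(I_model, I_ratio, mode="and"):
    """AND / OR fusion of binary model-based and ratio-based detections."""
    a = I_model > 0.5   # treat mask values as binary decisions
    b = I_ratio > 0.5
    return np.logical_and(a, b) if mode == "and" else np.logical_or(a, b)
```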
It should be noted that the above steps 303 and 304 are target speech detection processes based on different methods; they are performed independently and have no temporal ordering. They may be executed in parallel, or either one may be executed first.
It can be seen that the target speech detection method of this embodiment of the application not only obtains accurate detection results in environments with a low signal-to-noise ratio, but also further improves the accuracy of the detection result by incorporating the intensity-difference-based detection result.
Correspondingly, an embodiment of this application further provides a computer-readable storage medium comprising computer program code which, when executed by a computer unit, causes the computer unit to perform the steps of the target speech detection embodiments of this application.
Correspondingly, a target speech detection apparatus includes a processor, a memory and a system bus;
the processor and the memory are connected through the system bus;
the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the steps of the target speech detection embodiments of this application.
Correspondingly, a computer program product, when run on a terminal device, causes the terminal device to perform the steps of the target speech detection embodiments of this application.
Correspondingly, an embodiment of this application further provides a target speech detection apparatus; FIG. 4 is a schematic structural diagram of the apparatus.
In this embodiment, the apparatus includes the following modules: a signal receiving module 401, a beamforming module 402, a detection feature extraction module 403, a first detection module 404 and a detection result output module 405, wherein:
the signal receiving module 401 is configured to receive a sound signal collected by a microphone array and output the sound signal to the beamforming module 402;
the beamforming module 402 is configured to perform beamforming on the input sound signal to obtain beams in different directions;
the inputs of the detection feature extraction module 403 are connected to the outputs of the signal receiving module 401 and the beamforming module 402 respectively, and the module is configured to extract detection features frame by frame based on the sound signal and the beams in different directions and output the extracted detection features to the first detection module 404;
the first detection module 404 is configured to input the detection feature of the current frame extracted by the detection feature extraction module 403 into the pre-built target speech detection model 400, obtain a model output result, and send the model output result to the detection result output module 405;
the detection result output module 405 is configured to obtain the detection result of the target speech corresponding to the current frame according to the model output result.
It should be noted that after collecting the sound signal, the signal receiving module 401 also needs to preprocess it; the preprocessing mainly means transforming the received sound signal from the time domain to the frequency domain to obtain a frequency-domain signal.
The detection features extracted by the detection feature extraction module 403 are comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account; for the specific extraction manner, refer to the description in the foregoing method embodiments of this application, which is not repeated here.
The target speech detection model 400 may be a classification model or a regression model; it may be built in advance by a corresponding model building module, which may be part of the apparatus of this application or independent of it, and this embodiment of the application does not limit it.
FIG. 5 shows a structure of the model building module in an embodiment of this application, which includes the following units:
a structure design unit 51, configured to determine the topology of the target speech detection model;
a training data processing unit 52, configured to generate training data from clean speech and simulated noise, and annotate the training data with target speech information;
a feature extraction unit 53, configured to extract the detection features of the training data;
a training unit 54, configured to train the parameters of the target speech detection model based on the detection features and the annotation information.
It should be noted that during the construction of the target speech detection model, the detection features extracted by the feature extraction unit 53 are likewise comprehensive information taking the spatial-dimension, frequency-dimension and time-dimension information into account; for the specific extraction manner, refer to the description in the method embodiments of this application above, which is not repeated here.
With the target speech detection apparatus provided by this embodiment of the application, a sound signal is collected by a microphone array; beamforming is performed on the sound signal to obtain beams in different directions; detection features are extracted frame by frame based on the sound signal and the beams in different directions; and the target speech is detected using the pre-built target speech detection model together with the multi-channel information. This effectively improves the accuracy of target speech detection without restricting the application scenario, and accurate detection results can be obtained even in environments with a low signal-to-noise ratio.
FIG. 6 is another schematic structural diagram of a target speech detection apparatus according to an embodiment of this application.
Unlike the apparatus embodiment shown in FIG. 4, in this embodiment the apparatus further includes:
a second detection module 406, whose input is connected to the output of the beamforming module 402, configured to perform target speech detection based on the intensity difference, obtain an intensity-difference-based detection result, and send the intensity-difference-based detection result to the detection result output module 405.
The second detection module 406 may specifically include the following units:
a reference signal acquisition unit, configured to obtain a speech reference signal and a noise reference signal from the beams in different directions;
a calculation unit, configured to calculate the powers of the speech reference signal and the noise reference signal respectively, and calculate the power ratio of the speech reference signal to the noise reference signal;
a detection result unit, configured to obtain the intensity-difference-based detection result according to the power ratio.
Correspondingly, in this embodiment the detection result output module 405 fuses the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame; for the specific fusion manner, refer to the description in the method embodiments of this application above, which is not repeated here.
The target speech detection apparatus of this embodiment of the application detects the target speech both with the model-based approach and with the intensity-difference-based approach, and considers the detection results of the two different approaches together, so that the obtained detection result is more accurate.
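As a usage illustration, the earlier sketches chain into the following end-to-end pipeline; all names come from those sketches, not from the patent, and taking the first beam as the target direction and microphones 1 and 2 as the speech/noise reference channels is a simplifying assumption.

```python
import numpy as np

# x: array (M, num_samples) of microphone signals; W: (N, M, K) weights;
# model: a trained IRM regressor as in the earlier sketches.
def pipeline(x, W, model):
    X = np.stack([stft_frames(ch) for ch in x])      # (M, L, K)
    Y = beamform(X, W)                               # (N, L, K)
    V3 = detection_features(Y, X, P=2)               # (L, D)
    irm, _ = detect_target_speech(model, V3)         # model-based result
    XU = noise_reference(X[0], X[1])                 # noise reference
    I_ratio = intensity_difference_detect(Y[0], XU)  # ratio-based result
    return fuse_decisions(irm, I_ratio, mode="and")  # fused detection
```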
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. Moreover, the apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment, which a person of ordinary skill in the art can understand and implement without creative effort.
The embodiments of this application have been introduced in detail above, and specific implementations have been used herein to explain the application; the description of the above embodiments is only intended to help understand the method and apparatus of this application. Meanwhile, for a person of ordinary skill in the art, there will be changes to the specific implementation and the scope of application based on the ideas of this application. In summary, the contents of this specification should not be construed as limiting this application.

Claims (15)

  1. A target speech detection method, characterized in that the method comprises:
    receiving a sound signal collected by a microphone array;
    performing beamforming on the sound signal to obtain beams in different directions;
    extracting detection features frame by frame based on the sound signal and the beams in different directions;
    inputting the extracted detection feature of the current frame into a pre-built target speech detection model to obtain a model output result;
    obtaining a detection result of the target speech corresponding to the current frame according to the model output result.
  2. The method according to claim 1, characterized in that the target speech detection model is constructed in the following manner:
    determining a topology of the target speech detection model;
    generating training data from clean speech and simulated noise, and annotating the training data with target speech information;
    extracting detection features of the training data;
    training parameters of the target speech detection model based on the detection features and the annotation information.
  3. The method according to claim 1, characterized in that the target speech detection model is a classification model or a regression model, and the output of the target speech detection model is an ideal binary mask or an ideal ratio mask for each frequency point of the current frame.
  4. The method according to any one of claims 1 to 3, characterized in that the detection features comprise spatial-dimension information, frequency-dimension information and time-dimension information.
  5. The method according to claim 1, characterized in that extracting detection features frame by frame based on the sound signal and the beams in different directions comprises:
    splicing, at each frequency point of each frame, the beam signals and the sound signals collected by the microphone array in sequence to obtain a multi-dimensional spatial vector;
    taking the modulus of each element of the multi-dimensional spatial vector, and splicing the moduli of all frequency points of each frame to obtain a multi-dimensional frequency vector containing the spatial information;
    performing frame expansion on the multi-dimensional frequency vector containing the spatial information to obtain a multi-dimensional time vector containing the spatial and frequency information.
  6. The method according to any one of claims 1 to 3 and 5, characterized in that the method further comprises:
    performing target speech detection based on an intensity difference to obtain an intensity-difference-based detection result;
    wherein determining whether the current frame is a target speech frame according to the model output result comprises:
    fusing the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
  7. The method according to claim 6, characterized in that performing target speech detection based on the intensity difference to obtain the intensity-difference-based detection result comprises:
    obtaining a speech reference signal and a noise reference signal from the beams in different directions;
    calculating powers of the speech reference signal and the noise reference signal respectively;
    calculating a power ratio of the speech reference signal to the noise reference signal;
    obtaining the intensity-difference-based detection result according to the power ratio.
  8. A target speech detection apparatus, characterized in that the apparatus comprises: a signal receiving module, a beamforming module, a detection feature extraction module, a first detection module and a detection result output module, wherein:
    the signal receiving module is configured to receive a sound signal collected by a microphone array and output the sound signal to the beamforming module;
    the beamforming module is configured to perform beamforming on the input sound signal to obtain beams in different directions;
    the detection feature extraction module, whose inputs are connected to the outputs of the signal receiving module and the beamforming module respectively, is configured to extract detection features frame by frame based on the sound signal and the beams in different directions, and output the extracted detection features to the first detection module;
    the first detection module is configured to input the detection feature of the current frame extracted by the detection feature extraction module into a pre-built target speech detection model to obtain a model output result, and send the model output result to the detection result output module;
    the detection result output module is configured to obtain a detection result of the target speech corresponding to the current frame according to the model output result.
  9. The apparatus according to claim 8, characterized in that the apparatus further comprises a model building module configured to build the target speech detection model;
    the model building module comprises:
    a structure design unit configured to determine a topology of the target speech detection model;
    a training data processing unit configured to generate training data from clean speech and simulated noise, and annotate the training data with target speech information;
    a feature extraction unit configured to extract detection features of the training data;
    a training unit configured to train parameters of the target speech detection model based on the detection features and the annotation information.
  10. The apparatus according to claim 8, characterized in that the target speech detection model is a classification model or a regression model.
  11. The apparatus according to any one of claims 8 to 10, characterized in that the apparatus further comprises:
    a second detection module, whose input is connected to the output of the beamforming module, configured to perform target speech detection based on an intensity difference, obtain an intensity-difference-based detection result, and send the intensity-difference-based detection result to the detection result output module;
    the detection result output module fuses the intensity-difference-based detection result and the model output result to obtain the detection result of the target speech corresponding to the current frame.
  12. The apparatus according to claim 11, characterized in that the second detection module comprises:
    a reference signal acquisition unit configured to obtain a speech reference signal and a noise reference signal from the beams in different directions;
    a calculation unit configured to calculate powers of the speech reference signal and the noise reference signal respectively, and calculate a power ratio of the speech reference signal to the noise reference signal;
    a detection result unit configured to obtain the intensity-difference-based detection result according to the power ratio.
  13. A computer-readable storage medium, characterized by comprising computer program code which, when executed by a computer unit, causes the computer unit to perform the steps of the target speech detection method according to any one of claims 1 to 7.
  14. A target speech detection apparatus, characterized by comprising: a processor, a memory and a system bus;
    the processor and the memory are connected through the system bus;
    the memory is configured to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 7.
  15. A computer program product, characterized in that, when the computer program product is run on a terminal device, the terminal device is caused to perform the method according to any one of claims 1 to 7.
PCT/CN2018/095758 2017-10-23 2018-07-16 Target voice detection method and apparatus WO2019080551A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2020517383A JP7186769B2 (ja) 2017-10-23 2018-07-16 対象音声検出方法及び装置
US16/757,892 US11308974B2 (en) 2017-10-23 2018-07-16 Target voice detection method and apparatus
ES18871326T ES2964131T3 (es) 2017-10-23 2018-07-16 Método y aparato de detección de voz objetivo
EP18871326.7A EP3703054B1 (en) 2017-10-23 2018-07-16 Target voice detection method and apparatus
KR1020207014261A KR102401217B1 (ko) 2017-10-23 2018-07-16 타겟 음성 검출 방법 및 장치

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710994194.5A CN107785029B (zh) 2017-10-23 2017-10-23 目标语音检测方法及装置
CN201710994194.5 2017-10-23

Publications (1)

Publication Number Publication Date
WO2019080551A1 true WO2019080551A1 (zh) 2019-05-02

Family

ID=61433874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/095758 WO2019080551A1 (zh) 2017-10-23 2018-07-16 目标语音检测方法及装置

Country Status (8)

Country Link
US (1) US11308974B2 (zh)
EP (1) EP3703054B1 (zh)
JP (1) JP7186769B2 (zh)
KR (1) KR102401217B1 (zh)
CN (1) CN107785029B (zh)
ES (1) ES2964131T3 (zh)
HU (1) HUE065118T2 (zh)
WO (1) WO2019080551A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111613247A (zh) * 2020-04-14 2020-09-01 云知声智能科技股份有限公司 一种基于麦克风阵列的前景语音检测方法及装置
CN112562649A (zh) * 2020-12-07 2021-03-26 北京大米科技有限公司 一种音频处理的方法、装置、可读存储介质和电子设备

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107785029B (zh) * 2017-10-23 2021-01-29 科大讯飞股份有限公司 目标语音检测方法及装置
CN108335694B (zh) * 2018-02-01 2021-10-15 北京百度网讯科技有限公司 远场环境噪声处理方法、装置、设备和存储介质
US10672414B2 (en) * 2018-04-13 2020-06-02 Microsoft Technology Licensing, Llc Systems, methods, and computer-readable media for improved real-time audio processing
CN108962237B (zh) * 2018-05-24 2020-12-04 腾讯科技(深圳)有限公司 混合语音识别方法、装置及计算机可读存储介质
CN110164446B (zh) * 2018-06-28 2023-06-30 腾讯科技(深圳)有限公司 语音信号识别方法和装置、计算机设备和电子设备
CN109801646B (zh) * 2019-01-31 2021-11-16 嘉楠明芯(北京)科技有限公司 一种基于融合特征的语音端点检测方法和装置
CN110223708B (zh) * 2019-05-07 2023-05-30 平安科技(深圳)有限公司 基于语音处理的语音增强方法及相关设备
CN110265065B (zh) * 2019-05-13 2021-08-03 厦门亿联网络技术股份有限公司 一种构建语音端点检测模型的方法及语音端点检测系统
CN111883166B (zh) * 2020-07-17 2024-05-10 北京百度网讯科技有限公司 一种语音信号处理方法、装置、设备以及存储介质
CN112151036B (zh) * 2020-09-16 2021-07-30 科大讯飞(苏州)科技有限公司 基于多拾音场景的防串音方法、装置以及设备
CN113077803B (zh) * 2021-03-16 2024-01-23 联想(北京)有限公司 一种语音处理方法、装置、可读存储介质及电子设备
CN113270108B (zh) * 2021-04-27 2024-04-02 维沃移动通信有限公司 语音活动检测方法、装置、电子设备及介质
CN113345469A (zh) * 2021-05-24 2021-09-03 北京小米移动软件有限公司 语音信号的处理方法、装置、电子设备及存储介质
CN115240698A (zh) * 2021-06-30 2022-10-25 达闼机器人股份有限公司 模型训练方法、语音检测定位方法、电子设备及存储介质
CN116580723B (zh) * 2023-07-13 2023-09-08 合肥星本本网络科技有限公司 一种强噪声环境下的语音检测方法和系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103181190A (zh) * 2010-10-22 2013-06-26 高通股份有限公司 用于远场多源追踪和分离的系统、方法、设备和计算机可读媒体
CN105590631A (zh) * 2014-11-14 2016-05-18 中兴通讯股份有限公司 信号处理的方法及装置
CN106483502A (zh) * 2016-09-23 2017-03-08 科大讯飞股份有限公司 一种声源定位方法及装置
CN107785029A (zh) * 2017-10-23 2018-03-09 科大讯飞股份有限公司 目标语音检测方法及装置

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002091469A (ja) * 2000-09-19 2002-03-27 Atr Onsei Gengo Tsushin Kenkyusho:Kk 音声認識装置
US7415117B2 (en) * 2004-03-02 2008-08-19 Microsoft Corporation System and method for beamforming using a microphone array
CN101218848B (zh) * 2005-07-06 2011-11-16 皇家飞利浦电子股份有限公司 用于声束形成的设备和方法
KR20090037845A (ko) 2008-12-18 2009-04-16 삼성전자주식회사 혼합 신호로부터 목표 음원 신호를 추출하는 방법 및 장치
US8175291B2 (en) * 2007-12-19 2012-05-08 Qualcomm Incorporated Systems, methods, and apparatus for multi-microphone based speech enhancement
CN101192411B (zh) * 2007-12-27 2010-06-02 北京中星微电子有限公司 大距离麦克风阵列噪声消除的方法和噪声消除系统
CN102074246B (zh) * 2011-01-05 2012-12-19 瑞声声学科技(深圳)有限公司 基于双麦克风语音增强装置及方法
US8650029B2 (en) * 2011-02-25 2014-02-11 Microsoft Corporation Leveraging speech recognizer feedback for voice activity detection
KR101811716B1 (ko) * 2011-02-28 2017-12-28 삼성전자주식회사 음성 인식 방법 및 그에 따른 음성 인식 장치
JP5318258B1 (ja) * 2012-07-03 2013-10-16 株式会社東芝 集音装置
TW201443875A (zh) * 2013-05-14 2014-11-16 Hon Hai Prec Ind Co Ltd 收音方法及收音系統
CN103578467B (zh) * 2013-10-18 2017-01-18 威盛电子股份有限公司 声学模型的建立方法、语音辨识方法及其电子装置
US9715660B2 (en) * 2013-11-04 2017-07-25 Google Inc. Transfer learning for deep neural network based hotword detection
CN105244036A (zh) * 2014-06-27 2016-01-13 中兴通讯股份有限公司 一种麦克风语音增强方法及装置
JP6221158B2 (ja) * 2014-08-27 2017-11-01 本田技研工業株式会社 自律行動ロボット、及び自律行動ロボットの制御方法
US20160180214A1 (en) * 2014-12-19 2016-06-23 Google Inc. Sharp discrepancy learning
US10580401B2 (en) * 2015-01-27 2020-03-03 Google Llc Sub-matrix input for neural network layers
US9697826B2 (en) * 2015-03-27 2017-07-04 Google Inc. Processing multi-channel audio waveforms
CN104766093B (zh) * 2015-04-01 2018-02-16 中国科学院上海微系统与信息技术研究所 一种基于麦克风阵列的声目标分类方法
CN105336340B (zh) * 2015-09-30 2019-01-01 中国电子科技集团公司第三研究所 一种用于低空目标声探测系统的风噪抑制方法和装置
JP6594222B2 (ja) 2015-12-09 2019-10-23 日本電信電話株式会社 音源情報推定装置、音源情報推定方法、およびプログラム
CN205621437U (zh) * 2015-12-16 2016-10-05 宁波桑德纳电子科技有限公司 一种声像联合定位的远距离语音采集装置
CN106504763A (zh) * 2015-12-22 2017-03-15 电子科技大学 基于盲源分离与谱减法的麦克风阵列多目标语音增强方法
CN105869651B (zh) * 2016-03-23 2019-05-31 北京大学深圳研究生院 基于噪声混合相干性的双通道波束形成语音增强方法
RU2698153C1 (ru) * 2016-03-23 2019-08-22 ГУГЛ ЭлЭлСи Адаптивное улучшение аудио для распознавания многоканальной речи
CN105788607B (zh) * 2016-05-20 2020-01-03 中国科学技术大学 应用于双麦克风阵列的语音增强方法
US9972339B1 (en) * 2016-08-04 2018-05-15 Amazon Technologies, Inc. Neural network based beam selection
CN106328156B (zh) * 2016-08-22 2020-02-18 华南理工大学 一种音视频信息融合的麦克风阵列语音增强系统及方法
US10140980B2 (en) * 2016-12-21 2018-11-27 Google LCC Complex linear projection for acoustic modeling
CN106782618B (zh) * 2016-12-23 2020-07-31 云知声(上海)智能科技有限公司 基于二阶锥规划的目标方向语音检测方法
CN106710603B (zh) * 2016-12-23 2019-08-06 云知声(上海)智能科技有限公司 利用线性麦克风阵列的语音识别方法及系统
EP3566461B1 (en) * 2017-01-03 2021-11-24 Koninklijke Philips N.V. Method and apparatus for audio capture using beamforming
US11133011B2 (en) * 2017-03-13 2021-09-28 Mitsubishi Electric Research Laboratories, Inc. System and method for multichannel end-to-end speech recognition
CN106952653B (zh) * 2017-03-15 2021-05-04 科大讯飞股份有限公司 噪声去除方法、装置和终端设备
US10546593B2 (en) * 2017-12-04 2020-01-28 Apple Inc. Deep learning driven multi-channel filtering for speech enhancement
US11120786B2 (en) * 2020-03-27 2021-09-14 Intel Corporation Method and system of automatic speech recognition with highly efficient decoding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103181190A (zh) * 2010-10-22 2013-06-26 高通股份有限公司 用于远场多源追踪和分离的系统、方法、设备和计算机可读媒体
CN105590631A (zh) * 2014-11-14 2016-05-18 中兴通讯股份有限公司 信号处理的方法及装置
CN106483502A (zh) * 2016-09-23 2017-03-08 科大讯飞股份有限公司 一种声源定位方法及装置
CN107785029A (zh) * 2017-10-23 2018-03-09 科大讯飞股份有限公司 目标语音检测方法及装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111613247A (zh) * 2020-04-14 2020-09-01 云知声智能科技股份有限公司 一种基于麦克风阵列的前景语音检测方法及装置
CN111613247B (zh) * 2020-04-14 2023-03-21 云知声智能科技股份有限公司 一种基于麦克风阵列的前景语音检测方法及装置
CN112562649A (zh) * 2020-12-07 2021-03-26 北京大米科技有限公司 一种音频处理的方法、装置、可读存储介质和电子设备
CN112562649B (zh) * 2020-12-07 2024-01-30 北京大米科技有限公司 一种音频处理的方法、装置、可读存储介质和电子设备

Also Published As

Publication number Publication date
KR102401217B1 (ko) 2022-05-23
HUE065118T2 (hu) 2024-05-28
CN107785029B (zh) 2021-01-29
KR20200066367A (ko) 2020-06-09
CN107785029A (zh) 2018-03-09
EP3703054B1 (en) 2023-09-20
EP3703054A4 (en) 2021-07-28
JP7186769B2 (ja) 2022-12-09
US20200342890A1 (en) 2020-10-29
US11308974B2 (en) 2022-04-19
ES2964131T3 (es) 2024-04-04
EP3703054A1 (en) 2020-09-02
JP2021500593A (ja) 2021-01-07
EP3703054C0 (en) 2023-09-20

Similar Documents

Publication Publication Date Title
WO2019080551A1 (zh) 目标语音检测方法及装置
CN109272989B (zh) 语音唤醒方法、装置和计算机可读存储介质
CN110600017B (zh) 语音处理模型的训练方法、语音识别方法、系统及装置
CN110444214B (zh) 语音信号处理模型训练方法、装置、电子设备及存储介质
CN105068048B (zh) 基于空间稀疏性的分布式麦克风阵列声源定位方法
CN106251877B (zh) 语音声源方向估计方法及装置
CN110503970A (zh) 一种音频数据处理方法、装置及存储介质
US20130294611A1 (en) Source separation by independent component analysis in conjuction with optimization of acoustic echo cancellation
Dorfan et al. Tree-based recursive expectation-maximization algorithm for localization of acoustic sources
CN108766459A (zh) 一种多人语音混合中目标说话人估计方法及系统
CN110610718B (zh) 一种提取期望声源语音信号的方法及装置
CN112652320B (zh) 声源定位方法和装置、计算机可读存储介质、电子设备
WO2016119388A1 (zh) 一种基于语音信号构造聚焦协方差矩阵的方法及装置
Marti et al. Real time speaker localization and detection system for camera steering in multiparticipant videoconferencing environments
CN110188179B (zh) 语音定向识别交互方法、装置、设备及介质
CN112712818A (zh) 语音增强方法、装置、设备
CN112180318A (zh) 声源波达方向估计模型训练和声源波达方向估计方法
CN108269581B (zh) 一种基于频域相干函数的双麦克风时延差估计方法
CN111192569B (zh) 双麦语音特征提取方法、装置、计算机设备和存储介质
CN114664288A (zh) 一种语音识别方法、装置、设备及可存储介质
CN114495974B (zh) 音频信号处理方法
CN115910047B (zh) 数据处理方法、模型训练方法、关键词检测方法及设备
Ju et al. Tracking the moving sound target based on distributed microphone pairs
CN117054968B (zh) 基于线性阵列麦克风的声源定位系统及其方法
Gao et al. A Physical Model-Based Self-Supervised Learning Method for Signal Enhancement Under Reverberant Environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18871326

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020517383

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20207014261

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018871326

Country of ref document: EP

Effective date: 20200525