CN109448750A - Speech enhancement method for improving speech quality of biological radar - Google Patents

Speech enhancement method for improving speech quality of biological radar

Info

Publication number
CN109448750A
Authority
CN
China
Prior art keywords
frame
voice
noise
signal
bispectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811564752.5A
Other languages
Chinese (zh)
Other versions
CN109448750B (en)
Inventor
李盛
田颖
徐教礼
吕东旭
宋欣欣
路国华
王健琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xijing University
Original Assignee
Xijing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xijing University filed Critical Xijing University
Priority to CN201811564752.5A priority Critical patent/CN109448750B/en
Publication of CN109448750A publication Critical patent/CN109448750A/en
Application granted granted Critical
Publication of CN109448750B publication Critical patent/CN109448750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 - Processing in the frequency domain

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A speech enhancement method for improving the speech quality of a biological radar comprises: acquiring radar speech data; subtracting from the i-th frame x_i(n) its mean value to obtain the zero-mean signal frame y_i(n), and applying an FFT to obtain the phase spectrum φ_i(ω) of the noisy speech; classifying each frame of the observed samples as a speech frame, a noise frame or a transition frame by setting a decision threshold; estimating the bispectrum of each frame according to its class; reconstructing the magnitudes of the N-point DFT coefficients of each frame from its bispectrum by least squares; and combining the reconstructed magnitude spectrum X_i(ω) of each frame with the phase spectrum φ_i(ω), applying an inverse FFT, and synthesizing the speech signal to obtain the enhanced speech. The invention has the advantage of selectively suppressing the Gaussian white noise and coloured noise in radar speech, thereby achieving effective enhancement of radar speech.

Description

Speech enhancement method for improving speech quality of biological radar
Technical field
The invention belongs to the field of life-parameter detection and acquisition, and in particular relates to a radar speech noise-reduction method based on higher-order cumulants.
Background technique
The conventional microphone has so far been almost the only means of acquiring speech; it suffers from poor directionality, susceptibility to ambient noise and other acoustic interference, and a short detection distance, and speech enhanced with the traditional spectral-subtraction method or with the wavelet de-noising method still shows obvious defects, as illustrated in Fig. 3 and Fig. 4.
Using a millimetre-wave biological radar, speech signals can be detected at a comparatively long range. Compared with microphone speech, this speech-detection approach offers a long detection range, high directionality, strong immunity to acoustic noise interference and a certain degree of penetrability. Because it can effectively compensate for the deficiencies of microphone speech, it is expected to yield substitute products for the microphone and has broad application prospects.
Detecting speech with a biological radar therefore has clear advantages, but the detected speech exhibits new characteristics, as shown in Fig. 2: radar speech is not affected by acoustic interference, yet new noise and interference components are introduced into the speech signal: first, the electromagnetic noise of the radar wave; second, the circuit and electronic noise introduced during speech-signal acquisition; and third, the micro-motion noise of the external environment caused by the slight motion, vibration and resonance of objects in the surroundings. These noises degrade the quality and intelligibility of radar speech and pose a new problem for moving this new speech-acquisition method towards practical application.
The electromagnetic noise, circuit noise and environmental noise in the speech acquired by the biological radar are largely Gaussian white noise, with a small proportion of coloured noise. For this type of noise, the present patent proposes a new radar speech de-noising method based on higher-order cumulants. Compared with second-order statistics, higher-order statistics carry richer phase information and can effectively retain the phase information of the signal and the various characteristic features of non-Gaussian signals; at the same time, the higher-order cumulant spectra of a Gaussian signal are zero for orders greater than two. These favourable properties can be used to selectively suppress Gaussian white noise and coloured noise and to achieve effective enhancement of radar speech, so that biological-radar speech detection can be applied in more complex acoustic backgrounds and over longer detection distances.
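To illustrate the property relied on above (third- and higher-order cumulants of a Gaussian process vanish, while those of non-Gaussian signals generally do not), a minimal numerical check in Python/NumPy is sketched below; the two test signals are arbitrary stand-ins chosen for illustration and are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8192

def c3_zero_lag(y):
    """Third-order cumulant at zero lag, c3(0,0) = E{y(n)^3}, for a zero-mean real frame."""
    y = y - y.mean()
    return np.mean(y ** 3)

gaussian_noise = rng.normal(0.0, 1.0, M)       # Gaussian white noise: cumulant close to zero
skewed_signal = rng.exponential(1.0, M) - 1.0  # skewed, non-Gaussian stand-in: cumulant near 2

print(f"Gaussian noise     c3(0,0) = {c3_zero_lag(gaussian_noise):+.4f}")
print(f"non-Gaussian signal c3(0,0) = {c3_zero_lag(skewed_signal):+.4f}")
```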
Summary of the invention
In order to overcome the above-mentioned deficiencies of the prior art, the object of the present invention is to provide a speech enhancement method for improving the speech quality of a biological radar, which can selectively suppress the Gaussian white noise and coloured noise in radar speech and achieve effective enhancement of radar speech.
In order to achieve the above object, the technical solution adopted by the present invention is as follows:
A speech enhancement method for improving the speech quality of a biological radar comprises the following steps:
Step 1: acquire radar speech data:
Let {x(1), x(2), ... x(n)} be a group of observed radar-speech samples, and divide the observed samples into K frames, each frame containing M sample points;
Step 2: zero-mean removal:
Subtract from the i-th frame x_i(n) obtained in Step 1 its mean value to obtain the zero-mean signal frame y_i(n);
Step 3: apply an FFT to the zero-mean signal frame y_i(n) obtained in Step 2 to obtain the phase spectrum φ_i(ω) of the noisy speech;
Step 4: classify each frame of the observed samples obtained in Step 1 as a speech frame, a noise frame or a transition frame by setting a decision threshold; the speech frames correspond to the voiced segments of the speech data, the noise frames correspond to the silent segments, and the transition frames are frames lying at the boundary between a voiced segment and a silent segment; the noise frames have weak structure, appear as random fluctuations, and their amplitudes follow a Gaussian distribution;
Step 5: according to the speech-frame, noise-frame and transition-frame classification of Step 4, estimate the bispectrum of each frame;
Step 6: from the bispectrum of each frame obtained in Step 5, reconstruct the magnitudes of the N-point DFT coefficients of the frame by least squares;
Step 7: combine the reconstructed magnitude spectrum X_i(ω) of each frame from Step 6 with the phase spectrum φ_i(ω) from Step 3, apply an inverse FFT, and synthesize the speech signal to obtain the enhanced speech (a code sketch of steps 1-3 is given after this list).
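As an illustration only, a minimal Python/NumPy sketch of steps 1-3 (framing, zero-mean removal and phase extraction) follows; the function names and the absence of windowing or frame overlap are simplifying assumptions, not part of the claimed method.

```python
import numpy as np

def frame_signal(x, frame_len):
    """Step 1: split the observed samples into K frames of frame_len points each."""
    K = len(x) // frame_len
    return np.asarray(x[:K * frame_len]).reshape(K, frame_len)

def analyze_frames(frames):
    """Steps 2-3: remove the per-frame mean, then take the FFT and keep its phase."""
    y = frames - frames.mean(axis=1, keepdims=True)   # zero-mean frames y_i(n)
    phase = np.angle(np.fft.fft(y, axis=1))           # phase spectrum phi_i(omega)
    return y, phase
```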
Further, the method used in Step 4 for discriminating each frame of the observed samples by setting a decision threshold is as follows:
1) Assume that the first ten frames of the observed radar-speech samples are noise, and set the decision threshold from the mean m_γ and standard deviation s_γ of the logarithmic gradient of these first ten frames:
Threshold = m_γ + ρ·s_γ   (1)
where:
ρ is a constant; here ρ = 2.3;
2) For each frame of data y_i(n), extract the logarithmic gradient 10·log10|γ_i|, where
γ_i = c_3(0,0) = E{y_i*(n)·y_i(n)·y_i(n)}   (4)
3) Using the threshold Threshold computed above, discriminate the frame type (a code sketch of this rule follows the list):
A. if the logarithmic gradient of the frame satisfies 10·log10|γ_i| < Threshold, the frame is a noise frame;
B. if 10·log10|γ_i| ≥ Threshold, but one of the preceding or following frames satisfies 10·log10|γ_k| < Threshold, k = i-1 or i+1, the frame is a transition frame;
C. if 10·log10|γ_i| ≥ Threshold, 10·log10|γ_{i+1}| ≥ Threshold and 10·log10|γ_{i-1}| ≥ Threshold hold simultaneously, the frame is a speech frame.
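A sketch of this decision rule in Python/NumPy is given below, assuming real-valued zero-mean frames so that c_3(0,0) reduces to the mean of y^3; the small floor inside the logarithm, the boundary handling and the function names are illustrative additions.

```python
import numpy as np

def log_gradient(y):
    """10*log10|gamma_i| with gamma_i = c3(0,0) = E{y_i(n)^3} for a real zero-mean frame."""
    y = y - y.mean()
    gamma = np.mean(y ** 3)
    return 10.0 * np.log10(np.abs(gamma) + 1e-12)   # small floor avoids log(0)

def classify_frames(frames, rho=2.3, n_noise=10):
    """Label each frame 'noise', 'transition' or 'speech' using eq. (1) and rules A-C."""
    g = np.array([log_gradient(f) for f in frames])
    threshold = g[:n_noise].mean() + rho * g[:n_noise].std()   # Threshold = m_gamma + rho*s_gamma
    labels = []
    for i in range(len(frames)):
        prev_below = i > 0 and g[i - 1] < threshold
        next_below = i < len(frames) - 1 and g[i + 1] < threshold
        if g[i] < threshold:
            labels.append("noise")
        elif prev_below or next_below:
            labels.append("transition")
        else:
            labels.append("speech")
    return labels, threshold
```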
Further, the method for the every frame signal bispectrum of estimation in the step five is as follows:
(1) { x is set(i)(k), k=0,1,2..., M-1 } it is { i } frame voice data, calculate the DFT coefficient of { i } frame:
(2) bispectrum of { i } frame signal is estimated by DFT coefficient:
(3) according to { i } frame the characteristics of, differentiates the type of frame, estimates bispectrum respectively:
A speech frame:
Its bi-spectrum estimation can pass through the weighted average calculation with before and after frames:
Wherein, weighting coefficient meets 2a+b=1, b >=a.
B transition frames:
Transition frames do not need to carry out average computation, directly take the estimated value of the frame
C noise frame:
Wherein, coefficient c is constant, and c < 0.01.
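The published text gives the bispectrum formulas only as figures, so the sketch below uses the standard direct estimate B(k, l) = X(k)·X(l)·X*(k+l) built from the frame's DFT coefficients, a symmetric three-frame weighted average for speech frames consistent with 2a + b = 1, b ≥ a, and a simple attenuation by c < 0.01 for noise frames; these exact expressions, the coefficient values and the function names are assumptions rather than quotations from the patent.

```python
import numpy as np

def frame_bispectrum(x):
    """Per-frame bispectrum estimate from the DFT coefficients
    (B(k,l) = X(k)*X(l)*conj(X(k+l)); the usual 1/M factor is omitted,
    which only rescales the recovered magnitudes uniformly)."""
    X = np.fft.fft(x)
    M = len(x)
    k = np.arange(M)
    K, L = np.meshgrid(k, k, indexing="ij")
    return X[K] * X[L] * np.conj(X[(K + L) % M])

def per_type_bispectrum(bispectra, labels, a=0.25, b=0.5, c=0.005):
    """Combine raw per-frame estimates according to the frame type.

    Assumed rules: speech frames are smoothed as a*B[i-1] + b*B[i] + a*B[i+1]
    (2a + b = 1, b >= a), transition frames keep their own estimate, and
    noise frames are attenuated by the small constant c < 0.01."""
    out, n = [], len(bispectra)
    for i, lab in enumerate(labels):
        if lab == "speech" and 0 < i < n - 1:
            out.append(a * bispectra[i - 1] + b * bispectra[i] + a * bispectra[i + 1])
        elif lab == "noise":
            out.append(c * bispectra[i])
        else:                               # transition frame (or boundary frame)
            out.append(bispectra[i])
    return out
```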
Further, the method used in Step 6 for reconstructing the magnitudes of the N-point DFT coefficients of each frame by least squares is as follows:
Let X(k) and B(k, l) = B((2π/N)k, (2π/N)l) be, respectively, the N-point DFT coefficients of x(n) and its bispectrum. From the definition of the bispectrum:
|B(k, l)| = |X(k)|·|X(l)|·|X(k+l)|   (10)
Therefore log|B(k, l)| can be represented as a linear combination of the log-magnitudes log|X(k)|, which is written in matrix form as b = A·h, where
b is the (N²/16) × 1 vector formed from the log-magnitudes of the bispectrum samples at all frequency points of the non-redundant region,
h is the (N/2) × 1 vector of unknown log-magnitudes log|X(k)|, k = 1, ..., N/2, and
A is an (N²/16) × (N/2) matrix of full column rank, so that AᵀA is invertible and the solution of h in the least-squares sense is
ĥ = (AᵀA)⁻¹·Aᵀ·b.
After ĥ has been solved, the values |X(1)|, |X(2)|, ..., |X(N-1)| are obtained by exponentiating the entries of ĥ and using the symmetry |X(N-k)| = |X(k)| of the DFT magnitudes of a real signal.
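A sketch of this least-squares magnitude reconstruction in Python/NumPy follows; the principal-region limits (k ≥ l ≥ 1, k + l ≤ N/2, which indeed give about N²/16 equations), the log-domain formulation, the small floor inside the logarithm and the handling of the DC bin are assumptions made to obtain a runnable example.

```python
import numpy as np

def reconstruct_magnitudes(B, N):
    """Least-squares recovery of log|X(k)|, k = 1..N/2, from log|B(k,l)| (N assumed even).

    Builds one equation per bispectrum sample in the principal region
    (k >= l >= 1, k + l <= N/2), with ones in columns k, l and k+l, then
    solves the overdetermined system in the least-squares sense."""
    half = N // 2
    rows, rhs = [], []
    for k in range(1, half + 1):
        for l in range(1, k + 1):
            if k + l > half:
                continue
            row = np.zeros(half)
            row[k - 1] += 1.0
            row[l - 1] += 1.0
            row[k + l - 1] += 1.0
            rows.append(row)
            rhs.append(np.log(np.abs(B[k, l]) + 1e-12))
    A = np.vstack(rows)
    b = np.asarray(rhs)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)     # h_hat = argmin ||A h - b||
    mags = np.zeros(N)
    mags[1:half + 1] = np.exp(h)                  # |X(k)|, k = 1..N/2
    mags[half + 1:] = mags[1:half][::-1]          # conjugate symmetry |X(N-k)| = |X(k)|
    return mags                                   # |X(0)| left at zero: frames are zero-mean
```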
The beneficial effects of the present invention are:
For the noise characteristics of radar speech, the present invention exploits the favourable properties of higher-order cumulants, namely that the higher-order cumulant spectra of a Gaussian signal are zero for orders greater than two and that higher-order cumulants can effectively retain the phase information of the signal and the various characteristic features of non-Gaussian signals, in order to selectively suppress Gaussian white noise and coloured noise and achieve effective enhancement of radar speech. This provides a new, targeted method for radar-speech enhancement and at the same time greatly expands the fields of application and the prospects of the radar speech-acquisition method.
Against the defects of conventional speech acquisition, namely insufficient sensing of low-frequency components, susceptibility to ambient noise interference and weak directionality, the present invention exploits the strong low-frequency sensing capability, high sensitivity, high directionality and strong resistance to acoustic interference of the biological radar to improve the quality of the acquired speech signal and to extend conventional speech-detection capability, so that higher-quality speech signals can be obtained in more complex acoustic backgrounds and at greater distances.
Brief description of the drawings
Fig. 1 is the algorithm block diagram of the present invention.
Fig. 2 is the spectrogram of the original speech.
Fig. 3 is the spectrogram of speech enhanced by the traditional spectral-subtraction method.
Fig. 4 is the spectrogram of speech enhanced by the wavelet de-noising method.
Fig. 5 is the spectrogram of speech enhanced by the higher-order-cumulant method.
Specific embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings.
As shown in Fig. 1, the basic principle of the radar speech enhancement method based on higher-order cumulants according to the present invention is as follows: the zero-mean radar speech signal is classified into three classes, transition frames, speech frames and noise frames, by setting a decision threshold; the bispectrum is estimated for each frame type; the magnitudes of the DFT coefficients of each frame signal are then reconstructed from the bispectrum by least squares; and the speech signal is finally reconstructed by combining these magnitudes with the phase spectrum of the original signal.
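Putting the pieces together, the following Python sketch composes the helper functions sketched in the summary section above (frame_signal, analyze_frames, classify_frames, frame_bispectrum, per_type_bispectrum, reconstruct_magnitudes and synthesize) into the processing chain of Fig. 1; it is not standalone and, like those helpers, is an illustrative assumption rather than the patented implementation.

```python
import numpy as np

def enhance_radar_speech(x, frame_len=256):
    """End-to-end sketch of the Fig. 1 processing chain (frame_len assumed even)."""
    frames = frame_signal(x, frame_len)                    # Step 1: framing
    y, phase = analyze_frames(frames)                      # Steps 2-3: zero-mean, phase spectrum
    labels, _ = classify_frames(y)                         # Step 4: frame classification
    raw = [frame_bispectrum(f) for f in y]                 # Step 5: per-frame bispectrum estimate
    bispectra = per_type_bispectrum(raw, labels)           #         per-type combination
    mags = np.array([reconstruct_magnitudes(B, frame_len)  # Step 6: least-squares magnitudes
                     for B in bispectra])
    return synthesize(mags, phase)                         # Step 7: phase-preserving synthesis
```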
A speech enhancement method for improving the speech quality of a biological radar comprises the following steps:
Step 1: acquire radar speech data:
Let {x(1), x(2), ... x(n)} be a group of observed radar-speech samples, and divide the observed samples into K frames, each frame containing M sample points;
Step 2: zero-mean removal:
Subtract from the i-th frame x_i(n) obtained in Step 1 its mean value to obtain the zero-mean signal frame y_i(n);
Step 3: apply an FFT to the zero-mean signal frame y_i(n) obtained in Step 2 to obtain the phase spectrum φ_i(ω) of the noisy speech;
Step 4: classify each frame of the observed samples obtained in Step 1 as a speech frame, a noise frame or a transition frame by setting a decision threshold; the speech frames correspond to the voiced segments of the speech data, the noise frames correspond to the silent segments, and the transition frames are frames lying at the boundary between a voiced segment and a silent segment; the noise frames have weak structure, appear as random fluctuations, and their amplitudes follow a Gaussian distribution;
The method for discriminating each frame of the observed samples by setting a decision threshold is:
1) Assume that the first ten frames of the observed radar-speech samples are noise, and set the decision threshold from the mean m_γ and standard deviation s_γ of the logarithmic gradient of these first ten frames:
Threshold = m_γ + ρ·s_γ   (1)
where:
ρ is a constant; here ρ = 2.3;
2) For each frame of data y_i(n), extract the logarithmic gradient 10·log10|γ_i|, where γ_i = c_3(0,0) = E{y_i*(n)·y_i(n)·y_i(n)};
3) Using the threshold Threshold computed above, discriminate the frame type:
A. if the logarithmic gradient of the frame satisfies 10·log10|γ_i| < Threshold, the frame is a noise frame;
B. if 10·log10|γ_i| ≥ Threshold, but one of the preceding or following frames satisfies 10·log10|γ_k| < Threshold, k = i-1 or i+1, the frame is a transition frame;
C. if 10·log10|γ_i| ≥ Threshold, 10·log10|γ_{i+1}| ≥ Threshold and 10·log10|γ_{i-1}| ≥ Threshold hold simultaneously, the frame is a speech frame.
Step 5: according to the speech-frame, noise-frame and transition-frame classification of Step 4, estimate the bispectrum of each frame;
The method for estimating the bispectrum of each frame signal in Step 5 is as follows:
(1) Let {x^(i)(k), k = 0, 1, 2, ..., M-1} be the i-th frame of speech data, and compute the DFT coefficients of the i-th frame;
(2) estimate the bispectrum of the i-th frame signal from its DFT coefficients;
(3) according to the characteristics of the i-th frame, determine the frame type and estimate the bispectrum for each type separately:
A. speech frame:
its bispectrum estimate may be computed as a weighted average with the preceding and following frames,
where the weighting coefficients satisfy 2a + b = 1, b ≥ a.
B. transition frame:
no averaging is needed for a transition frame; the estimate of the frame itself is taken directly.
C. noise frame:
a coefficient c is applied, where c is a constant and c < 0.01.
Step 6: from the bispectrum of each frame obtained in Step 5, reconstruct the magnitudes of the N-point DFT coefficients of the frame by least squares;
The method used in Step 6 for reconstructing the magnitudes of the N-point DFT coefficients of each frame by least squares is as follows:
Let X(k) and B(k, l) = B((2π/N)k, (2π/N)l) be, respectively, the N-point DFT coefficients of x(n) and its bispectrum. From the definition of the bispectrum:
|B(k, l)| = |X(k)|·|X(l)|·|X(k+l)|   (10)
Therefore log|B(k, l)| can be represented as a linear combination of the log-magnitudes log|X(k)|, written in matrix form as b = A·h, where
b is the (N²/16) × 1 vector formed from the log-magnitudes of the bispectrum samples at all frequency points of the non-redundant region,
h is the (N/2) × 1 vector of unknown log-magnitudes log|X(k)|, k = 1, ..., N/2, and
A is an (N²/16) × (N/2) matrix of full column rank, so that AᵀA is invertible and the solution of h in the least-squares sense is ĥ = (AᵀA)⁻¹·Aᵀ·b.
After ĥ has been solved, the values |X(1)|, |X(2)|, ..., |X(N-1)| are obtained by exponentiating the entries of ĥ and using the symmetry |X(N-k)| = |X(k)| of the DFT magnitudes of a real signal.
Step 7: combine the reconstructed magnitude spectrum X_i(ω) of each frame from Step 6 with the phase spectrum φ_i(ω) from Step 3, apply an inverse FFT, and synthesize the speech signal to obtain the enhanced speech (a sketch of this synthesis step follows). Referring to Fig. 5, the noise diffused throughout the original speech signal has been effectively removed, while the useful components of the original radar speech remain intact. This is mainly because the algorithm exploits the property that the higher-order cumulant spectra of a Gaussian signal are zero for orders greater than two: when the magnitude of each frame is reconstructed from its bispectrum, Gaussian white noise and coloured noise are selectively suppressed, while the phase information of the signal and the various characteristic features of non-Gaussian signals are effectively retained during the enhancement. The radar speech is thus enhanced in a targeted and effective way, and its intelligibility is greatly improved.
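As an illustration, a minimal Python/NumPy sketch of this synthesis step is shown below; it simply concatenates the enhanced frames, whereas a practical implementation might add windowing and overlap-add, which the patent does not specify.

```python
import numpy as np

def synthesize(magnitudes, phases):
    """Step 7: combine reconstructed per-frame magnitudes with the original
    phase spectra and return the time-domain signal via the inverse FFT."""
    spectra = magnitudes * np.exp(1j * phases)   # X_i(omega) = |X_i(omega)| * exp(j*phi_i(omega))
    frames = np.fft.ifft(spectra, axis=1).real   # enhanced zero-mean frames
    return frames.reshape(-1)                    # simple concatenation of the frames
```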

Claims (4)

1. A speech enhancement method for improving the speech quality of a biological radar, characterized by comprising the following steps:
Step 1: acquiring radar speech data:
letting {x(1), x(2), ... x(n)} be a group of observed radar-speech samples, and dividing the observed samples into K frames, each frame containing M sample points;
Step 2: zero-mean removal:
subtracting from the i-th frame x_i(n) obtained in Step 1 its mean value to obtain the zero-mean signal frame y_i(n);
Step 3: applying an FFT to the zero-mean signal frame y_i(n) obtained in Step 2 to obtain the phase spectrum φ_i(ω) of the noisy speech;
Step 4: classifying each frame of the observed samples obtained in Step 1 as a speech frame, a noise frame or a transition frame by setting a decision threshold, wherein the speech frames correspond to the voiced segments of the speech data, the noise frames correspond to the silent segments, the transition frames are frames lying at the boundary between a voiced segment and a silent segment, and the noise frames have weak structure, appear as random fluctuations and have amplitudes that follow a Gaussian distribution;
Step 5: estimating the bispectrum of each frame according to the speech-frame, noise-frame and transition-frame classification of Step 4;
Step 6: reconstructing, from the bispectrum of each frame obtained in Step 5, the magnitudes of the N-point DFT coefficients of the frame by least squares;
Step 7: combining the reconstructed magnitude spectrum X_i(ω) of each frame from Step 6 with the phase spectrum φ_i(ω) from Step 3, applying an inverse FFT, and synthesizing the speech signal to obtain the enhanced speech.
2. The speech enhancement method for improving the speech quality of a biological radar according to claim 1, characterized in that the method used in Step 4 for discriminating each frame of the observed samples by setting a decision threshold is:
1) assuming that the first ten frames of the observed radar-speech samples are noise, and setting the decision threshold from the mean m_γ and standard deviation s_γ of the logarithmic gradient of these first ten frames:
Threshold = m_γ + ρ·s_γ   (1)
where:
ρ is a constant, here set to ρ = 2.3;
2) extracting, for each frame of data y_i(n), the logarithmic gradient 10·log10|γ_i|, where γ_i = c_3(0,0) = E{y_i*(n)·y_i(n)·y_i(n)};
3) discriminating the frame type according to the threshold Threshold computed above:
A. if the logarithmic gradient of the frame satisfies 10·log10|γ_i| < Threshold, the frame is a noise frame;
B. if 10·log10|γ_i| ≥ Threshold, but one of the preceding or following frames satisfies 10·log10|γ_k| < Threshold, k = i-1 or i+1, the frame is a transition frame;
C. if 10·log10|γ_i| ≥ Threshold, 10·log10|γ_{i+1}| ≥ Threshold and 10·log10|γ_{i-1}| ≥ Threshold hold simultaneously, the frame is a speech frame.
3. The speech enhancement method for improving the speech quality of a biological radar according to claim 2, characterized in that the method for estimating the bispectrum of each frame signal in Step 5 is as follows:
(1) letting {x^(i)(k), k = 0, 1, 2, ..., M-1} be the i-th frame of speech data, and computing the DFT coefficients of the i-th frame;
(2) estimating the bispectrum of the i-th frame signal from its DFT coefficients;
(3) determining the type of the i-th frame according to its characteristics and estimating the bispectrum for each type separately:
A. speech frame:
its bispectrum estimate is computed as a weighted average with the preceding and following frames,
where the weighting coefficients satisfy 2a + b = 1, b ≥ a;
B. transition frame:
no averaging is performed; the estimate of the frame itself is taken directly;
C. noise frame:
a coefficient c is applied, where c is a constant and c < 0.01.
4. The speech enhancement method for improving the speech quality of a biological radar according to claim 3, characterized in that the method used in Step 6 for reconstructing the magnitudes of the N-point DFT coefficients of each frame by least squares is as follows:
letting X(k) and B(k, l) = B((2π/N)k, (2π/N)l) be, respectively, the N-point DFT coefficients of x(n) and its bispectrum, it is known from the definition of the bispectrum that
|B(k, l)| = |X(k)|·|X(l)|·|X(k+l)|   (10)
so that log|B(k, l)| can be represented as a linear combination of the log-magnitudes log|X(k)|, written in matrix form as b = A·h, where
b is the (N²/16) × 1 vector formed from the log-magnitudes of the bispectrum samples at all frequency points of the non-redundant region,
h is the (N/2) × 1 vector of unknown log-magnitudes log|X(k)|, k = 1, ..., N/2, and
A is an (N²/16) × (N/2) matrix of full column rank, so that AᵀA is invertible and the least-squares solution is ĥ = (AᵀA)⁻¹·Aᵀ·b;
after ĥ has been solved, the values |X(1)|, |X(2)|, ..., |X(N-1)| are obtained by exponentiating the entries of ĥ and using the symmetry |X(N-k)| = |X(k)| of the DFT magnitudes of a real signal.
CN201811564752.5A 2018-12-20 2018-12-20 Speech enhancement method for improving speech quality of biological radar Active CN109448750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811564752.5A CN109448750B (en) 2018-12-20 2018-12-20 Speech enhancement method for improving speech quality of biological radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811564752.5A CN109448750B (en) 2018-12-20 2018-12-20 Speech enhancement method for improving speech quality of biological radar

Publications (2)

Publication Number Publication Date
CN109448750A true CN109448750A (en) 2019-03-08
CN109448750B CN109448750B (en) 2023-06-23

Family

ID=65558566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811564752.5A Active CN109448750B (en) 2018-12-20 2018-12-20 Speech enhancement method for improving speech quality of biological radar

Country Status (1)

Country Link
CN (1) CN109448750B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6018317A (en) * 1995-06-02 2000-01-25 Trw Inc. Cochannel signal processing system
US20110131260A1 (en) * 2006-06-16 2011-06-02 Bae Systems Information And Electronic Systems Integration Inc. Efficient detection algorithm system for a broad class of signals using higher-order statistics in time as well as frequency domains
CN102937477A (en) * 2012-11-06 2013-02-20 昆山北极光电子科技有限公司 Bi-spectrum analysis method for processing signals
CN103217676A (en) * 2013-05-06 2013-07-24 西安电子科技大学 Radar target identification method under noise background based on bispectrum de-noising
CN103646649A (en) * 2013-12-30 2014-03-19 中国科学院自动化研究所 High-efficiency voice detecting method
CN106845339A (en) * 2016-12-13 2017-06-13 电子科技大学 A kind of mobile phone individual discrimination method based on bispectrum and EMD fusion features
US20180190280A1 (en) * 2016-12-29 2018-07-05 Baidu Online Network Technology (Beijing) Co., Ltd. Voice recognition method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YING TIAN et al.: "Smart radar sensor for speech detection and enhancement", Sensors & Actuators A: Physical *
LYU Jingyi (吕婧一): "Research on higher-order statistics analysis and its applications", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN109448750B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109800700B (en) Underwater acoustic signal target classification and identification method based on deep learning
CN108597505B (en) Voice recognition method and device and terminal equipment
CN108428456A (en) Voice de-noising algorithm
Wang et al. ia-PNCC: Noise Processing Method for Underwater Target Recognition Convolutional Neural Network.
KR101305373B1 (en) Interested audio source cancellation method and voice recognition method thereof
CN109977724B (en) Underwater target classification method
CN105489226A (en) Wiener filtering speech enhancement method for multi-taper spectrum estimation of pickup
CN105609113A (en) Bispectrum weighted spatial correlation matrix-based speech sound source localization method
CN105551501B (en) Harmonic signal fundamental frequency estimation algorithm and device
CN105044478A (en) Transmission line audible noise multi-channel signal extraction method
CN110211596A Cetacean whistle signal detection method based on Mel sub-band spectral entropy
Wang et al. Noise estimation using mean square cross prediction error for speech enhancement
CN107132518B Range-spread target detection method based on sparse representation and time-frequency features
CN103426145A (en) Synthetic aperture sonar speckle noise suppression method based on multiresolution analysis
CN106408532B (en) Synthetic aperture radar SAR image denoising method based on the estimation of shearing wave field parameter
CN109448750A (en) A kind of sound enhancement method improving bioradar voice quality
CN110865375A (en) Underwater target detection method
Bavkar et al. PCA based single channel speech enhancement method for highly noisy environment
TWI749547B (en) Speech enhancement system based on deep learning
Sun et al. Enhancement of Chinese speech based on nonlinear dynamics
Xiao et al. Detection and segmentation of underwater CW-like signals in spectrum image under strong noise background
Wang The improved MFCC speech feature extraction method and its application
Li et al. Noise reduction of ship-radiated noise based on noise-assisted bivariate empirical mode decomposition
Xuhong et al. Speech enhancement using convolution neural network-based spectrogram denoising
Liu A new wavelet threshold denoising algorithm in speech recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant