CN109448755A - Cochlear implant auditory scene recognition method - Google Patents

Cochlear implant auditory scene recognition method

Info

Publication number
CN109448755A
Authority
CN
China
Prior art keywords
scene
ubm
parameter
recognition methods
auditory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811276573.1A
Other languages
Chinese (zh)
Inventor
林和平
许长建
樊伟
王澄
刘根芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lishengte Medical Science & Tech Co Ltd
Original Assignee
Lishengte Medical Science & Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lishengte Medical Science & Tech Co Ltd
Priority to CN201811276573.1A
Publication of CN109448755A
Legal status: Pending


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/45 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of analysis window

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses an auditory scene recognition method for a cochlear implant, comprising the following steps: (A) establish a standard scene-training UBM; (B) divide the input signal into frames and apply a window; (C) classify the pre-processed signal frame by frame; (D) extract feature vectors from the scene-noise frames identified by VAD processing; (E) process the extracted features in a GMM-UBM system to obtain likelihood scores and finally identify the scene type. By establishing this series of models, the method can recognize different auditory scenes and provide an indication to subsequent signal-processing modules of the speech processor, such as speech enhancement and the coding strategy, so that the signal processing of the speech processor better matches the auditory scene. This improves the clarity and intelligibility of speech for the patient in noisy environments, also improves listening in music scenes, and further improves the quality of life of cochlear implant recipients.

Description

Cochlear implant auditory scene recognition method
Technical field
The present invention relates to auditory scene recognition methods, and more particularly to an auditory scene recognition method for a cochlear implant.
Background art
The cochlear implant is internationally recognized as the only effective means of restoring hearing to patients with bilateral severe-to-profound sensorineural hearing loss. An existing cochlear implant works as follows: a microphone picks up sound and converts it into an electrical signal, which is digitized and encoded according to a certain strategy; the encoded signal is transmitted into the body by a transmitting coil worn behind the ear; after the receiving coil of the implant senses the signal, a decoding chip decodes it and drives the stimulating electrodes to deliver current, stimulating the auditory nerve and producing a hearing sensation. Because of the variety of listening environments, the captured sound inevitably contains environmental noise, so the signal must be optimized algorithmically. However, given the diversity of environments, a single fixed algorithm can produce output that deviates from the actual conditions and fails to achieve the best hearing outcome. An auditory scene recognition method is therefore needed, so that different scenes can use different optimization algorithms and the best hearing outcome can be achieved.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the invention is to provide an auditory scene recognition method for a cochlear implant that can recognize different auditory scenes.
To achieve the above object, the present invention provides an auditory scene recognition method for a cochlear implant comprising the following steps: (A) a model-training program module collects training signals from various scenes and forms a standard scene-training UBM by the EM algorithm; (B) a pre-processing program module divides the input signal into frames and applies a window; (C) a VAD processing program module classifies each pre-processed frame, identifying whether the frame is a scene-noise frame or a speech frame; (D) a feature-extraction program module extracts feature vectors from the VAD-processed scene-noise frames; (E) a scene-recognition program module feeds one part of the extracted features into the UBM for the related computation and another part into the GMM computation, combines the related data in the UBM with the GMM data to form a new GMM, and then compares the data in the UBM with the data in the new GMM to obtain likelihood scores, finally identifying the scene type.
In step (B), the windowing uses a Hamming window or a Hanning window.
Further, the Hamming window is w(n) = 0.54 − 0.46 cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1, where the window length N = 256 and the frame shift is 128.
In step (C), the identification uses a VAD detection method based on short-time energy and short-time zero-crossing rate.
In step (D), the feature-vector extraction uses MFCC or FBank features.
Further, the MFCC parameters of one frame of scene-noise signal are calculated as follows: compute the discrete spectrum {S(ω)} of the signal by the discrete Fourier transform; divide the frequency axis into D = 30 equal parts on the Bark scale and compute the centre and edge frequency of each band, where the Bark scale Ω is related to the frequency f by Ω(f) = 13 arctan(0.00076 f) + 3.5 arctan[(f/7500)^2]; using D triangular band-pass filters whose centre and edge frequencies are aligned with the corresponding Bark bands, convolve each filter with the discrete spectrum {S(ω)} to obtain the log-energy output E(d) (d = 1, 2, ..., D) of each band; apply the discrete cosine transform C(n) = Σ_{d=1}^{D} E(d) cos[πn(d − 0.5)/D] to the log-energy outputs and take the first 16 dimensions as the feature parameters.
In step (E), in the GMM-UBM system the scene-noise model is obtained by modifying certain parameters of the UBM through Bayesian adaptation. The adaptation algorithm has two steps. The first step is the expectation step: compute the statistics of the scene training data under each single Gaussian component of the UBM. In the second step, the parameters of the scene-noise model are obtained by weighting the new statistics with the UBM parameters. The weighting is such that, in the final scene-noise model, components observed in more scene training data have their parameters adapted toward the statistics of the test-scene noise itself, while components observed in less data keep parameters close to the UBM.
Further, given the UBM and a training vector sequence X = {x_1, x_2, ..., x_T}, first compute the probability that each feature vector belongs to each Gaussian component of the UBM; for the i-th Gaussian component, Pr(i | x_t) = w_i p_i(x_t) / Σ_{j=1}^{M} w_j p_j(x_t). Then use Pr(i | x_t) and x_t to compute the statistics for updating the weight, mean and variance: n_i = Σ_{t=1}^{T} Pr(i | x_t), E_i(x) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t, E_i(x^2) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t^2. Finally, these new statistics obtained from the scene training data are used to update the model parameters of the UBM: ŵ_i = [a_i n_i / T + (1 − a_i) w_i] γ, μ̂_i = a_i E_i(x) + (1 − a_i) μ_i, σ̂_i^2 = a_i E_i(x^2) + (1 − a_i)(σ_i^2 + μ_i^2) − μ̂_i^2. The adaptation parameter a_i controls the balance between the new and the old parameters, and the scale factor γ renormalizes the weights so that all weights again sum to 1 after adaptation. For the i-th Gaussian component, the adaptation parameter a_i is defined as a_i = n_i / (n_i + r), where r is a fixed value that controls the weight of the UBM parameters in the adaptation; r = 16 is set.
By establishing this series of models, the cochlear implant auditory scene recognition method of the present invention can recognize different auditory scenes and provide an indication to subsequent signal-processing modules of the speech processor, such as speech enhancement and the coding strategy, so that the signal processing of the speech processor better matches the auditory scene and outputs stimulation signals more consistent with the actual auditory scene. This improves the clarity and intelligibility of speech for the patient in noisy environments, also improves listening in music scenes, and further improves the quality of life of cochlear implant recipients.
The concept, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawing, so that the objects, features and effects of the invention can be fully understood.
Brief description of the drawings
Fig. 1 is a flow diagram of the cochlear implant auditory scene recognition method of the present invention.
Specific embodiment
The present invention provides an auditory scene recognition method for a cochlear implant, for recognizing different auditory scenes such as a classroom, a street, a concert hall, a shopping mall, a railway station or a food market.
The cochlear implant auditory scene recognition method includes five steps: model training, pre-processing, VAD (Voice Activity Detection) processing, feature extraction, and scene recognition.
Model training: the model-training program module collects training signals from various scenes (i.e. scene sound signals) to build a scene library, and forms the standard scene-training UBM (Universal Background Model) by the EM (Expectation-Maximization) algorithm.
EM algorithm:
Feature vector set O = (o_1, o_2, ..., o_T);
Model λ = {ω_m, μ_m, σ_m^2}, m = 1, 2, ..., M;
The GMM (Gaussian Mixture Model) likelihood to be maximized: p(O | λ) = Π_{t=1}^{T} Σ_{m=1}^{M} ω_m N(o_t; μ_m, σ_m^2);
The weight of the m-th Gaussian: ω_m = (1/T) Σ_{t=1}^{T} Pr(m | o_t, λ);
The mean of the m-th Gaussian: μ_m = Σ_t Pr(m | o_t, λ) o_t / Σ_t Pr(m | o_t, λ);
The variance of the m-th Gaussian: σ_m^2 = Σ_t Pr(m | o_t, λ) o_t^2 / Σ_t Pr(m | o_t, λ) − μ_m^2.
Pre-processing: the pre-processing program module divides the input signal into frames and applies a window.
A system sampling frequency of 16 kHz is taken as an example.
The windowing uses a Hamming window with window length N = 256 and a frame shift of half the window length, i.e. 128 samples.
Hamming window: w(n) = 0.54 − 0.46 cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1.
Other window functions such as the Hanning window may also be used, and the frame length and frame shift may be changed according to system needs.
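The framing and windowing described above can be sketched as follows. This is an illustrative sketch: the one-second random signal is a stand-in for real microphone input.

```python
import numpy as np

N, HOP = 256, 128                        # window length and frame shift
n = np.arange(N)
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

signal = np.random.default_rng(0).normal(size=16000)   # 1 s stand-in at 16 kHz
starts = range(0, len(signal) - N + 1, HOP)            # 50 % frame overlap
frames = np.stack([signal[s:s + N] * hamming for s in starts])
print(frames.shape)  # one windowed 256-sample row per frame
```

With a 128-sample hop, consecutive frames overlap by half a window, which is the usual trade-off between time resolution and spectral smoothing.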
VAD processing: the VAD processing program module classifies each pre-processed frame, identifying whether the frame is a scene-noise frame or a speech frame, where the identification uses a VAD detection method based on short-time energy and short-time zero-crossing rate.
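A minimal frame classifier of this kind might look as follows. The energy and zero-crossing thresholds here are illustrative assumptions, not values from the patent; a deployed detector would calibrate them against the noise floor.

```python
import numpy as np

def frame_features(frame):
    energy = float(np.sum(frame ** 2))                         # short-time energy
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))  # zero-crossing rate
    return energy, zcr

def is_speech(frame, e_high=0.5, e_low=0.01, zcr_thr=0.3):
    energy, zcr = frame_features(frame)
    if energy > e_high:                  # loud frame: treat as (voiced) speech
        return True
    # moderate energy with many zero crossings: possible unvoiced speech
    return bool(energy > e_low and zcr > zcr_thr)

t = np.arange(256)
loud = np.sin(2 * np.pi * 200 * t / 16000)                 # tone-like "speech" frame
quiet = 0.001 * np.random.default_rng(0).normal(size=256)  # near-silent noise frame
print(is_speech(loud), is_speech(quiet))
```

Frames rejected by this test are the scene-noise frames that flow into the feature-extraction step below.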
Feature extraction: the feature-extraction program module extracts feature vectors from the VAD-processed scene-noise frames, where the extraction uses MFCC (Mel-Frequency Cepstrum Coefficient) or FBank (Mel-scale Filter Bank) features.
The MFCC parameters of one frame of scene-noise signal are calculated as follows:
(1) Compute the discrete spectrum {S(ω) | ω = 1, 2, ..., N} of the signal by the discrete Fourier transform;
(2) Divide the frequency axis into D = 30 equal parts on the Bark scale and compute the centre and edge frequency of each band; the Bark scale Ω is related to the frequency f by Ω(f) = 13 arctan(0.00076 f) + 3.5 arctan[(f/7500)^2];
(3) Using D triangular band-pass filters, convolve each filter with the discrete spectrum {S(ω)} to obtain the log-energy output E(d) (d = 1, 2, ..., D) of each band, where the centre and edge frequencies of the triangular filters are aligned with the corresponding Bark bands;
(4) Apply the discrete cosine transform to the log-energy outputs: C(n) = Σ_{d=1}^{D} E(d) cos[πn(d − 0.5)/D], n = 1, 2, ..., 16.
Take the first 16 dimensions as the feature parameters.
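The four steps above can be sketched as follows. For brevity the sketch uses simple linearly spaced band edges rather than the Bark-scale edges of step (2), so it shows the shape of the pipeline (power spectrum, triangular band energies, log, DCT) rather than the exact filterbank.

```python
import numpy as np

def frame_to_cepstrum(frame, n_bands=30, n_ceps=16, eps=1e-10):
    spec = np.abs(np.fft.rfft(frame)) ** 2        # discrete power spectrum {S(w)}
    # placeholder band edges (linear, NOT Bark): n_bands overlapping triangles
    edges = np.linspace(0, len(spec) - 1, n_bands + 2).astype(int)
    log_e = np.empty(n_bands)
    for d in range(n_bands):
        lo, c, hi = edges[d], edges[d + 1], edges[d + 2]
        tri = np.concatenate([np.linspace(0, 1, c - lo, endpoint=False),
                              np.linspace(1, 0, hi - c)])    # triangular filter
        log_e[d] = np.log(np.dot(tri, spec[lo:hi]) + eps)    # log band energy E(d)
    # DCT of the log energies: C(n) = sum_d E(d) cos(pi * n * (d - 0.5) / D)
    d_idx = np.arange(1, n_bands + 1)
    return np.array([np.sum(log_e * np.cos(np.pi * n * (d_idx - 0.5) / n_bands))
                     for n in range(1, n_ceps + 1)])

frame = np.random.default_rng(0).normal(size=256)  # one windowed stand-in frame
ceps = frame_to_cepstrum(frame)
print(ceps.shape)
```

Substituting the Bark-derived centre and edge frequencies for the `edges` array would turn this sketch into the filterbank the text describes.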
Scene recognition: the scene-recognition program module feeds one part of the extracted features into the UBM for the related computation and another part into the GMM computation, combines the related data in the UBM with the GMM data to form a new GMM, and then compares the data in the UBM with the data in the new GMM to obtain likelihood scores, finally identifying the scene type. In the GMM-UBM system, the scene-noise model is obtained by modifying certain parameters of the UBM through Bayesian adaptation. The adaptation algorithm has two steps. The first step is the expectation step: compute the statistics of the scene training data under each single Gaussian component of the UBM. In the second step, the parameters of the scene-noise model are obtained by weighting the new statistics with the UBM parameters. The weighting is such that, in the final scene-noise model, components observed in more scene training data have their parameters adapted toward the statistics of the test-scene noise itself, while components observed in less data keep parameters close to the UBM.
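The comparison of likelihood scores can be sketched with toy one-dimensional models: score a test segment against each adapted scene model and the UBM, and pick the scene with the highest average log-likelihood ratio. All model parameters and scene names below are made up for illustration.

```python
import numpy as np

def gmm_loglik(X, w, mu, var):
    # mean per-frame log-likelihood of X under a diagonal-covariance GMM
    diff2 = (X[:, None, :] - mu[None, :, :]) ** 2
    comp = (-0.5 * (np.log(2 * np.pi * var).sum(axis=1)[None, :]
                    + (diff2 / var[None, :, :]).sum(axis=2))
            + np.log(w)[None, :])
    m = comp.max(axis=1, keepdims=True)            # log-sum-exp over components
    return float((m.ravel() + np.log(np.exp(comp - m).sum(axis=1))).mean())

w = np.array([1.0])                                # single-component toy models
var = np.ones((1, 1))
ubm_mu = np.array([[0.0]])
scene_models = {"street": np.array([[-2.0]]), "music": np.array([[2.0]])}

X = np.random.default_rng(0).normal(2.0, 1.0, (200, 1))   # test segment features
scores = {name: gmm_loglik(X, w, mu, var) - gmm_loglik(X, w, ubm_mu, var)
          for name, mu in scene_models.items()}
best = max(scores, key=scores.get)
print(best)
```

Scoring against the ratio to the UBM, rather than the raw scene likelihood, normalizes away variation that all scenes share.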
The adaptation method is as follows. Given the UBM and a training vector sequence X = {x_1, x_2, ..., x_T}, first compute the probability that each feature vector belongs to each Gaussian component of the UBM. For the i-th Gaussian component, compute
Pr(i | x_t) = w_i p_i(x_t) / Σ_{j=1}^{M} w_j p_j(x_t).
Then use Pr(i | x_t) and x_t to compute the statistics for updating the weight, mean and variance:
n_i = Σ_{t=1}^{T} Pr(i | x_t), E_i(x) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t, E_i(x^2) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t^2.
Finally, these new statistics obtained from the scene training data are used to update the model parameters of the UBM:
ŵ_i = [a_i n_i / T + (1 − a_i) w_i] γ, μ̂_i = a_i E_i(x) + (1 − a_i) μ_i, σ̂_i^2 = a_i E_i(x^2) + (1 − a_i)(σ_i^2 + μ_i^2) − μ̂_i^2.
The adaptation parameter a_i controls the balance between the new and the old parameters, and the scale factor γ renormalizes the weights so that all weights again sum to 1 after adaptation.
For the i-th Gaussian component, the adaptation parameter a_i is defined as
a_i = n_i / (n_i + r),
where r is a fixed value that controls the weight of the UBM parameters in the adaptation; r = 16 is set. Because a_i depends on the data, the adaptation is specific to each Gaussian component: if the soft count n_i of a component is small, then a_i → 0 and the adapted scene-noise parameters stay close to the UBM parameters; if n_i is large, then a_i → 1 and the scene-noise parameters are determined mainly by the scene training data.
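A sketch of the mean-only adaptation follows, using the data-dependent coefficient a_i = n_i/(n_i + r) with r = 16 as in the text. Weight and variance updates are omitted for brevity, and the toy two-component UBM is illustrative.

```python
import numpy as np

def map_adapt_means(X, w, mu, var, r=16.0):
    # posterior Pr(i | x_t) of each UBM component for each frame
    diff2 = (X[:, None, :] - mu[None, :, :]) ** 2
    logp = (-0.5 * (np.log(2 * np.pi * var).sum(axis=1)[None, :]
                    + (diff2 / var[None, :, :]).sum(axis=2))
            + np.log(w)[None, :])
    logp -= logp.max(axis=1, keepdims=True)
    post = np.exp(logp)
    post /= post.sum(axis=1, keepdims=True)
    # sufficient statistics n_i and E_i(x)
    n = post.sum(axis=0)
    Ex = (post.T @ X) / np.maximum(n, 1e-10)[:, None]
    a = n / (n + r)                                  # adaptation coefficient a_i
    return a[:, None] * Ex + (1 - a)[:, None] * mu   # adapted means

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])                 # toy UBM: components at 0 and 5
mu = np.array([[0.0], [5.0]])
var = np.ones((2, 1))
scene = rng.normal(0.8, 1.0, (500, 1))   # scene data near the first component
new_mu = map_adapt_means(scene, w, mu, var)
print(np.round(new_mu.ravel(), 2))
```

The first component, which explains most of the scene data, moves almost all the way to the data mean; the second, with a small soft count, stays closer to its UBM value, exactly the behaviour the text attributes to a_i.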
The preferred embodiment of the present invention has been described in detail above. It should be appreciated that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, all technical solutions that can be obtained by a person skilled in the art through logical analysis, reasoning or limited experimentation on the basis of the prior art under the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (8)

1. An auditory scene recognition method for a cochlear implant, comprising the following steps: (A) a model-training program module collects training signals from various scenes and forms a standard scene-training UBM by the EM algorithm; (B) a pre-processing program module divides the input signal into frames and applies a window; (C) a VAD processing program module classifies each pre-processed frame, identifying whether the frame is a scene-noise frame or a speech frame; (D) a feature-extraction program module extracts feature vectors from the VAD-processed scene-noise frames; (E) a scene-recognition program module feeds one part of the extracted features into the UBM for the related computation and another part into the GMM computation, combines the related data in the UBM with the GMM data to form a new GMM, and then compares the data in the UBM with the data in the new GMM to obtain likelihood scores, finally identifying the scene type.
2. The auditory scene recognition method for a cochlear implant according to claim 1, wherein in step (B) the windowing uses a Hamming window or a Hanning window.
3. The auditory scene recognition method for a cochlear implant according to claim 2, wherein the Hamming window is w(n) = 0.54 − 0.46 cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1, the window length N = 256, and the frame shift is 128.
4. The auditory scene recognition method for a cochlear implant according to claim 1, wherein in step (C) the identification uses a VAD detection method based on short-time energy and short-time zero-crossing rate.
5. The auditory scene recognition method for a cochlear implant according to claim 1, wherein in step (D) the feature-vector extraction uses MFCC or FBank features.
6. The auditory scene recognition method for a cochlear implant according to claim 5, wherein the MFCC parameters of one frame of scene-noise signal are calculated as follows: compute the discrete spectrum {S(ω)} of the signal by the discrete Fourier transform; divide the frequency axis into D = 30 equal parts on the Bark scale and compute the centre and edge frequency of each band, where the Bark scale Ω is related to the frequency f by Ω(f) = 13 arctan(0.00076 f) + 3.5 arctan[(f/7500)^2]; using D triangular band-pass filters whose centre and edge frequencies are aligned with the corresponding Bark bands, convolve each filter with the discrete spectrum {S(ω)} to obtain the log-energy output E(d) (d = 1, 2, ..., D) of each band; apply the discrete cosine transform C(n) = Σ_{d=1}^{D} E(d) cos[πn(d − 0.5)/D] to the log-energy outputs and take the first 16 dimensions as the feature parameters.
7. The auditory scene recognition method for a cochlear implant according to claim 1, wherein in step (E), in the GMM-UBM system, the scene-noise model is obtained by modifying certain parameters of the UBM through Bayesian adaptation; the adaptation algorithm has two steps: the first step is the expectation step, computing the statistics of the scene training data under each single Gaussian component of the UBM; in the second step, the parameters of the scene-noise model are obtained by weighting the new statistics with the UBM parameters, the weighting being such that, in the final scene-noise model, components observed in more scene training data have their parameters adapted toward the statistics of the test-scene noise itself, while components observed in less data keep parameters close to the UBM.
8. The auditory scene recognition method for a cochlear implant according to claim 7, wherein, given the UBM and a training vector sequence X = {x_1, x_2, ..., x_T}, the probability that each feature vector belongs to each Gaussian component of the UBM is computed first; for the i-th Gaussian component, Pr(i | x_t) = w_i p_i(x_t) / Σ_{j=1}^{M} w_j p_j(x_t); then Pr(i | x_t) and x_t are used to compute the statistics for updating the weight, mean and variance: n_i = Σ_{t=1}^{T} Pr(i | x_t), E_i(x) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t, E_i(x^2) = (1/n_i) Σ_{t=1}^{T} Pr(i | x_t) x_t^2; finally, these new statistics obtained from the scene training data are used to update the model parameters of the UBM: ŵ_i = [a_i n_i / T + (1 − a_i) w_i] γ, μ̂_i = a_i E_i(x) + (1 − a_i) μ_i, σ̂_i^2 = a_i E_i(x^2) + (1 − a_i)(σ_i^2 + μ_i^2) − μ̂_i^2; the adaptation parameter a_i controls the balance between the new and the old parameters, and the scale factor γ renormalizes the weights so that all weights again sum to 1 after adaptation; for the i-th Gaussian component, the adaptation parameter is defined as a_i = n_i / (n_i + r), where r is a fixed value that controls the weight of the UBM parameters in the adaptation; r = 16 is set.
CN201811276573.1A 2018-10-30 2018-10-30 Cochlear implant auditory scene recognition method Pending CN109448755A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811276573.1A CN109448755A (en) 2018-10-30 2018-10-30 Artificial cochlea's auditory scene recognition methods


Publications (1)

Publication Number Publication Date
CN109448755A true CN109448755A (en) 2019-03-08

Family

ID=65548788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811276573.1A Pending CN109448755A (en) 2018-10-30 2018-10-30 Artificial cochlea's auditory scene recognition methods

Country Status (1)

Country Link
CN (1) CN109448755A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859768A (en) * 2019-03-12 2019-06-07 上海力声特医学科技有限公司 Artificial cochlea's sound enhancement method
CN109893340A (en) * 2019-03-25 2019-06-18 深圳信息职业技术学院 A kind of processing method and processing device of the voice signal of cochlear implant
CN109979477A (en) * 2019-03-12 2019-07-05 上海力声特医学科技有限公司 The sound processing method of artificial cochlea
CN112820318A (en) * 2020-12-31 2021-05-18 西安合谱声学科技有限公司 Impact sound model establishment and impact sound detection method and system based on GMM-UBM
CN113038344A (en) * 2019-12-09 2021-06-25 三星电子株式会社 Electronic device and control method thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241699A (en) * 2008-03-14 2008-08-13 北京交通大学 A speaker identification system for remote Chinese teaching
CN106251861A (en) * 2016-08-05 2016-12-21 重庆大学 A kind of abnormal sound in public places detection method based on scene modeling
CN106941005A (en) * 2017-02-24 2017-07-11 华南理工大学 A kind of vocal cords method for detecting abnormality based on speech acoustics feature
CN106952643A (en) * 2017-02-24 2017-07-14 华南理工大学 A kind of sound pick-up outfit clustering method based on Gaussian mean super vector and spectral clustering
CN107103901A (en) * 2017-04-03 2017-08-29 浙江诺尔康神经电子科技股份有限公司 Artificial cochlea's sound scenery identifying system and method
DE102016214745A1 (en) * 2016-08-09 2018-02-15 Carl Von Ossietzky Universität Oldenburg Method for stimulating an implanted electrode arrangement of a hearing prosthesis
CN108231067A (en) * 2018-01-13 2018-06-29 福州大学 Sound scenery recognition methods based on convolutional neural networks and random forest classification
CN108305616A (en) * 2018-01-16 2018-07-20 国家计算机网络与信息安全管理中心 A kind of audio scene recognition method and device based on long feature extraction in short-term


Similar Documents

Publication Publication Date Title
CN109448755A (en) Artificial cochlea's auditory scene recognition methods
CN112509564B (en) End-to-end voice recognition method based on connection time sequence classification and self-attention mechanism
WO2019232829A1 (en) Voiceprint recognition method and apparatus, computer device and storage medium
CN108447495B (en) Deep learning voice enhancement method based on comprehensive feature set
US8842853B2 (en) Pitch perception in an auditory prosthesis
Stern et al. Hearing is believing: Biologically inspired methods for robust automatic speech recognition
CN106782565A (en) A kind of vocal print feature recognition methods and system
CN110428842A (en) Speech model training method, device, equipment and computer readable storage medium
JP2022529641A (en) Speech processing methods, devices, electronic devices and computer programs
CN109328380B (en) Recursive noise power estimation with noise model adaptation
CN105513605A (en) Voice enhancement system and method for cellphone microphone
WO2020087716A1 (en) Auditory scene recognition method for artificial cochlea
CN109121057A (en) A kind of method and its system of intelligence hearing aid
CN102509547A (en) Method and system for voiceprint recognition based on vector quantization based
CN108922541A (en) Multidimensional characteristic parameter method for recognizing sound-groove based on DTW and GMM model
CN110111769B (en) Electronic cochlea control method and device, readable storage medium and electronic cochlea
CN109859768A (en) Artificial cochlea's sound enhancement method
CN104778948B (en) A kind of anti-noise audio recognition method based on bending cepstrum feature
CN112151056A (en) Intelligent cochlear sound processing system and method with customization
CN109243466A (en) A kind of vocal print authentication training method and system
ES2849124A1 (en) Artificial cochlea ambient sound sensing method and system
CN112017658A (en) Operation control system based on intelligent human-computer interaction
CN111489763A (en) Adaptive method for speaker recognition in complex environment based on GMM model
Gandhiraj et al. Auditory-based wavelet packet filterbank for speech recognition using neural network
Zezario et al. Speech enhancement with zero-shot model selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190308