CN109272996A - Noise reduction method and system - Google Patents

Noise reduction method and system

Info

Publication number
CN109272996A
Authority
CN
China
Prior art keywords
signal
voiceprint
noise-reduced signal
noise reduction
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811332084.3A
Other languages
Chinese (zh)
Other versions
CN109272996B (en)
Inventor
庄宏东
聂云辉
欧汉标
戴小劲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Changjia Electronic Co ltd
Original Assignee
Guangzhou Changjia Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Changjia Electronic Co ltd filed Critical Guangzhou Changjia Electronic Co ltd
Priority to CN201811332084.3A
Publication of CN109272996A
Application granted
Publication of CN109272996B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208: Noise filtering
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/20: Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0272: Voice signal separating

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)

Abstract

The present invention provides a noise reduction method and system. The method includes: receiving, at a cloud communication module, a client number signal and a first-stage noise-reduced signal sent by a first client communication module; receiving the client number signal at a matching module, which points to the corresponding voiceprint feature library in a cloud storage; reading, at a processor module, the voiceprint feature library pointed to by the matching module and reconstructing a voiceprint filter; receiving the first-stage noise-reduced signal at the voiceprint filter and outputting a second-stage noise-reduced signal to a synthesis module; the synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting a third-stage noise-reduced signal to the cloud communication module; and sending the third-stage noise-reduced signal to a second client communication module based on the cloud communication module. The noise reduction method and system provided by the invention filter out third-party speech by means of voiceprint recognition; using them, a high-quality, high-definition, low-noise communication signal can be obtained.

Description

Noise reduction method and system
Technical field
The present invention relates to the field of acoustic signal processing, and in particular to a noise reduction method and system.
Background art
In the field of speech communication, existing noise reduction technology mainly filters out the background sound in a call and cannot effectively filter out the speech of non-users. If another person close to the device speaks during pauses in the user's speech, third-party speech from outside the two communicating parties enters the call, degrading speech quality and harming both information privacy and information exchange.
Summary of the invention
To overcome the defects of existing noise reduction technology, the present invention provides a noise reduction method and system that filter out third-party speech by means of voiceprint recognition. Using this noise reduction method and system, a high-quality, high-definition, low-noise communication signal can be obtained.
Correspondingly, the present invention provides a noise reduction method comprising the following steps:
receiving, at a cloud communication module, a client number signal and a first-stage noise-reduced signal sent by a first client communication module;
receiving the client number signal at a matching module, which points to the corresponding voiceprint feature library in a cloud storage;
reading, at a processor module, the voiceprint feature library pointed to by the matching module and reconstructing a voiceprint filter;
receiving the first-stage noise-reduced signal at the voiceprint filter and outputting a second-stage noise-reduced signal to a synthesis module;
the synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting a third-stage noise-reduced signal to the cloud communication module;
sending the third-stage noise-reduced signal to a second client communication module based on the cloud communication module.
The first-stage noise-reduced signal is generated by the following steps:
a first-stage noise reduction processor of the first client receives the main signal picked up by the main microphone of the first client and the secondary signal picked up by the secondary microphone of the first client, and outputs the first-stage noise-reduced signal to the first client communication module.
The client number signal is bound to the hardware code of the first client communication module;
or the client number signal is bound to the login account of the first client.
The voiceprint feature library is partitioned according to the client number signal; each voiceprint feature library contains the common-word voiceprints, high-frequency voiceprints and trained voiceprints corresponding to that client number signal.
The common-word voiceprints are voiceprints extracted in advance from the user corresponding to the client number signal reading the common words in the "Table of General Standard Chinese Characters".
The high-frequency voiceprints are voiceprints whose frequency of occurrence, counted over multiple first-stage noise-reduced signals, exceeds a set threshold.
The trained voiceprints are voiceprints obtained by training on the common-word voiceprints.
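The partitioned library described above can be pictured as a per-client structure keyed by the client number. The following is a minimal Python sketch of such a library; the class and field names (VoiceprintLibrary, common_word, high_frequency, trained) are illustrative assumptions, not names defined in the patent.

from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class VoiceprintPartition:
    """Voiceprints stored for one client number (illustrative structure)."""
    common_word: List[np.ndarray] = field(default_factory=list)     # pre-recorded common-word voiceprints
    high_frequency: List[np.ndarray] = field(default_factory=list)  # voiceprints whose occurrence count exceeded the threshold
    trained: List[np.ndarray] = field(default_factory=list)         # voiceprints derived from the common-word set


class VoiceprintLibrary:
    """Cloud-side voiceprint feature library partitioned by client number."""

    def __init__(self) -> None:
        self._partitions: Dict[str, VoiceprintPartition] = {}

    def partition_for(self, client_number: str) -> VoiceprintPartition:
        """Return (creating if needed) the partition the matching module points to."""
        return self._partitions.setdefault(client_number, VoiceprintPartition())

    def all_voiceprints(self, client_number: str) -> List[np.ndarray]:
        """Flatten the three voiceprint groups for use by the voiceprint filter."""
        p = self.partition_for(client_number)
        return p.common_word + p.high_frequency + p.trained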
Receiving the first-stage noise-reduced signal at the voiceprint filter and outputting the second-stage noise-reduced signal to the synthesis module includes the following steps:
performing traversal matching on the first-stage noise-reduced signal against the common-word voiceprints, high-frequency voiceprints and trained voiceprints, and generating from the matching results a second-stage noise-reduced signal aligned with the time axis of the first-stage noise-reduced signal, the second-stage noise-reduced signal taking the value 1 at time points where a match is found and 0 at all other time points;
sending the second-stage noise-reduced signal to the synthesis module.
The synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting the third-stage noise-reduced signal to the cloud communication module includes the following steps:
a synthesis selector of the synthesis module reads the first-stage noise-reduced signal in time order and uses the second-stage noise-reduced signal at the corresponding time as the selection criterion;
following the time axis, when the second-stage noise-reduced signal is 1, the synthesis selector outputs the first-stage noise-reduced signal to a first multiplier and outputs a zero signal to a second multiplier; when the second-stage noise-reduced signal is 0, the first multiplier outputs 0 and the second multiplier outputs the first-stage noise-reduced signal;
a synthesis adder of the synthesis module superimposes the output signals of the first multiplier and the second multiplier to obtain the third-stage noise-reduced signal, which is sent to the cloud communication module.
Correspondingly, the present invention provides a noise reduction system for implementing any of the noise reduction methods described above.
The present invention provides a noise reduction method and system in which, through voiceprint recognition and comparison, second-stage and third-stage noise reduction is applied to the first-stage noise-reduced signal, so that the finally generated third-stage noise-reduced signal retains only the voice information of the specific user. Besides filtering environmental noise, sounds other than the user's voice are also removed, producing a high-definition user speech signal; the method and system thus have good practical value.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the noise reduction method according to an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the noise reduction system according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Fig. 1 shows the noise-reduction method flow chart of the embodiment of the present invention.The embodiment of the invention provides a kind of noise-reduction method, It is mainly used for noise reduction field of conversing, the voice signal of the first client is sent to the second client after cloud server, In, a noise reduction is carried out to voice signal in the first client, server carries out secondary noise reduction and three times noise reduction beyond the clouds, so After generate final de-noising signal to the second client, specifically, noise-reduction method provided in an embodiment of the present invention the following steps are included:
S101: receiving, at the cloud communication module, the client number signal and the first-stage noise-reduced signal sent by the first client communication module;
The first-stage noise-reduced signal is generated at the first client and serves mainly to perform a preliminary removal of background sound, reducing its influence on the speech signal. Specifically, its generation is typically based on a main microphone and a secondary microphone installed in the first client.
In general, the main microphone and the secondary microphone are kept at a certain distance from each other and are isolated on the circuit board. Relative to the distance to the user's mouth, the distances of the main and secondary microphones differ considerably, so the speech loudness they pick up differs considerably; relative to the distance to ambient noise sources, their distances differ little, so the background-sound intensities they pick up differ little. Therefore, although the sound picked up by the main microphone contains both speech and background sound, and the secondary microphone likewise picks up speech and background sound, the difference in speech intensity between the two microphones is large while the difference in background-sound intensity is small. After subtractive superposition, the nearly equal background components cancel each other out, leaving an intensity close to 0, whereas the speech components, whose intensities differ greatly, are attenuated somewhat but still retain clear sound characteristics.
Specifically, assume that the first client is equipped with two condenser microphones of identical performance as main and secondary microphones. The main microphone is usually mounted on the front of the first client, close to the user's mouth; the secondary microphone is usually mounted on the back of the first client, away from the main microphone, and the two microphones are isolated by the main board inside the first client.
During a normal voice call, the user's mouth is close to the main microphone and produces a relatively large main signal V_a; at the same time, the secondary microphone picks up the surrounding sound and produces a secondary signal V_b. The main signal V_a and the secondary signal V_b are input to the first-stage noise reduction processor, which subtracts the two signals to produce the superposed signal, i.e. the first-stage noise-reduced signal V_t = V_a - V_b. Since the speech is attenuated somewhat by the superposition, the first-stage noise-reduced signal may be amplified by a certain factor. Specifically, the first-stage noise reduction processor is a differential amplifier.
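As a rough illustration of this first-stage noise reduction, the sketch below subtracts the secondary-microphone signal from the main-microphone signal and applies a fixed make-up gain; the function name and the gain value are assumptions for illustration, not details specified by the patent.

import numpy as np


def first_stage_noise_reduction(main_signal: np.ndarray,
                                sub_signal: np.ndarray,
                                makeup_gain: float = 2.0) -> np.ndarray:
    """Differential first-stage noise reduction: V_t = gain * (V_a - V_b).

    main_signal: samples from the main microphone (speech dominant).
    sub_signal:  samples from the secondary microphone (background dominant).
    makeup_gain: compensates the attenuation the subtraction causes in the
                 speech component (an assumed example value).
    """
    if main_signal.shape != sub_signal.shape:
        raise ValueError("microphone signals must have the same length")
    return makeup_gain * (main_signal - sub_signal)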
After the first-stage noise-reduced signal is generated, it is output to the first client communication module for sending. To identify the user, the client number corresponding to the user's identity is sent at the same time. Specifically, the client number signal is unique to each user; it is bound either to the hardware code of the first client communication module or to the login account of the first client.
Correspondingly, the signal received by the cloud communication module of the cloud server comprises the client number signal and the first-stage noise-reduced signal.
S102: receiving the client number signal at the matching module, which points to the corresponding voiceprint feature library in the cloud storage;
The voiceprint feature library contains the voiceprint information of multiple characters or words belonging to the user. This information can be used to recognize the user's voice, so that the user's voice can be distinguished within a segment of audio containing several voices and thereby identified.
To improve the recognition accuracy of the voiceprint feature library, the voiceprint feature library of this embodiment of the present invention contains common-word voiceprints, high-frequency voiceprints and trained voiceprints.
Common-word voiceprints are the voiceprints produced when the user reads the common words aloud normally; their number is related to the number of common characters. Specifically, the common words can be delimited according to the common characters in the "Table of General Standard Chinese Characters"; the user pre-records the speech of these common words, from which the corresponding common-word voiceprints are formed.
High-frequency voiceprints are voiceprints that the user often produces during calls but that do not belong to the common-word voiceprints; they may include breath sounds, rare characters, industry-specific terms and the like. Specifically, high-frequency voiceprints are derived from the counts of identical voiceprints captured by the cloud server over multiple rounds of processing the first-stage noise-reduced signal. When a first-stage noise-reduced signal is processed, sounds other than the common-word voiceprints are segmented according to a certain segmentation rule and stored as unknown voiceprints; each stored unknown voiceprint has an attribute value recording its frequency of occurrence. Over multiple rounds of processing, identical unknown voiceprints may recur, and each recurrence is recorded in this frequency attribute. When the frequency attribute of an unknown voiceprint exceeds a preset threshold, the unknown voiceprint is regarded as a high-frequency voiceprint, i.e. a voiceprint specific to the corresponding user.
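A minimal sketch of this counting logic is given below, assuming some similarity test between voiceprint segments is already available; the is_same_voiceprint comparison and the threshold value are placeholders, not details fixed by the patent.

from typing import Callable, List

import numpy as np


def update_high_frequency_voiceprints(unknown_store: List[dict],
                                      new_segments: List[np.ndarray],
                                      is_same_voiceprint: Callable[[np.ndarray, np.ndarray], bool],
                                      threshold: int = 5) -> List[np.ndarray]:
    """Count recurring unknown voiceprints and promote frequent ones.

    unknown_store: list of {"voiceprint": array, "count": int} records kept in the cloud.
    new_segments:  unknown segments cut from the latest first-stage noise-reduced signal.
    threshold:     assumed promotion threshold (the patent only says "a preset threshold").
    Returns the voiceprints whose counts have reached the threshold.
    """
    for seg in new_segments:
        for record in unknown_store:
            if is_same_voiceprint(record["voiceprint"], seg):
                record["count"] += 1
                break
        else:
            # no existing record matched: store the segment as a new unknown voiceprint
            unknown_store.append({"voiceprint": seg, "count": 1})

    return [r["voiceprint"] for r in unknown_store if r["count"] >= threshold]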
Trained voiceprints are voiceprints derived from the common-word voiceprints. Each user has their own voiceprint characteristics; by extracting the voiceprint features of the user's voiceprints, i.e. by extracting from the user's speech the characteristic parameters that can characterize the user's vocal-organ structure or acquired speaking behaviour, the user's voice can be identified. Specifically, the main characteristic parameters include speech spectrum parameters, linear prediction parameters and wavelet parameters.
Speech spectrum parameters mainly capture the user's vocal-organ characteristics. For example, from the particular structures of the glottis, vocal tract and nasal cavity, the short-time spectral features of the user's speech are extracted, i.e. the pitch spectrum and its envelope, which characterize the excitation source of the user's voice and the intrinsic features of the vocal tract and can reflect differences between users' vocal organs; the variation of the short-time spectrum, or of its amplitude, over time reflects to some extent the speaker's pronunciation habits. Therefore, the application of speech spectrum parameters in voiceprint recognition is mainly embodied in the parameterization and pattern recognition of the pitch spectrum and its envelope, the energy of pitch frames, and the occurrence frequency and trajectory of pitch formants.
Linear prediction parameters approximate the current speech sample from several past speech samples or from an existing mathematical model, and the resulting approximation parameters are used to estimate speech features. A small number of parameters can effectively represent the waveform and spectral characteristics of speech, with high computational efficiency and flexible application. Common linear-prediction parameter extraction methods currently used in voiceprint recognition include linear prediction cepstral coefficients (LPCC), line spectral pairs (LSP), autocorrelation and log-area ratios, mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP).
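As one concrete example of such a feature, MFCCs can be computed with an off-the-shelf library; the snippet below uses librosa, which is not mentioned in the patent and is only one possible choice for illustration.

import librosa
import numpy as np


def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return an (n_mfcc, n_frames) MFCC matrix for one recording.

    A feature like this (or LPCC, LSP, PLP, ...) could serve as the
    characteristic-parameter representation of a voiceprint.
    """
    y, sr = librosa.load(path, sr=None)  # keep the file's native sample rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)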
Wavelet parameters are obtained by analysing the speech signal with the wavelet transform, yielding wavelet coefficients that represent speech features. The wavelet transform offers variable resolution, no stationarity requirement and compatibility between the time and frequency domains, and can effectively characterize the individual information of the user. The wavelet transform can be used to simulate the auditory perception system, to denoise the speech signal and to make voiced/unvoiced decisions. Because of the local character of the wavelet transform, a high spectral resolution can still be achieved even with very short frame lengths. Introducing the wavelet transform into the MFCC feature parameters can improve recognition in consonant regions.
In addition, if the characteristic parameters extracted by different methods have little correlation with each other, they reflect different aspects of the speech signal. Combinations of different characteristic parameters can therefore also be used to obtain a speech-feature parameter model better suited to pattern-matching recognition decisions.
Based on independent characteristic parameters or on such a speech-feature parameter model, trained voiceprints that do not belong to the common-word voiceprints (derived common-word voiceprints, non-common-word voiceprints, etc.) are then derived from the common-word voiceprints and stored. This enlarges the number of voiceprints in the voiceprint feature library, benefits screening within the library, retains more of the user's voice features and improves screening accuracy.
S103: reading, at the processor module, the voiceprint feature library pointed to by the matching module and reconstructing the voiceprint filter;
The processor module reads the matching module; since the matching module points to a voiceprint feature library, what the processor module actually reads is the voiceprint feature library corresponding to the client number signal of the first client. By reading this voiceprint feature library, the voiceprint filter is reconstructed: the common-word voiceprints, high-frequency voiceprints and trained voiceprints in the voiceprint filter are in effect replaced with the voiceprints corresponding to the client number signal of the user's first client.
S104: receiving the first-stage noise-reduced signal at the voiceprint filter and outputting the second-stage noise-reduced signal to the synthesis module;
Specifically, the voiceprint filter in effect uses the user's common-word voiceprints, high-frequency voiceprints and trained voiceprints to perform a traversal comparison over the first-stage noise-reduced signal and then picks out the times at which the user's voice occurs.
Specifically, by traversal, the common-word voiceprints, high-frequency voiceprints and trained voiceprints in the voiceprint filter are searched for one by one in the first-stage noise-reduced signal, and the time-domain positions in the first-stage noise-reduced signal at which matches occur are recorded. After the traversal is complete, a time axis corresponding to the first-stage noise-reduced signal is constructed, on which matched periods are marked with the value 1 and unmatched signal segments are marked with the value 0.
The second-stage noise-reduced signal is essentially a time axis that marks when the user's voice occurs.
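The sketch below builds such a 0/1 time-axis mask from a list of matched time intervals; the sample-level representation and the matched_intervals input are illustrative assumptions about how the matches might be recorded.

from typing import List, Tuple

import numpy as np


def build_second_stage_mask(num_samples: int,
                            matched_intervals: List[Tuple[int, int]]) -> np.ndarray:
    """Second-stage noise-reduced signal: 1 where a voiceprint matched, else 0.

    matched_intervals: (start_sample, end_sample) pairs recorded during the
    traversal of the first-stage noise-reduced signal.
    """
    mask = np.zeros(num_samples, dtype=np.int8)
    for start, end in matched_intervals:
        mask[start:end] = 1
    return mask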
In a specific implementation, when the capacity of the voiceprint feature library is large enough and the screening of speech is accurate enough, the periods identified as speech could simply be amplitude-amplified and output directly, without being synthesized again with the first-stage noise-reduced signal, thereby saving time. However, to avoid dropping important acoustic information, this embodiment of the present invention uses the second-stage noise-reduced signal to mark the speech periods and processes the first-stage noise-reduced signal using this mark, so that more acoustic information is retained and the finally generated third-stage noise-reduced signal is more coherent and clear.
S105: the synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting the third-stage noise-reduced signal to the cloud communication module;
Specifically, the synthesis module processes the first-stage noise-reduced signal according to the second-stage noise-reduced signal, i.e. according to the time axis. The first-stage noise-reduced signal has two parameters, time t and amplitude U_t; since it is an irregular signal, there is no fixed correlation between t and U_t.
When the third-stage noise-reduced signal is generated, at a time point t1 where the second-stage noise-reduced signal is 1, the amplitude U_t1 at that point can be processed according to U_t1' = k * U_t1, which becomes the amplitude of point t1 in the third-stage noise-reduced signal; at a time point t2 where the second-stage noise-reduced signal is 0, the amplitude U_t2 can be processed according to U_t2' = U_t2 / k, which becomes the amplitude of point t2 in the third-stage noise-reduced signal.
Specifically, the synthesis module contains a synthesis selector, a first multiplier, a second multiplier and a synthesis adder. The input of the synthesis selector receives the first-stage noise-reduced signal, and its outputs are connected to the first multiplier and the second multiplier respectively; the inputs of the synthesis adder are connected to the outputs of the first multiplier and the second multiplier respectively, and its output is connected to the cloud communication module.
The selection criterion of the synthesis selector changes dynamically with the value of the second-stage noise-reduced signal; the first multiplier and the second multiplier perform the computations U_t1' = k * U_t1 and U_t2' = U_t2 / k respectively; the synthesis adder combines the sounds processed by the first multiplier and the second multiplier.
In a specific implementation, the synthesis selector reads the first-stage noise-reduced signal in time order and uses the second-stage noise-reduced signal at the corresponding time as the selection criterion. Following the time axis, when the second-stage noise-reduced signal is 1, the sound at that time in the first-stage noise-reduced signal is speech: the synthesis selector outputs the first-stage noise-reduced signal to the first multiplier and a zero signal to the second multiplier. When the second-stage noise-reduced signal is 0, the sound at that time in the first-stage noise-reduced signal is not speech: the first multiplier outputs 0 and the second multiplier outputs the first-stage noise-reduced signal.
Since the first multiplier and the second multiplier compute U_t1' = k * U_t1 and U_t2' = U_t2 / k respectively, the first multiplier amplifies speech while the second multiplier attenuates non-speech, which helps amplify speech and reduce the influence of background noise. Moreover, since the first multiplier and the second multiplier share the same time axis, the third-stage noise-reduced signal is obtained simply by superimposing their output signals with the synthesis adder.
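Putting the selector, the two multipliers and the adder together gives the following vectorized sketch of the third-stage synthesis; the gain factor k is an assumed value, and this equivalent formulation is for illustration only, not the patent's circuit.

import numpy as np


def third_stage_synthesis(first_stage: np.ndarray,
                          second_stage_mask: np.ndarray,
                          k: float = 2.0) -> np.ndarray:
    """Amplify speech samples by k and attenuate non-speech samples by 1/k.

    first_stage:       the first-stage noise-reduced signal U_t.
    second_stage_mask: the 0/1 second-stage signal on the same time axis.
    k:                 assumed gain factor (k > 1).
    """
    speech_branch = second_stage_mask * first_stage * k         # first multiplier: U' = k * U where mask == 1
    noise_branch = (1 - second_stage_mask) * first_stage / k    # second multiplier: U' = U / k where mask == 0
    return speech_branch + noise_branch                         # synthesis adder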
S106: sending the third-stage noise-reduced signal to the second client communication module based on the cloud communication module.
The cloud communication module sends the third-stage noise-reduced signal to the second client, which thus receives a clear speech signal that has undergone three stages of noise reduction.
Correspondingly, an embodiment of the present invention also provides a noise reduction system comprising a first client, a cloud server and a second client.
The first client comprises a main microphone, a secondary microphone, a first-stage noise reduction processor and a first client communication module; the two inputs of the first-stage noise reduction processor are connected to the main microphone and the secondary microphone respectively, and its output is connected to the first client communication module.
The cloud server comprises a cloud communication module, a matching module, a voiceprint feature library, a processor module, a voiceprint filter, a synthesis selector, a first multiplier, a second multiplier and a synthesis adder. The input of the cloud communication module is connected to the first client communication module, and its outputs are connected to the matching module, the voiceprint filter and the synthesis selector respectively; the matching module points to a location in the voiceprint feature library and reads the corresponding voiceprint features; the processor module is connected to the matching module and the voiceprint filter respectively; the synthesis selector is controlled by the voiceprint filter, and its outputs are connected to the first multiplier and the second multiplier respectively; the inputs of the synthesis adder are connected to the outputs of the first multiplier and the second multiplier respectively, and its output is connected to the cloud communication module.
The second client comprises a second client communication module connected to the cloud communication module.
The embodiment of the present invention provides a noise reduction method and system in which, through voiceprint recognition and comparison, second-stage and third-stage noise reduction is applied to the first-stage noise-reduced signal, so that the finally generated third-stage noise-reduced signal retains only the voice information of the specific user. Besides filtering environmental noise, sounds other than the user's voice are also removed, producing a high-definition user speech signal; the method and system thus have good practical value.
The noise reduction method and system provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. At the same time, for those skilled in the art, there will be changes in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A noise reduction method, characterized by comprising the following steps:
receiving, at a cloud communication module, a client number signal and a first-stage noise-reduced signal sent by a first client communication module;
receiving the client number signal at a matching module, which points to the corresponding voiceprint feature library in a cloud storage;
reading, at a processor module, the voiceprint feature library pointed to by the matching module and reconstructing a voiceprint filter;
receiving the first-stage noise-reduced signal at the voiceprint filter and outputting a second-stage noise-reduced signal to a synthesis module;
the synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting a third-stage noise-reduced signal to the cloud communication module;
sending the third-stage noise-reduced signal to a second client communication module based on the cloud communication module.
2. The noise reduction method of claim 1, characterized in that the first-stage noise-reduced signal is generated by the following steps:
a first-stage noise reduction processor of the first client receives the main signal picked up by the main microphone of the first client and the secondary signal picked up by the secondary microphone of the first client, and outputs the first-stage noise-reduced signal to the first client communication module.
3. The noise reduction method of claim 1, characterized in that the client number signal is bound to the hardware code of the first client communication module;
or the client number signal is bound to the login account of the first client.
4. The noise reduction method of claim 1, characterized in that the voiceprint feature library is partitioned according to the client number signal, each voiceprint feature library containing the common-word voiceprints, high-frequency voiceprints and trained voiceprints corresponding to the client number signal.
5. The noise reduction method of claim 4, characterized in that the common-word voiceprints are voiceprints extracted in advance from the user corresponding to the client number signal reading the common words in the "Table of General Standard Chinese Characters".
6. The noise reduction method of claim 4, characterized in that the high-frequency voiceprints are voiceprints whose frequency of occurrence, counted over multiple first-stage noise-reduced signals, exceeds a set threshold.
7. The noise reduction method of claim 4, characterized in that the trained voiceprints are voiceprints obtained by training on the common-word voiceprints.
8. The noise reduction method of claim 4, characterized in that receiving the first-stage noise-reduced signal at the voiceprint filter and outputting the second-stage noise-reduced signal to the synthesis module comprises the following steps:
performing traversal matching on the first-stage noise-reduced signal against the common-word voiceprints, high-frequency voiceprints and trained voiceprints, and generating from the matching results a second-stage noise-reduced signal aligned with the time axis of the first-stage noise-reduced signal, the second-stage noise-reduced signal taking the value 1 at time points where a match is found and 0 at all other time points;
sending the second-stage noise-reduced signal to the synthesis module.
9. The noise reduction method of claim 8, characterized in that the synthesis module receiving the second-stage noise-reduced signal and the first-stage noise-reduced signal and outputting the third-stage noise-reduced signal to the cloud communication module comprises the following steps:
a synthesis selector of the synthesis module reads the first-stage noise-reduced signal in time order and uses the second-stage noise-reduced signal at the corresponding time as the selection criterion;
following the time axis, when the second-stage noise-reduced signal is 1, the synthesis selector outputs the first-stage noise-reduced signal to a first multiplier and outputs a zero signal to a second multiplier; when the second-stage noise-reduced signal is 0, the first multiplier outputs 0 and the second multiplier outputs the first-stage noise-reduced signal;
a synthesis adder of the synthesis module superimposes the output signals of the first multiplier and the second multiplier to obtain the third-stage noise-reduced signal, which is sent to the cloud communication module.
10. A noise reduction system, characterized by being configured to implement the noise reduction method of any one of claims 1 to 9.
CN201811332084.3A 2018-11-09 2018-11-09 Noise reduction method and system Active CN109272996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811332084.3A CN109272996B (en) 2018-11-09 2018-11-09 Noise reduction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811332084.3A CN109272996B (en) 2018-11-09 2018-11-09 Noise reduction method and system

Publications (2)

Publication Number Publication Date
CN109272996A (en) 2019-01-25
CN109272996B (en) 2021-11-30

Family

ID=65192499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811332084.3A Active CN109272996B (en) 2018-11-09 2018-11-09 Noise reduction method and system

Country Status (1)

Country Link
CN (1) CN109272996B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753657A (en) * 2009-12-23 2010-06-23 中兴通讯股份有限公司 Method and device for reducing call noise
CN103366758A (en) * 2012-03-31 2013-10-23 多玩娱乐信息技术(北京)有限公司 Method and device for reducing noises of voice of mobile communication equipment
CN103514884A (en) * 2012-06-26 2014-01-15 华为终端有限公司 Communication voice denoising method and terminal
CN103971696A (en) * 2013-01-30 2014-08-06 华为终端有限公司 Method, device and terminal equipment for processing voice
CN103456305A (en) * 2013-09-16 2013-12-18 东莞宇龙通信科技有限公司 Terminal and speech processing method based on multiple sound collecting units
CN104811559A (en) * 2015-05-05 2015-07-29 上海青橙实业有限公司 Noise reduction method, communication method and mobile terminal
US20170076713A1 (en) * 2015-09-14 2017-03-16 International Business Machines Corporation Cognitive computing enabled smarter conferencing
CN107094196A (en) * 2017-04-21 2017-08-25 维沃移动通信有限公司 A kind of method and mobile terminal of de-noising of conversing
CN107172255A (en) * 2017-07-21 2017-09-15 广东欧珀移动通信有限公司 Voice signal self-adapting regulation method, device, mobile terminal and storage medium
CN107979790A (en) * 2017-11-28 2018-05-01 上海与德科技有限公司 One kind call noise-reduction method, device, equipment and medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106448649A (en) * 2016-09-26 2017-02-22 郑州云海信息技术有限公司 Centralized control method, device, and system for multiple electronic noise reduction devices
CN112634924A (en) * 2020-12-14 2021-04-09 深圳市沃特沃德股份有限公司 Noise filtering method and device based on voice call and computer equipment
CN112634924B (en) * 2020-12-14 2024-01-09 深圳市沃特沃德信息有限公司 Noise filtering method and device based on voice call and computer equipment

Also Published As

Publication number Publication date
CN109272996B (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN106486131B (en) A kind of method and device of speech de-noising
Wu et al. A reverberation-time-aware approach to speech dereverberation based on deep neural networks
Liu et al. Bone-conducted speech enhancement using deep denoising autoencoder
KR20230043250A (en) Synthesis of speech from text in a voice of a target speaker using neural networks
Yegnanarayana et al. Epoch-based analysis of speech signals
CN109215665A (en) A kind of method for recognizing sound-groove based on 3D convolutional neural networks
JP2003255993A (en) System, method, and program for speech recognition, and system, method, and program for speech synthesis
KR20010102549A (en) Speaker recognition
CN106653048B (en) Single channel sound separation method based on voice model
CN111489763B (en) GMM model-based speaker recognition self-adaption method in complex environment
Mubeen et al. Combining spectral features of standard and throat microphones for speaker identification
TW200421262A (en) Speech model training method applied in speech recognition
CN113744715A (en) Vocoder speech synthesis method, device, computer equipment and storage medium
Rao et al. Robust speaker recognition on mobile devices
Li et al. μ-law SGAN for generating spectra with more details in speech enhancement
CN113782032B (en) Voiceprint recognition method and related device
CN109272996A (en) A kind of noise-reduction method and system
JP4381404B2 (en) Speech synthesis system, speech synthesis method, speech synthesis program
Gao Audio deepfake detection based on differences in human and machine generated speech
Wolf Channel selection and reverberation-robust automatic speech recognition
JP3916834B2 (en) Extraction method of fundamental period or fundamental frequency of periodic waveform with added noise
CN107039046B (en) Voice sound effect mode detection method based on feature fusion
JP6003352B2 (en) Data generation apparatus and data generation method
Kajita et al. Speech analysis and speech recognition using subbandautocorrelation analysis
Selouani et al. Auditory-based acoustic distinctive features and spectral cues for robust automatic speech recognition in low-snr car environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant