CN1567431A - Method and system for identifying status of speaker - Google Patents

Method and system for identifying status of speaker

Info

Publication number
CN1567431A
CN1567431A (also published as CNA031415113A, CN03141511A)
Authority
CN
China
Prior art keywords
voice
speaker
training
module
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA031415113A
Other languages
Chinese (zh)
Other versions
CN1308911C (en)
Inventor
吴田平 (Wu Tianping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youlang Information Science and Technology Co., Ltd., Shanghai
Original Assignee
SHANGHAI YEURON INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI YEURON INFORMATION TECHNOLOGY Co Ltd
Priority to CNB031415113A
Publication of CN1567431A
Application granted
Publication of CN1308911C
Anticipated expiration
Legal status: Expired - Fee Related

Landscapes

  • Machine Translation (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A speaker identity recognition method and system. The system consists of a voice receiving device, a voice acquisition module, a voice editing and preprocessing module, a speaker training and recognition module, and a background database. Its features are: the voice receiving device receives the voice signal of the person to be identified; the voice acquisition module forms the received voice into voice files and stores them in order; the voice editing and preprocessing module processes and analyzes the voice files and outputs the micro-feature parameters of the voice; and the recognition module identifies the speaker using the voiceprint template generated by training, a neural network algorithm, and the voice micro-feature parameters produced by the voice preprocessing chip.

Description

Speaker identity recognition method and system
Technical field:
The present invention relates to speech recognition technology, in particular to a speech recognition system, and more particularly to a speaker identity recognition method.
Background art:
Speaker identification is a non-contact recognition technology whose applications include banking, securities, police and judicial work, security services, certificate anti-counterfeiting, information consulting, and other fields.
The core of current speaker identity recognition technology is built on the Hidden Markov Model (HMM), a statistical model of stochastic processes. In essence it is a highly refined pattern matching method, a method based on probabilistic statistics. This is by no means the method the human brain uses to perceive and understand the external environment, including voices and visual patterns. In application, traditional speaker identity recognition technology suffers from the following serious methodological defects:
1. The two-party conversation problem
Conventional technology must extract features from speech of a relatively fixed length and then train on and recognize such feature sequences. This differs from human perception (the brain perceives a speaker instantly from the current speech input) and is also inconvenient in practice. For example, when two people are in conversation, the target object cannot be perceived.
2. The learnability problem
Like the two-party conversation problem, this is not handled, because conventional technology must extract features from speech of a relatively fixed length and then train on such feature sequences to improve the recognition rate. In fact, because of the dynamic, complex, and variable nature of voice signals, the phonetic features of a segment of finite length cannot accurately characterize a speaker's personal characteristics.
3. Low recognition rate and low resolution
The recognition rate is the probability of accurately recognizing the target object, and resolution is the ability to separate target objects from non-target objects. Obviously, when resolution is high the recognition rate is generally high as well. But the recognition rate and resolution of conventional technology are both very low, for two reasons. First, the features extracted by traditional methods are not only few in number but also rigid and inflexible, giving low robustness. Second, in a recognition model based on probabilistic statistics the differences between the outputs are very small, making high resolution difficult to reach and the recognition rate correspondingly low; in particular, when the recognition space expands from the closed set used in training to an open set, the false-recognition rate becomes very high.
Summary of the invention:
The technical problem to be solved by the present invention is as follows. The core of current speaker identity recognition technology is built on the Hidden Markov Model, a statistical model of stochastic processes; in essence it is a highly refined pattern matching method based on probabilistic statistics, which is by no means the method the human brain uses to perceive and understand the external environment, including voices and visual patterns. In application, traditional speaker identity recognition technology suffers from serious defects: 1. the two-party conversation problem; 2. the learnability problem; 3. low recognition rate and resolution. To solve these problems of the prior art, the present invention provides a speaker identity recognition method and a system implementing the method. The speaker identity recognition method and system consist of a voice receiving device, a voice acquisition module, a voice editing and preprocessing module, a speaker training and recognition module, and a background database. The voice receiving device receives the voice signal of the person to be identified and sends it to the voice acquisition module. The voice acquisition module, which consists of a high-speed data collection machine, forms the received voice into voice files and stores them in order for subsequent processing by the voice editing and preprocessing module. The voice editing and preprocessing module consists of a voice editor and a voice signal preprocessing chip: the voice editor processes the voice files and outputs the edited voice; the voice signal preprocessing chip performs speech analysis of the voice signal on the voice files, outputs the micro-feature parameters of the voice, and passes the voice information on to the recognition module. The recognition module consists of a voiceprint training engine and a voiceprint recognition engine: the voiceprint training engine receives the results of the voice editor and the voice preprocessing chip and trains on the speech samples to form the speaker's exclusive voiceprint code; the voiceprint recognition engine identifies the speaker using the voiceprint template generated by training, a neural network algorithm, and the speaker's voice micro-feature parameters produced by the voice preprocessing chip. The background database provides data for the voice receiving device, the voice acquisition module, the voice editing and preprocessing module, and the speaker training and recognition module. Concretely, the voice receiving device may be a microphone, a telephone, or a high-speed data collection machine. The voice editor inspects, edits, splits, and converts voice files. The voice signal preprocessing chip performs digitization, pre-emphasis, windowing, and framing analysis on the voice files. The voice receiving device acquires speech data in real time, stores it as voice stream files, and simultaneously records the related information of each call record in text form
for later use by the recognition module. The voice receiving device, voice acquisition module, voice editing and preprocessing module, speaker training and recognition module, and background database system cooperate through a configuration file and by fetching call-record voice files from a shared directory. The voice editor supports millisecond-precision editing; it can convert the sampling frequency, number of channels, and sampling resolution of speech data; it supports special-effect editing such as inversion, reversal, and silencing, and can also generate silence; and it can split files. The voiceprint training engine trains on the speech samples of the object and of non-objects: it uses them to partition the multidimensional spectral space so that the spectral space occupied by the object's speech samples is mapped to the object output while the spectral space occupied by non-object speech samples is mapped to the non-object output, and it performs feedback adjustment according to the voice training results. The voiceprint recognition engine uses the spectral features of the speaker's voice to asynchronously excite the outputs of all objects to be identified; only the output of the target object is excited, while the outputs of all non-target objects are suppressed. The voiceprint recognition engine uses a multi-level clustering neural network to cluster the fuzzy dynamic set of voice signal features, and a single-layer perceptron network to convert the cluster excitation groups to the speaker, mapping the excitation groups to the speaker output.
Compared with the prior art, the effect of the present invention is positive and obvious. The speaker identity recognition method of the present invention borrows from, or imitates, the way the human nervous system perceives voices and speakers: it "perceives" the corresponding speaker from an all-around perspective rather than comparing a few preset parameters. Through perception by an artificial neural system, the speaker identity recognition technology of the present invention can comprehensively evaluate a dynamic, complex spectral distribution trajectory and thereby map it to the trained object output. The biggest advantage of this method is learnability: like a human learning process, recognition performance can be improved by continually adding samples. This point is extremely important. In summary, the present invention features biomimicry, incremental training, learnability, recognition within two-party conversation, strong resolution and recognition rate, strong robustness, fast recognition, non-speech signal filtering, and other characteristics.
The purpose, features, and advantages of the present invention are elaborated below through embodiments with reference to the accompanying drawings.
Description of drawings:
Fig. 1 is a functional block diagram of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 2 is a schematic diagram of the logical relations between the modules of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 3 is a schematic diagram of the implementation of the voice acquisition module of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 4 is a schematic diagram of the voiceprint training principle of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 5 is a schematic flow chart of the voiceprint training of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 6 is a schematic diagram of the recognition principle of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 7 is a schematic diagram of the recognition technique of a preferred embodiment of the speaker identity recognition method of the present invention.
Fig. 8 is the overall flow chart of voiceprint training and recognition of a preferred embodiment of the speaker identity recognition method of the present invention.
Embodiment:
As shown in Fig. 1 and Fig. 2, in the speaker identity recognition method of the present invention, the system consists of a voice receiving device 1, a voice acquisition module 2, a voice editing and preprocessing module 3, a speaker training and recognition module 4, and a background database. It is characterized in that the voice receiving device 1 receives the voice signal of the person to be identified and sends it to the voice acquisition module 2. The voice acquisition module 2 consists of a high-speed data collection machine 21; it forms the received voice into voice files and stores them in order for subsequent processing by the voice editing and preprocessing module 3. The voice editing and preprocessing module 3 consists of a voice editor 31 and a voice signal preprocessing chip 32: the voice editor 31 processes the voice files and outputs the edited voice; the voice signal preprocessing chip 32 performs speech analysis of the voice signal on the voice files, outputs the micro-feature parameters of the voice, and passes the voice information on to the recognition module 4. The recognition module 4 consists of a voiceprint training engine 41 and a voiceprint recognition engine 42: the voiceprint training engine 41 receives the results of the voice editor 31 and the voice preprocessing chip 32 and trains on the speech samples to form the speaker's exclusive voiceprint code; the voiceprint recognition engine 42 identifies the speaker using the voiceprint template generated by training, a neural network algorithm, and the speaker's voice micro-feature parameters produced by the voice preprocessing chip.
The working principle and implementation of the present invention are described below with reference to Fig. 3, Fig. 4, Fig. 5, Fig. 6, Fig. 7, and Fig. 8.
The voice sources in the voice receiving module fall into two kinds. One is a general voice receiving device, such as a microphone, which passes the received voice stream directly to the other parts of the system, such as the voice editing and preprocessing module. The other is the high-speed data collection machine (HDC), which decodes in hardware: through signaling analysis, each HDC stores the speech data of every telephone line as voice stream files and simultaneously records the related information of each call record in text form, for later use by the speaker identification engine. The preferred embodiment of the speaker identification system uses 9 HDC machines acquiring new call records in real time simultaneously; the number of HDC machines can of course be increased or decreased as needed, but only one computer performs recognition.
The speech recognition system, the voice call-record input system, and the background database system cooperate through a configuration file and by fetching call-record voice files from a shared directory. The configuration file is a plain text file: each line represents an off-hook or on-hook signal and records the other related information of that off-hook or on-hook event, such as start time, end time, file name, and file storage path, so this configuration file may also be called the related-information file.
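For illustration only, two lines of such a related-information file might look like the following; the field order, delimiter, and file names are assumptions for this sketch, since the patent does not fix an exact layout:

```text
OFFHOOK  2003-07-10 09:15:32  hdc3_000124.raw  /share/hdc3/20030710/
ONHOOK   2003-07-10 09:18:05  hdc3_000124.raw  /share/hdc3/20030710/
```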
The voice editing and preprocessing module comprises two sub-modules, the voice editor and the voice preprocessing chip; see Fig. 2 for this module and its relations to the other modules. The voice editor mainly edits, splits, and converts the original voice files; the files it edits become training samples, used for the voice preprocessing that precedes speaker voiceprint training. The voice preprocessing chip performs the speech analysis and voiceprint feature extraction that precede speaker voiceprint training and recognition; its voice source is a training sample or a collected voice file, and its output is the voiceprint features used for speaker voiceprint training or recognition.
The voice editor and the voice preprocessing chip are described in detail below.
I. The voice editor
The voice editor is software for voice editing and processing; it can inspect, edit, split, and convert voice, among other operations. The program supports three formats:
1. wav format. Mono and stereo are supported, at all sampling frequencies the sound card can reach, with 8-bit and 16-bit resolution.
2. raw format. This is the A-law format.
3. rav format. This is an internal format supported by the voice editor, formed by prepending header information to A-law data; its characteristic is that it carries header information while keeping the compactness of the raw file.
Besides the general operations above, it has the following specific functions:
1. Millisecond-precision editing.
2. Conversion of the sampling frequency, number of channels, and sampling resolution of speech data.
3. Recording and playback, including special playback modes such as looping, fast-forward, and rewind.
4. Special-effect editing such as inversion, reversal, and silencing; silence can also be generated.
5. Splitting files singly or in batches; when splitting, either the number of pieces or the duration of each piece can be entered.
6. Converting A-law files, singly or in batches, into wav files (decompressed) or rav files (not decompressed), with a selectable representation.
After processing, an object's original voice can be added to the training sample set as a training sample. The system only requires that the root directory of the training sample set be specified; all samples of the training sample set may be located in the root directory or in any subdirectory beneath it. For ease of maintenance, the speech samples of each object can be kept in a separate subdirectory, so that adding or deleting an object's speech samples only requires copying in or removing the corresponding subdirectory. The root directory and each subdirectory can be named arbitrarily. The directories are set up as follows:
1. Building and maintaining the original positive sample library
Every training object should have an independent original positive sample library, likewise maintained by hand by the user. The files in it are all RAW files in the original A-law coding; their file names are the raw file names generated by the machine (HDC), and the voice files receive no processing whatsoever.
The samples in the original positive sample library come from (1) the initial original positive samples and (2) newly added original positive samples. The initial original positive samples are called "seed" original samples: "seed" samples that have not been edited by the voice editor, obtained through channels outside the system's functions. Newly added original positive samples are obtained after system recognition followed by manual confirmation.
The purposes of keeping original positive samples are: to let the system automatically update the standard abundance and thereby determine the recognition threshold; and to record the source of every positive sample, making later checks of positive sample correctness convenient.
2. Building and maintaining the public negative sample library
When many object speech samples exist in the system, samples from different objects can serve as negative samples for one another; but when only a few objects exist in the system, extra negative samples are needed; these are the public negative samples. The public negative sample library is built before system training and contains 30 to 100 negative samples, each of standard sample length (30 seconds by default). It should contain the signals commonly seen by the system, such as normal voice signals from different people. Because the system uses a non-speech signal filtering technique, non-speech signals such as fax tones, dial tones, busy tones, ringback tones, and modem sounds need not be added to the public negative samples. Editing, cutting, converting, and labeling the public negative samples is done with the voice editor provided by the system; the label is uniformly "unknown" or "null", and the file suffix is uniformly ".rav" (RAV format). The public negative samples are also maintained manually by the user.
3. Building and maintaining the training sample set
As introduced above, the training sample set comprises 4 subdirectories: the initial positive sample subdirectory, the new positive sample subdirectory, the initial negative sample subdirectory, and the new negative sample subdirectory, with the default names ini-pos, new-pos, ini-neg, and new-neg. Each subdirectory is built and maintained as described in items 4 to 7 below; a sketch of the resulting layout follows this list.
4. Initial positive samples:
As introduced above, these are obtained from the "seed" original samples (samples not yet edited by the voice editor) through manual processing with the voice editing tool provided by the system, mainly comprising: (1) removing the voice signals of non-objects (non-speech signals such as fax tones, dial tones, busy tones, ringback tones, and modem sounds need not be removed, since the system filters non-speech signals); (2) converting to RAV format; (3) attaching the object's label; (4) cutting a single file into multiple files, each of (approximately) standard length (currently 30 seconds, though other values can be used); (5) storing each cut file in the initial positive sample subdirectory under this object's training sample set. These processed samples are the "seed" samples. In many cases there may be only one "seed" sample, but the total length of the "seed" samples should preferably exceed 30 seconds.
5. New positive samples:
The sources of new positive samples (that is, the corresponding original positive samples) are speech samples that were hits in system recognition and were then confirmed as correct recognitions by manual listening; the detailed process is described in the "speaker voiceprint training and recognition module" section. These samples are edited and named exactly like the initial positive samples.
6. Initial negative samples:
The initial negative samples are a subset of the public negative samples. After obtaining the initial positive samples (the seed samples), the system first needs to determine the initial negative samples; at this point the new negative samples and new positive samples are empty. The user first copies all public negative samples into this object's initial negative sample subdirectory and then starts the system's "screen negative samples" function. The system automatically determines the object's initial negative samples, a subset of the public negative samples, and automatically deletes the unneeded ones. This process runs on the training engine. The system allows only a single object to be trained at a time; two or more objects cannot be trained simultaneously.
7. New negative samples:
The sources of new negative samples (that is, the corresponding original voice files) are speech samples that were hits in system recognition and were then determined to be false recognitions by manual listening.
A speech sample judged to be a false recognition can be added to the new negative samples and trained on again. On the training engine, first set the training object to the specified object; then select the falsely recognized speech sample and start the system's "add negative sample" function, and the system automatically adds this falsely recognized negative sample to the new negative sample library (during the addition the system automatically performs cutting, format conversion, labeling, and similar operations). The user can then retrain the object immediately, or wait and retrain after more new samples have accumulated.
The file name of a new negative sample is its original file name, and its label is "unknown".
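As referenced under item 3 above, a training sample set for one object might be laid out as follows; the root name obj_A and the sample file names are hypothetical, while the four subdirectory names are the documented defaults:

```text
obj_A/              (training sample set root; any name is allowed)
    ini-pos/        (initial positive samples: the "seed" samples)
        seed_01.rav
    new-pos/        (positive samples added after confirmed correct recognitions)
    ini-neg/        (initial negative samples screened from the public library)
        unknown_01.rav
    new-neg/        (negative samples added after confirmed false recognitions)
```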
II. Voice preprocessing
Voice preprocessing is the precondition and foundation of speaker identification; only by extracting parameters that represent the essential characteristics of the voice signal can those parameters be used for efficient speaker identification. The preferred embodiment of the present invention uses a dedicated voice preprocessing chip to perform speech signal analysis on the voice files.
The voice preprocessing chip mainly performs the following tasks:
1. Digitization of the voice signal
2. Signal analysis of the voice and feature parameter extraction
The digitization task comprises amplification and gain control, pre-filtering, sampling, A/D conversion, and coding. The detailed process is as follows:
1. Amplification and gain control: the voice signal is amplified appropriately to facilitate the signal processing that follows.
2. Pre-filtering: its purposes are (1) to suppress any components of the input signal whose frequencies exceed f_s/2 (f_s being the sampling frequency), to prevent aliasing interference, and (2) to suppress 50 Hz power-line interference. The pre-filter must therefore be a band-pass filter, with upper and lower cutoff frequencies f_H and f_L; typically f_H = 3400 Hz, f_L = 60 to 100 Hz, and the sampling rate is f_s = 8 kHz. A software sketch of such a filter follows this list.
3. After pre-filtering and sampling, the voice signal is converted into binary digital code by the A/D converter. A/D converters come in linear and nonlinear types: the linear converters in common use are mostly 12-bit, while the nonlinear ones are mostly 8-bit, equivalent to a 12-bit linear converter.
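As promised under item 2 above, here is a minimal software sketch of such a band-pass pre-filter, assuming a 4th-order Butterworth design (the patent specifies neither the filter family nor the order). In a real front end this filtering is analog and precedes the A/D converter; the digital version only mirrors the stated passband:

```python
import numpy as np
from scipy.signal import butter, lfilter

def prefilter(x, fs=8000, f_lo=100.0, f_hi=3400.0, order=4):
    # Band-pass: rejects 50 Hz hum below f_lo and limits energy
    # above f_hi (< fs/2) to prevent aliasing.
    b, a = butter(order, [f_lo, f_hi], btype='bandpass', fs=fs)
    return lfilter(b, a, x)
```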
The signal analysis and feature parameter extraction task comprises pre-emphasis, windowing, framing, cepstral analysis, and so on. The detailed process is as follows:
1. Pre-emphasis
Because the average power spectrum of the voice signal is affected by glottal excitation and by radiation from the mouth and nose, its high end falls off above roughly 800 Hz at about 6 dB per octave, i.e., 6 dB/oct (per frequency doubling) or 20 dB/dec (per decade), so when the spectrum of the voice signal is computed, the higher the frequency, the smaller the corresponding component. For this reason pre-emphasis is applied during preprocessing. Its purpose is to boost the high-frequency part so that the spectrum of the signal becomes flat and stays so from the low to the high end of the band, allowing the spectrum to be computed with the same signal-to-noise ratio throughout and thereby facilitating spectral analysis. Concretely, it is implemented with a pre-emphasis digital filter that boosts high frequencies with a 6 dB per octave characteristic, generally a first-order digital filter, as sketched below.
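A minimal sketch of such a first-order pre-emphasis filter; the coefficient 0.95 is an assumption (typical values lie around 0.9 to 0.97, and the patent does not state one):

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    # First-order high-frequency boost: y[n] = x[n] - alpha * x[n-1]
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]
    return y
```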
After the pre-emphasis digital filtering, windowing and framing are carried out next.
2. Framing
The number of frames per second is generally around 33 to 100, depending on the actual situation. Although framing can use contiguous segmentation, overlapping segmentation is generally adopted so that the transition between frames is smooth and continuity is preserved. The overlap between one frame and the next is called the frame shift, and the ratio of frame shift to frame length is generally taken between 0 and 1/2. Framing is realized by weighting with a movable window of finite length, that is, multiplying s(n) by a window function w(n) to form the windowed voice signal s_w(n) = s(n) * w(n).
3. Windowing
Window functions commonly used in digital speech processing include the rectangular window and the Hamming window; with a rectangular window, the pitch peak in the cepstrum can become indistinct or even disappear, so this preferred embodiment adopts the Hamming window, whose window function is w(n) = 0.54 - 0.46 cos(2πn/(N-1)), 0 <= n <= N-1.
Through the process introduced above, the voice signal is divided into frame after frame of short, windowed signals. During processing, data are taken from the data area frame by frame, and the next frame is fetched once the current one is done; finally a time series of speech feature parameters, composed of the parameters of every frame, is obtained.
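A minimal sketch of overlapping framing plus Hamming windowing; the 256-sample frame and half-frame shift are illustrative choices (at 8 kHz they give 62.5 frames per second, inside the 33 to 100 range stated above):

```python
import numpy as np

def frame_and_window(x, frame_len=256, frame_shift=128):
    # Split x into overlapping frames and apply s_w(n) = s(n) * w(n).
    x = np.asarray(x, dtype=float)
    if len(x) < frame_len:                    # pad short signals to one frame
        x = np.pad(x, (0, frame_len - len(x)))
    n_frames = 1 + (len(x) - frame_len) // frame_shift
    window = np.hamming(frame_len)            # 0.54 - 0.46*cos(2*pi*n/(N-1))
    frames = np.empty((n_frames, frame_len))
    for i in range(n_frames):
        start = i * frame_shift
        frames[i] = x[start:start + frame_len] * window
    return frames
```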
4. Speech feature parameter extraction
The feature parameters of the voice take the frame as their unit; each frame yields one set of feature parameters. The choice of speech feature parameters is the foundation of the entire speaker identification system and has an extremely important influence on the speaker recognition rate. The feature parameters in common use today include the Linear Prediction Cepstrum Coefficient (LPCC) and the Mel-Frequency Cepstrum Coefficient (MFCC). The former obtains cepstrum coefficients via the linear predictive coding (LPC) technique; the latter obtains cepstrum coefficients directly through the discrete Fourier transform (DFT). Because the MFCC parameter converts the linear frequency scale into the Mel frequency scale, emphasizing the low-frequency information of speech, it highlights the information useful for recognition and shields against noise interference, giving good recognition capability and noise robustness; the preferred embodiment of the present invention therefore adopts MFCC parameters. The rough steps for computing the MFCC parameters are:
(1) Perform the fast Fourier transform (FFT) to obtain the spectral distribution information.
(2) Pass the frequency-domain signal through a bank of triangular filters arranged evenly on the Mel frequency scale, i.e., transform the linear frequency scale into the Mel frequency scale.
(3) Transform the outputs of the triangular filters from (2) to the cepstral domain through the discrete cosine transform (DCT):
C_k = Σ_{j=1}^{N} log(Y_j) cos[k(j - 1/2)π/N],  k = 1, 2, ..., P    (2)
where P is the order of the MFCC parameters, generally chosen between 8 and 14; N is the number of triangular filters; Y_j is the output of the j-th triangular filter; and {C_k}, k = 1, 2, ..., P, are the MFCC parameters sought. The preferred embodiment of the present invention computes 16th-order MFCC coefficients for every frame of the signal and uses them as the feature parameters for speaker training or recognition.
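A per-frame sketch of steps (1) to (3), assuming an FFT size of 256 and 24 triangular filters (the patent fixes neither; only the 16th-order output is stated). SciPy's orthonormal DCT-II stands in for equation (2), from which it differs only in indexing and scaling conventions:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters spaced evenly on the Mel scale, as in step (2).
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(1, n_filters + 1):
        lo, mid, hi = bins[j - 1], bins[j], bins[j + 1]
        for k in range(lo, mid):
            fbank[j - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fbank[j - 1, k] = (hi - k) / max(hi - mid, 1)
    return fbank

def mfcc_frame(frame, fs=8000, n_fft=256, n_filters=24, p=16):
    spectrum = np.abs(np.fft.rfft(frame, n_fft))           # step (1): FFT
    y = mel_filterbank(n_filters, n_fft, fs) @ spectrum    # step (2): Mel bank
    c = dct(np.log(y + 1e-10), type=2, norm='ortho')       # step (3): DCT
    return c[:p]                                           # 16th-order MFCC
```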
III. Speaker voiceprint training and recognition module
(1) The training engine
The speaker identification system has two main working states: the training process and the recognition process. The so-called training process uses the speech samples of the object (positive samples) and of non-objects (negative samples) to partition the multidimensional spectral space, so that the spectral space occupied by the object's speech samples is mapped to the object output while the spectral space occupied by non-object speech samples is mapped to the non-object output; that is, it associates the object's distribution regions in spectral space with the object's voice. In terms of the mathematical model, these voice training samples are used to adjust the weights of a complex neural network so that the object's speech samples map to an excitation of the object output and non-object speech samples map to an inhibition of the object output. The training principle is shown schematically in Fig. 4: the voice of some object A goes through speech feature parameter extraction, and the weights are adjusted according to the feature parameter values so that the output of object A is excited and the output of non-A is suppressed, yielding the object-A output and the non-object-A output.
After an object's positive samples have been edited with the voice editor, the object can be trained by the training engine. The concrete training steps are as follows, illustrated with reference to the system training process of Fig. 5:
1. Create the training set root directory
Create an empty directory; it will serve as the root directory of the training object's sample set (hereinafter, the root directory).
2. Edit and place the positive samples
Copy or move the edited positive samples into the training set root directory or any of its subdirectories. The editing requirements for positive samples are: (1) no voice from non-objects may appear, and any such voice should be cut out; (2) each sample has the standard training sample length, 30 seconds being the system recommendation; (3) the label of every object sample must be identical to the label of the training object. The editing is done with the voice editor 31.
3. Copy negative samples
Select 5 to 10 negative samples from the public negative sample library and copy them into the root directory or any subdirectory, for instance the ini-neg subdirectory. The editing requirements for negative samples are: (1) no voice of the object may appear; (2) each sample has the standard training sample length; (3) the label of every negative sample must differ from the label of the training object; it is suggested that negative samples be labeled uniformly "unknown" or "null". The editing is likewise done with the voice editor 31.
4. Set the training object
If the training object is not yet in the object list, first add the corresponding object label. Set the object corresponding to this label as the current training object, and set the training set root directory parameter to the corresponding root directory.
5. Start the first round of training
Start the "screen negative samples" function to carry out the first round of training. The training parameters actually used in the first round are wfr=0.95, rmax=200 ("wfr" is the weight decay factor and "rmax" the number of training rounds; for initial training the suggestion is wfr=0.95, rmax=200, and for cumulative training wfr=0.88, rmax=50, or alternatively wfr=0.9, rmax=75). Screening negative samples in fact starts two processes: training, and filtering of negative samples. In the training process the system randomly chooses some negative samples from the training sample set to train on; their number equals "NegSeeds" in the operating parameters (the number of negative samples participating in training during screening). As soon as training finishes, the system uses the current voiceprint template to recognize the negative samples that did not participate in training, deletes those with lower outputs, and keeps those with higher outputs. The screening threshold equals "NegTh" in the operating parameters (the abundance threshold for screening negative samples or new negative samples). This threshold is an abundance threshold; the abundance threshold is the recognition threshold corresponding to each object and can be set freely. By adjusting the recognition threshold, the user can choose the corresponding correct-recognition rate and false-recognition rate according to the importance of the object. The accumulation window length is the recognition window length of the abundance method.
6. Compute the standard abundance
Compute the standard abundance for the currently trained voiceprint template. The batch-recognition directory selected when computing the standard abundance must be the original positive sample directory of the corresponding object.
The recognition threshold equals the standard abundance multiplied by the threshold coefficient. The threshold coefficient defaults to 0.5, but the user can adjust it according to the recognition strategy for the object (in other words, the object's importance).
7. Recognize the test samples
After the first round of training and before going live, the test samples should be batch-recognized. There are two kinds of test samples: the positive test sample set and the negative test sample set. The positive test sample set contains only the object's voice and is used to test the correct-recognition rate of the voiceprint template. Because object samples are hard to obtain (particularly for a newly trained object), the positive test sample set may be very small, even empty; and even when samples are easy to obtain, not many are needed, generally from several to a few dozen.
The negative test samples, on the other hand, should contain no voice of the object and are used to test the false-recognition rate. The negative test sample set should be as large as possible, generally between 100 and 1000 samples.
The concrete recognition strategy is as follows:
Batch-recognize the two test sets with the voiceprint template to obtain the current correct-recognition rate and false-recognition rate, and adjust the threshold coefficient to the best recognition effect. The so-called best effect means: given the batch recognition results, the best recognition achievable by adjusting the threshold coefficient. If the best effect at this point does not meet the user's requirements, then: if the correct-recognition rate is too low, add the positive sample with the lowest output to the training sample set; and if the false-recognition rate is too high, add the negative sample with the highest output to the training sample set. It is suggested to add only one or two samples at a time, with priority given to positive samples. A positive test sample added to the training sample set should be moved from the positive test sample set into the original positive sample set.
For some new objects, positive samples may be very scarce. In that case the threshold coefficient should be turned down as far as possible, raising the correct-recognition rate at the cost of a higher false-recognition rate. Once new positive samples are obtained, add them to the training set and retrain; after a few rounds the threshold coefficient can be adjusted back to its normal value. This strategy applies all the more when seed samples from one source are used in the hope of catching new speech from a different source. For example, if the seed samples come from a mobile phone and the aim is to recognize landline speech: because the spectral responses of mobile and landline telephones differ considerably, the threshold coefficient should at first be reduced as far as possible to ensure that landline signals are captured. False recognitions may then be more numerous, but in practice many methods can overcome this, and supplementary training once new speech is obtained gradually improves the recognition effect.
8. Train and repeatedly recognize the test samples
After training samples are added, retraining is needed. At this point simply start the "training" function, the normal training function (the other, special training function being the negative-sample screening described above). For the normal training function the suggested training parameters are wfr=0.88, rmax=50, or alternatively wfr=0.9, rmax=75.
Repeat steps 6, 7, and 8 until the correct-recognition rate and false-recognition rate meet the user's requirements; generally 1 to 3 repetitions are needed.
9. Retraining
Once the tests pass, recognition can go live. For a newly live recognition object, the recognition effect should be monitored for an initial period; if it is poor, the misrecognized samples (both missed recognitions and false recognitions) should promptly be added to the training sample set for retraining. After new training samples are added, steps 6, 7, and 8 should be repeated, and the finally trained voiceprint template put live.
Adding falsely recognized samples to the training sample set is relatively easy to do; the system can also finish the cutting of the samples automatically, and after such training steps 6, 7, and 8 generally need not be repeated.
Adding missed samples as new positive training samples is more complicated. First, if the system has no other supplementary means of comparison, it simply cannot know whether a miss occurred. In that situation the system can only estimate the correct-recognition rate from the recognition results on the positive test samples; if there are no positive test samples at all, the only recourse is to reduce the threshold coefficient to the point where the false-recognition rate is just acceptable, until new positive samples have been obtained and a positive test sample set established. The second complication of adding positive samples is that they may raise the false-recognition rate; therefore, after positive samples are added, steps 6, 7, and 8 must be repeated and negative training samples replenished as needed. The third is that positive samples cannot be edited automatically: non-object voice signals must be removed by manual editing. With the voice editor supplied with the system, however, the whole editing process is very fast.
(2) The recognition engine
After the speaker's voiceprint has been trained successfully, when a speech sample of a new, unknown object comes in, the spectral features of the new speech are first obtained, and these new spectral features are used to asynchronously excite the outputs of all objects to be identified. If training was correct, only the output of the target object is excited while the outputs of all non-target objects are suppressed, so the target object can be identified quickly. This is the recognition principle, shown in Fig. 6.
The speaker identity recognition technology of the preferred embodiment consists concretely of three parts: front-end signal processing, a multi-level clustering neural network, and a single-layer perceptron network. The front-end signal processing part performs preprocessing of the input voice signal and extracts the voice signal features through various feature-extraction networks; the multi-level clustering neural network, built on a brand-new neural network algorithm, performs the clustering of the fuzzy dynamic set of voice signal features; and the single-layer perceptron network converts the cluster excitation groups to the speaker, mapping the excitation groups to the speaker output, as shown in Fig. 7.
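The clustering network itself is not disclosed, but the final mapping stage can be pictured as an ordinary single-layer perceptron. Everything below (dimensions, learning rule, names) is an illustrative assumption, not the patent's algorithm:

```python
import numpy as np

class SpeakerPerceptron:
    # Maps a cluster-excitation vector to one output per speaker:
    # positive samples excite their speaker's output, negative samples
    # suppress it (the feedback adjustment pictured in Fig. 4).
    def __init__(self, n_clusters, n_speakers, lr=0.1):
        self.w = np.zeros((n_speakers, n_clusters))
        self.lr = lr

    def forward(self, excitation):
        return self.w @ np.asarray(excitation)

    def update(self, excitation, speaker, is_positive):
        sign = 1.0 if is_positive else -1.0
        self.w[speaker] += self.lr * sign * np.asarray(excitation)
```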
The preferred embodiment has two hit methods, one called the recognition certainty method and the other the recognition abundance method. Before these two hit methods are introduced, the output abundance is introduced first.
The so-called output abundance is the accumulated sum, within a certain length range, of all positive or negative outputs. Accumulated positive outputs give the positive output abundance, abbreviated to output abundance or simply abundance; accumulated negative outputs give the negative output abundance, abbreviated to negative abundance. The abundance spoken of on its own therefore means the positive abundance. The conversion mechanism for all abundance values internally scales their dimension to seconds, so the unit of an abundance value is the second. The length range over which outputs are accumulated is called the recognition window, whose unit is also the second.
The recognition certainty is defined as:
(positive abundance - negative abundance) / (positive abundance + negative abundance)
Obviously, the recognition certainty is a value in the range (-1, +1): +1 indicates that it is definitely the object, -1 that it is definitely not the object, and 0 that nothing can be affirmed.
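A direct transcription of this definition as a sketch (the guard against a zero denominator is an addition of this example, not part of the patent):

```python
def recognition_certainty(pos_abundance, neg_abundance):
    # Returns a value in (-1, +1): +1 definitely the object,
    # -1 definitely not the object, 0 undecidable.
    total = pos_abundance + neg_abundance
    return 0.0 if total == 0 else (pos_abundance - neg_abundance) / total
```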
If a recognition window contains only a single speaker's voice, the recognition certainty is fairly effective; but if it contains the voices of two people, the recognition certainty obviously cannot be used, and only the abundance method of recognition can be adopted. Since the system must handle the application environment of a two-party conversation, and since any future separation of the two parties' speech by line cannot be thorough because of echo, only the recognition abundance can be used to determine the target object.
Suppose a threshold is set for each recognition object: as long as the (positive) abundance of the corresponding object reaches the threshold within any recognition window, that object is considered a hit. This is the abundance method of recognition. Here the recognition window length is set to a fixed standard value rather than the whole file length; this is the local abundance method of recognition.
The local abundance method can be understood as asking whether, within a span of speech, the relative effective time for which the object's voice is present reaches a certain threshold. The dimension of abundance is the second, and its meaning is the sum of the weighted excitation times of a recognition object. The system takes the maximum excitation output of one frame to contribute the reciprocal of the frame rate to the output abundance: if the frame rate is 100 frames per second, a frame at maximum output contributes 10 milliseconds of abundance, while a frame at 1/10 of maximum output contributes only 1 millisecond; this is the meaning of the weighting. Accumulating the output abundance of every frame in a recognition window gives the total output abundance in that window, whose meaning can be understood as the effective voice length of the object within the window. Each recognition object can be given a different abundance recognition threshold, such as 5 seconds, 10 seconds, and so on. If the recognition abundance threshold is set to 10 seconds, the corresponding meaning is: if the weighted time sum for which a recognition object appears in a recognition window exceeds 10 seconds (which can be read as the object's effective voice length exceeding 10 seconds), that recognition object is taken to be the target object.
Before the concrete recognition window length is considered, we first define the standard training sample length, whose default recommended value is 30 seconds. In the editing of voice files, all positive and negative samples participating in training should be cut to (approximately) the standard length; if the standard length is 30 seconds, positive and negative samples are all cut to about 30 seconds. A negative sample from a single person should take only one segment: that is, if a voice file serves as a negative sample but spans several standard lengths, only the segment with the maximum output is taken as the negative sample (this editing is finished automatically by the system).
The window length of the local abundance method is variable, but the system recommends taking the standard length of the voice training files, with a default recommendation of 30 seconds. During recognition the system scans the whole voice file while moving the recognition window continuously and smoothly; as soon as the output abundance in any window reaches the target threshold, a hit is declared, the system stops scanning, and the result is output. It is therefore sometimes unnecessary to scan the whole file, the hit being identified within the opening 30 seconds. If the file is shorter than one window length, it is handled as one window length, and the hit threshold is unchanged.
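A minimal sketch of this scan, assuming per-frame excitation outputs normalized to [0, 1] and a frame rate of 100 frames per second, so that a frame at full output contributes 1/100 second of abundance as described above:

```python
import numpy as np

def local_abundance_scan(frame_outputs, threshold_s, frame_rate=100,
                         window_s=30.0):
    # Slide the recognition window over per-frame outputs and declare a
    # hit as soon as the accumulated weighted abundance reaches threshold_s.
    contrib = np.asarray(frame_outputs, dtype=float) / frame_rate
    win = int(window_s * frame_rate)
    if len(contrib) <= win:                  # short file: treat as one window
        return contrib.sum() >= threshold_s
    abundance = contrib[:win].sum()
    if abundance >= threshold_s:
        return True
    for i in range(win, len(contrib)):       # smooth, continuous slide
        abundance += contrib[i] - contrib[i - win]
        if abundance >= threshold_s:         # stop scanning at the first hit
            return True
    return False
```

With threshold_s=10, for example, a hit means the object's effective voice length within some 30-second window exceeds 10 seconds.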
To determine the hit threshold of a recognition object, a concept called the standard abundance is defined first:
standard abundance = the mean of the maximum output abundances, per unit recognition window, of all original positive samples
The so-called original positive samples are positive samples that have not been through editing; in fact they contain both parties' call voice information (the voice information of the training object as well as of other speakers). The usual "positive sample", by contrast, means a sample from which the voice of non-training-objects has been removed. The standard abundance of a recognition object is therefore the average output abundance, within a unit window length, of the corresponding speaker's original positive samples.
Threshold value=standard abundance * threshold value coefficient.
Wherein the threshold value coefficient is the numerical value between 0 to 1.The threshold value coefficient is big more, the threshold value abundance that is near the mark more, and the void knowledge rate of system is low more, but just knowledge rate also may reduce; The threshold value coefficient is more little, and threshold value is more near 0, and the void knowledge rate of system is high more, but just knowledge rate is also high more.Therefore, by adjusting the threshold value coefficient, effect that can regulating and controlling identification.For the object of particular importance, when perhaps the voice environment of vocal print template and identification is distinguished to some extent (such as utilizing landline telephone speech recognition mobile phone speech), lower threshold value coefficient is got in suggestion, to guarantee sufficiently high just knowledge rate; And, then can suitably provide the threshold value coefficient for not too important identifying object.
The threshold value coefficient of system default is 0.5, and just threshold value equals 50% of standard abundance, and the span of suggestion is 0.3~0.7.
The result output of the speaker identification system comprises the related information records of the hit files and the hit voice files themselves, similar to the information exchange between the speech recognition system and the front-end voice acquisition system.
The overall flow of speaker voiceprint training and recognition is shown in Fig. 8.
The speaker identity recognition method of the present invention features biomimicry, incremental training, learnability, recognition within two-party conversation, strong resolution and recognition rate, strong robustness, fast recognition, non-speech signal filtering, and other characteristics.

Claims (12)

1. A speaker identity recognition method, realized by a voice receiving device, a voice acquisition module, a voice editing and preprocessing module, a speaker training and recognition module, and a background database, characterized in that: the voice receiving device receives the voice signal of the person to be identified and sends the voice signal to the voice acquisition module; the voice acquisition module consists of a high-speed data collection machine and forms the received voice into voice files, storing them in order for subsequent processing by the voice editing and preprocessing module; the voice editing and preprocessing module consists of a voice editor and a voice signal preprocessing chip; the voice editor processes the voice files and outputs the edited voice; the voice signal preprocessing chip performs speech analysis of the voice signal on the voice files, outputs the micro-feature parameters of the voice, and passes the voice information on to the recognition module; the recognition module consists of a voiceprint training engine and a voiceprint recognition engine; the voiceprint training engine receives the results of the voice editor and the voice preprocessing chip and trains on the speech samples to form the speaker's exclusive voiceprint template; the voiceprint recognition engine identifies the speaker using the voiceprint template generated by training, a neural network algorithm, and the speaker's voice micro-feature parameters produced by the voice preprocessing chip; and the background database provides data support for the voice receiving device, the voice acquisition module, the voice editing and preprocessing module, and the speaker training and recognition module.
2. A system realizing the speaker identity recognition method of claim 1, characterized in that: the system consists of a voice receiving device, a voice acquisition module, a voice editing and preprocessing module, a speaker training and recognition module, and a background database; the voice receiving device receives the voice signal of the person to be identified and sends the voice signal to the voice acquisition module; the voice acquisition module consists of a high-speed data collection machine; the voice editing and preprocessing module consists of a voice editor and a voice signal preprocessing chip; the recognition module consists of a voiceprint training engine and a voiceprint recognition engine; and the voice receiving device, the voice acquisition module, the voice editing and preprocessing module, and the speaker training and recognition module perform data access to the background database.
3. The speaker identity identification system as claimed in claim 2, characterized in that: the voice receiving device is a microphone, or high-speed data acquisition equipment that converts call voice into PCM signals.
4. The speaker identity identification method as claimed in claim 1, characterized in that: the voice receiving device acquires speech data in real time, stores it in the form of voice stream files, and simultaneously records the related information of the call ticket in text form.
5. The speaker identity identification method as claimed in claim 1, characterized in that: the voice editor inspects, edits, segments, and converts voice files.
6. The speaker identity identification method as claimed in claim 1, characterized in that: the voice editor supports millisecond-level editing; it converts the sampling frequency, channel number, and sampling resolution of speech data; applies special-effect edits such as reversal, inversion, and silencing; generates silence; or segments files (see the editing sketch after the claims).
7. The speaker identity identification method as claimed in claim 1, characterized in that: the voice signal preprocessing chip processes voice files by digitizing, pre-emphasizing, windowing, and framing the voice signal for analysis (see the preprocessing sketch after the claims).
8. The speaker identity identification method as claimed in claim 1, characterized in that: the voice receiving device, the voice acquisition module, the voice editing and preprocessing module, the speaker training and identification module, and the background database system cooperate by adopting configuration files and by obtaining call-ticket voice files from a shared directory.
9. The speaker identity identification method as claimed in claim 1, characterized in that: the voiceprint training machine trains on speech samples of the target and speech samples of non-targets; it uses the target and non-target speech samples to partition the multidimensional spectral space so that the spectral space occupied by target speech samples is mapped to the target output, while the spectral space occupied by non-target speech samples is mapped to the non-target output; and the voiceprint training machine performs feedback-style adjustment according to the results of the voice training (see the perceptron sketch after the claims).
10. The speaker identity identification method as claimed in claim 1, characterized in that: the voiceprint recognition machine asynchronously excites the outputs of all objects to be identified according to the spectral features of the speaker's voice.
11. The speaker identity identification method as claimed in claim 1, characterized in that: the voiceprint recognition machine uses a multi-level clustering neural network to complete fuzzy dynamic set clustering of speech signal features (see the clustering sketch after the claims).
12. The speaker identity identification method as claimed in claim 1, characterized in that: the voiceprint recognition machine uses a single-layer perceptron network to complete the conversion from clustered excitation groups to the speaker, mapping an excitation group to the speaker's output (see the perceptron sketch after the claims).
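Editing sketch (claim 6). The editing operations of claim 6 are conventional signal manipulations; the following minimal sketch, with invented numpy routine names, shows how sampling-frequency conversion, channel-number conversion, reversal, and silence generation might look on raw PCM arrays. It is an assumption-laden illustration, not the patented editor.

    import numpy as np

    def resample(pcm, sr_in, sr_out):
        """Sampling-frequency conversion by linear interpolation."""
        n_out = int(len(pcm) * sr_out / sr_in)
        return np.interp(np.linspace(0, len(pcm) - 1, n_out),
                         np.arange(len(pcm)), pcm)

    def to_mono(stereo):
        """Channel-number conversion: average an (n, 2) stereo array to mono."""
        return stereo.mean(axis=1)

    def reverse(pcm):
        """The 'reversal' special-effect edit."""
        return pcm[::-1]

    def make_silence(seconds, sr):
        """Silence generation of a given duration."""
        return np.zeros(int(seconds * sr))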
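Preprocessing sketch (claim 7). The preprocessing chain of claim 7 (pre-emphasis, windowing, framing, with digitization assumed already done) is a standard speech front end. A hedged sketch follows; the frame length, frame shift, and pre-emphasis coefficient are common defaults that the patent does not specify.

    import numpy as np

    def preprocess(pcm, sr=8000, frame_ms=25.0, hop_ms=10.0, alpha=0.97):
        # Pre-emphasis: y[n] = x[n] - alpha * x[n-1] boosts high frequencies.
        y = np.append(pcm[0], pcm[1:] - alpha * pcm[:-1])
        frame_len = int(sr * frame_ms / 1000)      # 200 samples at 8 kHz
        hop = int(sr * hop_ms / 1000)              # 80-sample frame shift
        window = np.hamming(frame_len)             # windowing to reduce leakage
        n_frames = 1 + (len(y) - frame_len) // hop
        return np.stack([y[i * hop:i * hop + frame_len] * window
                         for i in range(n_frames)])  # one row per analysis frame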
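Perceptron sketch (claims 9 and 12). Claims 9 and 12 describe mapping target speech to a target output and non-target speech to a non-target output via a single-layer perceptron adjusted by feedback. The sketch below implements the classic perceptron learning rule as one plausible reading; the data layout, learning rate, and epoch count are assumptions.

    import numpy as np

    def train_perceptron(target_feats, nontarget_feats, epochs=50, lr=0.1):
        """Target frames are labelled 1, non-target frames 0; weights are
        adjusted by feedback from each classification error."""
        X = np.vstack([target_feats, nontarget_feats])
        y = np.concatenate([np.ones(len(target_feats)),
                            np.zeros(len(nontarget_feats))])
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - (1.0 if xi @ w + b > 0 else 0.0)
                w += lr * err * xi                 # feedback-style adjustment
                b += lr * err
        return w, b

    def excite(w, b, feat):
        """Excitation of the speaker output for one feature vector."""
        return 1.0 if feat @ w + b > 0 else 0.0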
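Clustering sketch (claim 11). The multi-level clustering neural network of claim 11 is not detailed in the text above; as a stand-in, this sketch performs two-level clustering with plain k-means, a deliberately simpler substitute that only illustrates the hierarchical grouping of speech feature vectors into coarse groups and finer subgroups.

    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    def two_level_cluster(X, k1=4, k2=3):
        """Coarse clusters first, then finer sub-clusters inside each one."""
        top, _ = kmeans(X, k1)
        return {j: kmeans(X[top == j], min(k2, int((top == j).sum())))[1]
                for j in range(k1) if np.any(top == j)}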
CNB031415113A 2003-07-10 2003-07-10 Method and system for identifying status of speaker Expired - Fee Related CN1308911C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB031415113A CN1308911C (en) 2003-07-10 2003-07-10 Method and system for identifying status of speaker

Publications (2)

Publication Number Publication Date
CN1567431A true CN1567431A (en) 2005-01-19
CN1308911C CN1308911C (en) 2007-04-04

Family

ID=34470948

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031415113A Expired - Fee Related CN1308911C (en) 2003-07-10 2003-07-10 Method and system for identifying status of speaker

Country Status (1)

Country Link
CN (1) CN1308911C (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5677989A (en) * 1993-04-30 1997-10-14 Lucent Technologies Inc. Speaker verification system and process
US5839103A (en) * 1995-06-07 1998-11-17 Rutgers, The State University Of New Jersey Speaker verification system using decision fusion logic
US5953700A (en) * 1997-06-11 1999-09-14 International Business Machines Corporation Portable acoustic interface for remote access to automatic speech/speaker recognition server
US6556969B1 (en) * 1999-09-30 2003-04-29 Conexant Systems, Inc. Low complexity speaker verification using simplified hidden markov models with universal cohort models and automatic score thresholding
EP1178467B1 (en) * 2000-07-05 2005-03-09 Matsushita Electric Industrial Co., Ltd. Speaker verification and identification
CN1170239C (en) * 2002-09-06 2004-10-06 Zhejiang University Palm voiceprint verification system

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100456881C (en) * 2005-07-22 2009-01-28 华为技术有限公司 Subscriber identy identifying method and calling control method and system
CN103875256A (en) * 2011-10-12 2014-06-18 伯斯有限公司 Source dependent wireless earpiece equalizing
CN103875256B (en) * 2011-10-12 2016-11-23 伯斯有限公司 Depend on the wireless headset equilibrium in source
CN104185868B (en) * 2012-01-24 2017-08-22 澳尔亚有限公司 Authentication voice and speech recognition system and method
US9424837B2 (en) 2012-01-24 2016-08-23 Auraya Pty Ltd Voice authentication and speech recognition system and method
CN103680497A (en) * 2012-08-31 2014-03-26 百度在线网络技术(北京)有限公司 Voice recognition system and voice recognition method based on video
CN103680497B (en) * 2012-08-31 2017-03-15 百度在线网络技术(北京)有限公司 Speech recognition system and method based on video
CN103079258A (en) * 2013-01-09 2013-05-01 广东欧珀移动通信有限公司 Method for improving speech recognition accuracy and mobile intelligent terminal
CN104517606A (en) * 2013-09-30 2015-04-15 腾讯科技(深圳)有限公司 Method and device for recognizing and testing speech
CN105096954A (en) * 2014-05-06 2015-11-25 中兴通讯股份有限公司 Identity identifying method and device
CN104183245A (en) * 2014-09-04 2014-12-03 福建星网视易信息系统有限公司 Method and device for recommending music stars with tones similar to those of singers
CN107077848A (en) * 2014-09-18 2017-08-18 纽昂斯通讯公司 Method and apparatus for performing Speaker Identification
CN104853236A (en) * 2015-01-15 2015-08-19 青岛海尔软件有限公司 Smart television switching control method and device thereof
CN104835498A (en) * 2015-05-25 2015-08-12 重庆大学 Voiceprint identification method based on multi-type combination characteristic parameters
WO2018036149A1 (en) * 2016-08-23 2018-03-01 深圳市鹰硕技术有限公司 Multimedia interactive teaching system and method
CN107329996A (en) * 2017-06-08 2017-11-07 三峡大学 A kind of chat robots system and chat method based on fuzzy neural network
CN108022584A (en) * 2017-11-29 2018-05-11 芜湖星途机器人科技有限公司 Office Voice identifies optimization method
CN108735209A (en) * 2018-04-28 2018-11-02 广东美的制冷设备有限公司 Wake up word binding method, smart machine and storage medium
CN108735209B (en) * 2018-04-28 2021-01-08 广东美的制冷设备有限公司 Wake-up word binding method, intelligent device and storage medium
CN109448713A (en) * 2018-11-13 2019-03-08 平安科技(深圳)有限公司 Audio recognition method, device, computer equipment and storage medium
CN110047490A (en) * 2019-03-12 2019-07-23 平安科技(深圳)有限公司 Method for recognizing sound-groove, device, equipment and computer readable storage medium
CN109830240A (en) * 2019-03-25 2019-05-31 出门问问信息科技有限公司 Method, apparatus and system based on voice operating instruction identification user's specific identity
CN113127673A (en) * 2021-03-23 2021-07-16 上海掌数科技有限公司 Voiceprint database construction method and data calling method thereof
CN113127673B (en) * 2021-03-23 2022-07-22 上海掌数科技有限公司 Method for constructing voiceprint database and data calling method thereof

Also Published As

Publication number Publication date
CN1308911C (en) 2007-04-04

Similar Documents

Publication Publication Date Title
CN1308911C (en) Method and system for identifying status of speaker
US11776547B2 (en) System and method of video capture and search optimization for creating an acoustic voiceprint
Zakariah et al. Digital multimedia audio forensics: past, present and future
CN110298252A (en) Meeting summary generation method, device, computer equipment and storage medium
Lavner et al. A decision-tree-based algorithm for speech/music classification and segmentation
CN1662956A (en) Mega speaker identification (ID) system and corresponding methods therefor
CN1291324A (en) System and method for detecting a recorded voice
CN1287353C (en) Voice processing apparatus
CN101064043A (en) Sound-groove gate inhibition system and uses thereof
US11322159B2 (en) Caller identification in a secure environment using voice biometrics
CN1941080A (en) Soundwave discriminating unlocking module and unlocking method for interactive device at gate of building
CN109256150A (en) Speech emotion recognition system and method based on machine learning
CN1716380A (en) Audio frequency splitting method for changing detection based on decision tree and speaking person
CN110136696B (en) Audio data monitoring processing method and system
GB2486038A (en) Automatically transcribing recorded speech
CN1967657A (en) Automatic tracking and tonal modification system of speaker in program execution and method thereof
US20080281599A1 (en) Processing audio data
CN1746971A (en) Speech key of mobile
CN1787074A (en) Method for distinguishing speak person based on feeling shifting rule and voice correction
CN111276156B (en) Real-time voice stream monitoring method
CN103778917A (en) System and method for detecting identity impersonation in telephone satisfaction survey
WO2009107211A1 (en) Interrogative speech portion extraction processing program for speech data, method, and device, and client inquiry trend estimation processing program, method, and device using interrogative speech portion of speech data
CN110931016A (en) Voice recognition method and system for offline quality inspection
CN105741853A (en) Digital speech perception hash method based on formant frequency
Guaus et al. A non-linear rhythm-based style classification for broadcast speech-music discrimination

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: SHANGHAI YOULANG INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: SHANGHAI ULANG INFORMATION TECHNOLOGY CO.,LTD.

CP03 Change of name, title or address

Address after: Room 201, No. 602, Tianshan Branch Road, Changning District, Shanghai

Patentee after: Youlang Information Science and Technology Co., Ltd., Shanghai

Address before: Room 201, No. 602, Tianshan Branch Road, Changning District, Shanghai

Patentee before: Shanghai Yeuron Information Technology Co., Ltd.

C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070404

Termination date: 20120710