CN103207961A - User verification method and device - Google Patents

User verification method and device

Info

Publication number
CN103207961A
Authority
CN
China
Prior art keywords
user
advance
eigenvector
described user
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101442512A
Other languages
Chinese (zh)
Inventor
杨莉
曹振南
范玉峰
张海忠
高崎
姜海旺
高增
许辉
马庆怀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dawning Information Industry Beijing Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd
Priority to CN2013101442512A
Publication of CN103207961A
Legal status: Pending

Abstract

The invention discloses a user verification method and device. The method comprises the following steps: obtaining feature vectors from speech feature parameters extracted from a user's voice signal, where the speech feature parameters include linear prediction cepstrum coefficients (LPCC), pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods; quantizing the feature vectors with a pre-stored codebook corresponding to the user to obtain an average quantization error; and determining whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user. By processing the speech feature parameters extracted from the user's voice signal and comparing the processed information with pre-stored information, the method can determine whether the user passes voice verification and thus verify the user's identity quickly and reliably, improving system security at the login step.

Description

User authentication method and device
Technical field
The present invention relates to the field of cloud computing, and in particular to a user authentication method and device.
Background art
The flourishing of cloud computing has brought convenience to all trades and professions; nevertheless, out of concern for the security of the cloud environment, many industry sectors, and especially financial institutions with high information-security requirements, make little use of cloud computing. Verifying the user's identity at the login step can effectively guarantee the security of the cloud computing environment, so how to exercise security control over the cloud environment at login has become a focus of attention. Common identity verification methods at login include passwords, fingerprint recognition, and the like; among these, identification based on the human voice is attracting more and more attention thanks to its distinctive advantages. For example, a voice signal cannot be lost or forgotten; speaking is a contactless, natural mode of interaction, so it is more readily accepted and used; and voice is convenient to collect, with relatively inexpensive equipment (microphones, telephones, etc.).
Although user recognition technology has a history of several decades and many outstanding results have been achieved, a large number of difficulties remain and performance has not yet reached a satisfactory level; although some recognizers have been put on the market and applied in fields such as commerce, military affairs, and industrial control, the technology basically remains at the experimental stage. The speech processing field covers both speech recognition and user recognition, and of the two, user recognition is the more difficult. More and more occasions require a person's identity to be identified accurately, yet authentication based on traditional methods is gradually revealing many shortcomings.
In the related art, identity verification methods carry potential security risks, and no effective solution has been proposed so far.
Summary of the invention
In view of the potential security risks of identity verification in the related art, the present invention proposes a user authentication method and device that can verify a person's identity quickly and reliably, thereby improving system security at the login step.
The technical solution of the present invention is achieved as follows:
According to one aspect of the present invention, a user authentication method is provided.
The user authentication method comprises:
obtaining a feature vector from speech feature parameters extracted from the user's voice signal, where the speech feature parameters include linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
quantizing the feature vector with a pre-stored codebook corresponding to the user, to obtain an average quantization error;
determining whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
The differential linear prediction cepstrum coefficients are calculated from the linear prediction cepstrum coefficients, and the differential pitch period is calculated from the pitch period.
Moreover, before the speech feature parameters are extracted from the user's voice signal, the user's identity is determined from personal information entered by the user.
Further, when quantizing the feature vector with the pre-stored codebook corresponding to the user, the codebook is retrieved according to the determined identity of the user.
Also, quantizing the feature vector with the pre-stored codebook corresponding to the user comprises performing vector quantization of the feature vector with the codebook.
In addition, the user authentication method further comprises:
collecting the user's iris information, and comparing the collected iris information with pre-stored iris information corresponding to the user;
determining whether the user passes iris verification according to the result of the iris comparison.
According to another aspect of the present invention, a user authentication device is provided.
The user authentication device comprises:
an acquisition module, configured to obtain a feature vector from speech feature parameters extracted from the user's voice signal, where the speech feature parameters include linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
a quantization module, configured to quantize the feature vector with a pre-stored codebook corresponding to the user, obtaining an average quantization error;
a determination module, configured to determine whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
In addition, the user authentication device further comprises:
an identity determination module, configured to determine the user's identity from personal information entered by the user before the speech feature parameters are extracted from the user's voice signal.
When quantizing the feature vector with the pre-stored codebook, the quantization module is further configured to retrieve the codebook according to the determined identity of the user.
Further, when quantizing the feature vector with the pre-stored codebook, the quantization module is configured to perform vector quantization of the feature vector with the codebook.
By processing the speech feature parameters extracted from the user's voice signal and then comparing the processed information with pre-stored information, the present invention determines whether the user passes voice verification; a person's identity can thus be verified quickly and reliably, improving system security at the login step.
Brief description of the drawings
Fig. 1 is a flowchart of the user authentication method according to an embodiment of the invention;
Fig. 2 is a flowchart of a cloud security login system that applies the user authentication method of the invention in practice;
Fig. 3 is a flowchart of the voice recognition method according to an embodiment of the invention;
Fig. 4 is a flowchart of the preprocessing applied to the collected user voice signal according to an embodiment of the invention;
Fig. 5 is a flowchart of speech feature parameter extraction in the voice recognition method according to an embodiment of the invention;
Fig. 6 is a flowchart of the processing of the speech feature parameters extracted in Fig. 5, in the voice recognition method according to an embodiment of the invention;
Fig. 7 is a flowchart of the iris recognition method according to an embodiment of the invention;
Fig. 8 is a schematic diagram of the eye grey-level histogram extracted by the iris recognition method according to an embodiment of the invention;
Fig. 9 is a schematic diagram of the eye grey-level histogram of Fig. 8 after smoothing;
Fig. 10 is a block diagram of the user authentication device according to an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention fall within the scope of protection of the present invention.
According to an embodiment of the present invention, a user authentication method is provided.
As shown in Fig. 1, the user authentication method according to the embodiment of the invention comprises:
Step S101: obtain a feature vector from speech feature parameters extracted from the user's voice signal, where the speech feature parameters include linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
Step S103: quantize the feature vector with a pre-stored codebook corresponding to the user, obtaining an average quantization error;
Step S105: determine whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
The differential linear prediction cepstrum coefficients are calculated from the linear prediction cepstrum coefficients, and the differential pitch period is calculated from the pitch period.
Before the speech feature parameters are extracted from the user's voice signal, the user's identity is determined from personal information entered by the user.
Further, when quantizing the feature vector with the pre-stored codebook corresponding to the user, the codebook is retrieved according to the determined identity of the user.
Also, quantizing the feature vector with the pre-stored codebook corresponding to the user comprises performing vector quantization of the feature vector with the codebook; a sketch of this quantization and decision step follows.
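To make steps S101 to S105 concrete, the following Python sketch shows the verification side of the scheme: per-frame feature vectors are quantized against the user's pre-stored codebook, and the relative deviation of the resulting average quantization error from the enrolled value is compared with a threshold (the decision rule E = |σ − σ′|/σ described at step S609 below). This is a minimal sketch under stated assumptions: the function names, the Euclidean distortion measure, and the example threshold of 0.2 are illustrative and do not come from the patent.

```python
import numpy as np

def average_quantization_error(features, codebook):
    """Mean distance from each feature vector to its nearest codeword.

    features: (T, D) array of per-frame speech feature vectors
    codebook: (K, D) array of codewords pre-stored for the claimed user
    """
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def passes_voice_verification(features, codebook, sigma_enrolled, threshold=0.2):
    """Accept when the relative quantization-error deviation is below a threshold.

    sigma_enrolled is the average quantization error stored at enrollment;
    the 0.2 threshold is a placeholder, not a value given in the patent.
    """
    sigma_test = average_quantization_error(features, codebook)
    e = abs(sigma_enrolled - sigma_test) / sigma_enrolled
    return e < threshold
```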
In practical applications, the user authentication method according to the embodiment of the invention can be used in a cloud security login system. As shown in Fig. 2, a cloud security login system applying the user authentication method of the invention proceeds as follows:
Step S201: a person requests access;
Step S203: verification by a non-biometric method, for example password entry;
Step S205: judge whether the verification succeeds; if it succeeds, proceed to step S207; if it fails, refuse the login;
Step S207: biometric verification, comprising voice recognition and iris recognition; first, the biometric information (voice information and iris information) of the relevant personnel inside the system is collected and used for training to build a biometric information library, and the processed information of the person logging in is then compared with that person's pre-collected information in the library;
Step S209: judge whether the two features (i.e., the above voice information and iris information) match at the same time; if both match, the login succeeds, otherwise the login is refused.
The user identification method is divided into two parts: a training method and a recognition method. In the training stage, features are extracted from the input voice and used to train the user model library. In the recognition stage, the input voice is matched against the models and a decision is made, yielding the result.
The cloud security login system applying the user authentication method according to an embodiment of the invention contains two first-level modules, namely the voice recognition method and the iris recognition method, which are described in detail below. The first-level module of voice recognition comprises four second-level modules: voice recognition, preprocessing of the user voice signal, speech feature parameter extraction, and the vector quantization and classification decision of the user confirmation system. The first-level module of iris recognition comprises identification of the iris inner edge and identification of the iris outer edge.
As shown in Fig. 3, the steps of the voice recognition method according to the embodiment of the invention are as follows:
Step S301: the user inputs voice through the system's interface with the outside world;
Step S303: the system collects the user's voice;
Step S305: the system preprocesses the collected voice, e.g. endpoint detection and windowing;
Step S307: the system processes the preprocessed voice signal, i.e. extracts four feature coefficients, namely: linear prediction cepstrum coefficients (Linear Prediction Cepstral Coding, LPCC), differential linear prediction cepstrum coefficients, the pitch period, and the differential pitch period;
Step S309: the system performs vector quantization (Vector Quantization, VQ) on the processed voice signal;
In the training stage, step S311 is executed: store the corresponding codebook codewords and the average quantization error σ, establishing a voice template for each user, and then execute step S313; in the recognition stage, step S313 is executed directly;
Step S313: retrieve the template of the user whose identity is to be confirmed (i.e. the user's pre-stored codebook) and quantize the input voice with it;
Step S315: obtain the average quantization error σ′ and compare it with a threshold to reach a decision.
Before the above first-level voice recognition module is carried out, the input voice must be preprocessed, in the training stage as well as in the recognition stage. As shown in Fig. 4, the preprocessing second-level module processes the user voice signal as follows:
Step S401: read the raw voice;
Step S403: apply operations such as framing, windowing, and endpoint detection to the voice that was read;
Step S405: call littleEnergy() to obtain the short-time energy of each frame of the voice information;
Step S407: call littleZero() to obtain the short-time zero-crossing rate of each frame;
Step S409: calculate the energy-frequency value of each frame;
Step S411: check whether a next frame exists in the queue; if so, return to step S409; if not, continue to step S413;
Step S413: call findBegining() to compute the starting index of the valid frames;
Step S415: call findEnd() to compute the ending index of the valid frames;
Step S417: obtain the valid frame data;
Step S419: call trameToData() to reconstruct the speech data from the valid frame data.
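A minimal Python sketch of this preprocessing stage follows, assuming a simple energy/zero-crossing endpoint detector; it plays the role of littleEnergy(), littleZero(), findBegining(), and findEnd() in Fig. 4, but the frame sizes and thresholds are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def preprocess(signal, fs, frame_ms=25, shift_ms=10):
    """Frame the signal, apply a Hamming window, and trim to the speech region
    found by a crude energy/zero-crossing endpoint detector (steps S403-S417).
    Frame sizes and thresholds here are assumptions, not from the patent."""
    n, step = int(frame_ms * fs / 1000), int(shift_ms * fs / 1000)
    frames = np.array([signal[i:i + n]
                       for i in range(0, len(signal) - n + 1, step)])
    frames = frames * np.hamming(n)                            # windowing
    energy = (frames ** 2).sum(axis=1)                         # short-time energy
    zcr = (np.diff(np.sign(frames), axis=1) != 0).sum(axis=1)  # zero-crossing rate
    active = (energy > 0.1 * energy.mean()) | (zcr > 1.5 * zcr.mean())
    idx = np.flatnonzero(active)
    if idx.size == 0:
        return frames[:0]                # no speech detected
    return frames[idx[0]:idx[-1] + 1]    # valid frames between start and end index
```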
Preprocessing the collected voice signal yields a higher signal-to-noise ratio and divides the signal into individual frames, which facilitates subsequent operations such as feature extraction. Note that the flow shown in Fig. 4 is only one exemplary processing method; other techniques well known in the art may also be used to preprocess the voice signal.
After preprocessing is complete, feature extraction is performed on the voice signal. Speech feature parameter extraction is a key link of the voice recognition method according to the embodiment of the invention. The feature parameters of the voice signal are used to characterize the voice signal sequence. In user recognition, parameters that characterize the speaker's vocal characteristics are extracted, and parameters that characterize the semantic content are discarded.
Feature extraction means extracting from the user's voice signal the essential features that express the user's individuality. These generally cover two aspects: innate differences in the vocal organs that generate the voice, and acquired differences in how the vocal organs act during pronunciation. The former shows up mainly in the frequency structure of the voice, chiefly the spectral-envelope information reflecting the resonance and anti-resonance characteristics of the vocal tract and the spectral-detail information reflecting the voice source (vocal cord vibration); representative feature parameters are the cepstrum and the pitch parameters (static features). The latter, differences in pronunciation habits, shows up mainly in how the frequency structure of the voice changes over time, chiefly the dynamic behaviour of the feature parameters; representative feature parameters are the linear regression coefficients of the cepstrum and of the pitch (dynamic features), i.e. the differential cepstrum and differential pitch parameters. In user recognition, spectral-envelope features, and cepstrum features in particular, are widely used, because cepstrum features give fairly good recognition performance and stable cepstrum coefficients are comparatively easy to extract. By contrast, pitch features exist only in the voiced parts of speech, and accurate, stable pitch features are difficult to extract.
In general, a person can perceive a speaker's personal characteristics from information such as timbre, pitch height, and loudness. It is therefore natural to expect that an effective combination of features can yield more stable recognition performance. For example, in an identification experiment that effectively combines cepstrum features with pitch features over the intervals where the latter are reliable, the voiced, unvoiced, and silent parts are first coded separately; in the voiced parts, the cepstrum, differential cepstrum, pitch, and differential pitch are used as recognition features, while in the other intervals the cepstrum and differential cepstrum are used; the probability-weighted scores of the two parts are then combined for the comparison, which gives fairly good recognition results.
From the above analysis, the chosen features should satisfy the following criteria:
(1) they effectively distinguish different users, yet remain relatively stable when the same user's voice varies;
(2) they are easy to extract from the voice signal;
(3) they are hard to imitate;
(4) they vary as little as possible with time and environment.
Taking all of the above factors into consideration, the user authentication method according to the embodiment of the invention characterizes the user's voice with a combination of four feature parameters: linear prediction cepstrum coefficients (Linear Prediction Cepstral Coding, LPCC), differential linear prediction cepstrum coefficients, the pitch period, and/or the differential pitch period.
Whether in the training stage or in the recognition stage of the above voice recognition module, the speech feature parameters of the voice signal input by the user must be extracted. As shown in Fig. 5, the speech feature parameters are extracted from the valid voice data obtained in Fig. 4 as follows:
Step S501: call getFramelLPC() on the valid voice data to obtain the linear prediction parameters of each frame;
Step S503: call getFramelLPCC() on the per-frame linear prediction parameters obtained in step S501 to obtain the linear prediction cepstrum coefficients of each frame;
Step S505: call getMarginData() on the per-frame linear prediction cepstrum coefficients obtained in step S503 to obtain the differential linear prediction cepstrum coefficients of each frame;
Step S507: call getFramelLPC() on the valid voice data to obtain the fundamental frequency of each frame;
Step S509: call getMarginData() on the per-frame fundamental frequency obtained in step S507 to obtain the differential fundamental frequency of each frame;
Step S511: call getFramelFeature() on the per-frame linear prediction cepstrum coefficients, differential linear prediction cepstrum coefficients, fundamental frequency, and differential fundamental frequency obtained in steps S503, S505, S507, and S509 to obtain the combined feature.
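As an illustration of steps S503 to S511, the sketch below converts per-frame LPC coefficients into LPCC with the standard recursion and forms the differential features by frame-to-frame differencing. getFramelLPC(), getMarginData(), and getFramelFeature() are the patent's own routine names; the functions here are hypothetical stand-ins, and LPC estimation itself (e.g. via the autocorrelation method and Levinson-Durbin) as well as pitch tracking are omitted.

```python
import numpy as np

def lpc_to_lpcc(a):
    """LPC coefficients a_1..a_p -> cepstrum c_1..c_p via the usual recursion
    c_n = a_n + sum_{k=1}^{n-1} (k/n) * c_k * a_{n-k}; sign conventions vary."""
    p = len(a)
    c = np.zeros(p)
    for n in range(1, p + 1):
        c[n - 1] = a[n - 1] + sum((k / n) * c[k - 1] * a[n - k - 1]
                                  for k in range(1, n))
    return c

def delta(x):
    """First-order frame-to-frame difference: the 'differential' features
    (differential LPCC from LPCC, differential pitch from pitch)."""
    x = np.asarray(x, dtype=float)
    return np.diff(x, axis=0, prepend=x[:1])

def combined_feature(lpcc, pitch):
    """Stack LPCC, delta-LPCC, pitch, and delta-pitch per frame (cf. step S511)."""
    lpcc = np.asarray(lpcc, dtype=float)
    pitch = np.asarray(pitch, dtype=float)
    return np.hstack([lpcc, delta(lpcc), pitch[:, None], delta(pitch)[:, None]])
```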
Then, as shown in Fig. 6, the vector quantization and classification decision of the user confirmation system process the combined feature obtained in Fig. 5 (i.e. the feature vectors extracted from the training voice) as follows:
Step S601: call getFirstCodeWord() to obtain the initial codebook of the training voice;
Step S603: apply the LBG algorithm to obtain the optimal codewords of the training voice, and calculate the average quantization error σ;
Step S605: store the two together (i.e. the optimal codewords and the average quantization error σ) in the database as the user model;
Step S607: retrieve the user's model (i.e. the training result) and use it to quantize the voice of the person to be verified, obtaining the average quantization error σ′;
Step S609: calculate E for the voice to be verified (namely E = |σ − σ′| / σ) and compare it with a threshold to judge whether the voice belongs to the claimed user: if E is less than the threshold, voice verification succeeds; if E is greater than the threshold, voice verification fails.
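The enrollment side (steps S601 to S605) can be read as standard LBG codebook training. The sketch below grows a codebook by splitting and refines it with nearest-neighbour/centroid iterations, then stores the resulting average quantization error σ together with the codebook as the user model; the codebook size, the perturbation ε, and the iteration count are illustrative assumptions, since the patent does not specify them. At verification time (steps S607 and S609), the stored pair is loaded and the decision reduces to the passes_voice_verification() sketch given earlier.

```python
import numpy as np

def _avg_error(features, codebook):
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def train_user_model(features, codebook_size=16, eps=0.01, iters=20):
    """LBG training: start from the global centroid (the 'initial codebook' of
    step S601), split each codeword by a +/-eps perturbation, and refine with
    k-means-style iterations until the target codebook size is reached."""
    codebook = features.mean(axis=0, keepdims=True)
    while codebook.shape[0] < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for j in range(codebook.shape[0]):
                members = features[nearest == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)   # recompute centroid
    sigma = _avg_error(features, codebook)   # average quantization error (S603)
    return codebook, sigma                   # stored together as the model (S605)
```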
In addition, the user authentication method according to the embodiment of the invention further comprises: collecting the user's iris information, comparing the collected iris information with pre-stored iris information corresponding to the user, and determining whether the user passes iris verification according to the result of the comparison. In practical applications, iris recognition according to the embodiment of the invention comprises identification of the iris inner edge and identification of the iris outer edge. As shown in Fig. 7, the specific flow of iris inner edge identification is as follows:
Step S701: from the input eye image, obtain the eye grey-level histogram, which effectively reflects how many times, or how frequently, each grey level occurs in the image;
Step S703: obtain the segmentation threshold; the key basis for choosing it is the grey-level histogram of the eye image, and an appropriate threshold allows the pupil region to be segmented fairly accurately;
Step S705: based on the threshold, locate the inner edge of the pupil using morphological filtering;
Step S707: obtain the pupil centre coordinates and radius from pixel statistics.
In practice, the eye grey-level histogram obtained is as shown in Fig. 8; it is then smoothed with a Gaussian filter, giving the histogram shown in Fig. 9.
In the smoothed histogram of Fig. 9, the pupil region has the lowest grey levels, so it does not form an especially pronounced crest: it concentrates mainly at the first, less prominent peak p1, while the eyelash and eyelid areas concentrate mainly at the second, clearly visible peak p2. The trough p3 between p1 and p2 can therefore be used as the threshold for segmenting the pupil; p3 corresponds to the sparsely populated edge grey levels lying between p1 and p2.
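A small sketch of this threshold selection follows, assuming SciPy's one-dimensional Gaussian filter for the smoothing and a naive peak/valley search; the smoothing width and the peak-picking rule are assumptions, since the patent does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def pupil_threshold(gray_image, sigma=3.0):
    """Pick the valley p3 between the pupil peak p1 and the eyelash/eyelid
    peak p2 of the smoothed grey-level histogram (cf. Figs. 8 and 9)."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    smooth = gaussian_filter1d(hist.astype(float), sigma)
    # naive local-maximum search; a robust implementation would need more care
    peaks = [i for i in range(1, 255)
             if smooth[i - 1] < smooth[i] >= smooth[i + 1]]
    p1, p2 = sorted(sorted(peaks, key=lambda i: smooth[i], reverse=True)[:2])
    return p1 + int(np.argmin(smooth[p1:p2 + 1]))   # grey level of the valley p3
```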
Step S705 includes locating the pupil region. After automatic threshold segmentation, the resulting binary image may contain, besides the obvious pupil region, other noise regions, such as dark areas produced by the eyelids and eyelashes or by the illumination. Morphological filtering can reject the noise and interference in the image, fill in and repair boundary points, and fill up "holes". The morphological filtering formula adopted by the iris inner edge identification scheme of the present invention is:
f′ = erode^(N2−N1)( dilate^(N2)( erode^(N1)( ~f ) ) )
where f′ denotes the pupil region, f denotes the binary eye image, erode() denotes the erosion operation, dilate() denotes the dilation operation, and the superscripts give the number of times each operation is applied.
That is, the binary eye image is inverted, eroded N1 times, dilated N2 times, and finally eroded another N2 − N1 times, which yields the pupil region. The pupil centre coordinates and radius are then obtained from pixel statistics, giving the inner edge information of the iris.
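A sketch of this morphological localization is given below, using SciPy's binary morphology; the iteration counts N1 and N2 are left as parameters because the patent does not give values, and deriving the radius from the region area is an assumption about the "pixel statistics" step.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def locate_pupil(binary_eye, n1=2, n2=4):
    """Apply f' = erode^(N2-N1)( dilate^(N2)( erode^(N1)( ~f ) ) ), then read
    the pupil centre and radius off the surviving pixels.

    binary_eye: boolean image f, True for bright pixels, so the dark pupil
    becomes foreground after inversion."""
    g = binary_erosion(~binary_eye, iterations=n1)    # invert, then N1 erosions
    g = binary_dilation(g, iterations=n2)             # N2 dilations fill holes
    pupil = binary_erosion(g, iterations=n2 - n1)     # N2-N1 erosions restore scale
    ys, xs = np.nonzero(pupil)
    centre = (xs.mean(), ys.mean())                   # centroid of pupil pixels
    radius = float(np.sqrt(pupil.sum() / np.pi))      # radius from region area
    return centre, radius
```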
In addition, in the iris recognition method of the present invention, the outer edge of the iris can be identified with existing techniques and is therefore not described in detail here.
The iris information derived from the iris edges is compared with the pre-stored information; if the iris information matches, the iris verification succeeds. The user can successfully log in to the system only when both the iris verification and the voice verification pass.
According to an embodiment of the present invention, a user authentication device is also provided.
The user authentication device of the embodiment of the invention comprises:
an acquisition module 101, configured to obtain a feature vector from speech feature parameters extracted from the user's voice signal, where the speech feature parameters include linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
a quantization module 102, configured to quantize the feature vector with a pre-stored codebook corresponding to the user, obtaining an average quantization error;
a determination module 103, configured to determine whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
In addition, the user authentication device further comprises:
an identity determination module (not shown), configured to determine the user's identity from personal information entered by the user before the speech feature parameters are extracted from the user's voice signal.
When quantizing the feature vector with the pre-stored codebook, the quantization module 102 is further configured to retrieve the codebook according to the determined identity of the user.
Further, when quantizing the feature vector with the pre-stored codebook, the quantization module 102 is configured to perform vector quantization of the feature vector with the codebook.
In summary, by means of the above technical solution, the present invention processes the speech feature parameters extracted from the user's voice signal, compares the processed information with pre-stored information to judge whether the user passes voice verification, and additionally applies iris verification to the user; a person's identity can thus be verified quickly and reliably, improving system security at the login step.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A user authentication method, characterized by comprising:
obtaining a feature vector from speech feature parameters extracted from a user's voice signal, wherein the speech feature parameters comprise linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
quantizing the feature vector with a pre-stored codebook corresponding to the user, to obtain an average quantization error;
determining whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
2. The user authentication method according to claim 1, characterized in that the differential linear prediction cepstrum coefficients are calculated from the linear prediction cepstrum coefficients, and the differential pitch period is calculated from the pitch period.
3. The user authentication method according to claim 1, characterized in that, before the speech feature parameters are extracted from the user's voice signal, the user's identity is determined from personal information entered by the user.
4. The user authentication method according to claim 3, characterized in that, when quantizing the feature vector with the pre-stored codebook corresponding to the user, the codebook is retrieved according to the determined identity of the user.
5. The user authentication method according to claim 1, characterized in that quantizing the feature vector with the pre-stored codebook corresponding to the user comprises:
performing vector quantization of the feature vector with the codebook.
6. The user authentication method according to claim 1, further comprising:
collecting the user's iris information, and comparing the collected iris information with pre-stored iris information corresponding to the user;
determining whether the user passes iris verification according to the result of the iris comparison.
7. A user authentication device, characterized by comprising:
an acquisition module, configured to obtain a feature vector from speech feature parameters extracted from a user's voice signal, wherein the speech feature parameters comprise linear prediction cepstrum coefficients, pitch periods, differential linear prediction cepstrum coefficients, and/or differential pitch periods;
a quantization module, configured to quantize the feature vector with a pre-stored codebook corresponding to the user, obtaining an average quantization error;
a determination module, configured to determine whether the user passes voice verification according to the difference between the obtained average quantization error and a pre-stored average quantization error corresponding to the user.
8. The user authentication device according to claim 7, further comprising:
an identity determination module, configured to determine the user's identity from personal information entered by the user before the speech feature parameters are extracted from the user's voice signal.
9. The user authentication device according to claim 8, characterized in that, when quantizing the feature vector with the pre-stored codebook corresponding to the user, the quantization module is further configured to retrieve the codebook according to the determined identity of the user.
10. The user authentication device according to claim 7, characterized in that, when quantizing the feature vector with the pre-stored codebook corresponding to the user, the quantization module is configured to perform vector quantization of the feature vector with the codebook.
CN2013101442512A 2013-04-23 2013-04-23 User verification method and device Pending CN103207961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101442512A CN103207961A (en) 2013-04-23 2013-04-23 User verification method and device

Publications (1)

Publication Number Publication Date
CN103207961A true CN103207961A (en) 2013-07-17

Family

ID=48755178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101442512A Pending CN103207961A (en) 2013-04-23 2013-04-23 User verification method and device

Country Status (1)

Country Link
CN (1) CN103207961A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1403953A (en) * 2002-09-06 2003-03-19 浙江大学 Palm acoustic-print verifying system
CN1758678A (en) * 2005-10-26 2006-04-12 熊猫电子集团有限公司 Voice recognition and voice tag recoding and regulating method of mobile information terminal
CN102231277A (en) * 2011-06-29 2011-11-02 电子科技大学 Method for protecting mobile terminal privacy based on voiceprint recognition
CN102800316A (en) * 2012-08-30 2012-11-28 重庆大学 Optimal codebook design method for voiceprint recognition system based on nerve network

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456303A (en) * 2013-08-08 2013-12-18 四川长虹电器股份有限公司 Method for controlling voice and intelligent air-conditionier system
CN103871411A (en) * 2014-04-03 2014-06-18 北京邮电大学 Text-independent speaker identifying device based on line spectrum frequency difference value
CN103985384A (en) * 2014-05-28 2014-08-13 北京邮电大学 Text-independent speaker identification device based on random projection histogram model
CN103985384B (en) * 2014-05-28 2015-04-15 北京邮电大学 Text-independent speaker identification device based on random projection histogram model
WO2016026325A1 (en) * 2014-08-20 2016-02-25 中兴通讯股份有限公司 Authentication method, terminal and computer storage medium based on voiceprint characteristic
CN107430854B (en) * 2015-04-13 2021-02-09 Bsh家用电器有限公司 Household appliance and method for operating a household appliance
CN107430854A (en) * 2015-04-13 2017-12-01 Bsh家用电器有限公司 Home appliances and the method for operating home appliances
CN107945787A (en) * 2017-11-21 2018-04-20 上海电机学院 A kind of acoustic control login management system and method based on virtual instrument technique
CN110245626A (en) * 2019-06-19 2019-09-17 北京万里红科技股份有限公司 A method of accurately detecting eyelash image in iris image
CN110245626B (en) * 2019-06-19 2021-06-22 北京万里红科技股份有限公司 Method for accurately detecting eyelash image in iris image
CN110931022A (en) * 2019-11-19 2020-03-27 天津大学 Voiceprint identification method based on high-frequency and low-frequency dynamic and static characteristics
CN110931022B (en) * 2019-11-19 2023-09-15 天津大学 Voiceprint recognition method based on high-low frequency dynamic and static characteristics
CN111091836A (en) * 2019-12-25 2020-05-01 武汉九元之泰电子科技有限公司 Intelligent voiceprint recognition method based on big data
CN112822186A (en) * 2020-12-31 2021-05-18 国网江苏省电力有限公司信息通信分公司 Power system IP dispatching station notification broadcasting method and system based on voice authentication

Similar Documents

Publication Publication Date Title
CN103207961A (en) User verification method and device
CN106847292B (en) Method for recognizing sound-groove and device
CN102509547B Method and system for voiceprint recognition based on vector quantization
CN102142253B (en) Voice emotion identification equipment and method
CN107945790B (en) Emotion recognition method and emotion recognition system
CN110096570A (en) A kind of intension recognizing method and device applied to intelligent customer service robot
CN110473566A (en) Audio separation method, device, electronic equipment and computer readable storage medium
EP1679694A1 (en) Improving error prediction in spoken dialog systems
CN107731233A (en) A kind of method for recognizing sound-groove based on RNN
CN105938716A (en) Multi-precision-fitting-based automatic detection method for copied sample voice
CN103971690A (en) Voiceprint recognition method and device
CN102324232A (en) Method for recognizing sound-groove and system based on gauss hybrid models
CN110750774B (en) Identity recognition method and device
CN108648760B (en) Real-time voiceprint identification system and method
CN106504768A (en) Phone testing audio frequency classification method and device based on artificial intelligence
CN107093422B (en) Voice recognition method and voice recognition system
CN109741734B (en) Voice evaluation method and device and readable medium
CN109446948A (en) A kind of face and voice multi-biological characteristic fusion authentication method based on Android platform
CN106709804A (en) Interactive wealth planning consulting robot system
CN108899033A (en) A kind of method and device of determining speaker characteristic
Kumar et al. Significance of GMM-UBM based modelling for Indian language identification
CN104464738B (en) A kind of method for recognizing sound-groove towards Intelligent mobile equipment
CN106297769B (en) A kind of distinctive feature extracting method applied to languages identification
Zhang et al. Voice biometric identity authentication system based on android smart phone
CN104299611A (en) Chinese tone recognition method based on time frequency crest line-Hough transformation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20130717