CN104598796A - Method and system for identifying identity - Google Patents


Info

Publication number
CN104598796A
Authority
CN
China
Prior art keywords
user
biological characteristic
text
voice
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510052189.3A
Other languages
Chinese (zh)
Other versions
CN104598796B (en)
Inventor
赵敏志
雷琴辉
赵彬
郭涛
胡安洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201510052189.3A priority Critical patent/CN104598796B/en
Publication of CN104598796A publication Critical patent/CN104598796A/en
Application granted granted Critical
Publication of CN104598796B publication Critical patent/CN104598796B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 — User authentication
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a method and system for identity verification. The method comprises the following steps: when a user registers, generating a biometric model corresponding to the user's registered identity identifier; when the user's identity is to be verified, generating and displaying a verification text, and recording the voice and image of the user reading the verification text aloud; recognizing the recorded voice and image to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion; if the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the verification text, acquiring the user's biometric features; matching the acquired biometric features against the biometric models in a user registration database to obtain a matching score; and determining whether the user's identity is correct according to the matching score. The method can effectively improve the security of identity verification.

Description

Method and system for identity verification
Technical field
The present invention relates to the field of identity verification technologies, and in particular to an identity verification method and system.
Background technology
With the spread of mobile Internet technology, mobile payment has become an important part of people's work, study, entertainment, and daily life. At the same time, the security problems it brings have become increasingly prominent: ensuring the security of user information and confirming user identity conveniently and effectively have become key steps of mobile payment. Most mainstream payment platforms currently rely on traditional passwords, or on passwords combined with dynamic tokens, to confirm user identity. To some extent these methods protect user information.
However, traditional methods do not truly confirm the identity of the user; they only confirm the password content, so once a password is stolen, heavy economic losses may follow. Moreover, users now need passwords in many scenarios: using a different password for each service makes the passwords easy to forget, while using a single password makes it easy to steal through other channels, for example by credential-stuffing attacks. Traditional passwords therefore increasingly show their shortcomings in security, validity, and convenience in today's fast-developing mobile Internet era.
For these reasons, biometric authentication has gradually become a research focus. Biometric authentication, also known as biometric identification, uses a computer to verify personal identity from the physiological or behavioral characteristics inherent to the human body. Compared with other authentication methods, biometric authentication has unique advantages. The first is security. The second is convenience: compared with keys, cards, tokens, or PINs, biometrics are easier to use. The third is that biometric features cannot be forgotten, and there is no need to worry about passwords or identification numbers being stolen. Finally, biometric features have good stability and persistence.
At the present stage, biometric authentication usually relies on a single biometric feature; even when multiple biometric features are adopted, they are simply verified one after another. An attacker can therefore still break them one by one, for example attacking a voiceprint system with a recording or a face system with a video, so the system is easily compromised and its security is greatly reduced.
Summary of the invention
Embodiments of the present invention provide an identity verification method and system to improve the security of identity authentication.
To this end, the embodiments of the present invention provide the following technical solutions:
An identity verification method, comprising:
when a user registers, generating a biometric model corresponding to the user's registered identity identifier, and saving the biometric model in a user registration database;
when verifying the user's identity, obtaining the user identity identifier;
generating and displaying a verification text, and recording the voice and image of the user reading the verification text aloud;
recognizing the recorded voice and image of the user reading the verification text to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion;
if the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the verification text, acquiring the user's biometric features;
matching, based on the user identity identifier, the acquired biometric features against the biometric models in the user registration database to obtain a matching score;
determining whether the user's identity is correct according to the matching score.
Preferably, the method further comprises:
checking whether the user identity identifier exists in the user registration database;
if so, performing the step of generating and displaying the verification text; otherwise, determining that the user identity is incorrect.
Optionally, the biometric model is a voiceprint model;
acquiring the user's biometric features comprises: extracting the user's voiceprint features from the recorded voice of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises: matching the voiceprint features against the voiceprint model corresponding to the user identity identifier in the user registration database to obtain the matching score;
or
the biometric model is a face model;
acquiring the user's biometric features comprises: extracting the user's face features from the recorded image of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises: matching the face features against the face model corresponding to the user identity identifier in the user registration database to obtain the matching score;
or
the biometric model comprises a voiceprint model and a face model;
acquiring the user's biometric features comprises: extracting the user's voiceprint features from the recorded voice of the user reading the verification text, and extracting the user's face features from the recorded image of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises:
matching the acquired voiceprint features against the voiceprint model corresponding to the user identity identifier in the user registration database to obtain a first score;
matching the acquired face features against the face model corresponding to the user identity identifier in the user registration database to obtain a second score;
fusing the first score and the second score to obtain the matching score.
Correspondingly, determining whether the user's identity is correct according to the matching score comprises:
if the matching score is greater than a set threshold, determining that the user identity is correct; otherwise, determining that the user identity is incorrect.
Optionally, the biometric model is a voiceprint model;
acquiring the user's biometric features comprises: extracting the user's voiceprint features from the recorded voice of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises: matching the voiceprint features against each voiceprint model in the user registration database to obtain multiple candidate matching scores;
or
the biometric model is a face model;
acquiring the user's biometric features comprises: extracting the user's face features from the recorded image of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises: matching the face features against each face model in the user registration database to obtain multiple candidate matching scores;
or
the biometric model comprises a voiceprint model and a face model;
acquiring the user's biometric features comprises:
extracting the user's voiceprint features from the recorded voice of the user reading the verification text, and extracting the user's face features from the recorded image of the user reading the verification text;
matching the acquired biometric features against the biometric models in the user registration database to obtain a matching score comprises:
matching, with fusion, the acquired voiceprint features and face features against the voiceprint models and face models in the user registration database, respectively, to obtain multiple candidate matching scores.
Correspondingly, determining whether the user's identity is correct according to the matching score comprises:
judging whether the registered identity identifier corresponding to the biometric model with the highest matching score is identical to the obtained user identity identifier;
if so, determining that the user identity is correct; otherwise, determining that the user identity is incorrect.
Preferably, the method further comprises:
after receiving a user registration request, generating and displaying a registration text, and recording the voice and image of the user reading the registration text aloud;
recognizing the recorded voice and image of the user reading the registration text to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion;
if the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the registration text, allowing registration; otherwise, refusing registration;
after registration is allowed, performing the step of generating the biometric model corresponding to the user's registered identity identifier.
Optionally, the verification text and/or the registration text is a fixed text or a randomly generated dynamic text.
An identity verification system, comprising:
a registration module, configured to generate, when a user registers, a biometric model corresponding to the user's registered identity identifier and save the biometric model in a user registration database;
an identity identifier acquisition module, configured to obtain the user identity identifier when user identity verification is performed;
a text processing module, configured to generate and display a verification text;
a recording module, configured to record the voice and image of the user reading the verification text aloud;
a recognition module, configured to recognize the recorded voice and image of the user reading the verification text, obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion, and judge whether the valid start time of the lip motion matches the valid start time of the voice and whether both the lip-reading text content and the voice text content match the verification text;
a biometric extraction module, configured to acquire the user's biometric features when the valid start time of the lip motion matches the valid start time of the voice and both the lip-reading text content and the voice text content match the verification text;
a matching module, configured to match, based on the user identity identifier, the biometric features acquired by the biometric extraction module against the biometric models in the user registration database to obtain a matching score;
a judging module, configured to determine whether the user's identity is correct according to the matching score.
Preferably, the system further comprises:
a checking module, configured to check whether the user identity identifier exists in the user registration database; if so, to trigger the text processing module to generate and display the verification text; otherwise, to determine that the user identity is incorrect.
Optionally, the biometric model is a voiceprint model; the biometric extraction module is specifically configured to extract the user's voiceprint features from the recorded voice of the user reading the verification text; the matching module matches the voiceprint features extracted by the biometric extraction module against the voiceprint model corresponding to the user identity identifier obtained by the identity identifier acquisition module to obtain the matching score;
or
the biometric model is a face model; the biometric extraction module is specifically configured to extract the user's face features from the recorded image of the user reading the verification text; the matching module matches the face features extracted by the biometric extraction module against the face model corresponding to the user identity identifier obtained by the identity identifier acquisition module to obtain the matching score;
or
the biometric model comprises a voiceprint model and a face model;
the biometric extraction module comprises:
a first extraction unit, configured to extract the user's voiceprint features from the recorded voice of the user reading the verification text;
a second extraction unit, configured to extract the user's face features from the recorded image of the user reading the verification text;
the matching module comprises:
a first matching unit, configured to match the voiceprint features extracted by the first extraction unit against the voiceprint model corresponding to the identity identifier obtained by the identity identifier acquisition module to obtain a first score;
a second matching unit, configured to match the face features extracted by the second extraction unit against the face model corresponding to the identity identifier obtained by the identity identifier acquisition module to obtain a second score;
a fusion unit, configured to fuse the first score and the second score to obtain the matching score.
Correspondingly, the judging module is specifically configured to determine that the user identity is correct when the matching score is greater than a set threshold, and to determine that the user identity is incorrect otherwise.
Optionally, the biometric model is a voiceprint model;
the biometric extraction module is specifically configured to extract the user's voiceprint features from the recorded voice of the user reading the verification text;
the matching module is specifically configured to match the voiceprint features against each voiceprint model in the user registration database to obtain multiple candidate matching scores;
or
the biometric model is a face model;
the biometric extraction module is specifically configured to extract the user's face features from the recorded image of the user reading the verification text;
the matching module is specifically configured to match the face features against each face model in the user registration database to obtain multiple candidate matching scores;
or
the biometric model comprises a voiceprint model and a face model;
the biometric extraction module comprises:
a first extraction unit, configured to extract the user's voiceprint features from the recorded voice of the user reading the verification text;
a second extraction unit, configured to extract the user's face features from the recorded image of the user reading the verification text;
the matching module is specifically configured to match, with fusion, the voiceprint features extracted by the first extraction unit and the face features extracted by the second extraction unit against the voiceprint models and face models in the user registration database, respectively, to obtain multiple candidate matching scores.
Correspondingly, the judging module is specifically configured to judge whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained identity identifier; if so, to determine that the user identity is correct; otherwise, to determine that the user identity is incorrect.
Preferably, the text processing module is further configured to generate and display a registration text after the registration module receives a user registration request;
the recording module is further configured to record the voice and image of the user reading the registration text aloud;
the recognition module is further configured to recognize the recorded voice and image of the user reading the registration text, obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion, judge whether the valid start time of the lip motion matches the valid start time of the voice and whether both the lip-reading text content and the voice text content match the registration text, determine that registration is allowed when all of them match, and refuse registration otherwise;
the registration module is specifically configured to generate, after the recognition module determines that registration is allowed, the biometric model corresponding to the user's registered identity identifier and to save the biometric model in the user registration database.
In the identity verification method and system provided by the embodiments of the present invention, when user identity verification is performed, a verification text is generated and displayed, the voice and image of the user reading the verification text aloud are recorded, and the voice and image are recognized to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion. When the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the verification text, the user's biometric features are further used to verify the legitimacy of the user's identity. Because dynamic biometric information and static biometric features are combined in the verification process, the risk of user information being counterfeited or stolen is effectively avoided, and the security of identity authentication is greatly improved. Moreover, the user's biometric features are inherent and need not be memorized, which improves the convenience and validity of user identity verification.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of the present invention, and persons of ordinary skill in the art can derive other drawings from them.
Fig. 1 is a flowchart of the identity verification method according to an embodiment of the present invention;
Fig. 2A is a flowchart of feature matching and identity verification in the method of the present invention;
Fig. 2B is another flowchart of feature matching and identity verification in the method of the present invention;
Fig. 2C is another flowchart of feature matching and identity verification in the method of the present invention;
Fig. 2D is another flowchart of feature matching and identity verification in the method of the present invention;
Fig. 3 is a flowchart of user registration according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the identity verification system according to an embodiment of the present invention;
Fig. 5A is a schematic diagram of a specific structure of the matching module in an embodiment of the present invention;
Fig. 5B is another schematic diagram of a specific structure of the matching module in an embodiment of the present invention;
Fig. 5C is another schematic diagram of a specific structure of the matching module in an embodiment of the present invention.
Detailed description of the embodiments
To enable persons skilled in the art to better understand the solutions of the embodiments of the present invention, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the identity verification method provided by an embodiment of the present invention comprises the following steps:
Step 101: when a user registers, generate a biometric model corresponding to the user's registered identity identifier, and save the biometric model in a user registration database.
The biometric model may be a voiceprint model and/or a face model, or of course any other kind of biometric model, such as a fingerprint or iris model; the embodiments of the present invention are not limited in this respect. The user registration process is described in detail below.
Step 102: when user identity verification is performed, obtain the user identity identifier.
Step 103: generate and display a verification text, and record the voice and image of the user reading the verification text aloud.
It should be noted that the verification text may be a fixed text or a randomly generated dynamic text. While the verification text is displayed, a prompt asking the user to read it aloud may also be shown, for example "Please read the displayed text aloud". While the user reads the verification text, audio and video recording are started to capture the user's voice and image.
Step 104: recognize the recorded voice and image of the user reading the verification text, and obtain the valid start time and text content of the voice as well as the valid start time and lip-reading text content of the lip motion.
The valid start time of the voice can be obtained with techniques such as VAD (Voice Activity Detection), and the voice text content can be obtained with existing speech recognition technology.
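For illustration only, and not as part of the original disclosure, the following is a minimal sketch of how the valid start time of the voice might be estimated with a simple energy-based voice activity detector; the frame length, energy ratio, and function names are assumptions made for this example.

```python
import numpy as np

def voice_start_time(samples, sample_rate, frame_ms=20, energy_ratio=4.0):
    """Estimate the valid start time of speech with a crude energy-based VAD.

    Returns the time in seconds of the first frame whose energy exceeds
    `energy_ratio` times the energy of the first (assumed silent) frame.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [float(np.mean(np.asarray(f, dtype=np.float64) ** 2)) for f in frames]
    noise_floor = energies[0] + 1e-9          # assume the recording starts in silence
    for idx, energy in enumerate(energies):
        if energy > energy_ratio * noise_floor:
            return idx * frame_ms / 1000.0    # valid start time of the voice
    return None                                # no speech detected
```

A production system would use a proper VAD; this sketch only illustrates the notion of a valid start time referred to in step 104.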
Lip-motion localization and feature extraction are prerequisites of lip-reading recognition, and existing recognition methods can be adopted, for example shape-based methods such as deformable templates and the Snake method, image-based methods such as principal component analysis, and motion-based methods such as optical flow analysis.
Step 105: judge whether the valid start time of the lip motion matches the valid start time of the voice, and whether both the lip-reading text content and the voice text content match the verification text; if so, perform step 106; otherwise, perform step 109.
That the valid start time of the lip motion matches the valid start time of the voice means that the two start times differ by no more than an allowed range, for example 1 s.
That both the lip-reading text content and the voice text content match the verification text means that the matching degree between the lip-reading text content and the verification text and the matching degree between the voice text content and the verification text both meet certain requirements. For example, separate thresholds can be set for the two matching degrees; when a computed matching degree is greater than the corresponding threshold, the content is determined to match, and otherwise it is determined not to match.
Judging whether the valid start time of the lip motion matches the valid start time of the voice ensures that the user to be verified is operating live, and defends against attacks using pre-recorded audio or video.
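For illustration only, the following is a minimal sketch of the consistency check of step 105 under stated assumptions: the 1 s offset comes from the example above, while the text-similarity measure, the thresholds, and the function names are hypothetical.

```python
from difflib import SequenceMatcher

def matching_degree(recognized: str, expected: str) -> float:
    """Similarity between a recognized text and the verification text, in [0, 1]."""
    return SequenceMatcher(None, recognized, expected).ratio()

def liveness_check(voice_start, lip_start, voice_text, lip_text, verification_text,
                   max_offset=1.0, voice_threshold=0.8, lip_threshold=0.6):
    """Step 105 as a sketch: the lip-motion start must lie within `max_offset`
    seconds of the voice start, and both recognized contents must match the
    displayed verification text above their (assumed) thresholds."""
    if voice_start is None or lip_start is None:
        return False
    if abs(voice_start - lip_start) > max_offset:
        return False
    return (matching_degree(voice_text, verification_text) >= voice_threshold and
            matching_degree(lip_text, verification_text) >= lip_threshold)
```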
Step 106: acquire the user's biometric features.
The specific category of the biometric features must be the same as the biometric features used when the user registered, for example voiceprint, face, fingerprint, or iris; it may be any one of these biometric features or any combination of them.
Step 107: based on the user identity identifier, match the acquired biometric features against the biometric models in the user registration database to obtain a matching score.
The specific matching process depends on the kind of biometric feature extracted, and existing matching approaches can be adopted; the embodiments of the present invention are not limited in this respect.
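As one illustration of such an approach, and not as a statement of the matcher actually used, the sketch below scores an extracted feature vector against a stored model vector with cosine similarity; representing features and models as fixed-length vectors is an assumption of this example.

```python
import numpy as np

def match_score(feature, model_vector):
    """Cosine similarity between an extracted feature and a stored model,
    mapped to the range [0, 1]; an illustrative stand-in for a real matcher."""
    f = np.asarray(feature, dtype=np.float64)
    m = np.asarray(model_vector, dtype=np.float64)
    cosine = float(np.dot(f, m) / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12))
    return 0.5 * (cosine + 1.0)
```

The later sketches in this description reuse this hypothetical `match_score` helper.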
Step 108: determine whether the user's identity is correct according to the matching score.
Step 109: determine that the user identity is incorrect.
Identity verification according to the solution of the present invention requires the user's identity to be registered first. When the user registers, the user's registered identity identifier is obtained, and a biometric model corresponding to that registered identity identifier is generated from the user's biometric features. The registered identity identifier may consist of digits, letters, special symbols, or any combination thereof.
In another embodiment of the method of the present invention, when user identity verification is performed, it may also first be checked whether the identity identifier exists in the user registration database.
This check of the identity identifier may be carried out before step 103: that is, if the registered identity identifiers include the obtained user identity identifier, step 103 is performed; otherwise, the user identity is determined to be incorrect.
The check of the identity identifier may also be carried out between step 105 and step 106, that is, after it has been judged that the valid start time of the lip motion matches the valid start time of the voice and that both the lip-reading text content and the voice text content match the verification text; if the identity identifier exists in the user registration database, the step of acquiring the user's biometric features is performed; otherwise, the user identity is determined to be incorrect.
It should be noted that, during matching, the acquired biometric features may be matched only against the biometric model corresponding to the obtained user identity identifier to obtain a matching score, after which it is judged whether the matching score is greater than a set threshold; if so, the user identity is determined to be correct, and otherwise incorrect. Alternatively, the acquired biometric features may be matched against all corresponding biometric models in the user registration database to obtain multiple candidate matching scores, after which it is judged whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained user identity identifier; if so, the user identity is determined to be correct, and otherwise incorrect.
As mentioned above, the biometric model may be a voiceprint model and/or a face model, or of course any other kind of biometric model, such as a fingerprint or iris model.
If a single biometric model is adopted, the extracted biometric features can be matched against the corresponding biometric model to obtain the matching score. For example, if the biometric model is a voiceprint model, the user's voiceprint features can be extracted in step 106 from the recorded voice of the user reading the verification text and then matched; if the biometric model is a face model, the user's face features can be extracted in step 106 from the recorded image of the user reading the verification text.
If multiple biometric models are adopted, the different biometric features acquired in step 106 can be matched against their corresponding biometric models respectively. Taking the combined use of voiceprint features and face features as an example, the two different matching approaches mentioned above are described in detail below.
As shown in Fig. 2A, one flow of feature matching and identity verification in the identity verification method of the embodiment of the present invention comprises the following steps:
Step A1: match the acquired voiceprint features against the voiceprint model corresponding to the user identity identifier to obtain a first score;
Step A2: match the acquired face features against the face model corresponding to the user identity identifier to obtain a second score;
Step A3: fuse the first score and the second score to obtain the matching score.
The specific fusion method may be direct summation, weighted summation, averaging, or another approach; the embodiments of the present invention are not limited in this respect.
Step A4: judge whether the matching score is greater than a set threshold; if so, perform step A5; otherwise, perform step A6;
Step A5: determine that the user identity is correct;
Step A6: determine that the user identity is incorrect.
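For illustration only, the following is a minimal sketch of the flow of Fig. 2A under stated assumptions: weighted summation is just one of the fusion options mentioned above, the weights and threshold are assumed values, and the `match_score` helper and the dictionary layout of a registered record are hypothetical.

```python
def verify_identity_fused(voiceprint, face, registered_record,
                          weights=(0.5, 0.5), threshold=0.7):
    """Fig. 2A as a sketch: match only against the claimed user's models,
    fuse the two scores by weighted sum, and compare to a set threshold."""
    first_score = match_score(voiceprint, registered_record["voiceprint_model"])   # step A1
    second_score = match_score(face, registered_record["face_model"])              # step A2
    fused = weights[0] * first_score + weights[1] * second_score                   # step A3
    return fused > threshold                                                       # steps A4-A6
```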
As shown in Fig. 2B, another flow of feature matching and identity verification in the identity verification method of the embodiment of the present invention comprises the following steps:
Step B1: match the acquired voiceprint features against the voiceprint model corresponding to the user identity identifier to obtain a first score;
Step B2: judge whether the first score is greater than a voiceprint matching threshold; if so, perform step B3; otherwise, perform step B6;
Step B3: match the acquired face features against the face model corresponding to the user identity identifier to obtain a second score;
Step B4: judge whether the second score is greater than a face matching threshold; if so, perform step B5; otherwise, perform step B6;
Step B5: determine that the user identity is correct;
Step B6: determine that the user identity is incorrect.
It should be noted that, in practical applications, the face features may also be matched first and the voiceprint features afterwards; of course, the voiceprint features and the face features may also be matched at the same time, and it is then judged whether the two matching results each meet their corresponding conditions: if both are met, the user identity is determined to be correct, and otherwise incorrect.
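A corresponding sketch of the cascaded flow of Fig. 2B is given below, again for illustration only; the thresholds are assumed values and the data layout is the same hypothetical one used above.

```python
def verify_identity_cascaded(voiceprint, face, registered_record,
                             voiceprint_threshold=0.7, face_threshold=0.7):
    """Fig. 2B as a sketch: check the voiceprint score first and match the
    face only if the voiceprint check passes."""
    first_score = match_score(voiceprint, registered_record["voiceprint_model"])   # step B1
    if first_score <= voiceprint_threshold:                                        # step B2
        return False                                                               # step B6
    second_score = match_score(face, registered_record["face_model"])              # step B3
    return second_score > face_threshold                                           # steps B4-B6
```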
As shown in Fig. 2C, another flow of feature matching and identity verification in the identity verification method of the embodiment of the present invention comprises the following steps:
Step C1: match the acquired voiceprint features against each voiceprint model in the user registration database to obtain multiple first matching scores;
Step C2: determine the face models to be matched based on the first matching scores that meet a set condition;
The first matching scores that meet the set condition are: the first matching scores that are greater than a first threshold; or the largest first matching scores up to a set number;
Step C3: match the acquired face features against the face models to be matched to obtain multiple second matching scores;
Step C4: fuse each second matching score with its corresponding first matching score to obtain multiple candidate matching scores;
Step C5: judge whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained identity identifier; if so, perform step C6; otherwise, perform step C7;
Step C6: determine that the user identity is correct;
Step C7: determine that the user identity is incorrect.
In practical applications, the face features may likewise be matched first and the voiceprint features afterwards; the embodiments of the present invention are not limited in this respect.
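For illustration only, the following sketch follows the candidate-based flow of Fig. 2C under stated assumptions: the first threshold, the additive fusion of the two scores, and the layout of the user registration database as a dictionary keyed by registered user identifier are all assumptions of this example.

```python
def identify_user_candidates(voiceprint, face, claimed_id, database, first_threshold=0.6):
    """Fig. 2C as a sketch: match the voiceprint against every registered
    voiceprint model, keep the candidates above the first threshold, match
    the face only against those candidates, fuse the scores, and accept the
    identity only if the best-scoring candidate is the claimed user."""
    first_scores = {uid: match_score(voiceprint, record["voiceprint_model"])
                    for uid, record in database.items()}                             # step C1
    candidates = {uid: s for uid, s in first_scores.items() if s > first_threshold}  # step C2
    if not candidates:
        return False
    fused = {uid: s + match_score(face, database[uid]["face_model"])
             for uid, s in candidates.items()}                                       # steps C3-C4
    best_id = max(fused, key=fused.get)                                              # step C5
    return best_id == claimed_id                                                     # steps C6-C7
```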
As shown in Fig. 2D, another flow of feature matching and identity verification in the identity verification method of the embodiment of the present invention comprises the following steps:
Step D1: match the acquired voiceprint features against each voiceprint model in the user registration database to obtain multiple first matching scores, and select the first matching scores that meet a first condition as the first scores to be fused;
Step D2: match the acquired face features against each face model in the user registration database to obtain multiple second matching scores, and select the second matching scores that meet a second condition as the second scores to be fused;
Step D3: take the union of the registered user identifiers corresponding to the first scores to be fused and the registered user identifiers corresponding to the second scores to be fused;
Step D4: for each registered user identifier in the union set, fuse the corresponding first matching score and second matching score to obtain multiple candidate matching scores;
Step D5: judge whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained identity identifier; if so, perform step D6; otherwise, perform step D7;
Step D6: determine that the user identity is correct;
Step D7: determine that the user identity is incorrect.
The first condition and the second condition are respectively: the matching score is greater than a set threshold; or the matching score is among the largest matching scores up to a set number.
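The union-based flow of Fig. 2D can be sketched in the same illustrative style; here the top-N selection rule, the value of N, and the additive fusion are assumptions of this example.

```python
def identify_user_union(voiceprint, face, claimed_id, database, top_n=3):
    """Fig. 2D as a sketch: score both features against every registered model,
    keep the top-N candidates of each modality, take the union of their
    identifiers, fuse the per-identifier scores, and accept the identity only
    if the best-scoring identifier is the claimed user."""
    voice_scores = {uid: match_score(voiceprint, record["voiceprint_model"])
                    for uid, record in database.items()}                      # step D1
    face_scores = {uid: match_score(face, record["face_model"])
                   for uid, record in database.items()}                       # step D2
    def top(scores):
        return set(sorted(scores, key=scores.get, reverse=True)[:top_n])
    union_ids = top(voice_scores) | top(face_scores)                          # step D3
    fused = {uid: voice_scores[uid] + face_scores[uid] for uid in union_ids}  # step D4
    best_id = max(fused, key=fused.get)                                       # step D5
    return best_id == claimed_id                                              # steps D6-D7
```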
To further ensure the security of user registration, in another embodiment of the present invention, the user's registration behavior can also be verified during registration to prevent an attacker from registering with counterfeit user information.
As shown in Fig. 3, a flow of user registration in an embodiment of the present invention comprises the following steps:
Step 301: after receiving a user registration request, generate and display a registration text, and record the voice and image of the user reading the registration text aloud.
Step 302: recognize the recorded voice and image of the user reading the registration text, and obtain the valid start time and text content of the voice as well as the valid start time and lip-reading text content of the lip motion.
Likewise, the registration text may be a fixed text or a randomly generated dynamic text, and the verification text mentioned above may be identical to or different from the registration text.
Step 303: judge whether the valid start time of the lip motion matches the valid start time of the voice, and whether both the lip-reading text content and the voice text content match the registration text; if so, perform step 304; otherwise, perform step 306.
Step 304: determine that registration is allowed.
Step 305: after registration is allowed, generate the biometric model corresponding to the user's registered identity identifier from the user's biometric features.
Step 306: refuse registration.
That is, only after registration is allowed is the biometric model corresponding to the user's registered identity identifier generated from the acquired biometric features. If the valid start time of the lip motion does not match the valid start time of the voice, or if the lip-reading text content or the voice text content does not match the registration text, the information provided by the user is shown to be erroneous; in this case, registration of this user is refused, or the user is prompted to register again.
In addition, it should be noted that, in practical applications, the biometric features in step 305 need not be extracted from the recorded voice and/or image; biometric features of other forms may also be extracted, and the embodiments of the present invention are not limited in this respect.
It can be seen that, in the identity verification method provided by the embodiments of the present invention, when user identity verification is performed, a verification text is generated, the voice and image of the user reading the verification text aloud are recorded, and the voice and image are recognized to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion. When the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the verification text, the user's biometric features are further used to verify the legitimacy of the user's identity. Because dynamic biometric information and static biometric features are combined in the verification process, the risk of user information being counterfeited or stolen is effectively avoided, and the security of identity authentication is greatly improved. Moreover, the user's biometric features are inherent and need not be memorized, which improves the convenience and validity of user identity verification.
Correspondingly, an embodiment of the present invention also provides an identity verification system; Fig. 4 is a schematic structural diagram of this system. The system comprises:
a registration module 400, configured to generate, when a user registers, a biometric model corresponding to the user's registered identity identifier and save the biometric model in a user registration database 500;
an identity identifier acquisition module 401, configured to obtain the user identity identifier when user identity verification is performed;
a text processing module 402, configured to generate and display a verification text;
a recording module 403, configured to record the voice and image of the user reading the verification text aloud;
a recognition module 404, configured to recognize the recorded voice and image of the user reading the verification text, obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion, and judge whether the valid start time of the lip motion matches the valid start time of the voice and whether both the lip-reading text content and the voice text content match the verification text;
a biometric extraction module 405, configured to acquire the user's biometric features when the valid start time of the lip motion matches the valid start time of the voice and both the lip-reading text content and the voice text content match the verification text;
a matching module 406, configured to match, based on the user identity identifier, the extracted biometric features against the biometric model corresponding to the user identity identifier to obtain a matching score;
a judging module 407, configured to determine whether the user's identity is correct according to the matching score.
In another embodiment of the system of the present invention, the system may further comprise:
a checking module (not shown), configured to check whether the user identity identifier exists in the user registration database; if so, to trigger the text processing module 402 to generate and display the verification text; otherwise, to determine that the user identity is incorrect.
It should be noted that, in the embodiments of the present invention, the biometric model may be a voiceprint model, a face model, a fingerprint model, an iris model, and so on; it may be a single biometric model or comprise multiple biometric models at the same time.
For example, if the biometric model is a voiceprint model, the biometric extraction module 405 can extract the user's voiceprint features from the recorded voice of the user reading the verification text; if the biometric model is a face model, the biometric extraction module 405 can extract the user's face features from the recorded image of the user reading the verification text; if the biometric model comprises a voiceprint model and a face model, the biometric extraction module 405 can comprise a first extraction unit, configured to extract the user's voiceprint features from the recorded voice of the user reading the verification text, and a second extraction unit, configured to extract the user's face features from the recorded image of the user reading the verification text.
In practical applications, the matching module 406 can match the acquired biometric features against the biometric models in the user registration database in various ways. For example, the biometric features acquired by the biometric extraction module 405 may be matched only against the biometric model corresponding to the obtained user identity identifier to obtain a matching score, or they may be matched against all corresponding biometric models in the user registration database to obtain multiple candidate matching scores. Correspondingly, depending on the specific matching approach adopted by the matching module 406, the judging module 407 can adopt different judgment approaches.
Taking the combined use of voiceprint features and face features as an example, the different matching and judgment approaches are described in detail below.
Fig. 5A is a schematic diagram of a specific structure of the matching module in an embodiment of the present invention.
In this embodiment, the matching module comprises:
a first matching unit 511, configured to match the voiceprint features extracted by the first extraction unit against the voiceprint model corresponding to the identity identifier obtained by the identity identifier acquisition module to obtain a first score;
a second matching unit 512, configured to match the face features extracted by the second extraction unit against the face model corresponding to the identity identifier obtained by the identity identifier acquisition module to obtain a second score;
a fusion unit 513, configured to fuse the first score and the second score to obtain the matching score.
Correspondingly, for the matching score obtained by the matching module of this embodiment, the judging module 407 is specifically configured to determine that the user identity is correct when the matching score is greater than a set threshold, and to determine that the user identity is incorrect otherwise.
In practical applications, the matching module may also comprise only the first matching unit 511 and the second matching unit 512. Correspondingly, the judging module 407 specifically judges whether the first score is greater than a voiceprint matching threshold and whether the second score is greater than a face matching threshold; if both are, the user identity is determined to be correct, and otherwise incorrect. Of course, in practical applications, the judging module 407 may also judge, after the first matching unit 511 obtains the first score, whether the first score is greater than the voiceprint matching threshold; if so, the second matching unit is triggered to perform the corresponding matching operation; otherwise, the user identity is determined to be incorrect.
Fig. 5B is another schematic diagram of a specific structure of the matching module in an embodiment of the present invention.
In this embodiment, the matching module comprises:
a third matching unit 521, configured to match the voiceprint features extracted by the first extraction unit against each voiceprint model in the user registration database to obtain multiple first matching scores;
a first determining unit 522, configured to determine the face models to be matched based on the first matching scores that meet a set condition, the first matching scores that meet the set condition being the first matching scores that are greater than a first threshold, or the largest first matching scores up to a set number;
a fourth matching unit 523, configured to match the acquired face features against the face models to be matched to obtain multiple second matching scores;
a score fusion unit 524, configured to fuse each second matching score with its corresponding first matching score to obtain multiple candidate matching scores.
It should be noted that, in practical applications, the face features may also be matched first by the fourth matching unit 523, the voiceprint models to be matched may then be determined by the first determining unit 522 according to the matching scores, and the voiceprint features may then be matched by the third matching unit 521.
Correspondingly, for the multiple candidate matching scores obtained by the matching module of this embodiment, the judging module 407 is specifically configured to judge whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained identity identifier; if so, to determine that the user identity is correct; otherwise, to determine that the user identity is incorrect.
Fig. 5C is another schematic diagram of a specific structure of the matching module in an embodiment of the present invention.
In this embodiment, the matching module comprises:
a first match selection unit 531, configured to match the voiceprint features extracted by the first extraction unit against each voiceprint model in the user registration database to obtain multiple first matching scores, and to select the first matching scores that meet a first condition as the first scores to be fused;
a second match selection unit 532, configured to match the face features extracted by the second extraction unit against each face model in the user registration database to obtain multiple second matching scores, and to select the second matching scores that meet a second condition as the second scores to be fused;
a union unit 533, configured to take the union of the registered user identifiers corresponding to the first scores to be fused and the registered user identifiers corresponding to the second scores to be fused;
a union fusion unit 534, configured to fuse, for each registered user identifier in the union set, the corresponding first matching score and second matching score to obtain multiple candidate matching scores.
The first condition and the second condition are respectively: the matching score is greater than a set threshold; or the matching score is among the largest matching scores up to a set number.
Correspondingly, for the multiple candidate matching scores obtained by the matching module of this embodiment, the judging module 407 is specifically configured to judge whether the registered user identifier corresponding to the biometric model with the highest matching score is identical to the obtained identity identifier; if so, to determine that the user identity is correct; otherwise, to determine that the user identity is incorrect.
The specific implementation of the functions of the above modules can be found in the description of the method embodiments above and is not repeated here.
In addition, it should be noted that, in practical applications, the registration module 400 may extract only one kind of biometric feature of the user or may extract multiple biometric features, and generate the corresponding biometric models.
To further ensure the security of user registration, in another embodiment of the system of the present invention, the registration behavior of the user can also be verified by the above modules during registration to prevent an attacker from registering with counterfeit user information.
Specifically, when the user registers, the above modules further perform the following operations:
the text processing module 402 generates and displays a registration text after the registration module receives a user registration request;
the recording module 403 further records the voice and image of the user reading the registration text aloud;
the recognition module 404 further recognizes the recorded voice and image of the user reading the registration text, obtains the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion, judges whether the valid start time of the lip motion matches the valid start time of the voice and whether both the lip-reading text content and the voice text content match the registration text, notifies the registration module that registration is allowed when all of them match, and notifies the registration module 400 that registration is refused otherwise.
Correspondingly, the registration module is specifically configured to generate, after the recognition module 404 determines that registration is allowed, the biometric model corresponding to the user's registered identity identifier and to save the biometric model in the user registration database.
In the identity verification system provided by the embodiments of the present invention, a verification text (or registration text) is generated and displayed, the voice and image of the user reading the text aloud are recorded, and the voice and image are recognized to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion; when the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the displayed text, the user's biometric features are further used to verify the legitimacy of the user's identity. Because dynamic biometric information and static biometric features are combined in the verification process, the risk of user information being counterfeited or stolen is effectively avoided, and the security of identity authentication is greatly improved. Moreover, the user's biometric features are inherent and need not be memorized, which improves the convenience and validity of user identity verification.
By linking multiple verifications together through cross-validation, the solution of the embodiments of the present invention can characterize the user's biometric features in an all-round, multi-dimensional manner. At the same time, it is well defended against attacks on any single subsystem, because it is difficult for an attacker to imitate the user's biometric features in real time and across multiple dimensions, which significantly enhances the security of the system.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments, and the relevant parts can be found in the description of the method embodiments. The system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Persons of ordinary skill in the art can understand and implement them without creative effort.
The embodiments of the present invention are described in detail above, and specific examples are used herein to explain the invention; the description of the above embodiments is only intended to help understand the method and apparatus of the present invention. Meanwhile, persons of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

1. An identity verification method, characterized by comprising:
when a user registers, generating a biometric model corresponding to the user's registered identity identifier, and saving the biometric model in a user registration database;
when verifying the user's identity, obtaining the user identity identifier;
generating and displaying a verification text, and recording the voice and image of the user reading the verification text aloud;
recognizing the recorded voice and image of the user reading the verification text to obtain the valid start time and text content of the voice and the valid start time and lip-reading text content of the lip motion;
if the valid start time of the lip motion matches the valid start time of the voice, and both the lip-reading text content and the voice text content match the verification text, acquiring the user's biometric features;
matching, based on the user identity identifier, the acquired biometric features against the biometric models in the user registration database to obtain a matching score;
determining whether the user's identity is correct according to the matching score.
2. The method according to claim 1, characterized in that the method further comprises:
Checking whether the user identity identifier exists in the user registry database;
If so, executing the step of generating and displaying the verification text; otherwise, determining that the user identity is incorrect.
3. The method according to claim 2, characterized in that:
the biometric model is a voiceprint model;
said obtaining biometric features of the user comprises: obtaining a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric model in the user registry database to obtain a matching score comprises: matching the voiceprint feature against the voiceprint model corresponding to the user identity identifier in the user registry database, to obtain the matching score;
or
the biometric model is a face model;
said obtaining biometric features of the user comprises: obtaining a face feature of the user from the recorded image of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric model in the user registry database to obtain a matching score comprises: matching the face feature against the face model corresponding to the user identity identifier in the user registry database, to obtain the matching score;
or
the biometric model comprises a voiceprint model and a face model;
said obtaining biometric features of the user comprises: obtaining a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud, and obtaining a face feature of the user from the recorded image of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric model in the user registry database to obtain a matching score comprises:
matching the obtained voiceprint feature against the voiceprint model corresponding to the user identity identifier in the user registry database, to obtain a first score;
matching the obtained face feature against the face model corresponding to the user identity identifier in the user registry database, to obtain a second score;
fusing the first score and the second score to obtain the matching score.
4. The method according to claim 3, characterized in that said determining, according to the matching score, whether the user identity is correct comprises:
If the matching score is greater than a set threshold, determining that the user identity is correct; otherwise, determining that the user identity is incorrect.
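As a non-normative illustration of claims 3 and 4, the sketch below fuses the first (voiceprint) score and the second (face) score and applies a decision threshold. A weighted sum is only one possible fusion rule, and the weight and threshold values are assumptions chosen purely for illustration.

    def fuse_scores(voiceprint_score: float,
                    face_score: float,
                    voice_weight: float = 0.5) -> float:
        # Fuse the two per-modality matching scores into a single matching score.
        return voice_weight * voiceprint_score + (1.0 - voice_weight) * face_score

    def identity_is_correct(matching_score: float, threshold: float = 0.8) -> bool:
        # Claim 4: the user identity is accepted only if the score exceeds a set threshold.
        return matching_score > threshold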
5. The method according to claim 2, characterized in that:
the biometric model is a voiceprint model;
said obtaining biometric features of the user comprises: obtaining a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric models in the user registry database to obtain a matching score comprises: matching the voiceprint feature against each voiceprint model in the user registry database, to obtain multiple candidate matching scores;
or
the biometric model is a face model;
said obtaining biometric features of the user comprises: obtaining a face feature of the user from the recorded image of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric models in the user registry database to obtain a matching score comprises: matching the face feature against each face model in the user registry database, to obtain multiple candidate matching scores;
or
the biometric model comprises a voiceprint model and a face model;
said obtaining biometric features of the user comprises:
obtaining a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud, and obtaining a face feature of the user from the recorded image of the user reading the verification text aloud;
said matching the obtained biometric features against the biometric models in the user registry database to obtain a matching score comprises:
performing fused matching of the obtained voiceprint feature and face feature against the voiceprint models and face models in the user registry database, respectively, to obtain multiple candidate matching scores.
6. The method according to claim 5, characterized in that said determining, according to the matching score, whether the user identity is correct comprises:
Judging whether the registered identity identifier corresponding to the feature model with the highest matching score is identical to the obtained user identity identifier;
If so, determining that the user identity is correct; otherwise, determining that the user identity is incorrect.
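The 1:N variant in claims 5 and 6 scores the extracted features against every enrolled model and accepts the claimed identity only if the best-scoring model belongs to it. A minimal sketch, assuming the registry is a plain mapping from identity identifier to enrolled model and that a per-pair scoring function is supplied; all names are illustrative:

    def identify_user(features, registry, claimed_identity, score_fn):
        # Claim 5: match the features against every enrolled model to obtain
        # multiple candidate matching scores.
        candidate_scores = {identity: score_fn(features, model)
                            for identity, model in registry.items()}
        # Claim 6: accept only if the best-scoring model corresponds to the
        # identity identifier the user claimed.
        best_identity = max(candidate_scores, key=candidate_scores.get)
        return best_identity == claimed_identity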
7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
After receiving a user registration request, generating and displaying a registration text, and recording the voice and image of the user reading the registration text aloud;
Recognizing the recorded voice and image of the user reading the registration text aloud, to obtain the effective start time and text content of the voice, and the effective start time of the lip movement and the lip-reading text content;
If the effective start time of the lip movement matches the effective start time of the voice, and both the lip-reading text content and the voice text content match the registration text, allowing registration; otherwise, refusing registration;
After registration is allowed, executing the step of generating the biometric model corresponding to the user's registered identity identifier.
8. The method according to claim 7, characterized in that the verification text and/or the registration text is a static text or a randomly generated dynamic text.
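Claim 8 allows the verification text and the registration text to be either static or randomly generated for each attempt; a fresh, unpredictable text makes it harder to replay pre-recorded audio or video. Below is a minimal sketch of one possible random text generator (a digit string of an assumed length); it is an illustration, not the generation method defined by the patent.

    import secrets

    def generate_verification_text(length: int = 6) -> str:
        # Return a random digit string for the user to read aloud.
        return "".join(secrets.choice("0123456789") for _ in range(length))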
9. An identity recognition system, characterized in that it comprises:
A registration module, configured to, when a user registers, generate a biometric model corresponding to the user's registered identity identifier and save the biometric model in a user registry database;
An identity identifier acquisition module, configured to obtain a user identity identifier when user identity recognition is performed;
A text processing module, configured to generate and display a verification text;
A recording module, configured to record the voice and image of the user reading the verification text aloud;
A recognition module, configured to recognize the recorded voice and image of the user reading the verification text aloud, to obtain the effective start time and text content of the voice and the effective start time of the lip movement and the lip-reading text content, and to judge whether the effective start time of the lip movement matches the effective start time of the voice and whether both the lip-reading text content and the voice text content match the verification text;
A biometric feature extraction module, configured to obtain biometric features of the user when the effective start time of the lip movement matches the effective start time of the voice and both the lip-reading text content and the voice text content match the verification text;
A matching module, configured to, based on the user identity identifier, match the biometric features obtained by the biometric feature extraction module against the biometric model in the user registry database, to obtain a matching score;
A judgment module, configured to determine, according to the matching score, whether the user identity is correct.
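Claim 9's decomposition can be pictured as a small set of collaborating components. The class and method names below are illustrative assumptions rather than terms from the patent, and each collaborator is assumed to expose only the minimal interface used here:

    class IdentityRecognitionSystem:
        # Illustrative skeleton of the modules enumerated in claim 9.
        def __init__(self, registry, text_module, recorder, recognizer, extractor, matcher):
            self.registry = registry        # user registry database of enrolled biometric models
            self.text_module = text_module  # generates and displays the verification text
            self.recorder = recorder        # records the user's voice and image
            self.recognizer = recognizer    # voice/lip recognition and consistency check
            self.extractor = extractor      # biometric feature extraction
            self.matcher = matcher          # matching against enrolled models and decision

        def verify(self, user_id: str) -> bool:
            text = self.text_module.new_text()
            voice, image = self.recorder.record(text)
            result = self.recognizer.analyze(voice, image)
            if not self.recognizer.is_consistent(result, text):
                return False                # lip/voice timing or text mismatch
            features = self.extractor.extract(voice, image)
            score = self.matcher.match(user_id, features, self.registry)
            return self.matcher.decide(score)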
10. The system according to claim 9, characterized in that the system further comprises:
A checking module, configured to check whether the user identity identifier exists in the user registry database; if so, trigger the text processing module to generate and display the verification text; otherwise, determine that the user identity is incorrect.
11. The system according to claim 10, characterized in that:
the biometric model is a voiceprint model; the biometric feature extraction module is specifically configured to obtain a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud; the matching module matches the voiceprint feature obtained by the biometric feature extraction module against the voiceprint model corresponding to the user identity identifier obtained by the identity identifier acquisition module, to obtain the matching score;
or
the biometric model is a face model; the biometric feature extraction module is specifically configured to obtain a face feature of the user from the recorded image of the user reading the verification text aloud; the matching module matches the face feature obtained by the biometric feature extraction module against the face model corresponding to the user identity identifier obtained by the identity identifier acquisition module, to obtain the matching score;
or
the biometric model comprises a voiceprint model and a face model;
the biometric feature extraction module comprises:
a first extraction unit, configured to obtain a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud;
a second extraction unit, configured to obtain a face feature of the user from the recorded image of the user reading the verification text aloud;
the matching module comprises:
a first matching unit, configured to match the voiceprint feature obtained by the first extraction unit against the voiceprint model corresponding to the identity identifier obtained by the identity identifier acquisition module, to obtain a first score;
a second matching unit, configured to match the face feature obtained by the second extraction unit against the face model corresponding to the identity identifier obtained by the identity identifier acquisition module, to obtain a second score;
a fusion unit, configured to fuse the first score and the second score to obtain the matching score.
12. The system according to claim 11, characterized in that:
the judgment module is specifically configured to determine that the user identity is correct when the matching score is greater than a set threshold, and otherwise to determine that the user identity is incorrect.
13. The system according to claim 10, characterized in that:
the biometric model is a voiceprint model;
the biometric feature extraction module is specifically configured to obtain a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud;
the matching module is specifically configured to match the voiceprint feature against each voiceprint model in the user registry database, to obtain multiple candidate matching scores;
or
the biometric model is a face model;
the biometric feature extraction module is specifically configured to obtain a face feature of the user from the recorded image of the user reading the verification text aloud;
the matching module is specifically configured to match the face feature against each face model in the user registry database, to obtain multiple candidate matching scores;
or
the biometric model comprises a voiceprint model and a face model;
the biometric feature extraction module comprises:
a first extraction unit, configured to obtain a voiceprint feature of the user from the recorded voice of the user reading the verification text aloud;
a second extraction unit, configured to obtain a face feature of the user from the recorded image of the user reading the verification text aloud;
the matching module is specifically configured to perform fused matching of the voiceprint feature obtained by the first extraction unit and the face feature obtained by the second extraction unit against the voiceprint models and face models in the user registry database, respectively, to obtain multiple candidate matching scores.
14. The system according to claim 13, characterized in that:
the judgment module is specifically configured to judge whether the registered identity identifier corresponding to the feature model with the highest matching score is identical to the obtained identity identifier; if so, determine that the user identity is correct; otherwise, determine that the user identity is incorrect.
15. The system according to any one of claims 9 to 14, characterized in that:
the text processing module is further configured to generate and display a registration text after the registration module receives a user registration request;
the recording module is further configured to record the voice and image of the user reading the registration text aloud;
the recognition module is further configured to recognize the recorded voice and image of the user reading the registration text aloud, to obtain the effective start time and text content of the voice and the effective start time of the lip movement and the lip-reading text content, to judge whether the effective start time of the lip movement matches the effective start time of the voice and whether both the lip-reading text content and the voice text content match the registration text, and to determine that registration is allowed when all of them match, and otherwise to refuse registration;
the registration module is specifically configured to, after the recognition module determines that registration is allowed, generate the biometric model corresponding to the user's registered identity identifier and save the biometric model in the user registry database.
CN201510052189.3A 2015-01-30 2015-01-30 Personal identification method and system Active CN104598796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510052189.3A CN104598796B (en) 2015-01-30 2015-01-30 Personal identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510052189.3A CN104598796B (en) 2015-01-30 2015-01-30 Personal identification method and system

Publications (2)

Publication Number Publication Date
CN104598796A true CN104598796A (en) 2015-05-06
CN104598796B CN104598796B (en) 2017-08-25

Family

ID=53124574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510052189.3A Active CN104598796B (en) 2015-01-30 2015-01-30 Personal identification method and system

Country Status (1)

Country Link
CN (1) CN104598796B (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966053A (en) * 2015-06-11 2015-10-07 腾讯科技(深圳)有限公司 Face recognition method and recognition system
CN105554026A (en) * 2016-01-12 2016-05-04 中北大学 Electronic record information security management system
CN105590104A (en) * 2015-12-31 2016-05-18 联想(北京)有限公司 Recognition method and apparatus, and electronic equipment
CN105718874A (en) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and device of in-vivo detection and authentication
CN105869644A (en) * 2016-05-25 2016-08-17 百度在线网络技术(北京)有限公司 Deep learning based voiceprint authentication method and device
CN105938528A (en) * 2016-04-03 2016-09-14 长春工业大学 Control system for computer events
CN105975826A (en) * 2016-04-20 2016-09-28 中国石油大学(华东) Enterprise economic management information security system
CN106098069A (en) * 2016-06-21 2016-11-09 佛山科学技术学院 A kind of identity identifying method and terminal unit
CN106126914A (en) * 2016-06-22 2016-11-16 宝鸡市中心医院 Clinical information management system
CN106295299A (en) * 2016-08-15 2017-01-04 歌尔股份有限公司 The user registering method of a kind of intelligent robot and device
CN106302339A (en) * 2015-05-25 2017-01-04 腾讯科技(深圳)有限公司 Login validation method and device, login method and device
CN106407783A (en) * 2016-09-23 2017-02-15 武汉必腾移动科技有限责任公司 An online verification method and system
CN106651340A (en) * 2015-11-02 2017-05-10 阿里巴巴集团控股有限公司 Settlement method and device
WO2017198014A1 (en) * 2016-05-19 2017-11-23 阿里巴巴集团控股有限公司 Identity authentication method and apparatus
CN107819807A (en) * 2016-09-14 2018-03-20 腾讯科技(深圳)有限公司 A kind of Information Authentication method, apparatus and equipment
CN108334768A (en) * 2018-03-19 2018-07-27 黄冈职业技术学院 A kind of computer system user auth method
CN108418805A (en) * 2018-02-05 2018-08-17 浙江齐享科技有限公司 A kind of user authentication method and system for sharing Moped Scooter
CN108446737A (en) * 2018-03-21 2018-08-24 百度在线网络技术(北京)有限公司 The method and apparatus of object for identification
CN108510364A (en) * 2018-03-30 2018-09-07 杭州法奈昇科技有限公司 Big data intelligent shopping guide system based on voiceprint identification
CN108712381A (en) * 2018-04-16 2018-10-26 出门问问信息科技有限公司 A kind of auth method and device
CN108959865A (en) * 2017-05-25 2018-12-07 阿里巴巴集团控股有限公司 A kind of verification method and device
CN109271915A (en) * 2018-09-07 2019-01-25 北京市商汤科技开发有限公司 False-proof detection method and device, electronic equipment, storage medium
CN109299594A (en) * 2018-09-04 2019-02-01 阿里巴巴集团控股有限公司 Auth method and device
CN109635533A (en) * 2018-12-07 2019-04-16 中国移动通信集团江苏有限公司 Personal identification method, device, equipment and medium
CN109801636A (en) * 2019-01-29 2019-05-24 北京猎户星空科技有限公司 Training method, device, electronic equipment and the storage medium of Application on Voiceprint Recognition model
CN110717164A (en) * 2019-12-16 2020-01-21 国网电子商务有限公司 Intelligent multidimensional weighting identity authentication and risk control method and system
CN110766841A (en) * 2019-06-12 2020-02-07 天津新泰基业电子股份有限公司 User registration and verification method and device
CN110853646A (en) * 2019-11-20 2020-02-28 深圳前海微众银行股份有限公司 Method, device and equipment for distinguishing conference speaking roles and readable storage medium
CN110891049A (en) * 2019-10-12 2020-03-17 平安国际智慧城市科技股份有限公司 Video-based account login method, device, medium and electronic equipment
CN110955874A (en) * 2019-10-12 2020-04-03 深圳壹账通智能科技有限公司 Identity authentication method, identity authentication device, computer equipment and storage medium
CN111048113A (en) * 2019-12-18 2020-04-21 腾讯科技(深圳)有限公司 Sound direction positioning processing method, device and system, computer equipment and storage medium
KR20200047650A (en) * 2018-09-07 2020-05-07 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Anti-counterfeiting detection method and device, electronic device, storage medium
WO2020107834A1 (en) * 2018-11-28 2020-06-04 平安科技(深圳)有限公司 Verification content generation method for lip-language recognition, and related apparatus
CN111292734A (en) * 2018-12-06 2020-06-16 阿里巴巴集团控股有限公司 Voice interaction method and device
CN111326175A (en) * 2020-02-18 2020-06-23 维沃移动通信有限公司 Prompting method for interlocutor and wearable device
CN111669350A (en) * 2019-03-05 2020-09-15 阿里巴巴集团控股有限公司 Identity verification method, verification information generation method, payment method and payment device
CN111768178A (en) * 2020-06-30 2020-10-13 银联商务股份有限公司 Method and related device for examining and approving business member
CN112329750A (en) * 2021-01-05 2021-02-05 北京远鉴信息技术有限公司 Digital sequence-based identity recognition method and device, electronic equipment and medium
CN112927688A (en) * 2021-01-25 2021-06-08 思必驰科技股份有限公司 Voice interaction method and system for vehicle
CN113313231A (en) * 2021-04-28 2021-08-27 上海淇玥信息技术有限公司 Anti-fraud method and system based on LipNet algorithm and electronic equipment
CN113743160A (en) * 2020-05-29 2021-12-03 北京中关村科金技术有限公司 Method, apparatus and storage medium for biopsy

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1758263A (en) * 2005-10-31 2006-04-12 浙江大学 Multi-model ID recognition method based on scoring difference weight compromised
CN102413100A (en) * 2010-09-25 2012-04-11 盛乐信息技术(上海)有限公司 Voice-print authentication system having voice-print password picture prompting function and realization method thereof
CN103973441A (en) * 2013-01-29 2014-08-06 腾讯科技(深圳)有限公司 User authentication method and device on basis of audios and videos
CN103475490A (en) * 2013-09-29 2013-12-25 广州网易计算机系统有限公司 Identity authentication method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Zhengyu et al.: "Voice and lip-movement consistency detection algorithm based on spatio-temporal correlation fusion", Acta Electronica Sinica *

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106302339A (en) * 2015-05-25 2017-01-04 腾讯科技(深圳)有限公司 Login validation method and device, login method and device
US20170308739A1 (en) * 2015-06-11 2017-10-26 Tencent Technology (Shenzhen) Company Limited Human face recognition method and recognition system
US10650259B2 (en) * 2015-06-11 2020-05-12 Tencent Technology (Shenzhen) Company Limited Human face recognition method and recognition system based on lip movement information and voice information
CN104966053A (en) * 2015-06-11 2015-10-07 腾讯科技(深圳)有限公司 Face recognition method and recognition system
CN104966053B (en) * 2015-06-11 2018-12-28 腾讯科技(深圳)有限公司 Face identification method and identifying system
WO2016197765A1 (en) * 2015-06-11 2016-12-15 腾讯科技(深圳)有限公司 Human face recognition method and recognition system
CN106651340A (en) * 2015-11-02 2017-05-10 阿里巴巴集团控股有限公司 Settlement method and device
CN106651340B (en) * 2015-11-02 2021-06-29 创新先进技术有限公司 Settlement method and device
CN105590104A (en) * 2015-12-31 2016-05-18 联想(北京)有限公司 Recognition method and apparatus, and electronic equipment
CN105554026A (en) * 2016-01-12 2016-05-04 中北大学 Electronic record information security management system
CN105718874A (en) * 2016-01-18 2016-06-29 北京天诚盛业科技有限公司 Method and device of in-vivo detection and authentication
CN105938528B (en) * 2016-04-03 2019-03-12 长春工业大学 A kind of control system of computer events
CN105938528A (en) * 2016-04-03 2016-09-14 长春工业大学 Control system for computer events
CN105975826A (en) * 2016-04-20 2016-09-28 中国石油大学(华东) Enterprise economic management information security system
US10789343B2 (en) 2016-05-19 2020-09-29 Alibaba Group Holding Limited Identity authentication method and apparatus
WO2017198014A1 (en) * 2016-05-19 2017-11-23 阿里巴巴集团控股有限公司 Identity authentication method and apparatus
CN107404381A (en) * 2016-05-19 2017-11-28 阿里巴巴集团控股有限公司 A kind of identity identifying method and device
AU2017266971B2 (en) * 2016-05-19 2021-05-13 Advanced New Technologies Co., Ltd. Identity authentication method and apparatus
RU2738325C2 (en) * 2016-05-19 2020-12-11 Алибаба Груп Холдинг Лимитед Method and device for authenticating an individual
TWI706268B (en) * 2016-05-19 2020-10-01 香港商阿里巴巴集團服務有限公司 Identity authentication method and device
CN105869644A (en) * 2016-05-25 2016-08-17 百度在线网络技术(北京)有限公司 Deep learning based voiceprint authentication method and device
CN106098069A (en) * 2016-06-21 2016-11-09 佛山科学技术学院 A kind of identity identifying method and terminal unit
CN106098069B (en) * 2016-06-21 2019-12-20 佛山科学技术学院 Identity authentication method and terminal equipment
CN106126914A (en) * 2016-06-22 2016-11-16 宝鸡市中心医院 Clinical information management system
US10929514B2 (en) 2016-08-15 2021-02-23 Goertek Inc. User registration method and device for smart robots
CN106295299A (en) * 2016-08-15 2017-01-04 歌尔股份有限公司 The user registering method of a kind of intelligent robot and device
CN107819807A (en) * 2016-09-14 2018-03-20 腾讯科技(深圳)有限公司 A kind of Information Authentication method, apparatus and equipment
CN106407783A (en) * 2016-09-23 2017-02-15 武汉必腾移动科技有限责任公司 An online verification method and system
CN108959865A (en) * 2017-05-25 2018-12-07 阿里巴巴集团控股有限公司 A kind of verification method and device
CN108418805A (en) * 2018-02-05 2018-08-17 浙江齐享科技有限公司 A kind of user authentication method and system for sharing Moped Scooter
CN108334768A (en) * 2018-03-19 2018-07-27 黄冈职业技术学院 A kind of computer system user auth method
CN108446737A (en) * 2018-03-21 2018-08-24 百度在线网络技术(北京)有限公司 The method and apparatus of object for identification
CN108446737B (en) * 2018-03-21 2022-07-05 百度在线网络技术(北京)有限公司 Method and device for identifying objects
CN108510364A (en) * 2018-03-30 2018-09-07 杭州法奈昇科技有限公司 Big data intelligent shopping guide system based on voiceprint identification
CN108712381A (en) * 2018-04-16 2018-10-26 出门问问信息科技有限公司 A kind of auth method and device
CN109299594B (en) * 2018-09-04 2022-11-22 创新先进技术有限公司 Identity verification method and device
CN109299594A (en) * 2018-09-04 2019-02-01 阿里巴巴集团控股有限公司 Auth method and device
KR20200047650A (en) * 2018-09-07 2020-05-07 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Anti-counterfeiting detection method and device, electronic device, storage medium
CN109271915A (en) * 2018-09-07 2019-01-25 北京市商汤科技开发有限公司 False-proof detection method and device, electronic equipment, storage medium
CN109271915B (en) * 2018-09-07 2021-10-08 北京市商汤科技开发有限公司 Anti-counterfeiting detection method and device, electronic equipment and storage medium
KR102370694B1 (en) * 2018-09-07 2022-03-04 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 Anti-counterfeiting detection method and device, electronic device, storage medium
WO2020107834A1 (en) * 2018-11-28 2020-06-04 平安科技(深圳)有限公司 Verification content generation method for lip-language recognition, and related apparatus
CN111292734A (en) * 2018-12-06 2020-06-16 阿里巴巴集团控股有限公司 Voice interaction method and device
CN111292734B (en) * 2018-12-06 2024-03-08 阿里巴巴集团控股有限公司 Voice interaction method and device
CN109635533B (en) * 2018-12-07 2021-09-03 中国移动通信集团江苏有限公司 Identity recognition method, device, equipment and medium
CN109635533A (en) * 2018-12-07 2019-04-16 中国移动通信集团江苏有限公司 Personal identification method, device, equipment and medium
CN109801636A (en) * 2019-01-29 2019-05-24 北京猎户星空科技有限公司 Training method, device, electronic equipment and the storage medium of Application on Voiceprint Recognition model
CN111669350A (en) * 2019-03-05 2020-09-15 阿里巴巴集团控股有限公司 Identity verification method, verification information generation method, payment method and payment device
CN110766841A (en) * 2019-06-12 2020-02-07 天津新泰基业电子股份有限公司 User registration and verification method and device
WO2021068616A1 (en) * 2019-10-12 2021-04-15 深圳壹账通智能科技有限公司 Method and device for identity authentication, computer device, and storage medium
CN110955874A (en) * 2019-10-12 2020-04-03 深圳壹账通智能科技有限公司 Identity authentication method, identity authentication device, computer equipment and storage medium
CN110891049A (en) * 2019-10-12 2020-03-17 平安国际智慧城市科技股份有限公司 Video-based account login method, device, medium and electronic equipment
CN110853646B (en) * 2019-11-20 2023-11-03 深圳前海微众银行股份有限公司 Conference speaking role distinguishing method, device, equipment and readable storage medium
CN110853646A (en) * 2019-11-20 2020-02-28 深圳前海微众银行股份有限公司 Method, device and equipment for distinguishing conference speaking roles and readable storage medium
CN110717164A (en) * 2019-12-16 2020-01-21 国网电子商务有限公司 Intelligent multidimensional weighting identity authentication and risk control method and system
CN111048113A (en) * 2019-12-18 2020-04-21 腾讯科技(深圳)有限公司 Sound direction positioning processing method, device and system, computer equipment and storage medium
CN111326175A (en) * 2020-02-18 2020-06-23 维沃移动通信有限公司 Prompting method for interlocutor and wearable device
CN113743160A (en) * 2020-05-29 2021-12-03 北京中关村科金技术有限公司 Method, apparatus and storage medium for biopsy
CN111768178A (en) * 2020-06-30 2020-10-13 银联商务股份有限公司 Method and related device for examining and approving business member
CN112329750B (en) * 2021-01-05 2021-04-20 北京远鉴信息技术有限公司 Digital sequence-based identity recognition method and device, electronic equipment and medium
CN112329750A (en) * 2021-01-05 2021-02-05 北京远鉴信息技术有限公司 Digital sequence-based identity recognition method and device, electronic equipment and medium
CN112927688A (en) * 2021-01-25 2021-06-08 思必驰科技股份有限公司 Voice interaction method and system for vehicle
CN113313231A (en) * 2021-04-28 2021-08-27 上海淇玥信息技术有限公司 Anti-fraud method and system based on LipNet algorithm and electronic equipment
CN113313231B (en) * 2021-04-28 2024-05-24 上海淇玥信息技术有限公司 LipNet algorithm-based identity recognition method and system and electronic equipment

Also Published As

Publication number Publication date
CN104598796B (en) 2017-08-25

Similar Documents

Publication Publication Date Title
CN104598796A (en) Method and system for identifying identity
US10482230B2 (en) Face-controlled liveness verification
JP6820062B2 (en) Identity authentication methods and devices, terminals and servers
WO2018177312A1 (en) Authentication method, apparatus and system
US8392975B1 (en) Method and system for image-based user authentication
JP4546168B2 (en) Biometric authentication system registration method, biometric authentication system and program thereof
JP7006584B2 (en) Biometric data processing device, biometric data processing system, biometric data processing method, biometric data processing program, storage medium for storing biometric data processing program
US20190130411A1 (en) Method and system for data processing
CN107577930B (en) Unlocking detection method of touch screen terminal and touch screen terminal
US9202035B1 (en) User authentication based on biometric handwriting aspects of a handwritten code
CN104537290A (en) Password system and achievement method based on multi-fingerprint identification
JP5564414B2 (en) Biometric authentication system and biometric authentication method
CN112671806A (en) User authentication method, user authentication equipment and user authentication system
KR101972800B1 (en) Method and apparatus for authenticating handwritten signature
CN112329750B (en) Digital sequence-based identity recognition method and device, electronic equipment and medium
CN106156590A (en) A kind of computer startup processing method and processing system
CN105897747B (en) Data storage method and device based on digital biological signature and intelligent equipment
CN115203658A (en) Identity recognition method and device, storage medium and electronic equipment
CN114731280A (en) Identity authentication method, device, terminal, storage medium and program product
CN104639749B (en) Short message sending control method and device
JP2008047051A (en) False registration prevention device, false registration prevention method, false registration prevention program, computer readable recording medium with false registration prevention program recorded and false registration prevention system
CN107623660A (en) A kind of mobile phone user identification system
JP4834512B2 (en) Biometric authentication system
CN102663457A (en) Method and system of identity authentication based on radio frequency identification and heart-sound technology
JP2022522251A (en) Handwritten signature authentication method and device based on multiple verification algorithms

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant