CN108494778A - Identity authentication method and device - Google Patents

Identity authentication method and device

Info

Publication number
CN108494778A
CN108494778A (application CN201810259529.3A)
Authority
CN
China
Prior art keywords
user
information
identity information
facial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810259529.3A
Other languages
Chinese (zh)
Inventor
仲召来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810259529.3A priority Critical patent/CN108494778A/en
Publication of CN108494778A publication Critical patent/CN108494778A/en
Pending legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/08 - Network architectures or network communication protocols for network security for authentication of entities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/08 - Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/0861 - Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of the present application disclose an identity authentication method and device. A specific implementation of the method includes: in response to receiving a request sent by a user terminal that belongs to a preset request category, performing liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user, wherein the request may include identity information of the user; in response to determining that the user is a live user, acquiring a facial image of the user; determining whether a preset user information set contains target user information matching the facial image and the identity information of the user, wherein a piece of user information may include a facial image and identity information; and in response to determining that the target user information exists in the user information set, determining that the user is a legitimate user. This embodiment improves the accuracy of the identity authentication result.

Description

Identity authentication method and device
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to an identity authentication method and device.
Background
Existing identity authentication methods typically compare a facial image of a user with facial images stored in a database, and conclude that the user is a legitimate user if the comparison succeeds. However, the accuracy of such an authentication result is generally low, because the facial image of the user may have been obtained by photographing a photo or a video provided by an illegitimate user.
Summary of the invention
The embodiments of the present application propose an identity authentication method and device.
In a first aspect, an embodiment of the present application provides an identity authentication method, the method including: in response to receiving a request sent by a user terminal that belongs to a preset request category, performing liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user, wherein the request includes identity information of the user; in response to determining that the user is a live user, acquiring a facial image of the user; determining whether a preset user information set contains target user information matching the facial image and the identity information of the user, wherein a piece of user information includes a facial image and identity information; and in response to determining that the target user information exists in the user information set, determining that the user is a legitimate user.
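For illustration only, the minimal Python sketch below traces this first-aspect flow on the server side. The request fields, the category names, and the three callables standing in for the liveness-detection, image-acquisition and matching steps are assumptions made for the example, not elements defined by the patent.

```python
# Hypothetical sketch of the first-aspect flow; all names and categories are illustrative.
PRESET_REQUEST_CATEGORIES = {"loan", "financial_product_purchase", "open_financial_account"}

def authenticate(request, user_terminal, user_info_set,
                 liveness_check, get_face_image, find_target_user_info):
    """Return True if the requesting user is determined to be a legitimate user.

    The three callables stand in for the liveness-detection, facial-image-acquisition
    and user-information-matching steps described in the first aspect.
    """
    if request["category"] not in PRESET_REQUEST_CATEGORIES:
        return None                               # the method applies only to preset request categories
    if not liveness_check(user_terminal):         # step 1: liveness detection on the user
        return False
    face_image = get_face_image(user_terminal)    # step 2: acquire the user's facial image
    # Step 3: look for target user information matching both the face and the identity information.
    target = find_target_user_info(user_info_set, face_image, request["identity_info"])
    return target is not None                     # step 4: legitimate iff target user information exists
```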
In some embodiments, the method further includes: after determining that the user is a legitimate user, generating prompt information indicating that the user has passed identity authentication, and outputting the prompt information.
In some embodiments, performing liveness detection on the user to which the user terminal belongs includes: outputting a randomly generated character string to the user terminal; receiving a lip movement video sent by the user terminal, wherein the lip movement video is a video recorded by the user terminal in response to the user reading out the characters in the character string; analyzing the lip movement video to determine whether the lip movements made when the user reads out the characters in the character string are consistent with the lip movements that should be made when reading out the characters in the character string; and if they are consistent, determining that the user is a live user.
In some embodiments, performing liveness detection on the user to which the user terminal belongs further includes: sending an image acquisition instruction to the user terminal; receiving a first image, sent by the user terminal, showing the face of the user; and determining, based on the first image, whether the user is a live user.
In some embodiments, acquiring the facial image of the user includes: extracting a face region from the first image, and generating a facial image from the extracted face region.
In some embodiments, determining whether the preset user information set contains target user information matching the facial image and the identity information of the user includes: extracting identity information from the pieces of user information in the user information set to form an identity information group; determining whether the identity information group contains target identity information matching the identity information of the user; if the target identity information exists, matching the facial image in the piece of user information containing the target identity information against the facial image of the user; and in response to determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user, determining that piece of user information as the target user information.
In some embodiments, the identity information includes an ID card number; and determining whether the identity information group contains target identity information matching the identity information of the user includes: searching the identity information group for identity information whose ID card number is identical to the ID card number of the user; and if such identity information is found, determining the found identity information as the target identity information.
In some embodiments, matching the facial image in the piece of user information containing the target identity information against the facial image of the user includes: inputting the facial image in the piece of user information containing the target identity information and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result includes the probability that the face shown in the facial image in that piece of user information and the face shown in the facial image of the user come from the same person; determining whether the probability is less than a probability threshold; and if the probability is not less than the probability threshold, determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user.
In a second aspect, an embodiment of the present application provides an identity authentication device, the device including: a detection unit, configured to, in response to receiving a request sent by a user terminal that belongs to a preset request category, perform liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user, wherein the request includes identity information of the user; an acquiring unit, configured to acquire a facial image of the user in response to determining that the user is a live user; a first determination unit, configured to determine whether a preset user information set contains target user information matching the facial image and the identity information of the user, wherein a piece of user information includes a facial image and identity information; and a second determination unit, configured to determine that the user is a legitimate user in response to determining that the target user information exists in the user information set.
In some embodiments, the device further includes: an output unit, configured to, after it is determined that the user is a legitimate user, generate prompt information indicating that the user has passed identity authentication, and output the prompt information.
In some embodiments, the detection unit is further configured to: output a randomly generated character string to the user terminal; receive a lip movement video sent by the user terminal, wherein the lip movement video is a video recorded by the user terminal in response to the user reading out the characters in the character string; analyze the lip movement video to determine whether the lip movements made when the user reads out the characters in the character string are consistent with the lip movements that should be made when reading out the characters in the character string; and if they are consistent, determine that the user is a live user.
In some embodiments, the detection unit is further configured to: send an image acquisition instruction to the user terminal; receive a first image, sent by the user terminal, showing the face of the user; and determine, based on the first image, whether the user is a live user.
In some embodiments, the acquiring unit is further configured to: extract a face region from the first image, and generate a facial image from the extracted face region.
In some embodiments, the first determination unit includes: an extraction subunit, configured to extract identity information from the pieces of user information in the user information set to form an identity information group; a first determination subunit, configured to determine whether the identity information group contains target identity information matching the identity information of the user; a matching subunit, configured to, if the target identity information exists, match the facial image in the piece of user information containing the target identity information against the facial image of the user; and a second determination subunit, configured to, in response to determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user, determine that piece of user information as the target user information.
In some embodiments, the identity information includes an ID card number; and the first determination subunit is further configured to: search the identity information group for identity information whose ID card number is identical to the ID card number of the user; and if such identity information is found, determine the found identity information as the target identity information.
In some embodiments, the matching subunit is further configured to: input the facial image in the piece of user information containing the target identity information and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result includes the probability that the face shown in the facial image in that piece of user information and the face shown in the facial image of the user come from the same person; determine whether the probability is less than a probability threshold; and if the probability is not less than the probability threshold, determine that the facial image in the piece of user information containing the target identity information matches the facial image of the user.
In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including: one or more processors; and a storage device for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any implementation of the first aspect.
In the identity authentication method and device provided by the embodiments of the present application, liveness detection is performed, in response to receiving a request sent by a user terminal that belongs to a preset request category, on the user to which the user terminal belongs, so that it can be determined whether the user is a live user and cheating by an illegitimate user with a photo or video can be prevented. Then, by acquiring a facial image of the user in response to determining that the user is a live user, it is determined whether a preset user information set contains target user information matching the facial image and the identity information of the user, and the user is determined to be a legitimate user when the target user information exists in the user information set. The liveness detection performed after the request is received and the determination of whether the target user information exists in the user information set are thereby used effectively, improving the accuracy of the identity authentication result.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of an embodiment of the identity authentication method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the identity authentication method according to the present application;
Fig. 4 is a flowchart of another embodiment of the identity authentication method according to the present application;
Fig. 5 is a structural schematic diagram of an embodiment of the identity authentication device according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing an electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit that invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the embodiments of the identity authentication method or identity authentication device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing a communication link between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104, to receive or send messages and the like. Various applications, such as image acquisition applications, wealth management applications and lending applications, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smartphones, tablet computers, laptop portable computers and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services. For example, after receiving a request belonging to a preset request category sent by the terminal devices 101, 102 and 103, the server 105 may perform identity authentication on the user to which the terminal devices 101, 102 and 103 belong.
It should be noted that the identity authentication method provided by the embodiments of the present application is generally executed by the server 105. Accordingly, the identity authentication device is generally disposed in the server 105.
In addition, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
Continuing to refer to Fig. 2, a flow 200 of an embodiment of the identity authentication method according to the present application is shown. The flow 200 of the identity authentication method includes the following steps:
Step 201: in response to receiving a request sent by a user terminal that belongs to a preset request category, performing liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user.
In this embodiment, the execution body of the identity authentication method (for example the server 105 shown in Fig. 1) may receive a request belonging to a preset request category from a connected user terminal (for example the terminal devices 101, 102 and 103 shown in Fig. 1). After receiving the request, the execution body performs liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user. The request may include identity information of the user, which may include, for example, the name, address and/or date of birth of the user; optionally, it may also include the ID card number of the user. The preset request category may be, for example, a finance-related request category, including but not limited to lending, financial product purchase, and opening a financial account.
It should be noted that the execution body may determine whether the user is a live user by executing the following liveness detection steps:
First, the execution body may output a randomly generated character string to the user terminal, so that the user terminal prompts the user to read out the characters in the character string. While the user reads out the characters, the user terminal may record the user with an image acquisition device (for example a camera) connected to it, to obtain a lip movement video, and may send the lip movement video to the execution body. The characters in the character string may be Chinese characters, letters, numbers, and so on.
Then, the execution body may receive the lip movement video and analyze it, to determine whether the lip movements made when the user reads out the characters in the character string are consistent with the lip movements that should be made when reading out those characters. If they are consistent, the execution body may determine that the user is a live user. Here, the execution body may, for example, locate the mouth region in the images contained in the lip movement video, and determine, by performing lip detection on the mouth region, whether the lip movements of the user are consistent with the lip movements that should be made when reading out the characters in the character string.
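For illustration only, the sketch below outlines such a lip-movement check under the assumption that a lip-shape (viseme) classifier and a character-to-lip-shape table are available; the callables predict_visemes and expected_visemes, the digit-only challenge and the 0.8 match ratio are assumptions for the example, not components prescribed by the patent.

```python
import random
import string

def generate_challenge(length: int = 4) -> str:
    """Randomly generate the character string that the user is asked to read out."""
    return "".join(random.choices(string.digits, k=length))

def lip_liveness_check(lip_video_frames, challenge: str,
                       predict_visemes, expected_visemes,
                       min_match_ratio: float = 0.8) -> bool:
    """Judge the user live if the observed lip shapes match those expected for the challenge.

    predict_visemes stands in for a classifier over the mouth region of each frame,
    and expected_visemes for the lip shapes each character should produce when read out.
    """
    observed = predict_visemes(lip_video_frames)   # observed lip shape per read-out character
    expected = expected_visemes(challenge)         # lip shape each character should produce
    matches = sum(o == e for o, e in zip(observed, expected))
    return matches / max(len(expected), 1) >= min_match_ratio
```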
In some optional implementations of this embodiment, the execution body may also send an image acquisition instruction to the user terminal, so that the user terminal performs image acquisition on the user and sends the collected first image showing the face of the user to the execution body. The execution body may then determine, based on the first image, whether the user is a live user.
It should be noted that the user terminal may perform image acquisition on the user with the image acquisition device connected to it. In addition, before shooting the user with that device, the user terminal may display preset information (for example Chinese characters, letters, numbers, geometric figures, or a combination thereof) in a designated display area, and shoot the user while the preset information is displayed. The execution body may determine whether the user is a live user by detecting whether the first image shows the preset information, because when an illegitimate user uses a photo or video on a terminal device to impersonate the user during liveness detection, the preset information displayed by the user terminal is generally reflected on the display screen of that terminal device. The first image obtained by photographing the photo or video therefore usually contains a reflective region showing the preset information.
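As a hedged illustration of detecting such a reflective region, the sketch below assumes the preset information is available as a known pattern image and searches for it in the first image with normalized template matching in OpenCV; the use of cv2.matchTemplate, the scale set and the 0.7 threshold are illustrative choices, not prescribed by the patent.

```python
import cv2
import numpy as np

def reflection_spoof_suspected(first_image: np.ndarray,
                               preset_pattern: np.ndarray,
                               threshold: float = 0.7) -> bool:
    """Return True if the displayed preset pattern appears reflected in the captured image,
    which suggests the camera photographed a screen rather than a live face."""
    gray_image = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    gray_pattern = cv2.cvtColor(preset_pattern, cv2.COLOR_BGR2GRAY)
    # Search for the pattern at several scales, since the size of the reflection is unknown.
    for scale in (1.0, 0.75, 0.5, 0.25):
        resized = cv2.resize(gray_pattern, None, fx=scale, fy=scale)
        if resized.shape[0] > gray_image.shape[0] or resized.shape[1] > gray_image.shape[1]:
            continue
        response = cv2.matchTemplate(gray_image, resized, cv2.TM_CCOEFF_NORMED)
        if response.max() >= threshold:
            return True   # a region resembling the displayed preset information was found
    return False
```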
Optionally, the execution body may also determine whether the face in the first image comes from a live user or from a photo or video, by detecting whether the eye region in the first image exhibits a bright pupil effect.
It should be noted that the execution body may also use other liveness detection methods; this embodiment does not impose any restriction on the liveness detection method.
Step 202: in response to determining that the user is a live user, acquiring a facial image of the user.
In this embodiment, after determining that the user to which the user terminal belongs is a live user, the execution body may acquire a facial image of that user. Here, the execution body may send a facial image acquisition instruction to the user terminal, so that the user terminal shoots the face of the user with the image acquisition device connected to it, obtains a facial image of the user, and sends the facial image to the execution body.
Optionally, if the execution body has already obtained, during liveness detection, a first image showing the face of the user to which the user terminal belongs, the execution body may directly use the first image as the facial image, or extract a face region from the first image and generate a facial image from that face region.
Step 203: determining whether a preset user information set contains target user information matching the facial image and the identity information of the user.
In this embodiment, after obtaining the facial image of the user to which the user terminal belongs, the execution body may determine whether a preset user information set contains target user information matching the facial image and the identity information of the user. A piece of user information may include a facial image and identity information. The user information set may be stored in advance locally on the execution body, or on a server communicatively connected to the execution body.
As an example, the execution body may determine whether the target user information exists in the user information set by executing the following determination steps:
First, the execution body may extract facial images from the pieces of user information in the user information set to form a facial image group.
Then, the execution body may search the facial image group for a target facial image matching the facial image of the user to which the user terminal belongs. It should be noted that the execution body may locally store in advance a trained convolutional neural network for image feature extraction, and may also store in advance image feature information corresponding to each facial image in the facial image group. The execution body may first input the facial image of the user into the convolutional neural network to obtain image feature information corresponding to that facial image, and may then match the obtained image feature information against the image feature information corresponding to the facial images in the facial image group, to search for target image feature information matching the obtained image feature information. If the target image feature information is found, the execution body may determine the facial image corresponding to the target image feature information as the target facial image.
Here, image feature information may be information characterizing the features of an image, and the features of an image may be its various fundamental elements (for example color, lines and texture). In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons can respond to surrounding units within a partial coverage area, and it performs well for image processing, so a convolutional neural network can be used for image feature extraction. It should be pointed out that the above convolutional neural network for image feature extraction may be obtained by performing supervised training on an existing deep convolutional neural network (for example DenseBox, VGGNet, ResNet or SegNet) using a machine learning method and training samples.
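A hedged PyTorch sketch of this feature-based matching is given below. It uses a generic ResNet backbone as the feature extractor (in practice a network trained on face data, as described above, would be used) and cosine similarity to pick the best-matching stored feature vector; the 512-dimensional embedding and the 0.8 similarity threshold are assumptions for the example.

```python
from typing import Optional

import torch
import torch.nn.functional as F
import torchvision.models as models

# Generic CNN backbone standing in for the trained feature-extraction network.
feature_extractor = models.resnet18(num_classes=512)
feature_extractor.eval()

@torch.no_grad()
def extract_feature(face_image: torch.Tensor) -> torch.Tensor:
    """face_image: a preprocessed 3x224x224 tensor; returns a 512-d feature vector."""
    return feature_extractor(face_image.unsqueeze(0)).squeeze(0)

@torch.no_grad()
def find_target_face(user_feature: torch.Tensor,
                     stored_features: torch.Tensor,          # shape (N, 512), precomputed
                     similarity_threshold: float = 0.8) -> Optional[int]:
    """Return the index of the best-matching stored facial image, or None if nothing matches."""
    sims = F.cosine_similarity(user_feature.unsqueeze(0), stored_features, dim=1)
    best = int(torch.argmax(sims))
    return best if float(sims[best]) >= similarity_threshold else None
```

Precomputing the stored feature vectors, as the paragraph above describes, keeps the per-request cost to one forward pass plus a similarity search.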
Optionally, the execution body may locally store in advance a trained face recognition model. The execution body may use the face recognition model to compare the facial image of the user to which the user terminal belongs with the facial images in the facial image group, to obtain comparison results, wherein a comparison result may include the probability that the face shown in the facial image of the user and the face shown in a facial image in the facial image group come from the same person. The face recognition model may be used to characterize the correspondence between facial images and comparison results. The execution body may search the obtained comparison results for a target probability not less than a probability threshold (for example 0.95). If such a probability is found, the execution body may determine the facial image in the facial image group corresponding to the target probability as the target facial image. It should be understood that the probability threshold can be adjusted according to actual needs, and this embodiment does not impose any restriction in this respect.
It should be noted that the face recognition model may be a correspondence table pre-established by those skilled in the art based on extensive statistical calculation, for characterizing the correspondence between facial images and comparison results; it may also be obtained by training a model usable for classification, such as a Naive Bayesian Model (NBM), a Support Vector Machine (SVM), XGBoost (eXtreme Gradient Boosting) or a Convolutional Neural Network (CNN).
Finally, in response to the execution body finding the target facial image, the execution body may compare the identity information in the piece of user information containing the target facial image with the identity information of the user. If the execution body determines that the identity information in the piece of user information containing the target facial image is consistent with the identity information of the user, the execution body may determine that piece of user information as the target user information.
It should be noted that the execution body may use a text similarity algorithm (for example cosine similarity, the Jaccard similarity coefficient or edit distance) to calculate the similarity between the identity information in the piece of user information containing the target facial image and the identity information of the user. If the similarity is not less than a similarity threshold (for example 0.95), the execution body may determine that the identity information in the piece of user information containing the target facial image is consistent with the identity information of the user. The similarity threshold can be adjusted according to actual needs, and this embodiment does not impose any restriction in this respect.
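For illustration, a small sketch of such a text-similarity check follows, using two of the algorithms named above (the Jaccard similarity coefficient and edit distance); treating the identity information as a plain string and combining the two measures with the 0.95 threshold are assumptions made only for the example.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Character-level Jaccard similarity coefficient between two strings."""
    set_a, set_b = set(a), set(b)
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 1.0

def edit_distance(a: str, b: str) -> int:
    """Levenshtein edit distance computed with a single rolling row."""
    row = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        diagonal, row[0] = row[0], i
        for j, cb in enumerate(b, start=1):
            cost = diagonal + (ca != cb)                      # substitution (or match)
            diagonal = row[j]                                  # keep old value for next column
            row[j] = min(row[j] + 1, row[j - 1] + 1, cost)     # deletion, insertion, substitution
    return row[len(b)]

def identity_info_consistent(info_a: str, info_b: str, threshold: float = 0.95) -> bool:
    """Treat two identity-information strings as consistent when their similarity reaches the threshold."""
    longest = max(len(info_a), len(info_b), 1)
    edit_sim = 1.0 - edit_distance(info_a, info_b) / longest
    return edit_sim >= threshold or jaccard_similarity(info_a, info_b) >= threshold
```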
Step 204: in response to determining that the target user information exists in the user information set, determining that the user is a legitimate user.
In this embodiment, in response to determining that the target user information exists in the user information set, the execution body may determine that the user is a legitimate user.
In some optional implementations of this embodiment, after determining that the user is a legitimate user, the execution body may generate prompt information indicating that the user has passed identity authentication, and output the prompt information, for example to the user terminal or to a connected server used for processing the request.
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the identity authentication method according to this embodiment. In the application scenario of Fig. 3, the preset request category may include lending. As shown by reference numeral 301, user A may send a loan request to a server using the user terminal to which user A belongs, wherein the loan request may include the identity information of user A. Then, as shown by reference numeral 302, after receiving the loan request, the server may perform liveness detection on user A to determine whether user A is a live user. Afterwards, as shown by reference numeral 303, in response to the server determining that user A is a live user, the server may send a facial image acquisition instruction to the user terminal. Then, as shown by reference numeral 304, after receiving the facial image acquisition instruction, the user terminal may shoot the face of user A with the camera connected to it, obtain a facial image of user A, and send the facial image to the server. Then, as shown by reference numeral 305, the server may match the facial image and identity information of user A against the user information in a preset user information set, to determine whether the user information set contains target user information matching the facial image and identity information of user A, wherein a piece of user information may include a facial image and identity information. Finally, as shown by reference numeral 306, in response to the server determining that the target user information exists in the user information set, the server may determine that user A is a legitimate user.
The method provided by the above embodiment of the present application effectively uses the liveness detection performed after the request belonging to the preset request category is received, together with the determination of whether the target user information exists in the user information set, and thereby improves the accuracy of the identity authentication result.
With further reference to Fig. 4, a flow 400 of another embodiment of the identity authentication method is shown. The flow 400 of the identity authentication method includes the following steps:
Step 401: in response to receiving a request sent by a user terminal that belongs to a preset request category, performing liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user.
In this embodiment, the execution body of the identity authentication method (for example the server 105 shown in Fig. 1) may receive a request belonging to a preset request category from a connected user terminal (for example the terminal devices 101, 102 and 103 shown in Fig. 1). After receiving the request, the execution body performs liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user. The request may include identity information of the user, which may include, for example, the name, address and/or date of birth of the user; optionally, it may also include the ID card number of the user. The preset request category may be, for example, a finance-related request category, including but not limited to lending, financial product purchase, and opening a financial account.
It should be noted that, for the liveness detection method, reference may be made to the related description of step 201 in the embodiment shown in Fig. 2, and details are not repeated here.
Step 402: in response to determining that the user is a live user, acquiring a facial image of the user.
In this embodiment, in response to determining that the user is a live user, the execution body may acquire a facial image of the user. Here, for the explanation of step 402, reference may be made to the related description of step 202 in the embodiment shown in Fig. 2, and details are not repeated here.
Step 403: extracting identity information from the pieces of user information in a preset user information set to form an identity information group.
In this embodiment, the execution body may extract identity information from the pieces of user information in the preset user information set to form an identity information group. A piece of user information may include a facial image and identity information. In addition, the user information set may be stored in advance locally on the execution body, or on a server communicatively connected to the execution body.
Step 404: determining whether the identity information group contains target identity information matching the identity information of the user.
In this embodiment, after forming the identity information group, the execution body may determine whether the identity information group contains target identity information matching the identity information of the user.
As an example, the execution body may, for example, use a text similarity algorithm (for example cosine similarity, the Jaccard similarity coefficient or edit distance) to calculate the similarities between the identity information of the user and the pieces of identity information in the identity information group, and then search the calculated similarities for a target similarity not less than a similarity threshold. If the target similarity is found, the execution body may determine the identity information in the identity information group corresponding to the target similarity as the target identity information.
In some optional implementations of this embodiment, the identity information may also include an ID card number. The execution body may search the identity information group for identity information whose ID card number is identical to the ID card number of the user. If such identity information is found, the execution body may determine the found identity information as the target identity information.
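As a hedged illustration of this optional implementation, the sketch below indexes the identity information group by ID card number so the target identity information is found with an exact-match lookup; the dataclass fields are assumptions chosen only to make the example concrete.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentityInfo:
    name: str
    address: str
    id_card_number: str

def find_target_identity(identity_group: list[IdentityInfo],
                         user_identity: IdentityInfo) -> Optional[IdentityInfo]:
    """Return the identity information whose ID card number equals the user's, if any."""
    by_id = {info.id_card_number: info for info in identity_group}
    return by_id.get(user_identity.id_card_number)
```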
Step 405: in response to determining that the target identity information exists in the identity information group, matching the facial image in the piece of user information containing the target identity information against the facial image of the user.
In this embodiment, in response to determining that the target identity information exists in the identity information group, the execution body may match the facial image in the piece of user information containing the target identity information against the facial image of the user.
As an example, the execution body may input the facial image in the piece of user information containing the target identity information and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result may include the probability that the face shown in the facial image in that piece of user information and the face shown in the facial image of the user come from the same person. The execution body may determine whether the probability is less than a probability threshold. If the probability is not less than the probability threshold, the execution body may determine that the facial image in the piece of user information containing the target identity information matches the facial image of the user.
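Below is a minimal sketch of this threshold check, under the assumption that the pre-trained face recognition model is exposed as a function mapping a pair of face embeddings to a same-person probability; the sigmoid-over-cosine-similarity form and the 0.95 threshold are assumptions used only to make the example runnable.

```python
import math

def same_person_probability(embedding_a: list[float], embedding_b: list[float]) -> float:
    """Stand-in for a pre-trained face recognition model: map a pair of face embeddings
    to the probability that both faces come from the same person."""
    dot = sum(x * y for x, y in zip(embedding_a, embedding_b))
    norm = math.sqrt(sum(x * x for x in embedding_a)) * math.sqrt(sum(y * y for y in embedding_b))
    cosine = dot / norm if norm else 0.0
    return 1.0 / (1.0 + math.exp(-10.0 * (cosine - 0.5)))   # squash similarity into (0, 1)

def faces_match(embedding_a: list[float], embedding_b: list[float],
                probability_threshold: float = 0.95) -> bool:
    """The facial images match if the same-person probability is not less than the threshold."""
    return same_person_probability(embedding_a, embedding_b) >= probability_threshold
```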
It should be noted that the face recognition model may be used to characterize the correspondence between facial images and comparison results. The face recognition model may be a correspondence table pre-established by those skilled in the art based on extensive statistical calculation, for characterizing the correspondence between facial images and comparison results; it may also be obtained by training a model usable for classification, such as a Naive Bayesian Model, a Support Vector Machine, XGBoost or a Convolutional Neural Network.
Step 406: in response to determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user, determining that piece of user information as the target user information matching the facial image and the identity information of the user.
In this embodiment, in response to determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user, the execution body may determine that piece of user information as the target user information, and may continue to execute step 407.
Step 407: determining that the user is a legitimate user.
In this embodiment, in response to the target user information existing in the user information set, the execution body may determine that the user is a legitimate user.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the identity authentication method in this embodiment highlights the steps of forming the identity information group, determining the target identity information in the identity information group, matching the facial image in the piece of user information containing the target identity information against the facial image of the user, and, after determining that the facial image in that piece of user information matches the facial image of the user, determining that piece of user information as the target user information. By first determining the target identity information and only then matching the facial image in the piece of user information containing the target identity information against the facial image of the user, the scheme described in this embodiment avoids matching the facial image included in every piece of user information in the user information set against the facial image of the user, saving identity authentication time.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an identity authentication device. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can specifically be applied to various electronic devices.
As shown in Fig. 5, the identity authentication device 500 of this embodiment includes a detection unit 501, an acquiring unit 502, a first determination unit 503 and a second determination unit 504. The detection unit 501 is configured to, in response to receiving a request sent by a user terminal that belongs to a preset request category, perform liveness detection on the user to which the user terminal belongs, to determine whether the user is a live user, wherein the request may include identity information of the user. The acquiring unit 502 is configured to acquire a facial image of the user in response to determining that the user is a live user. The first determination unit 503 is configured to determine whether a preset user information set contains target user information matching the facial image and the identity information of the user, wherein a piece of user information may include a facial image and identity information. The second determination unit 504 is configured to determine that the user is a legitimate user in response to determining that the target user information exists in the user information set.
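A minimal sketch of how such a unit-based device could be composed is shown below; the class, method and attribute names are illustrative assumptions and do not reflect any actual implementation of the patent.

```python
class IdentityAuthenticationDevice:
    """Illustrative composition of the four units described for device 500."""

    def __init__(self, detection_unit, acquiring_unit,
                 first_determination_unit, second_determination_unit):
        self.detection_unit = detection_unit                    # liveness detection
        self.acquiring_unit = acquiring_unit                    # facial image acquisition
        self.first_determination_unit = first_determination_unit  # target user info lookup
        self.second_determination_unit = second_determination_unit  # legitimacy decision

    def authenticate(self, request, user_terminal, user_info_set) -> bool:
        if not self.detection_unit.is_live_user(user_terminal):
            return False
        face_image = self.acquiring_unit.get_face_image(user_terminal)
        target = self.first_determination_unit.find_target_user_info(
            user_info_set, face_image, request["identity_info"])
        return self.second_determination_unit.is_legitimate(target)
```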
In this embodiment, for the specific processing of the detection unit 501, the acquiring unit 502, the first determination unit 503 and the second determination unit 504 of the identity authentication device 500, and for the technical effects they bring about, reference may be made to the related descriptions of step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2, respectively, and details are not repeated here.
In some optional implementations of this embodiment, the device 500 may further include an output unit (not shown in the figure), configured to, after it is determined that the user is a legitimate user, generate prompt information indicating that the user has passed identity authentication, and output the prompt information.
In some optional implementations of this embodiment, the detection unit 501 may be further configured to: output a randomly generated character string to the user terminal; receive a lip movement video sent by the user terminal, wherein the lip movement video may be a video recorded by the user terminal in response to the user reading out the characters in the character string; analyze the lip movement video to determine whether the lip movements made when the user reads out the characters in the character string are consistent with the lip movements that should be made when reading out the characters in the character string; and if they are consistent, determine that the user is a live user.
In some optional implementations of this embodiment, the detection unit 501 may also be further configured to: send an image acquisition instruction to the user terminal; receive a first image, sent by the user terminal, showing the face of the user; and determine, based on the first image, whether the user is a live user.
In some optional implementations of this embodiment, the acquiring unit 502 may be further configured to: extract a face region from the first image, and generate a facial image from the extracted face region.
In some optional implementations of this embodiment, the first determination unit may include: an extraction subunit (not shown in the figure), configured to extract identity information from the pieces of user information in the user information set to form an identity information group; a first determination subunit (not shown in the figure), configured to determine whether the identity information group contains target identity information matching the identity information of the user; a matching subunit (not shown in the figure), configured to, if the target identity information exists, match the facial image in the piece of user information containing the target identity information against the facial image of the user; and a second determination subunit (not shown in the figure), configured to, in response to determining that the facial image in the piece of user information containing the target identity information matches the facial image of the user, determine that piece of user information as the target user information.
In some optional implementations of this embodiment, the identity information may include an ID card number; and the first determination subunit may be further configured to: search the identity information group for identity information whose ID card number is identical to the ID card number of the user; and if such identity information is found, determine the found identity information as the target identity information.
In some optional implementations of this embodiment, the matching subunit may be further configured to: input the facial image in the piece of user information containing the target identity information and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result may include the probability that the face shown in the facial image in that piece of user information and the face shown in the facial image of the user come from the same person; determine whether the probability is less than a probability threshold; and if the probability is not less than the probability threshold, determine that the facial image in the piece of user information containing the target identity information matches the facial image of the user.
The device provided by the above embodiment of the present application effectively uses the liveness detection performed after the request belonging to the preset request category is received, together with the determination of whether the target user information exists in the user information set, and thereby improves the accuracy of the identity authentication result.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing an electronic device of the embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the system 600 are also stored in the RAM 603. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, as well as a loudspeaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the system of the present application are executed.
It should be noted that computer-readable medium shown in the application can be computer-readable signal media or meter Calculation machine readable storage medium storing program for executing either the two arbitrarily combines.Computer readable storage medium for example can be --- but not Be limited to --- electricity, magnetic, optical, electromagnetic, infrared ray or semiconductor system, device or device, or arbitrary above combination.Meter The more specific example of calculation machine readable storage medium storing program for executing can include but is not limited to:Electrical connection with one or more conducting wires, just It takes formula computer disk, hard disk, random access storage device (RAM), read-only memory (ROM), erasable type and may be programmed read-only storage Device (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), light storage device, magnetic memory device, Or above-mentioned any appropriate combination.In this application, can be any include computer readable storage medium or storage journey The tangible medium of sequence, the program can be commanded the either device use or in connection of execution system, device.And at this In application, computer-readable signal media may include in a base band or as the data-signal that a carrier wave part is propagated, Wherein carry computer-readable program code.Diversified forms may be used in the data-signal of this propagation, including but unlimited In electromagnetic signal, optical signal or above-mentioned any appropriate combination.Computer-readable signal media can also be that computer can Any computer-readable medium other than storage medium is read, which can send, propagates or transmit and be used for By instruction execution system, device either device use or program in connection.Include on computer-readable medium Program code can transmit with any suitable medium, including but not limited to:Wirelessly, electric wire, optical cable, RF etc. or above-mentioned Any appropriate combination.
The flowcharts and block diagrams in the drawings illustrate the architectures, functions, and operations that may be implemented by the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that shown in the drawings. For example, two successive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a detection unit, an acquiring unit, a first determination unit, and a second determination unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the detection unit may also be described as "a unit that performs detection on the user to whom the user terminal belongs".
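Purely by way of illustration, and not as part of the claimed subject matter, the unit structure described above might be sketched in Python roughly as follows; the class name IdentityAuthenticator and all method names are hypothetical and do not appear elsewhere in this application:

# Illustrative sketch only: the four units described above, grouped into one
# processor-side class. All names and signatures are hypothetical.
class IdentityAuthenticator:
    def __init__(self, user_info_set):
        # user_info_set: an iterable of records, each carrying a facial image
        # and identity information (the "user information set").
        self.user_info_set = list(user_info_set)

    def detect_liveness(self, request):              # detection unit
        raise NotImplementedError

    def acquire_face_image(self, user_terminal):     # acquiring unit
        raise NotImplementedError

    def find_target_user_info(self, face_image, identity_info):  # first determination unit
        raise NotImplementedError

    def is_legitimate_user(self, target_user_info):  # second determination unit
        return target_user_info is not None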
As another aspect, the present application further provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a request sent by a user terminal that belongs to a preset request category, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user, wherein the request may include identity information of the user; in response to determining that the user is a live user, acquire a facial image of the user; determine whether target user information matching the facial image and the identity information of the user exists in a preset user information set, wherein user information may include a facial image and identity information; and in response to determining that the target user information exists in the user information set, determine that the user is a legitimate user.
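As an informal sketch of the flow just described, and not a definitive implementation of the method, the program carried by the computer-readable medium might be organized along the following lines in Python; the request category "payment" and all function names are assumed for illustration only:

def authenticate(request, detect_liveness, acquire_face, find_target,
                 preset_category="payment"):
    """Sketch of the flow described above. The three concrete steps are passed
    in as callables; "payment" is only an assumed example request category."""
    if request.get("category") != preset_category:
        return None                                   # not a preset-category request
    if not detect_liveness(request):                  # liveness detection
        return False                                  # not a live user
    face_image = acquire_face(request)                # acquire the user's facial image
    target = find_target(face_image, request["identity_info"])  # user info lookup
    return target is not None                         # legitimate if a match exists


# Example call with trivial stand-in callables:
result = authenticate(
    {"category": "payment", "identity_info": {"id_number": "000000"}},
    detect_liveness=lambda req: True,
    acquire_face=lambda req: b"jpeg-bytes",
    find_target=lambda face, ident: {"identity_info": ident, "face_image": face},
)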
The above description is merely a description of the preferred embodiments of the present application and of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.

Claims (18)

1. An identity authentication method, comprising:
in response to receiving a request sent by a user terminal that belongs to a preset request category, performing liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user, wherein the request includes identity information of the user;
in response to determining that the user is a live user, acquiring a facial image of the user;
determining whether target user information matching the facial image and the identity information of the user exists in a preset user information set, wherein each piece of user information includes a facial image and identity information;
in response to determining that the target user information exists in the user information set, determining that the user is a legitimate user.
2. The method according to claim 1, wherein the method further comprises:
after determining that the user is a legitimate user, generating prompt information indicating that the user has passed identity authentication, and outputting the prompt information.
3. The method according to claim 1, wherein the performing liveness detection on the user to whom the user terminal belongs comprises:
outputting a randomly generated character string to the user terminal;
receiving a lip motion video sent by the user terminal, wherein the lip motion video is a video recorded by the user terminal in response to the user reading the characters in the character string;
analyzing the lip motion video to determine whether the lip motions made when the user reads the characters in the character string are consistent with the lip motions that should be made when reading the characters in the character string;
if consistent, determining that the user is a live user.
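Purely for readability, a minimal sketch of the liveness check recited in claim 3 is given below in Python; the lip-reading step is abstracted behind a hypothetical recognize_spoken_characters callable, since the claim does not prescribe a particular lip-reading algorithm:

import random
import string

def generate_challenge(length=4):
    """Randomly generated character string to be output to the user terminal."""
    return "".join(random.choices(string.digits, k=length))

def liveness_by_lip_motion(challenge, lip_motion_video, recognize_spoken_characters):
    """The user is treated as live only when the characters read back from the
    lip motion video are consistent with the challenge string.
    recognize_spoken_characters: assumed callable mapping a video to a string."""
    read_back = recognize_spoken_characters(lip_motion_video)
    return read_back == challenge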
4. The method according to claim 1, wherein the performing liveness detection on the user to whom the user terminal belongs further comprises:
sending an image acquisition instruction to the user terminal;
receiving a first image, sent by the user terminal, showing the face of the user;
determining, based on the first image, whether the user is a live user.
5. The method according to claim 4, wherein the acquiring a facial image of the user comprises:
extracting a face region from the first image, and generating a facial image from the extracted face region.
6. The method according to any one of claims 1-5, wherein the determining whether target user information matching the facial image and the identity information of the user exists in the preset user information set comprises:
extracting identity information from the pieces of user information in the user information set to form an identity information group;
determining whether target identity information matching the identity information of the user exists in the identity information group;
if the target identity information exists, matching the facial image in the user information where the target identity information is located against the facial image of the user;
in response to determining that the facial image in the user information where the target identity information is located matches the facial image of the user, determining the user information where the target identity information is located as the target user information.
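A minimal sketch of the two-stage lookup recited in claim 6 (identity information first, then facial image) is given below, assuming plain dictionary records and an injected face-matching callable; all names are illustrative rather than taken from the application:

def find_target_user_info(user_info_set, user_identity, user_face, faces_match):
    """Two-stage lookup: filter by identity information, then confirm by face.
    user_info_set: iterable of {"identity_info": ..., "face_image": ...} records.
    faces_match: assumed callable (stored_face, user_face) -> bool."""
    # Stage 1: form the identity information group and keep matching identities.
    candidates = [info for info in user_info_set
                  if info["identity_info"] == user_identity]
    # Stage 2: confirm with the facial image.
    for info in candidates:
        if faces_match(info["face_image"], user_face):
            return info        # target user information
    return None                # no target user information exists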
7. The method according to claim 6, wherein the identity information includes an identification card number; and
the determining whether target identity information matching the identity information of the user exists in the identity information group comprises:
searching the identity information group for identity information whose identification card number is identical to the identification card number of the user;
if such identity information is found, determining the found identity information as the target identity information.
8. The method according to claim 6, wherein the matching the facial image in the user information where the target identity information is located against the facial image of the user comprises:
inputting the facial image in the user information where the target identity information is located and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result includes the probability that the face shown in the facial image in the user information where the target identity information is located and the face shown in the facial image of the user originate from the same person;
determining whether the probability is less than a probability threshold;
if the probability is not less than the probability threshold, determining that the facial image in the user information where the target identity information is located matches the facial image of the user.
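The comparison recited in claim 8 reduces to a threshold test on the model's output probability; the sketch below assumes the pre-trained face recognition model is available as a callable returning a same-person probability, and the threshold value 0.9 is an assumption rather than a value fixed by the application:

PROBABILITY_THRESHOLD = 0.9  # assumed value; the application does not fix one

def faces_match(stored_face, user_face, recognition_model,
                threshold=PROBABILITY_THRESHOLD):
    """The two facial images match when the pre-trained model's same-person
    probability is not less than the threshold.
    recognition_model: assumed callable (image, image) -> probability in [0, 1]."""
    probability = recognition_model(stored_face, user_face)
    return probability >= threshold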
9. An identity authentication apparatus, comprising:
a detection unit, configured to, in response to receiving a request sent by a user terminal that belongs to a preset request category, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user, wherein the request includes identity information of the user;
an acquiring unit, configured to, in response to determining that the user is a live user, acquire a facial image of the user;
a first determination unit, configured to determine whether target user information matching the facial image and the identity information of the user exists in a preset user information set, wherein each piece of user information includes a facial image and identity information;
a second determination unit, configured to, in response to determining that the target user information exists in the user information set, determine that the user is a legitimate user.
10. The apparatus according to claim 9, wherein the apparatus further comprises:
an output unit, configured to, after it is determined that the user is a legitimate user, generate prompt information indicating that the user has passed identity authentication, and output the prompt information.
11. The apparatus according to claim 9, wherein the detection unit is further configured to:
output a randomly generated character string to the user terminal;
receive a lip motion video sent by the user terminal, wherein the lip motion video is a video recorded by the user terminal in response to the user reading the characters in the character string;
analyze the lip motion video to determine whether the lip motions made when the user reads the characters in the character string are consistent with the lip motions that should be made when reading the characters in the character string;
if consistent, determine that the user is a live user.
12. The apparatus according to claim 9, wherein the detection unit is further configured to:
send an image acquisition instruction to the user terminal;
receive a first image, sent by the user terminal, showing the face of the user;
determine, based on the first image, whether the user is a live user.
13. The apparatus according to claim 12, wherein the acquiring unit is further configured to:
extract a face region from the first image, and generate a facial image from the extracted face region.
14. The apparatus according to any one of claims 9-13, wherein the first determination unit comprises:
an extraction subunit, configured to extract identity information from the pieces of user information in the user information set to form an identity information group;
a first determination subunit, configured to determine whether target identity information matching the identity information of the user exists in the identity information group;
a matching subunit, configured to, if the target identity information exists, match the facial image in the user information where the target identity information is located against the facial image of the user;
a second determination subunit, configured to, in response to determining that the facial image in the user information where the target identity information is located matches the facial image of the user, determine the user information where the target identity information is located as the target user information.
15. The apparatus according to claim 14, wherein the identity information includes an identification card number; and
the first determination subunit is further configured to:
search the identity information group for identity information whose identification card number is identical to the identification card number of the user;
if such identity information is found, determine the found identity information as the target identity information.
16. The apparatus according to claim 14, wherein the matching subunit is further configured to:
input the facial image in the user information where the target identity information is located and the facial image of the user into a pre-trained face recognition model to obtain a recognition result, wherein the recognition result includes the probability that the face shown in the facial image in the user information where the target identity information is located and the face shown in the facial image of the user originate from the same person;
determine whether the probability is less than a probability threshold;
if the probability is not less than the probability threshold, determine that the facial image in the user information where the target identity information is located matches the facial image of the user.
17. An electronic device, comprising:
one or more processors; and
a storage device, configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
18. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
CN201810259529.3A 2018-03-27 2018-03-27 Identity identifying method and device Pending CN108494778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810259529.3A CN108494778A (en) 2018-03-27 2018-03-27 Identity identifying method and device

Publications (1)

Publication Number Publication Date
CN108494778A true CN108494778A (en) 2018-09-04

Family

ID=63316613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810259529.3A Pending CN108494778A (en) 2018-03-27 2018-03-27 Identity identifying method and device

Country Status (1)

Country Link
CN (1) CN108494778A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050265585A1 (en) * 2004-06-01 2005-12-01 Lumidigm, Inc. Multispectral liveness determination
GB2501362B (en) * 2012-02-21 2017-03-22 Bud Andrew Online pseudonym verification and identity validation
CN105844206A (en) * 2015-01-15 2016-08-10 北京市商汤科技开发有限公司 Identity authentication method and identity authentication device
CN105119872A (en) * 2015-02-13 2015-12-02 腾讯科技(深圳)有限公司 Identity verification method, client, and service platform
CN104766063A (en) * 2015-04-08 2015-07-08 宁波大学 Living body human face identifying method
CN106302330A (en) * 2015-05-21 2017-01-04 腾讯科技(深圳)有限公司 Auth method, device and system
CN105117695A (en) * 2015-08-18 2015-12-02 北京旷视科技有限公司 Living body detecting device and method
CN105681316A (en) * 2016-02-02 2016-06-15 腾讯科技(深圳)有限公司 Identity verification method and device
CN106101136A (en) * 2016-07-22 2016-11-09 飞天诚信科技股份有限公司 The authentication method of a kind of biological characteristic contrast and system
CN106599772A (en) * 2016-10-31 2017-04-26 北京旷视科技有限公司 Living body authentication method, identity authentication method and device
CN106778525A (en) * 2016-11-25 2017-05-31 北京旷视科技有限公司 Identity identifying method and device
CN106982426A (en) * 2017-03-30 2017-07-25 广东微模式软件股份有限公司 A kind of method and system for remotely realizing old card system of real name
CN107273794A (en) * 2017-04-28 2017-10-20 北京建筑大学 Live body discrimination method and device in a kind of face recognition process

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493112A (en) * 2018-09-28 2019-03-19 深圳壹账通智能科技有限公司 Member management control method, device, computer equipment and storage medium
CN109859857A (en) * 2019-01-30 2019-06-07 深圳安泰创新科技股份有限公司 Mask method, device and the computer readable storage medium of identity information
CN109871834A (en) * 2019-03-20 2019-06-11 北京字节跳动网络技术有限公司 Information processing method and device
CN109934191A (en) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 Information processing method and device
CN109977839A (en) * 2019-03-20 2019-07-05 北京字节跳动网络技术有限公司 Information processing method and device
CN109905401A (en) * 2019-03-22 2019-06-18 深圳市元征科技股份有限公司 Real name identification method and terminal, server
CN110363067A (en) * 2019-05-24 2019-10-22 深圳壹账通智能科技有限公司 Auth method and device, electronic equipment and storage medium
CN112528704A (en) * 2019-09-17 2021-03-19 触景无限科技(北京)有限公司 Information processing method and device
CN110868558A (en) * 2019-11-26 2020-03-06 秒针信息技术有限公司 Recording processing method and device
CN111126229A (en) * 2019-12-17 2020-05-08 中国建设银行股份有限公司 Data processing method and device
CN112040481A (en) * 2020-08-19 2020-12-04 广东电网有限责任公司广州供电局 Secondary authentication method based on 5G communication gateway
CN112040481B (en) * 2020-08-19 2023-10-24 广东电网有限责任公司广州供电局 Secondary authentication method based on 5G communication gateway
CN116383793A (en) * 2023-04-23 2023-07-04 上海万雍科技股份有限公司 Face data processing method, device, electronic equipment and computer readable medium
CN116383793B (en) * 2023-04-23 2023-09-19 上海万雍科技股份有限公司 Face data processing method, device, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN108494778A (en) Identity identifying method and device
CN108154196B (en) Method and apparatus for exporting image
CN108898186A (en) Method and apparatus for extracting image
CN108171207A (en) Face identification method and device based on video sequence
CN108280477A (en) Method and apparatus for clustering image
CN108898185A (en) Method and apparatus for generating image recognition model
CN108269254A (en) Image quality measure method and apparatus
CN109034069A (en) Method and apparatus for generating information
CN109389589A (en) Method and apparatus for statistical number of person
CN109446990A (en) Method and apparatus for generating information
US11126827B2 (en) Method and system for image identification
WO2020006964A1 (en) Image detection method and device
CN108062544A (en) For the method and apparatus of face In vivo detection
CN108416326A (en) Face identification method and device
CN108429816A (en) Method and apparatus for generating information
CN109903392A (en) Augmented reality method and apparatus
CN108229375B (en) Method and device for detecting face image
CN109241934A (en) Method and apparatus for generating information
CN108549848A (en) Method and apparatus for output information
CN108062416B (en) Method and apparatus for generating label on map
CN110443824A (en) Method and apparatus for generating information
CN108932774A (en) information detecting method and device
CN108960110A (en) Method and apparatus for generating information
CN108509921A (en) Method and apparatus for generating information
CN108509994A (en) character image clustering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180904)