CN109934191A - Information processing method and device - Google Patents
Information processing method and device
- Publication number: CN109934191A (application number CN201910211759.7A)
Abstract
Embodiments of the disclosure provide an information processing method and device. One specific embodiment of the method includes: in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object, where the action instruction instructs the target face object to perform an action; generating, based on the video, a first result indicating whether the target face object performed the action; extracting video frames from the video and inputting the extracted video frames into a pre-trained liveness detection model to obtain detection results, where a detection result indicates whether the face object in an extracted video frame is a live face; and generating, based on the detection results and the first result, a target result indicating whether the target face object is a live face. This embodiment improves the accuracy of liveness detection and helps improve the reliability of identity authentication.
Description
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to an information processing method and device.
Background
With the development of face recognition technology, people can log in to accounts, make payments, and unlock devices by scanning their faces. This brings convenience to daily life, but it also introduces risk: since a machine that can recognize a face can also recognize an image of a face, a malicious actor may use a facial image of a user to impersonate that user's identity.
Summary of the invention
Embodiments of the disclosure propose an information processing method and device.
In a first aspect, embodiments of the disclosure provide an information processing method, comprising: in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object, where the action instruction instructs the target face object to perform an action; generating, based on the video, a first result indicating whether the target face object performed the action; extracting video frames from the video and inputting the extracted video frames into a pre-trained liveness detection model to obtain detection results, where a detection result indicates whether the face object in an extracted video frame is a live face; and generating, based on the obtained detection results and the first result, a target result indicating whether the target face object is a live face.
In some embodiments, the method further includes: in response to determining that the target result indicates the target face object is not a live face, outputting prompt information indicating that identity authentication failed.
In some embodiments, the method further includes: in response to determining that the target result indicates the target face object is a live face, selecting a video frame from the extracted video frames as an identity authentication image for the target face object; sending the identity authentication image to a server; and obtaining a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored facial image of the user.
In some embodiments, extracting video frames from the video includes: extracting at least two video frames from the video. Inputting the extracted video frames into the liveness detection model to obtain detection results includes: inputting the at least two video frames into the liveness detection model separately to obtain at least two detection results. Generating, based on the detection results and the first result, a target result indicating whether the target face object is a live face includes: generating, based on the obtained at least two detection results, a second result indicating whether the target face object is a live face; and generating the target result based on the first result and the second result.
In some embodiments, generating the target result based on the first result and the second result includes: in response to determining that the first result indicates the target face object performed the action and the second result indicates the target face object is a live face, generating a target result indicating that the target face object is a live face.
In some embodiments, generating the target result based on the first result and the second result further includes: in response to determining that the first result indicates the target face object did not perform the action, or that the second result indicates the target face object is not a live face, generating a target result indicating that the target face object is not a live face.
In some embodiments, the liveness detection model is obtained through machine learning.
In some embodiments, the liveness detection model is a silent liveness detection model.
In some embodiments, extracting video frames from the video includes: for each video frame included in the video, determining the head pose corresponding to that video frame; and extracting from the video the video frames whose corresponding head poses satisfy a preset condition.
In a second aspect, embodiments of the disclosure provide an information processing device, comprising: an output and capture unit configured to, in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object, where the action instruction instructs the target face object to perform an action; a first generation unit configured to generate, based on the video, a first result indicating whether the target face object performed the action; a detection unit configured to extract video frames from the video and input the extracted video frames into a pre-trained liveness detection model to obtain detection results, where a detection result indicates whether the face object in an extracted video frame is a live face; and a second generation unit configured to generate, based on the detection results and the first result, a target result indicating whether the target face object is a live face.
In a third aspect, embodiments of the disclosure provide a terminal device, comprising: one or more processors; a storage device on which one or more programs are stored; and a camera configured to capture video. When the one or more programs are executed by the one or more processors, the one or more processors implement the method of any embodiment of the information processing method described above.
In a fourth aspect, embodiments of the disclosure provide a computer-readable medium on which a computer program is stored. When the program is executed by a processor, it implements the method of any embodiment of the information processing method described above.
The information processing method and device provided by embodiments of the disclosure work as follows: in response to detecting an identity authentication trigger signal, a preset action instruction is output and a video of the target face object is captured; based on the video, a first result indicating whether the target face object performed the action is generated; video frames are then extracted from the video and input into a pre-trained liveness detection model to obtain detection results; finally, based on the detection results and the first result, a target result indicating whether the target face object is a live face is generated. In this way, on top of action-based liveness detection driven by the action instruction, the liveness detection model is effectively used to perform silent liveness detection on the target face object. By combining the first result from action-based liveness detection with the detection results from silent liveness detection, a more accurate target result can be generated, which improves the accuracy of liveness detection and helps improve the reliability of identity authentication.
Description of the drawings
Other features, objects, and advantages of the disclosure will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the information processing method according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the information processing method according to an embodiment of the disclosure;
Fig. 4 is a flowchart of another embodiment of the information processing method according to the disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the information processing device according to the disclosure;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing a terminal device of an embodiment of the disclosure.
Detailed description of embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the relevant invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that in the absence of conflict, the feature in embodiment and embodiment in the disclosure can phase
Mutually combination.The disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the information processing method or information processing device of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as payment software, shopping applications, web browser applications, search applications, instant messaging tools, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a camera, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above; they may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a model training server that sends a liveness detection model to the terminal devices 101, 102, 103. The model training server may train a liveness detection model and send the resulting model to the terminal devices.
It should be noted that the information processing method provided by embodiments of the disclosure is generally executed by the terminal devices 101, 102, 103; correspondingly, the information processing device is generally located in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers as required by the implementation. In cases where the data used in generating the target result does not need to be obtained remotely, the system architecture may include only the terminal devices, without the network and the server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the information processing method according to the disclosure is shown. The information processing method includes the following steps:
Step 201: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of the target face object.
In the present embodiment, the execution body of the information processing method (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may, upon detecting an identity authentication trigger signal received through a wired or wireless connection, output a preset action instruction and capture a video of the target face object. Here, the identity authentication trigger signal is a signal for triggering an identity authentication operation, and the identity authentication operation is an operation for authenticating whether the person corresponding to the target face object is a pre-registered user. Specifically, the identity authentication trigger signal may be a signal generated by a trigger operation performed by the user on the execution body (for example, clicking an identity authentication trigger button), or a signal sent by an electronic device communicatively connected to the execution body (for example, the server 105 shown in Fig. 1). In particular, the execution body may use a camera to shoot continuously and generate the identity authentication trigger signal in response to capturing the target face object. The target face object is the face object on which identity authentication is to be performed.
In practice, when identity authentication is performed through face recognition, in order to prevent malicious actors from using a facial image of a pre-registered user to pass authentication and thereby steal an identity, liveness detection is generally performed on the face object to be authenticated before face matching, so as to determine whether the face object is a live face.
In the present embodiment, the action instruction instructs the target face object to perform an action. Specifically, the action instruction may take various forms (for example, voice, image, or text). As an example, the action instruction may be the text "Blink".
Here, the execution body may output the action instruction so that the target face object performs the action based on it, and the execution body may capture a video of the target face object for authenticating the identity of the target face object.
Step 202: based on the video, generate a first result indicating whether the target face object performed the action.
In the present embodiment, based on the video obtained in step 201, the execution body may determine whether the target face object performed the action indicated by the action instruction, and generate a first result indicating whether the target face object performed the action. In some embodiments, machine learning may be used to generate the first result. For example, the first result may be the probability that the target face object performed the action; as another example, the first result may be a Boolean value, where 1 indicates the action was performed and 0 indicates it was not, or vice versa. In some embodiments, other information may also be generated based on the first result and presented to the user, including but not limited to at least one of: text, numbers, symbols, images, and audio.
Specifically, the execution body may analyze the video to determine whether the target face object performed the action indicated by the action instruction. In response to determining that the target face object performed the indicated action, it generates a first result indicating that the target face object is a live face (for example, the Boolean value "1"); in response to determining that the target face object did not perform the indicated action, it generates a first result indicating that the target face object is not a live face (for example, the Boolean value "0").
It can be understood that when the target face object is a live face, it can perform the action based on the action instruction while the video is being captured; when the target face object is not a live face, it cannot. Therefore, whether the target face object is a live face can be determined here by determining whether it performed the action indicated by the action instruction.
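As a minimal illustration of this step, the sketch below derives the first result from hypothetical per-frame action scores (for example, the per-frame confidences of a blink detector). The score source and the threshold are assumptions for illustration; the disclosure does not prescribe a concrete mechanism:

```python
def generate_first_result(action_scores, threshold=0.5):
    """Return True (action performed) if any frame's action score
    exceeds the threshold, else False.

    `action_scores` is a list of per-frame confidences from a
    hypothetical action detector (e.g. a blink classifier).
    """
    return any(score > threshold for score in action_scores)

# A blinking subject produces a spike somewhere in the sequence:
assert generate_first_result([0.1, 0.2, 0.9, 0.2]) is True
assert generate_first_result([0.1, 0.2, 0.3]) is False
```

A probability-valued first result, as also mentioned above, could instead be obtained by returning `max(action_scores)`.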
Step 203: extract video frames from the video, and input the extracted video frames into a pre-trained liveness detection model to obtain detection results.
In the present embodiment, based on the video obtained in step 201, the execution body may extract video frames from the video and input the extracted video frames into a pre-trained liveness detection model to obtain detection results. Here, a detection result indicates whether the face object in an extracted video frame is a live face. For example, a detection result may be the probability that the face object in the extracted video frame is a live face; as another example, it may be a Boolean value, where 1 indicates a live face and 0 indicates otherwise, or vice versa. In some embodiments, other information may also be generated based on the detection results and presented to the user, including but not limited to at least one of: text, numbers, symbols, images, and audio.
In the present embodiment, the liveness detection model characterizes the correspondence between video frames and their detection results. Specifically, as an example, the liveness detection model may be a correspondence table, pre-established by technicians based on statistics over a large number of video frames and their detection results, that stores multiple video frames together with their corresponding detection results.
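In that simplest form, the "model" is just a lookup keyed by some signature of the frame. The sketch below uses a hash of the raw frame bytes as the key; how frames are indexed is an assumption of this example, not something the disclosure specifies:

```python
class LookupLivenessModel:
    """Toy correspondence-table 'model': maps a frame signature to a
    pre-recorded detection result (True = live face)."""

    def __init__(self, table):
        self.table = table  # {frame_signature: bool}

    def predict(self, frame_bytes, default=False):
        # Frames absent from the table fall back to a default result.
        return self.table.get(hash(frame_bytes), default)

model = LookupLivenessModel({hash(b"frame-a"): True, hash(b"frame-b"): False})
assert model.predict(b"frame-a") is True
assert model.predict(b"frame-zzz") is False  # unknown frame -> default
```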
In some optional implementations of the present embodiment, the liveness detection model may be obtained through machine learning. In some embodiments, the liveness detection model is a silent liveness detection model, which can predict, from a single image, whether the face in the image comes from a real live person or from an image (for example, a photograph of a live face).
Specifically, as an example, the silent liveness detection model may be trained by the execution body or another electronic device through the following steps. First, obtain a training sample set, where each training sample includes a sample facial image and a sample detection result pre-annotated for that image; the sample detection result indicates whether the sample face object in the sample facial image is a live face. For example, the training samples may include positive samples and negative samples, where a positive sample is a facial image of a real live person and a negative sample is a facial image not from a live person (for example, an image obtained by re-photographing a facial image). Then, using a machine learning method, take the sample facial images in the training samples as input and the corresponding sample detection results as the desired output, and train to obtain the liveness detection model.
Specifically, the execution body may extract video frames from the video using various methods. For example, frames may be extracted at random, or the frames at preset positions in the frame sequence of the video may be extracted. It should be noted that at least one video frame is extracted here.
In some optional implementations of the present embodiment, the execution body may extract video frames from the video through the following steps. First, for each video frame included in the video, the execution body may determine the head pose corresponding to that video frame. Then, the execution body may extract from the video the video frames whose corresponding head poses satisfy a preset condition.
In this implementation, head pose refers to the orientation of the head in a three-dimensional coordinate system. Here, the orientation of the head can be characterized by the angles of rotation of the head around the X, Y, and Z axes of the coordinate system. It should be noted that the head pose corresponding to a video frame is the head pose of the head in the facial image included in that video frame.
In this implementation, the preset condition may be any pre-set condition used to constrain the head pose corresponding to the extracted video frames. For example, the preset condition may be that the rotation angles of the head around the X, Y, and Z axes of the three-dimensional coordinate system are each less than or equal to a preset angle (for example, 30°).
It can be understood that, in practice, the smaller the rotation angles of the head corresponding to a video frame, the more distinct the facial features recorded in the frame. Since a liveness detection model generally determines whether the face object in a video frame is a live face based on the facial features in the frame, using video frames with small head rotation angles yields more accurate detection results.
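The pose-based filter can be sketched as follows; the per-frame pose angles are assumed to come from a separate head-pose estimator (for example, one based on facial landmarks), which is outside the scope of this sketch:

```python
def select_frames_by_pose(frames_with_pose, max_angle=30.0):
    """Keep frames whose head pose satisfies the preset condition:
    each rotation angle (about X, Y, Z) is at most `max_angle` degrees.

    `frames_with_pose` is a list of (frame, (x_deg, y_deg, z_deg))
    pairs produced by a hypothetical head-pose estimator.
    """
    return [
        frame
        for frame, angles in frames_with_pose
        if all(abs(a) <= max_angle for a in angles)
    ]

frames = [
    ("frontal", (5.0, -3.0, 1.0)),   # near-frontal: kept
    ("profile", (10.0, 75.0, 0.0)),  # large yaw: discarded
]
assert select_frames_by_pose(frames) == ["frontal"]
```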
Step 204: based on the detection results and the first result, generate a target result indicating whether the target face object is a live face.
In the present embodiment, based on the detection results obtained in step 203 and the first result obtained in step 202, the execution body may generate a target result indicating whether the target face object is a live face. The target result is the result obtained after performing liveness detection on the face object; for example, it may be a Boolean value, where "1" indicates the face object is live and "0" indicates it is not, or vice versa. Other information may also be generated based on the target result and presented to the user, including but not limited to at least one of: text, numbers, symbols, images, and audio.
Specifically, the execution body may generate the target result from the obtained detection results and the first result using various methods. For example, in response to determining that the detection results and the first result include a result indicating that the target face object is not a live face, the execution body may generate a target result indicating that the target face object is not a live face.
In some optional implementations of the present embodiment, the execution body may extract at least two video frames from the video and input each of them into the pre-trained liveness detection model to obtain at least two detection results. Based on the obtained detection results and the first result, the execution body may then generate the target result through the following steps. First, based on the obtained at least two detection results, the execution body may generate a second result indicating whether the target face object is a live face. Then, based on the first result and the second result, the execution body may generate the target result indicating whether the target face object is a live face.
Here, the second result may indicate whether the target face object is a live face; for example, it may be a Boolean value, where "1" indicates live and "0" indicates not live, or vice versa. Other information may also be generated based on the second result and presented to the user, including but not limited to at least one of: text, numbers, symbols, images, and audio. Specifically, the execution body may generate the second result from the obtained at least two detection results using various methods. For example, in response to determining that all of the obtained detection results indicate the target face object is a live face, it generates a second result indicating that the target face object is a live face; in response to determining that the obtained detection results include a detection result indicating the target face object is not a live face, it generates a second result indicating that the target face object is not a live face.
In addition, the execution body may generate the target result from the first result and the second result using various methods.
In some optional implementations of the present embodiment, the execution body may, in response to determining that the first result indicates the target face object performed the action and the second result indicates the target face object is a live face, generate a target result indicating that the target face object is a live face.
In addition, in some optional implementations of the present embodiment, the execution body may, in response to determining that the first result indicates the target face object did not perform the action, or that the second result indicates the target face object is not a live face, generate a target result indicating that the target face object is not a live face.
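Taken together, these optional implementations amount to a conjunction: the face is judged live only when the action check and every per-frame silent detection agree. A minimal sketch of that decision logic:

```python
def second_result(detection_results):
    """All per-frame detections must indicate a live face."""
    return all(detection_results)

def target_result(first_result, detection_results):
    """Live only if the action was performed AND every extracted
    frame passed silent liveness detection."""
    return first_result and second_result(detection_results)

# Action performed, all frames live -> live face:
assert target_result(True, [True, True]) is True
# Action performed, but one frame failed silent detection -> not live:
assert target_result(True, [True, False]) is False
# Action not performed -> not live regardless of the frames:
assert target_result(False, [True, True]) is False
```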
In some optional implementations of the present embodiment, after obtaining the target result, the execution body may, in response to determining that the target result indicates the target face object is not a live face, output prompt information indicating that identity authentication failed. The prompt information may include, but is not limited to, at least one of: text, numbers, symbols, images, audio, and video. In practice, the execution body may output the prompt information to the person undergoing authentication (the person corresponding to the target face object), so that the person learns the result of this authentication attempt.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information processing method according to the present embodiment. In the application scenario of Fig. 3, a mobile phone 301 may, in response to detecting an identity authentication trigger signal 302, output a preset action instruction 303 and capture a video 305 of a target face object 304, where the action instruction 303 instructs the target face object 304 to perform an action; for example, the action instruction may be the audio "Blink". Then, the mobile phone 301 may, based on the video 305, generate a first result 306 indicating whether the target face object 304 performed the action indicated by the action instruction 303. Next, the mobile phone 301 may extract a video frame 3051 from the video 305 and input the extracted video frame 3051 into a pre-trained liveness detection model 307 to obtain a detection result 308, where the detection result 308 indicates whether the face object in the video frame 3051 is a live face. Finally, the mobile phone 301 may, based on the obtained detection result 308 and the first result 306, generate a target result 309 indicating whether the target face object 304 is a live face.
Currently, in order to reduce the risk of identity theft in identity authentication scenarios, one prior-art approach determines whether a face object is a live face through action-based liveness detection.
Specifically, action-based liveness detection identifies whether a face object is a live face by detecting whether it performs a predetermined action. As an example, it may use actions such as blinking, opening the mouth, shaking the head, or nodding, together with technologies such as facial landmark localization and face tracking, to determine whether the face object is a real live person. In practice, although live and non-live faces can be distinguished through action-based liveness detection, identity authentication is usually directly tied to a user's personal interests, and action-based liveness detection inevitably has errors, so there remains a need to improve the accuracy of liveness detection.
The method provided by the above embodiment of the disclosure outputs a preset action instruction and captures a video of the target face object in response to detecting an identity authentication trigger signal, and then, based on the video, generates a first result indicating whether the target face object performed the action, thereby realizing action-based liveness detection of the face object. On this basis, video frames are extracted from the video and input into a pre-trained liveness detection model to obtain detection results, realizing silent liveness detection of the face object. By combining the detection results from silent liveness detection with the first result from action-based liveness detection, a more accurate target result indicating whether the target face object is a live face can be generated. Thus, compared with the prior art, an electronic device executing the method provided by the above embodiment can perform more accurate liveness detection, helping it generate and output more accurate identity authentication results.
With further reference to Fig. 4, a flow 400 of another embodiment of the information processing method is illustrated. The flow 400 of the information processing method includes the following steps:
Step 401: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object.
In this embodiment, the execution body of the information processing method (e.g., the terminal devices 101, 102, 103 shown in Fig. 1) may, in response to detecting an identity authentication trigger signal via a wired or wireless connection, output a preset action instruction and capture a video of the target face object. Here, the identity authentication trigger signal is a signal for triggering an identity authentication operation, and the action instruction instructs the target face object to perform an action.
Specifically, the execution body may output the action instruction so that the target face object performs the action based on the instruction, and capture a video of the target face object, the video being used for authenticating the identity of the target face object.
Step 402: based on the video, generate a first result indicating whether the target face object performed the action.
In this embodiment, based on the video obtained in step 401, the execution body may generate a first result indicating whether the target face object performed the action.
Step 403: extract a video frame from the video, and input the extracted video frame into a pre-trained living body detection model to obtain a detection result.
In this embodiment, based on the video obtained in step 401, the execution body may extract a video frame from the video and input the extracted video frame into a pre-trained living body detection model to obtain a detection result. Here, the detection result indicates whether the face object corresponding to the extracted video frame is a living human face, and the living body detection model characterizes the correspondence between a video frame and the detection result corresponding to that video frame.
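The living body detection model is treated here as a black box mapping a video frame to a detection result. A minimal sketch of the per-frame inference step, with a stand-in callable in place of a real trained network and an illustrative decision threshold (both are assumptions, not part of the disclosure):

```python
def detect_liveness(model, frame, threshold=0.5):
    """Silent liveness check for one extracted frame.
    `model` is any callable mapping a (preprocessed) frame to a live-face
    probability in [0, 1] -- e.g. a CNN classifier loaded elsewhere.
    Returns the detection result for that frame."""
    score = model(frame)
    return {"is_live": score >= threshold, "score": score}
```

In a real pipeline `model` would be the pre-trained silent liveness network and `frame` a decoded, resized image tensor; here both are placeholders so the control flow stays visible.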
Step 404: based on the detection result and the first result, generate a target result indicating whether the target face object is a living human face.
In this embodiment, based on the detection result obtained in step 403 and the first result obtained in step 402, the execution body may generate a target result indicating whether the target face object is a living human face. The target result may be the result obtained after performing living body detection on the target face object.
Steps 401, 402, 403 and 404 above are respectively consistent with steps 201, 202, 203 and 204 in the previous embodiment, and the descriptions of steps 201, 202, 203 and 204 also apply to steps 401, 402, 403 and 404; details are not repeated here.
Step 405: in response to determining that the target result indicates the target face object is a living human face, select a video frame from the extracted video frames as the identity authentication image corresponding to the target face object.
In this embodiment, in response to determining that the target result indicates that the target face object is a living human face, the execution body may select a video frame from the extracted video frames as the identity authentication image corresponding to the target face object. Here, the identity authentication image is an image used for identity authentication, that is, for determining whether the target face object belongs to a pre-registered user.
Specifically, the execution body may select the identity authentication image from the extracted video frames in various ways. For example, a frame may be chosen at random; alternatively, the sharpest video frame may be chosen as the identity authentication image.
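One concrete way to realize the "sharpest frame" option mentioned above is a variance-of-Laplacian focus measure; the 3x3 kernel and the grayscale list-of-rows image representation are illustrative choices, not prescribed by the disclosure:

```python
def sharpness(gray):
    """Variance of a 3x3 Laplacian response over a grayscale image
    (a list of rows of pixel intensities) -- a common focus measure:
    blurry frames score low, sharp frames score high."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def pick_auth_frame(frames):
    """Choose the sharpest frame as the identity authentication image."""
    return max(frames, key=sharpness)
```

Selecting by `max(..., key=sharpness)` keeps the choice deterministic, which matches the goal of reusing a frame from the already-captured video rather than re-shooting.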
Step 406: send the identity authentication image to a server.
In this embodiment, the execution body may send the identity authentication image to a server. The server is a server that is communicatively connected to the execution body and matches the identity authentication image sent by the execution body against a pre-stored user face image, where the user face image is a face image of a pre-registered user.
Step 407: obtain a matching result from the server.
In this embodiment, the execution body may obtain a matching result from the server. Here, the matching result indicates whether the identity authentication image matches the pre-stored user face image. For example, the matching result may be the probability that the identity authentication image matches the user face image; as another example, it may be a Boolean value, with 1 indicating that the identity authentication image and the user face image match and 0 indicating that they do not, or vice versa. In some embodiments, other information to be presented to the user may also be generated based on the matching result, including but not limited to at least one of the following: text, numbers, symbols, images, audio, video.
It can be understood that when the matching result indicates that the identity authentication image matches the pre-stored user face image, identity authentication succeeds; when the matching result indicates that the identity authentication image and the pre-stored user face image do not match, identity authentication fails.
Specifically, the server may match the identity authentication image against the pre-stored user face image in various ways to obtain the matching result. For example, it may determine the similarity between the identity authentication image and the user face image; in response to determining that the similarity is greater than or equal to a preset similarity threshold, it generates a matching result indicating that the identity authentication image and the user face image match, and in response to determining that the similarity is less than the threshold, it generates a matching result indicating that they do not match. Here, the similarity is a numerical value characterizing the degree of similarity between the identity authentication image and the user face image: the larger the similarity, the more similar the two images.
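The server-side similarity computation might, for example, compare face feature vectors (embeddings) with cosine similarity against a preset threshold; the embedding representation and the threshold value of 0.6 are assumptions for illustration, not details fixed by the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face feature vectors (embeddings), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_result(auth_embedding, stored_embedding, threshold=0.6):
    """Server-side matching: match iff similarity >= the preset threshold."""
    return cosine_similarity(auth_embedding, stored_embedding) >= threshold
```

In practice the embeddings would come from a face recognition network applied to the identity authentication image and the pre-stored user face image respectively; the threshold trades false accepts against false rejects.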
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the information processing method in this embodiment highlights the steps of, in response to determining that the target result indicates the target face object is a living human face, selecting a video frame from the extracted video frames as the identity authentication image corresponding to the target face object, and matching the face object to a user using the identity authentication image and the pre-stored user face image. The solution described in this embodiment thus performs face matching only when the target face object has been determined to be a living human face. Adding this precondition to face matching helps reduce the load on the server performing the matching and improves the matching speed of the server. Moreover, compared with re-shooting an identity authentication image after living body detection, selecting the identity authentication image from the extracted video frames yields the image faster; it can also improve the consistency between the face object on which living body detection is performed and the face object corresponding to the identity authentication image, which helps realize more reliable identity authentication.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an information processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the information processing apparatus 500 of this embodiment includes an output and capture unit 501, a first generation unit 502, a detection unit 503 and a second generation unit 504. The output and capture unit 501 is configured to, in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object, where the action instruction instructs the target face object to perform an action. The first generation unit 502 is configured to generate, based on the video, a first result indicating whether the target face object performed the action. The detection unit 503 is configured to extract a video frame from the video and input the extracted video frame into a pre-trained living body detection model to obtain a detection result, where the detection result indicates whether the face object corresponding to the extracted video frame is a living human face. The second generation unit 504 is configured to generate, based on the obtained detection result and the first result, a target result indicating whether the target face object is a living human face.
In this embodiment, the output and capture unit 501 of the information processing apparatus 500 may, in response to detecting an identity authentication trigger signal via a wired or wireless connection, output a preset action instruction and capture a video of the target face object. Here, the identity authentication trigger signal is a signal for triggering an identity authentication operation, and the action instruction instructs the target face object to perform an action. The action instruction may take various forms, such as voice, image or text. The video is used for authenticating the identity of the target face object.
In this embodiment, based on the video obtained by the output and capture unit 501, the first generation unit 502 may generate a first result indicating whether the target face object performed the action.
In this embodiment, based on the video obtained by the output and capture unit 501, the detection unit 503 may extract a video frame from the video and input the extracted video frame into a pre-trained living body detection model to obtain a detection result. Here, the detection result indicates whether the face object corresponding to the extracted video frame is a living human face, and the living body detection model characterizes the correspondence between a video frame and its detection result.
In this embodiment, based on the detection result obtained by the detection unit 503 and the first result obtained by the first generation unit 502, the second generation unit 504 may generate a target result indicating whether the target face object is a living human face. The target result may be the result obtained after performing living body detection on the target face object.
In some optional implementations of this embodiment, the apparatus 500 may further include an output unit (not shown) configured to, in response to determining that the target result indicates the target face object is a non-living face, output prompt information characterizing that identity authentication has failed.
In some optional implementations of this embodiment, the apparatus 500 may further include: a selection unit (not shown) configured to, in response to determining that the target result indicates the target face object is a living human face, select a video frame from the extracted video frames as the identity authentication image corresponding to the target face object; a sending unit (not shown) configured to send the identity authentication image to a server; and an acquisition unit (not shown) configured to obtain a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored user face image.
In some optional implementations of this embodiment, the detection unit 503 may be further configured to extract at least two video frames from the video and input the extracted at least two video frames into the pre-trained living body detection model respectively, to obtain at least two detection results. The second generation unit 504 may then include: a first generation module (not shown) configured to generate, based on the obtained at least two detection results, a second result indicating whether the target face object is a living human face; and a second generation module (not shown) configured to generate, based on the first result and the second result, a target result indicating whether the target face object is a living human face.
In some optional implementations of this embodiment, the second generation module may be further configured to generate a target result indicating that the target face object is a living human face, in response to determining that the first result indicates the target face object performed the action and the second result indicates the target face object is a living human face.
In some optional implementations of this embodiment, the second generation module may be further configured to generate a target result indicating that the target face object is a non-living face, in response to determining that the first result indicates the target face object did not perform the action or the second result indicates the target face object is a non-living face.
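The two combination rules above amount to a logical AND of the action check and the silent check, with the second result aggregated from the per-frame detection results. The majority-vote aggregation shown here is one possible choice for the first generation module and is not prescribed by the disclosure:

```python
def second_result(frame_detections, min_ratio=0.5):
    """Aggregate per-frame liveness verdicts (booleans) into the second
    result. A simple majority vote; the ratio is an illustrative choice."""
    return sum(frame_detections) / len(frame_detections) > min_ratio

def target_result(first, second):
    """Living face only if the action was performed AND the silent check passed."""
    return first and second
```

Requiring both checks to pass means a photo held up to the camera fails the action check and a replayed video of the action fails the silent check, which is the complementarity the embodiment relies on.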
In some optional implementations of this embodiment, the living body detection model may be obtained by machine learning.
In some optional implementations of this embodiment, the living body detection model may be a silent living body detection model.
In some optional implementations of this embodiment, the detection unit 503 may include: a determination module (not shown) configured to determine, for a video frame among the video frames included in the video, the head pose corresponding to that video frame; and an extraction module (not shown) configured to extract, from the video, video frames whose corresponding head poses meet a preset condition.
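A sketch of this optional head-pose filter, with a stand-in pose estimator and illustrative angle limits standing in for the preset condition (both are assumptions for illustration):

```python
def select_frontal_frames(frames, pose_of, max_yaw=15.0, max_pitch=15.0):
    """Keep only frames whose estimated head pose is near-frontal.
    `pose_of` maps a frame to (yaw, pitch, roll) in degrees -- a stand-in
    for a real head pose estimator; the angle limits are illustrative."""
    kept = []
    for frame in frames:
        yaw, pitch, _roll = pose_of(frame)
        if abs(yaw) <= max_yaw and abs(pitch) <= max_pitch:
            kept.append(frame)
    return kept
```

Filtering to near-frontal frames before silent liveness detection gives the model inputs closer to its training distribution, which is presumably why the preset condition on head pose is useful.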
It can be understood that the units recorded in the apparatus 500 correspond to the respective steps of the method described with reference to Fig. 2. The operations, features and beneficial effects described above for the method therefore also apply to the apparatus 500 and the units included in it, and are not repeated here.
The apparatus 500 provided by the above embodiment of the present disclosure outputs a preset action instruction in response to detecting an identity authentication trigger signal and captures a video of the target face object; then, based on the video, it generates a first result indicating whether the target face object performed the action; it then extracts a video frame from the video and inputs the extracted video frame into a pre-trained living body detection model to obtain a detection result; and finally, based on the detection result and the first result, it generates a target result indicating whether the target face object is a living human face. Thus, on the basis of performing action-based living body detection on the face object through the action instruction, the living body detection model is effectively used to perform silent living body detection on the face object. By combining the first result corresponding to the action-based living body detection with the detection result corresponding to the silent living body detection, a more accurate target result can be generated, which improves the accuracy of living body detection and helps improve the reliability of identity authentication.
Referring now to Fig. 6, a schematic structural diagram of a terminal device 600 (e.g., the terminal devices 101, 102, 103 shown in Fig. 1) suitable for implementing an embodiment of the present disclosure is illustrated. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of embodiments of the present disclosure.
As shown in Fig. 6, the terminal device 600 may include a processing apparatus (e.g., a central processing unit, graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. Various programs and data needed for the operation of the terminal device 600 are also stored in the RAM 603. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output apparatuses 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage apparatuses 608 including, for example, a magnetic tape, hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the terminal device 600 to communicate with other devices wirelessly or by wire to exchange data. Although Fig. 6 shows a terminal device 600 with various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above terminal device, or it may exist separately without being assembled into the terminal device. The above computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object, where the action instruction instructs the face object to perform an action; based on the video, generate a first result indicating whether the target face object performed the action; extract a video frame from the video, and input the extracted video frame into a pre-trained living body detection model to obtain a detection result, where the detection result indicates whether the face object corresponding to the extracted video frame is a living human face; and based on the obtained detection result and the first result, generate a target result indicating whether the target face object is a living human face.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in a block diagram and/or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. Under certain conditions, the name of a unit does not constitute a limitation on the unit itself; for example, the first generation unit may also be described as "a unit that generates the first result".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above disclosed concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Claims (12)
1. An information processing method, comprising:
in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object, wherein the action instruction instructs the target face object to perform an action;
based on the video, generating a first result indicating whether the target face object performed the action;
extracting a video frame from the video;
inputting the extracted video frame into a pre-trained living body detection model to obtain a detection result, wherein the detection result indicates whether a face object corresponding to the extracted video frame is a living human face; and
based on the detection result and the first result, generating a target result indicating whether the target face object is a living human face.
2. The method according to claim 1, further comprising:
in response to determining that the target result indicates the target face object is a non-living face, outputting prompt information characterizing that identity authentication has failed.
3. The method according to claim 1, further comprising:
in response to determining that the target result indicates the target face object is a living human face, selecting a video frame from the extracted video frames as an identity authentication image corresponding to the target face object;
sending the identity authentication image to a server; and
obtaining a matching result from the server, wherein the matching result indicates whether the identity authentication image matches a pre-stored user face image.
4. The method according to claim 1, wherein:
extracting a video frame from the video comprises:
extracting at least two video frames from the video;
inputting the extracted video frame into the living body detection model to obtain a detection result comprises:
inputting the at least two video frames into the living body detection model respectively, to obtain at least two detection results; and
generating, based on the detection result and the first result, a target result indicating whether the target face object is a living human face comprises:
based on the obtained at least two detection results, generating a second result indicating whether the target face object is a living human face; and
based on the first result and the second result, generating the target result.
5. The method according to claim 4, wherein generating, based on the first result and the second result, the target result indicating whether the target face object is a living human face comprises:
in response to determining that the first result indicates the target face object performed the action and the second result indicates the target face object is a living human face, generating a target result indicating that the target face object is a living human face.
6. The method according to claim 5, wherein generating, based on the first result and the second result, the target result indicating whether the target face object is a living human face further comprises:
in response to determining that the first result indicates the target face object did not perform the action or the second result indicates the target face object is a non-living face, generating a target result indicating that the target face object is a non-living face.
7. The method according to claim 1, wherein the living body detection model is obtained by machine learning.
8. The method according to claim 1, wherein the living body detection model is a silent living body detection model.
9. The method according to any one of claims 1-8, wherein extracting a video frame from the video comprises:
for a video frame among the video frames included in the video, determining a head pose corresponding to the video frame; and
extracting, from the video, video frames whose corresponding head poses meet a preset condition.
10. An information processing apparatus, comprising:
an output and capture unit, configured to, in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object, wherein the action instruction instructs the target face object to perform an action;
a first generation unit, configured to generate, based on the video, a first result indicating whether the target face object performed the action;
a detection unit, configured to extract a video frame from the video and input the extracted video frame into a pre-trained living body detection model to obtain a detection result, wherein the detection result indicates whether a face object corresponding to the extracted video frame is a living human face; and
a second generation unit, configured to generate, based on the detection result and the first result, a target result indicating whether the target face object is a living human face.
11. A terminal device, comprising:
one or more processors;
a storage apparatus on which one or more programs are stored; and
a camera configured to capture video;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910211759.7A CN109934191A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109934191A true CN109934191A (en) | 2019-06-25 |
Family
ID=66987817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910211759.7A Pending CN109934191A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109934191A (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702198A (en) * | 2009-11-19 | 2010-05-05 | 浙江大学 | Identification method for video and living body faces based on background comparison |
CN101770613A (en) * | 2010-01-19 | 2010-07-07 | 北京智慧眼科技发展有限公司 | Social insurance identity authentication method based on face recognition and living body detection |
CN102622588A (en) * | 2012-03-08 | 2012-08-01 | 无锡数字奥森科技有限公司 | Dual-certification face anti-counterfeit method and device |
CN103634120A (en) * | 2013-12-18 | 2014-03-12 | 上海市数字证书认证中心有限公司 | Method and system for real-name authentication based on face recognition |
CN106302330A (en) * | 2015-05-21 | 2017-01-04 | 腾讯科技(深圳)有限公司 | Auth method, device and system |
CN106446831A (en) * | 2016-09-24 | 2017-02-22 | 南昌欧菲生物识别技术有限公司 | Face recognition method and device |
CN106599772A (en) * | 2016-10-31 | 2017-04-26 | 北京旷视科技有限公司 | Living body authentication method, identity authentication method and device |
CN106982426A (en) * | 2017-03-30 | 2017-07-25 | 广东微模式软件股份有限公司 | A kind of method and system for remotely realizing old card system of real name |
CN107016608A (en) * | 2017-03-30 | 2017-08-04 | 广东微模式软件股份有限公司 | The long-range account-opening method and system of a kind of identity-based Information Authentication |
CN107066983A (en) * | 2017-04-20 | 2017-08-18 | 腾讯科技(上海)有限公司 | A kind of auth method and device |
CN107766785A (en) * | 2017-01-25 | 2018-03-06 | 丁贤根 | A kind of face recognition method |
CN108416595A (en) * | 2018-03-27 | 2018-08-17 | 百度在线网络技术(北京)有限公司 | Information processing method and device |
CN108415653A (en) * | 2018-03-27 | 2018-08-17 | 百度在线网络技术(北京)有限公司 | Screen locking method and device for terminal device |
CN108494778A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Identity identifying method and device |
CN108509916A (en) * | 2018-03-30 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating image |
CN108804884A (en) * | 2017-05-02 | 2018-11-13 | 北京旷视科技有限公司 | Identity authentication method, device and computer storage media |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046804A (en) * | 2019-12-13 | 2020-04-21 | 北京旷视科技有限公司 | Living body detection method, living body detection device, electronic equipment and readable storage medium |
CN111898529A (en) * | 2020-07-29 | 2020-11-06 | 北京字节跳动网络技术有限公司 | Face detection method and device, electronic equipment and computer readable medium |
CN111898529B (en) * | 2020-07-29 | 2022-07-19 | 北京字节跳动网络技术有限公司 | Face detection method and device, electronic equipment and computer readable medium |
CN112101286A (en) * | 2020-09-25 | 2020-12-18 | 北京市商汤科技开发有限公司 | Service request method, device, computer equipment and storage medium |
CN112101289A (en) * | 2020-09-25 | 2020-12-18 | 北京市商汤科技开发有限公司 | Service providing method and device, computer equipment and storage medium |
CN113255529A (en) * | 2021-05-28 | 2021-08-13 | 支付宝(杭州)信息技术有限公司 | Biological feature identification method, device and equipment |
CN117392596A (en) * | 2023-09-07 | 2024-01-12 | 中关村科学城城市大脑股份有限公司 | Data processing method, device, electronic equipment and computer readable medium |
CN117392596B (en) * | 2023-09-07 | 2024-04-30 | 中关村科学城城市大脑股份有限公司 | Data processing method, electronic device, and computer-readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11017070B2 (en) | Visual data processing of response images for authentication | |
CN111033501B (en) | Secure authorization for access to private data in virtual reality | |
CN109934191A (en) | Information processing method and device | |
US10032008B2 (en) | Trust broker authentication method for mobile devices | |
CN109993150B (en) | Method and device for identifying age | |
CN109871834A (en) | Information processing method and device | |
WO2016169432A1 (en) | Identity authentication method and device, and terminal | |
CN111476871B (en) | Method and device for generating video | |
CN109858445A (en) | Method and apparatus for generating model | |
CN109977839A (en) | Information processing method and device | |
CN109086719A (en) | Method and apparatus for output data | |
CN107430858A (en) | The metadata of transmission mark current speaker | |
CN108985257A (en) | Method and apparatus for generating information | |
CN109829432A (en) | Method and apparatus for generating information | |
US9202035B1 (en) | User authentication based on biometric handwriting aspects of a handwritten code | |
CN109919244A (en) | Method and apparatus for generating scene Recognition model | |
CN110348419A (en) | Method and apparatus for taking pictures | |
CN108171211A (en) | Biopsy method and device | |
CN108521516A (en) | Control method and device for terminal device | |
CN110059624A (en) | Method and apparatus for detecting living body | |
CN109934142A (en) | Method and apparatus for generating the feature vector of video | |
CN109726536A (en) | Method for authenticating, electronic equipment and computer-readable program medium | |
CN110046571B (en) | Method and device for identifying age | |
CN110008926A (en) | The method and apparatus at age for identification | |
CN109829431A (en) | Method and apparatus for generating information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190625 |