CN109977839A - Information processing method and device - Google Patents
Information processing method and device
- Publication number
- CN109977839A (Application No. CN201910211758.2A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- living body
- face object
- destination number
- target face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/179—Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Collating Specific Patterns (AREA)
Abstract
Embodiments of the disclosure disclose an information processing method and device. One specific embodiment of the method includes: in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object; determining, based on the video, whether the target face object performed the action indicated by the action instruction; in response to determining that the target face object performed the action, extracting a target number of video frames from the video; inputting each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and selecting, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. The embodiment helps improve the reliability of identity authentication.
Description
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to an information processing method and device.
Background
With the development of face recognition technology, people can log in to accounts, make payments, and unlock devices by scanning their faces. This brings convenience to daily life, but it also carries risk: if a machine can recognize a face, it can equally recognize a photograph of that face. There is therefore a risk that bad actors impersonate others using their face images.
Summary of the invention
Embodiments of the disclosure propose an information processing method and device.
In a first aspect, an embodiment of the disclosure provides an information processing method, including: in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object, where the action instruction is used to instruct the target face object to perform an action; determining, based on the video, whether the target face object performed the action; in response to determining that the target face object performed the action, extracting a target number of video frames from the video; inputting each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and selecting, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In some embodiments, the method further includes: sending the identity authentication image to a server; and obtaining a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored user face image.
In some embodiments, the method further includes: inputting each of the target number of video frames into a pre-trained quality evaluation model to obtain a quality value for each frame, where a quality value characterizes the quality of the corresponding video frame. In these embodiments, selecting a video frame at least based on the obtained liveness values includes: selecting a video frame from the target number of video frames as the identity authentication image corresponding to the target face object, based on both the obtained liveness values and the obtained quality values.
In some embodiments, selecting a video frame based on the obtained liveness values and quality values includes: obtaining weights assigned to the quality values and the liveness values, respectively; for each video frame among the target number of video frames, computing a weighted sum of the quality value and the liveness value of that frame using the obtained weights, to obtain a result for that frame; and selecting, based on the obtained results, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In some embodiments, the liveness evaluation model is obtained through machine learning.
In a second aspect, an embodiment of the disclosure provides an information processing device, including: an output and capture unit, configured to output a preset action instruction and capture a video of a target face object in response to detecting an identity authentication trigger signal, where the action instruction is used to instruct the target face object to perform an action; a determination unit, configured to determine, based on the video, whether the target face object performed the action; an extraction unit, configured to extract a target number of video frames from the video in response to determining that the target face object performed the action; a first input unit, configured to input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and a selecting unit, configured to select, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In some embodiments, the device further includes: a transmission unit, configured to send the identity authentication image to a server; and an acquiring unit, configured to obtain a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored user face image.
In some embodiments, the device further includes: a second input unit, configured to input each of the target number of video frames into a pre-trained quality evaluation model to obtain a quality value for each frame, where a quality value characterizes the quality of the corresponding video frame. In these embodiments, the selecting unit is further configured to select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object, based on both the obtained liveness values and the obtained quality values.
In a third aspect, an embodiment of the disclosure provides a terminal device, including: one or more processors; a storage device storing one or more programs; and a camera configured to capture video. When the one or more programs are executed by the one or more processors, they cause the one or more processors to implement the method of any embodiment of the information processing method above.
In a fourth aspect, an embodiment of the disclosure provides a computer-readable medium storing a computer program that, when executed by a processor, implements the method of any embodiment of the information processing method above.
The information processing method and device provided by embodiments of the disclosure output a preset action instruction and capture a video of a target face object in response to detecting an identity authentication trigger signal; determine, based on the video, whether the target face object performed the action indicated by the action instruction; in response to determining that it did, extract a target number of video frames from the video and input each into a pre-trained liveness evaluation model to obtain per-frame liveness values, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and finally select, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. Because the identity authentication image is selected based on liveness values, once action-based liveness detection has determined that the target face object is a live face, the selected image is more likely to show the same face object that performed the detected action, which helps improve the reliability of identity authentication.
Detailed description of the invention
By reading a detailed description of non-restrictive embodiments in the light of the attached drawings below, the disclosure is other
Feature, objects and advantages will become more apparent upon:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the information processing method according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the information processing method according to an embodiment of the disclosure;
Fig. 4 is a flowchart of another embodiment of the information processing method according to the disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the information processing device according to the disclosure;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing a terminal device of an embodiment of the disclosure.
Detailed description of embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the disclosure and the features in the embodiments may be combined with one another. The disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information processing method or information processing device of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 provides the medium for communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as payment software, shopping applications, web browsers, search applications, instant messaging tools, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. As hardware, they may be various electronic devices equipped with a camera, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), portable laptop computers, and desktop computers. As software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, such as a model training server that sends a liveness evaluation model to the terminal devices 101, 102, 103. The model training server may train a liveness evaluation model and send the obtained model to the terminal devices.
It should be noted that the information processing method provided by embodiments of the disclosure is generally executed by the terminal devices 101, 102, 103; correspondingly, the information processing device is generally disposed in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. As hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. As software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; any number may be provided according to implementation needs. When the data used in obtaining the identity authentication image does not need to be obtained remotely, the system architecture may omit the network and the server and include only the terminal device.
With continued reference to Fig. 2, a process 200 of one embodiment of the information processing method according to the disclosure is shown. The information processing method includes the following steps:
Step 201: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object.
In this embodiment, the executing body of the information processing method (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may, in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of the target face object. The identity authentication trigger signal is a signal for triggering an identity authentication operation, which authenticates whether the person corresponding to the target face object is a pre-registered user. Specifically, the identity authentication trigger signal may be a signal generated by a trigger operation performed by a user on the executing body (for example, clicking an identity authentication trigger button), or a signal sent, over a wired or wireless connection, by an electronic device in communication with the executing body (for example, the server 105 shown in Fig. 1). In particular, the executing body may use its camera to shoot continuously and generate the identity authentication trigger signal in response to capturing a face object.
In practice, when identity authentication is performed through face recognition, liveness detection is generally performed on the face object before face matching, to guard against bad actors passing authentication with a fake face (for example, a face image) of a pre-registered user and thereby stealing an identity. Liveness detection determines whether the face object is a live face.
In this embodiment, the action instruction is used to instruct the target face object to perform an action. The action instruction may take various forms (for example, voice, image, or text). As an example, the action instruction may be the text "blink".
Here, the executing body outputs the action instruction so that the target face object performs the action based on it, and captures a video of the target face object for authenticating the identity of the target face object.
Step 202: based on the video, determine whether the target face object performed the action.
In this embodiment, based on the video obtained in step 201, the executing body may determine whether the target face object performed the action indicated by the action instruction. Specifically, the executing body may analyze the video to make this determination.
It can be understood that when the face object is a live face, it can perform the action based on the action instruction during shooting; when the face object is not a live face, it cannot. Therefore, determining whether the target face object performed the action indicated by the action instruction can determine whether the face object is a live face.
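As an illustration of one way this check can be implemented for a "blink" instruction (the embodiment does not fix the method, so this is an assumption), the eye aspect ratio over per-frame eye landmarks can be tracked and a blink detected as a brief dip below a threshold; the landmark ordering and thresholds below are assumptions as well.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of one eye's landmarks in the common 68-point
    ordering: two corners (0, 3) and four lid points (1, 2, 4, 5)."""
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def performed_blink(per_frame_eyes, closed_thresh=0.2, min_closed=2):
    """per_frame_eyes: one (6, 2) landmark array per video frame."""
    closed = [eye_aspect_ratio(e) < closed_thresh for e in per_frame_eyes]
    run = longest = 0
    for c in closed:  # a blink is a short run of closed-eye frames
        run = run + 1 if c else 0
        longest = max(longest, run)
    return longest >= min_closed
```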
Step 203: in response to determining that the target face object performed the action, extract a target number of video frames from the video.
In this embodiment, the executing body may, in response to determining that the target face object performed the action, extract a target number of video frames from the video. The target number may be a predetermined quantity, or a quantity determined from the number of video frames in the video; for example, the target number may be half the number of frames in the video.
Specifically, the executing body may extract the target number of video frames from the video using various methods: for example, by random sampling, or by extracting the video frames located at preset positions in the frame sequence corresponding to the video.
Step 204: input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame.
In this embodiment, the executing body may input each of the extracted target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame. A liveness value characterizes the probability that the face object in the corresponding video frame is a live face: the larger the liveness value, the higher that probability.
Here, the liveness evaluation model characterizes the correspondence between a video frame and its liveness value. As an example, it may be a correspondence table pre-established by technicians based on statistics over a large number of video frames and their liveness values, storing multiple video frames with their corresponding liveness values; or it may be a model obtained by training an initial model (for example, a neural network) on preset training samples using a machine learning method.
In some optional implementations of this embodiment, the liveness evaluation model may be obtained through machine learning. As an example, the executing body or another electronic device may train it as follows: first, obtain a training sample set, where each training sample includes a sample face image and a sample liveness value pre-annotated for it, the sample liveness value characterizing the probability that the sample face object in the sample face image is a live face; then, using a machine learning method, train the liveness evaluation model with the sample face images as input and the corresponding sample liveness values as the desired output.
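A hedged sketch of this training procedure follows, assuming PyTorch and a small convolutional network that maps a face image to a liveness value in [0, 1]; the architecture and hyperparameters are illustrative, not specified by the disclosure.

```python
import torch
import torch.nn as nn

class LivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                   # x: (N, 3, H, W) face images
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))  # liveness value in [0, 1]

def train_liveness_model(loader, epochs=10, lr=1e-3):
    """loader yields (sample_face_image, sample_liveness_value) batches."""
    model = LivenessNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        for images, liveness_values in loader:
            opt.zero_grad()
            loss = loss_fn(model(images).squeeze(1), liveness_values.float())
            loss.backward()
            opt.step()
    return model
```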
Step 205: at least based on the obtained liveness values, select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In this embodiment, at least based on the liveness values obtained in step 204, the executing body may select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. The identity authentication image is the image used for identity authentication, i.e., for determining whether the target face object belongs to a pre-registered user.
Specifically, the executing body may select the identity authentication image from the target number of video frames based on the obtained liveness values using various methods: for example, it may choose the video frame with the largest liveness value, or a video frame whose liveness value is greater than or equal to a preset liveness threshold.
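Both selection rules, sketched directly (the threshold value is an arbitrary example, not a value from the disclosure):

```python
def select_by_max_liveness(frames, liveness_values):
    """Choose the video frame with the largest liveness value."""
    best = max(range(len(frames)), key=lambda i: liveness_values[i])
    return frames[best]

def select_by_liveness_threshold(frames, liveness_values, thresh=0.9):
    """Choose frames whose liveness value meets a preset threshold."""
    return [f for f, v in zip(frames, liveness_values) if v >= thresh]
```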
In some optional implementations of this embodiment, after obtaining the identity authentication image, the executing body may further send it to a server and obtain a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored user face image, that is, whether the face in the identity authentication image belongs to the user in the user face image. For example, the matching result may be a probability that the identity authentication image matches the user face image; as another example, it may be a Boolean value, where 1 indicates that the identity authentication image matches the user face image and 0 indicates that it does not, or vice versa. In some embodiments, other information may also be generated from the matching result and presented to the user, including but not limited to at least one of: text, numbers, symbols, images, audio, or video.
It can be understood that when the matching result indicates the face in the identity authentication image belongs to the user in the user face image, identity authentication succeeds; when it indicates the face does not belong to that user, identity authentication fails.
In this implementation, the server may match the identity authentication image against the pre-stored user face image using various methods. For example, it may determine the similarity between the identity authentication image and the user face image; in response to determining that the similarity is greater than or equal to a preset similarity threshold, generate a matching result indicating that the face in the identity authentication image belongs to the user in the user face image; and in response to determining that the similarity is less than the threshold, generate a matching result indicating that it does not. Here, the similarity is a numerical value characterizing the degree of similarity between the identity authentication image and the user face image: the larger the similarity, the more similar the two images.
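One common way the server side could realize this similarity comparison (an assumption; the disclosure does not fix the method) is cosine similarity between face embeddings produced by the same face encoder:

```python
import numpy as np

def match(auth_embedding, stored_embedding, similarity_thresh=0.8):
    """Both inputs are 1-D feature vectors; returns the matching result."""
    cos = float(np.dot(auth_embedding, stored_embedding) / (
        np.linalg.norm(auth_embedding) * np.linalg.norm(stored_embedding)))
    return cos >= similarity_thresh
```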
In practice, after obtaining the matching result, the server may output it to the executing body, so that the executing body can present the matching result to the person undergoing this authentication.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information processing method according to this embodiment. In the scenario of Fig. 3, a mobile phone 301 may, in response to detecting an identity authentication trigger signal 302, output a preset action instruction 303 and capture a video 305 of a target face object 304, where the action instruction instructs the target face object 304 to perform an action; for example, the action instruction may be the audio "blink". The phone 301 may then determine, based on the video 305, whether the target face object 304 performed the action. Next, in response to determining that the target face object 304 performed the action, the phone 301 may extract two (i.e., the target number of) video frames from the video 305, namely video frame 3051 and video frame 3052. The phone 301 may then input the extracted video frames 3051 and 3052 into a pre-trained liveness evaluation model 306 to obtain liveness values 3071 and 3072, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face. Finally, at least based on the obtained liveness values 3071 and 3072, the phone 301 may select a video frame from video frames 3051 and 3052 as the identity authentication image 308 corresponding to the target face object 304.
Currently, to reduce the risk of identity theft in identity authentication scenarios, one prior-art approach uses action-based liveness detection to determine whether a face object is a live face. Specifically, action-based liveness detection identifies whether a face object is a live face by detecting whether it performs a predetermined action. As an example, it may use actions such as blinking, opening the mouth, shaking the head, and nodding, together with technologies such as face key-point localization and face tracking, to determine whether the face object is a genuine live body. In practice, although action-based liveness detection can identify live faces, a form of identity theft has appeared in which the impostor performs the required actions during liveness detection and then, while still performing them, switches to showing a fake face (for example, a face image) of the stolen user. In that case, when an identity authentication image is extracted from the captured video after liveness detection, the extracted image may contain the stolen user's face image, so identity theft can still succeed. There is therefore a need to further improve the reliability of identity authentication against this kind of identity theft.
The method provided by the above embodiment of the disclosure outputs a preset action instruction and captures a video of the target face object in response to detecting an identity authentication trigger signal, then determines, based on the video, whether the target face object performed the action indicated by the action instruction, thereby realizing action-based liveness detection of the target face object. On this basis, in response to determining that the target face object performed the action, it extracts a target number of video frames from the video, inputs each into a pre-trained liveness evaluation model to obtain per-frame liveness values, and finally selects, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. This increases the probability that the face object in the obtained identity authentication image is a live face. Against the identity theft phenomenon described above, an electronic device using this method can more accurately select, as the identity authentication image, a video frame showing the face object that actually performed the action during liveness detection, improving the consistency between the face object in the identity authentication image and the face object that performed the action. Compared with prior-art electronic devices for identity authentication, an electronic device using this method therefore has a more reliable identity authentication function, which helps generate and output more accurate identity authentication results.
With further reference to Fig. 4, a process 400 of another embodiment of the information processing method is shown. The process 400 of the information processing method includes the following steps:
Step 401: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object.
In this embodiment, the executing body of the information processing method (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may, in response to detecting an identity authentication trigger signal over a wired or wireless connection, output a preset action instruction and capture a video of the target face object. The identity authentication trigger signal is a signal for triggering an identity authentication operation, which authenticates whether the person corresponding to the target face object is a pre-registered user. The action instruction instructs the target face object to perform an action, and the video is used to authenticate the identity of the target face object.
Step 402: based on the video, determine whether the target face object performed the action.
In this embodiment, based on the video obtained in step 401, the executing body may determine whether the target face object performed the action.
Step 403: in response to determining that the target face object performed the action, extract a target number of video frames from the video.
In this embodiment, the executing body may, in response to determining that the target face object performed the action, extract a target number of video frames from the video.
Step 404: input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame.
In this embodiment, the executing body may input each of the target number of video frames extracted in step 403 into a pre-trained liveness evaluation model to obtain a liveness value for each frame.
Steps 401, 402, 403, and 404 correspond to steps 201, 202, 203, and 204 of the previous embodiment, respectively. The descriptions above of steps 201, 202, 203, and 204 also apply to steps 401, 402, 403, and 404, and are not repeated here.
Step 405: input each of the target number of video frames into a pre-trained quality evaluation model to obtain a quality value for each frame.
In this embodiment, the executing body may also input each of the extracted target number of video frames into a pre-trained quality evaluation model to obtain a quality value for each frame. A quality value characterizes how good the quality of the corresponding video frame is: the larger the quality value, the better the quality. Note that the quality of a video frame can be determined by various factors, such as the sharpness of the frame and the position of things within the frame. In practice, a video frame is considered high quality if the things in it can be clearly distinguished, that is, if its foreground and background, object contours, textures, and so on are well separated.
Here, the quality evaluation model characterizes the correspondence between a video frame and its quality value. As an example, it may be a correspondence table pre-established by technicians based on statistics over a large number of video frames and their quality values, storing multiple video frames with their corresponding quality values; or it may be a model obtained by training an initial model (for example, a neural network) on preset training samples using a machine learning method.
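The quality evaluation model is trained analogously to the liveness evaluation model. As a stand-in that illustrates what a per-frame quality value can capture (an assumption, not the disclosure's model), a classic no-reference sharpness score is the variance of the Laplacian, computable with OpenCV:

```python
import cv2

def sharpness_quality(frame_bgr):
    """Higher Laplacian variance means a sharper frame whose contents
    are easier to distinguish; blurry frames score low."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```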
Step 406: based on the obtained liveness values and quality values, select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In this embodiment, based on the quality values and liveness values obtained in steps 404 and 405, the executing body may select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. The identity authentication image is the image used for identity authentication, i.e., for determining whether the target face object belongs to a pre-registered user.
Specifically, the executing body may select the identity authentication image from the target number of video frames based on the obtained quality values and liveness values using various methods: for example, it may choose the video frame with the largest sum of quality value and liveness value, or a video frame whose sum of quality value and liveness value is greater than or equal to a preset sum threshold.
In some optional implementations of this embodiment, based on the obtained liveness values and quality values, the executing body may select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object through the following steps. First, the executing body may obtain the weights assigned in advance to the quality values and the liveness values, respectively. Then, for each video frame among the target number of video frames, the executing body may compute a weighted sum of the quality value and the liveness value of that frame based on the obtained weights, obtaining a result corresponding to that frame. Finally, based on the obtained results, the executing body may select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In this implementation, the executing body may select the identity authentication image from the target number of video frames based on the obtained results using various methods: for example, it may choose the video frame with the largest result, or a video frame whose result is greater than or equal to a preset result threshold.
As an example, suppose the executing body extracts two video frames from the video, video frame A and video frame B. For video frame A, steps 404 and 405 determined a quality value of 6 and a liveness value of 7; for video frame B, a quality value of 7 and a liveness value of 6. The executing body may then obtain the weights 0.3 and 0.7 assigned in advance to the quality values and liveness values, respectively. For video frame A, the weighted sum of its quality value and liveness value is 6.7 (6.7 = 0.3 × 6 + 0.7 × 7); for video frame B, it is 6.3 (6.3 = 0.3 × 7 + 0.7 × 6). Finally, the executing body may select from video frames A and B the frame with the largest result as the identity authentication image, i.e., video frame A.
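The weighted-sum selection from this example, as a short sketch; the default weights 0.3 and 0.7 reproduce the arithmetic above (frame A scores 6.7, frame B scores 6.3, so frame A is selected):

```python
def select_weighted(frames, quality_values, liveness_values,
                    w_quality=0.3, w_liveness=0.7):
    """Weighted sum of per-frame quality and liveness; pick the best."""
    scores = [w_quality * q + w_liveness * l
              for q, l in zip(quality_values, liveness_values)]
    best = max(range(len(frames)), key=lambda i: scores[i])
    return frames[best]
```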
Figure 4, it is seen that compared with the corresponding embodiment of Fig. 2, the process of the information processing method in the present embodiment
400 highlight destination number video frame input quality evaluation model, obtain mass value corresponding to video frame, and then be based on
The corresponding mass value of video frame and living body value, from destination number video frame the step of extraction authentication image.As a result, originally
Embodiment description scheme can improve authentication image corresponding to face object be living body faces probability while,
The quality of authentication image is improved, with this, helps to extract more when later use authentication image carries out face matching
It is matched for accurate face characteristic, to realize more accurate authentication.
With further reference to Fig. 5, as an implementation of the methods shown in the figures above, the disclosure provides an embodiment of an information processing device. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can be applied to various electronic devices.
As shown in Fig. 5, the information processing device 500 of this embodiment includes: an output and capture unit 501, a determination unit 502, an extraction unit 503, a first input unit 504, and a selecting unit 505. The output and capture unit 501 is configured to output a preset action instruction and capture a video of a target face object in response to detecting an identity authentication trigger signal, where the action instruction instructs the target face object to perform an action. The determination unit 502 is configured to determine, based on the video, whether the target face object performed the action. The extraction unit 503 is configured to extract a target number of video frames from the video in response to determining that the target face object performed the action. The first input unit 504 is configured to input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face. The selecting unit 505 is configured to select, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In this embodiment, the output and capture unit 501 of the information processing device 500 may, in response to detecting an identity authentication trigger signal over a wired or wireless connection, output a preset action instruction and capture a video of the target face object. The identity authentication trigger signal is a signal for triggering an identity authentication operation; the action instruction instructs the target face object to perform an action; and the video is used to authenticate the identity of the target face object.
In this embodiment, based on the video obtained by the output and capture unit 501, the determination unit 502 may determine whether the target face object performed the action indicated by the action instruction.
In this embodiment, the extraction unit 503 may, in response to determining that the target face object performed the action, extract a target number of video frames from the video, where the target number may be a predetermined quantity or a quantity determined from the number of video frames in the video.
In this embodiment, the first input unit 504 may input each of the extracted target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame. A liveness value characterizes the probability that the face object in the corresponding video frame is a live face: the larger the liveness value, the higher that probability. Here, the liveness evaluation model characterizes the correspondence between a video frame and its liveness value.
In this embodiment, based on the liveness values obtained by the first input unit 504, the selecting unit 505 may select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object, that is, the image used for identity authentication, which determines whether the target face object belongs to a pre-registered user.
In some optional implementations of this embodiment, the device 500 may further include: a transmission unit (not shown), configured to send the identity authentication image to a server; and an acquiring unit (not shown), configured to obtain a matching result from the server, where the matching result indicates whether the identity authentication image matches a pre-stored user face image.
In some optional implementations of this embodiment, the device 500 may further include a second input unit (not shown), configured to input each of the target number of video frames into a pre-trained quality evaluation model to obtain a quality value for each frame, where a quality value characterizes the quality of the corresponding video frame. The selecting unit 505 may be further configured to select a video frame from the target number of video frames as the identity authentication image corresponding to the target face object, based on both the obtained liveness values and the obtained quality values.
In some optional implementations of this embodiment, the selecting unit 505 may be further configured to: obtain the weights assigned to the quality values and the liveness values, respectively; for each video frame among the target number of video frames, compute a weighted sum of the quality value and the liveness value of that frame based on the obtained weights, to obtain a result corresponding to that frame; and select, based on the obtained results, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
In some optional implementations of this embodiment, the liveness evaluation model is obtained through machine learning.
It can be understood that the units recorded in the device 500 correspond to the steps in the method described with reference to Fig. 2. The operations, features, and beneficial effects described above for the method therefore also apply to the device 500 and the units contained in it, and are not repeated here.
The device 500 provided by the above embodiment of the disclosure outputs a preset action instruction and captures a video of the target face object in response to detecting an identity authentication trigger signal; determines, based on the video, whether the target face object performed the action; in response to determining that it did, extracts a target number of video frames from the video and inputs each into a pre-trained liveness evaluation model to obtain per-frame liveness values, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and finally selects, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object. Because the identity authentication image is selected based on liveness values, once action-based liveness detection has determined that the target face object is a live face, the selected image is more likely to show the same face object that performed the action, which helps improve the reliability of identity authentication.
Referring now to Fig. 6, it shows a structural schematic diagram of a terminal device 600 (for example, the terminal devices 101, 102, 103 shown in Fig. 1) suitable for implementing an embodiment of the disclosure. Terminal devices in embodiments of the disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of embodiments of the disclosure.
As shown in Fig. 6, the terminal device 600 may include a processing device (for example, a central processing unit or graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data needed for the operation of the terminal device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 608 including, for example, magnetic tape and hard disk; and a communication device 609. The communication device 609 may allow the terminal device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows a terminal device 600 with various devices, it should be understood that not all of the devices shown need to be implemented or present; more or fewer devices may alternatively be implemented or present.
In particular, according to embodiments of the disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the method of the embodiment of the disclosure are executed.
It should be noted that the computer-readable medium described in the disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of a computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to electric wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the above.
The above computer-readable medium may be included in the above terminal device, or it may exist separately without being assembled into the terminal device. The above computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: in response to detecting an identity authentication trigger signal, output a preset action instruction and capture a video of a target face object, where the action instruction instructs the target face object to perform an action; determine, based on the video, whether the target face object performed the action; in response to determining that the target face object performed the action, extract a target number of video frames from the video, and input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a liveness value for each frame, where a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and select, at least based on the obtained liveness values, a video frame from the target number of video frames as the identity authentication image corresponding to the target face object.
Computer program code for executing the operations of the disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for realizing a specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in block diagrams and/or flowcharts, can be realized by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in embodiments of the disclosure may be realized in software or in hardware. The name of a unit does not, in some cases, limit the unit itself; for example, the selecting unit may also be described as "a unit that selects an identity authentication image".
The above description covers only preferred embodiments of the disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, such as technical solutions in which the above features are interchanged with (but not limited to) technical features with similar functions disclosed in the disclosure.
Claims (10)
1. An information processing method, comprising:
in response to detecting an identity authentication trigger signal, outputting a preset action instruction and capturing a video of a target face object, wherein the action instruction instructs the target face object to perform an action;
determining, based on the video, whether the target face object has performed the action;
in response to determining that the target face object has performed the action, extracting a target number of video frames from the video;
inputting each of the target number of video frames into a pre-trained liveness evaluation model to obtain a respective liveness value, wherein a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and
selecting, based at least on the obtained liveness values, a video frame from the target number of video frames as the authentication image corresponding to the target face object.
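For illustration only, the following is a minimal sketch of the flow recited in claim 1, from frame extraction to frame selection. The helper names (`extract_frames`, `liveness_model`, `select_authentication_image`) and the evenly spaced sampling strategy are assumptions made for this sketch, not part of the disclosure.

```python
# Hypothetical sketch of the claim-1 flow; names and the sampling strategy
# are illustrative assumptions, not the disclosed implementation.
from typing import Callable, List, Sequence

import numpy as np


def extract_frames(video: Sequence[np.ndarray], target_number: int) -> List[np.ndarray]:
    """Sample a target number of frames, evenly spaced across the video."""
    indices = np.linspace(0, len(video) - 1, num=target_number, dtype=int)
    return [video[i] for i in indices]


def select_authentication_image(
    video: Sequence[np.ndarray],
    target_number: int,
    liveness_model: Callable[[np.ndarray], float],
) -> np.ndarray:
    """Score each sampled frame with the liveness model and keep the best one.

    liveness_model(frame) returns the "liveness value": the probability
    that the face in the frame is a live face.
    """
    frames = extract_frames(video, target_number)
    liveness_values = [liveness_model(frame) for frame in frames]
    return frames[int(np.argmax(liveness_values))]
```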
2. The method according to claim 1, wherein the method further comprises:
sending the authentication image to a server; and
obtaining a matching result from the server, wherein the matching result indicates whether the authentication image matches a pre-stored facial image of the user.
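A minimal sketch of the client-side exchange in claim 2 follows; the endpoint URL, the multipart upload, and the JSON field `matched` are assumptions for illustration, not a documented API.

```python
# Hypothetical client-side exchange for claim 2; the endpoint and the
# response schema are assumptions, not a documented server API.
import requests


def match_on_server(image_bytes: bytes, url: str = "https://example.com/face/match") -> bool:
    """Upload the authentication image and return the server's match verdict."""
    response = requests.post(
        url,
        files={"image": ("auth.jpg", image_bytes, "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response body: {"matched": true} when the image matches the
    # pre-stored facial image of the user.
    return bool(response.json()["matched"])
```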
3. The method according to claim 1, wherein the method further comprises:
inputting each of the target number of video frames into a pre-trained quality evaluation model to obtain a respective quality value, wherein a quality value characterizes the quality of the corresponding video frame; and
wherein selecting, based at least on the obtained liveness values, a video frame from the target number of video frames as the authentication image corresponding to the target face object comprises:
selecting, based on the obtained liveness values and quality values, a video frame from the target number of video frames as the authentication image corresponding to the target face object.
4. The method according to claim 3, wherein selecting, based on the obtained liveness values and quality values, a video frame from the target number of video frames as the authentication image corresponding to the target face object comprises:
acquiring weights assigned respectively to the quality values and the liveness values;
for each video frame in the target number of video frames, computing, based on the acquired weights, a weighted sum of the quality value and the liveness value corresponding to that video frame, to obtain a result corresponding to that video frame; and
selecting, based on the obtained results, a video frame from the target number of video frames as the authentication image corresponding to the target face object.
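The weighted summation in claim 4 amounts to scoring each frame as score = w_quality * quality + w_liveness * liveness and then choosing a frame by its score. A minimal sketch follows; the specific weight values and the "highest score wins" selection rule are assumptions, since the claim does not fix them.

```python
# Hypothetical scoring for claim 4: a weighted sum of quality and liveness
# values per frame. The weight values and the arg-max rule are assumptions.
from typing import List


def select_by_weighted_sum(
    liveness_values: List[float],
    quality_values: List[float],
    w_liveness: float = 0.7,
    w_quality: float = 0.3,
) -> int:
    """Return the index of the frame with the highest weighted score."""
    scores = [
        w_quality * q + w_liveness * l
        for q, l in zip(quality_values, liveness_values)
    ]
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, with liveness values [0.95, 0.80] and quality values [0.40, 0.90], the scores are 0.785 and 0.83, so the second frame would be selected even though the first frame has the higher liveness value.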
5. The method according to any one of claims 1 to 4, wherein the liveness evaluation model is obtained through machine learning.
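As a toy illustration of claim 5, the sketch below trains a binary live-vs-spoof classifier whose predicted probability serves as the liveness value. The logistic-regression model, the 32x32 crop size, and the randomly generated stand-in data are all assumptions; a production liveness evaluation model would typically be a convolutional network trained on labeled live and spoof face images.

```python
# Toy illustration of claim 5: a liveness evaluation model obtained through
# machine learning. The model choice and the random stand-in data are
# placeholders; only the shape of the pipeline is meant to be illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training set: 200 flattened 32x32 grayscale "face crops",
# labeled 1 for live faces and 0 for spoofs (photos, screens, masks).
X_train = rng.random((200, 32 * 32))
y_train = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba(...)[:, 1] plays the role of the liveness value: the
# probability that the face in a frame is a live face.
frame = rng.random((1, 32 * 32))
liveness_value = model.predict_proba(frame)[:, 1][0]
print(f"liveness value: {liveness_value:.3f}")
```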
6. An information processing apparatus, comprising:
an output and capturing unit, configured to output a preset action instruction and capture a video of a target face object in response to detecting an identity authentication trigger signal, wherein the action instruction instructs the target face object to perform an action;
a determination unit, configured to determine, based on the video, whether the target face object has performed the action;
an extraction unit, configured to extract a target number of video frames from the video in response to determining that the target face object has performed the action;
a first input unit, configured to input each of the target number of video frames into a pre-trained liveness evaluation model to obtain a respective liveness value, wherein a liveness value characterizes the probability that the face object in the corresponding video frame is a live face; and
a selecting unit, configured to select, based at least on the obtained liveness values, a video frame from the target number of video frames as the authentication image corresponding to the target face object.
7. The apparatus according to claim 6, wherein the apparatus further comprises:
a transmission unit, configured to send the authentication image to a server; and
an acquiring unit, configured to obtain a matching result from the server, wherein the matching result indicates whether the authentication image matches a pre-stored facial image of the user.
8. The apparatus according to claim 6, wherein the apparatus further comprises:
a second input unit, configured to input each of the target number of video frames into a pre-trained quality evaluation model to obtain a respective quality value, wherein a quality value characterizes the quality of the corresponding video frame; and
wherein the selecting unit is further configured to:
select, based on the obtained liveness values and quality values, a video frame from the target number of video frames as the authentication image corresponding to the target face object.
9. A terminal device, comprising:
one or more processors;
a storage device on which one or more programs are stored; and
a camera, configured to capture video;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 5.
10. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910211758.2A CN109977839A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109977839A true CN109977839A (en) | 2019-07-05 |
Family
ID=67079701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910211758.2A Pending CN109977839A (en) | 2019-03-20 | 2019-03-20 | Information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109977839A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548121A (en) * | 2015-09-23 | 2017-03-29 | 阿里巴巴集团控股有限公司 | A kind of method of testing and device of vivo identification |
US20180032828A1 (en) * | 2015-12-18 | 2018-02-01 | Tencent Technology (Shenzhen) Company Limited | Face liveness detection method, terminal, server and storage medium |
CN107092818A (en) * | 2016-02-17 | 2017-08-25 | 阿里巴巴集团控股有限公司 | The implementation method and device of vivo identification |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106650597A (en) * | 2016-10-11 | 2017-05-10 | 汉王科技股份有限公司 | Living body detection method and apparatus |
CN106599772A (en) * | 2016-10-31 | 2017-04-26 | 北京旷视科技有限公司 | Living body authentication method, identity authentication method and device |
CN108494778A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Identity identifying method and device |
CN108921041A (en) * | 2018-06-06 | 2018-11-30 | 深圳神目信息技术有限公司 | A kind of biopsy method and device based on RGB and IR binocular camera |
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
CN109255322A (en) * | 2018-09-03 | 2019-01-22 | 北京诚志重科海图科技有限公司 | A kind of human face in-vivo detection method and device |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112309391A (en) * | 2020-03-06 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Method and apparatus for outputting information |
CN111881726A (en) * | 2020-06-15 | 2020-11-03 | 马上消费金融股份有限公司 | Living body detection method and device and storage medium |
CN112559800A (en) * | 2020-12-17 | 2021-03-26 | 北京百度网讯科技有限公司 | Method, apparatus, electronic device, medium, and product for processing video |
CN112559800B (en) * | 2020-12-17 | 2023-11-14 | 北京百度网讯科技有限公司 | Method, apparatus, electronic device, medium and product for processing video |
US11856277B2 (en) | 2020-12-17 | 2023-12-26 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for processing video, electronic device, medium and product |
CN113469135A (en) * | 2021-07-28 | 2021-10-01 | 浙江大华技术股份有限公司 | Method and device for determining object identity information, storage medium and electronic device |
CN115394001B (en) * | 2022-07-29 | 2024-04-26 | 北京旷视科技有限公司 | Identity authentication system, method, electronic device, and computer-readable medium |
CN115394001A (en) * | 2022-07-29 | 2022-11-25 | 北京旷视科技有限公司 | Identity authentication system, method, electronic device, and computer-readable medium |
CN116152936A (en) * | 2023-02-17 | 2023-05-23 | 深圳市永腾翼科技有限公司 | Face identity authentication system with interactive living body detection and method thereof |
CN116778562A (en) * | 2023-08-22 | 2023-09-19 | 中移(苏州)软件技术有限公司 | Face verification method, device, electronic equipment and readable storage medium |
CN116778562B (en) * | 2023-08-22 | 2024-05-28 | 中移(苏州)软件技术有限公司 | Face verification method, device, electronic equipment and readable storage medium |
CN117392596A (en) * | 2023-09-07 | 2024-01-12 | 中关村科学城城市大脑股份有限公司 | Data processing method, device, electronic equipment and computer readable medium |
CN117392596B (en) * | 2023-09-07 | 2024-04-30 | 中关村科学城城市大脑股份有限公司 | Data processing method, electronic device, and computer-readable medium |
Similar Documents
Publication | Title |
---|---|
CN109977839A (en) | Information processing method and device | |
CN109871834A (en) | Information processing method and device | |
CN109993150B (en) | Method and device for identifying age | |
CN109858445A (en) | Method and apparatus for generating model | |
CN109086719A (en) | Method and apparatus for output data | |
CN109934191A (en) | Information processing method and device | |
CN109829432A (en) | Method and apparatus for generating information | |
CN110162670A (en) | Method and apparatus for generating expression packet | |
CN110188719A (en) | Method for tracking target and device | |
CN109919244A (en) | Method and apparatus for generating scene Recognition model | |
CN110059624A (en) | Method and apparatus for detecting living body | |
CN108345387A (en) | Method and apparatus for output information | |
CN107038784B (en) | Safe verification method and device | |
CN110046571B (en) | Method and device for identifying age | |
CN108509611A (en) | Method and apparatus for pushed information | |
CN110060441A (en) | Method and apparatus for terminal anti-theft | |
CN109918530A (en) | Method and apparatus for pushing image | |
CN109754464A (en) | Method and apparatus for generating information | |
CN108521516A (en) | Control method and device for terminal device | |
CN110059623A (en) | Method and apparatus for generating information | |
CN108600250A (en) | Authentication method | |
CN109934142A (en) | Method and apparatus for generating the feature vector of video | |
CN108877779A (en) | Method and apparatus for detecting voice tail point | |
CN108446659A (en) | Method and apparatus for detecting facial image | |
CN110110666A (en) | Object detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190705 |