CN105518715A - Living body detection method, equipment and computer program product


Info

Publication number: CN105518715A
Application number: CN201580000358.7A
Authority: CN (China)
Prior art keywords: group, virtual objects, controlled object, display, subgroup
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曹志敏, 陈可卿, 贾开
Assignee: Beijing Megvii Technology Co Ltd; Beijing Aperture Science and Technology Ltd
Application filed by Beijing Megvii Technology Co Ltd and Beijing Aperture Science and Technology Ltd

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
                • G06F 18/00 Pattern recognition
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/161 Detection; Localisation; Normalisation
                            • G06V 40/168 Feature extraction; Face representation
                                • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
                    • G06V 40/40 Spoof detection, e.g. liveness detection
                        • G06V 40/45 Detection of the body part being alive

Abstract

The invention provides a living body (liveness) detection method, a liveness detection device and a computer program product, belonging to the technical field of face recognition. The liveness detection method comprises: detecting a facial action from a captured image; based on the display state of a first group of virtual objects currently displayed and the detected facial action, controlling the display of a controlled object among the currently displayed first group of virtual objects and the display of a second group of virtual objects; and determining that the face in the captured image is a live face when at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects. Since the display of the virtual objects is controlled based on facial actions and liveness detection is performed according to that display, attacks in various forms such as photos, videos, 3D face models and masks can be effectively prevented.

Description

Liveness detection method and device, and computer program product
Technical field
The present disclosure relates to the technical field of face recognition, and more specifically to a liveness detection method and device, and a computer program product.
Background art
At present, face recognition systems are increasingly applied to online scenarios requiring identity verification in the security, finance and social-security fields, such as online bank account opening, online transaction operation verification, unattended access control systems, online social-security services and online medical-insurance services. In these high-security applications, in addition to ensuring that the face of the person being verified is sufficiently similar to the reference records stored in the database, it is first necessary to verify that the person is a legitimate living human. That is, a face recognition system must be able to guard against attackers using photos, videos, 3D face models, masks or the like.
No mature liveness verification scheme is generally acknowledged among the technical products currently on the market: existing techniques either rely on special hardware devices (for example infrared cameras or depth cameras) or can only guard against simple still-photo attacks.
Therefore, a face recognition approach is needed that does not rely on special hardware devices and can effectively guard against attacks in various forms such as photos, videos, 3D face models and masks.
Summary of the invention
The present invention has been proposed in view of the above problems. Embodiments of the present disclosure provide a liveness detection method and device, and a computer program product, which can control the display of virtual objects in stages based on facial actions, and determine that liveness detection succeeds when at least some of the controlled objects among the virtual objects successively overlap at least some of the target objects among the virtual objects.
According to one aspect of the embodiments of the present disclosure, a liveness detection method is provided, comprising: detecting a facial action from a captured image; based on the display state of a first group of virtual objects currently displayed on a display screen and the detected facial action, controlling the display of a controlled object in the currently displayed first group of virtual objects and controlling the display of a second group of virtual objects; and determining that the face in the captured image is a live face when at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects.
According to another aspect of the embodiments of the present disclosure, a liveness detection device is provided, comprising: a facial action detection unit configured to detect a facial action from a captured image; a virtual-object control unit configured to, based on the display state of a first group of virtual objects currently displayed on a display device and the detected facial action, control the display of a controlled object in the currently displayed first group of virtual objects and control the display of a second group of virtual objects; and a liveness determination unit configured to determine that the face in the captured image is a live face when at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects.
According to another aspect of the embodiments of the present disclosure, a liveness detection device is provided, comprising: one or more processors; one or more memories; and computer program instructions stored in the memories, wherein the following steps are performed when the computer program instructions are run by the processors: detecting a facial action from a captured image; based on the display state of a first group of virtual objects currently displayed on a display device and the detected facial action, controlling the display of a controlled object in the currently displayed first group of virtual objects and controlling the display of a second group of virtual objects; and determining that the face in the captured image is a live face when at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects.
According to yet another aspect of the embodiments of the present disclosure, a computer program product is provided, comprising one or more computer-readable storage media storing computer program instructions that, when run by a computer, perform the following steps: detecting a facial action from a captured image; based on the display state of a first group of virtual objects currently displayed on a display screen and the detected facial action, controlling the display of a controlled object in the currently displayed first group of virtual objects and controlling the display of a second group of virtual objects; and determining that the face in the captured image is a live face when at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects.
According to the liveness detection method and device and the computer program product of the embodiments of the present disclosure, by controlling the display of virtual objects based on facial actions and performing liveness detection according to that display, attacks in various forms such as photos, videos, 3D face models and masks can be guarded against effectively without relying on special hardware devices, so that the cost of liveness detection can be reduced. Furthermore, by recognizing multiple action attributes in the facial action, multiple state parameters of the virtual objects can be controlled, so that the virtual objects can change their display state in many respects, for example performing complicated predetermined actions or achieving a display effect greatly different from the initial one. Therefore, the accuracy of liveness detection can be further improved, and in turn the security of the application scenarios in which the liveness detection method, device and computer program product according to embodiments of the present invention are applied can be improved.
Brief description of the drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following more detailed description of the embodiments of the present disclosure in conjunction with the accompanying drawings. The accompanying drawings are used to provide a further understanding of the embodiments of the present disclosure and form a part of the specification; together with the embodiments they serve to explain the disclosure, and they do not constitute a limitation of the disclosure. In the drawings, identical reference numbers generally denote the same components or steps.
Fig. 1 is a schematic block diagram of an electronic device for implementing the liveness detection method and device of the embodiments of the present disclosure;
Fig. 2 is a schematic flowchart of a liveness detection method according to an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of the facial action detection step in the liveness detection method according to an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart of the virtual-object display control step in the liveness detection method according to an embodiment of the present disclosure;
Fig. 5 is another schematic flowchart of a liveness detection method according to an embodiment of the present disclosure;
Fig. 6, Fig. 7 and Fig. 8 are examples of the virtual objects displayed on the display screen according to the first embodiment of the present disclosure;
Fig. 9 is another schematic flowchart of a liveness detection method according to an embodiment of the present disclosure;
Fig. 10A and Fig. 10B are examples of the virtual objects displayed on the display screen according to the second embodiment of the present disclosure;
Fig. 11 is a schematic block diagram of a liveness detection device according to an embodiment of the present disclosure;
Fig. 12 is a schematic block diagram of another liveness detection device according to an embodiment of the present disclosure;
Fig. 13 is a schematic block diagram of the facial action detection unit in the liveness detection device according to an embodiment of the present disclosure; and
Fig. 14 is a schematic block diagram of the virtual-object control unit in the liveness detection device according to an embodiment of the present disclosure.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present disclosure more apparent, example embodiments according to the present disclosure are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure, and it should be understood that the disclosure is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described in the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
First, an example electronic device 100 for implementing the liveness detection method and device of the embodiments of the present disclosure is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 comprises one or more processors 102, one or more storage devices 104, an output device 108 and an image capture device 110, which are interconnected by a bus system 112 and/or a connection mechanism of another form (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive, and the electronic device 100 may have other components and structures as required.
The processor 102 may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may comprise one or more computer program products, which may comprise computer-readable storage media of various forms, such as volatile memory and/or non-volatile memory. The volatile memory may comprise, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may comprise, for example, read-only memory (ROM), hard disks and flash memory. The computer-readable storage media may store one or more computer program instructions, and the processor 102 may run the program instructions to realize the functions (as realized by the processor) of the embodiments of the present invention described below and/or other desired functions. The computer-readable storage media may also store various application programs and various data, such as the image data captured by the image capture device 110 and the various data used and/or produced by the application programs.
The output device 108 may output various information (such as images or sounds) to the outside (for example, a user), and may comprise one or more of a display, loudspeakers, and the like.
The image capture device 110 may capture images (for example, photos or videos) of a predetermined viewfinder range, and store the captured images in the storage device 104 for use by other components.
As an example, the example electronic device 100 for implementing the liveness detection method and device of the embodiments of the present disclosure may be an electronic device integrated with a face image capture device and arranged at a face image acquisition end, such as a smartphone, a tablet computer, a personal computer or an identification device based on face recognition. For example, in the security field, the electronic device 100 may be deployed at the image acquisition end of an access control system, and may be, for example, a face-recognition-based identification device; in the financial field, it may be deployed at a personal terminal, such as a smartphone, a tablet computer or a personal computer.
Alternatively, the output device 108 and the image capture device 110 of the example electronic device 100 for implementing the liveness detection method and device of the embodiments of the present disclosure may be deployed at the face image acquisition end, while the processor 102 of the electronic device 100 may be deployed at a server end (or in the cloud).
Below, the liveness detection method 200 according to an embodiment of the present disclosure is described with reference to Fig. 2.
In step S210, a facial action is detected from a captured image. Specifically, the image capture device 110 in the electronic device 100 for implementing the liveness detection method of an embodiment of the present disclosure as shown in Fig. 1, or another image capture device independent of the electronic device 100 that can transmit images to it, may be used to capture grayscale or color images of a predetermined viewfinder range as captured images; a captured image may be a photo or a frame of a video. The image capture device may be the camera of a smartphone, the camera of a tablet computer, the camera of a personal computer, or even a webcam.
The facial action detection in step S210 is described with reference to Fig. 3.
In step S310, face keypoints are located in the captured image. As an example, in this step it may first be determined whether the obtained image contains a face, and the face keypoints are located once a face is detected.
Face keypoints are keypoints with strong characterization ability on the face, such as the eyes, eye corners, eye centers, eyebrows, cheekbone peaks, the nose, the nose tip, the nose wings, the mouth, the mouth corners and the facial contour points.
As an example, a large number of face images may be collected in advance, for example N face images with, say, N = 10000, and a predetermined series of face keypoints is manually marked in each face image; the predetermined series of face keypoints may include, but is not limited to, at least some of the face keypoints mentioned above. Based on the shape features near each face keypoint in each face image, a face keypoint model is trained on a parametric shape model using a machine learning algorithm (such as deep learning, or a local feature-based regression algorithm), thereby obtaining the face keypoint model.
Specifically, in step S310, face detection and face keypoint localization may be performed in the captured image based on the established face keypoint model. For example, the positions of the face keypoints may be optimized iteratively in the captured image, finally obtaining the coordinate position of each face keypoint. As another example, a cascaded-regression method may be adopted to locate the face keypoints in the captured image.
The localization of face keypoints plays an important role in facial action recognition, but it should be appreciated that the disclosure is not limited by the specific face keypoint localization method adopted. Existing face detection and face keypoint localization algorithms may be used to perform the face keypoint localization in step S310. It should be appreciated that the liveness detection method 200 of the embodiments of the present disclosure is not limited to using existing face detection and face keypoint localization algorithms, and shall also cover face keypoint localization using face detection and face keypoint localization algorithms developed in the future.
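As an illustration only (the patent prescribes no particular algorithm or library), the face detection and keypoint localization of step S310 may be sketched with dlib, whose shape predictor implements a cascaded-regression landmark fit of the kind mentioned above; the choice of dlib and its standard 68-landmark model file are assumptions of this sketch:

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def locate_face_keypoints(image):
    """Return (x, y) coordinates of the face keypoints of the first detected
    face in the captured image, or None if no face is found."""
    faces = detector(image, 1)                  # face detection (upsample once)
    if not faces:
        return None                             # no face in the captured image
    shape = predictor(image, faces[0])          # cascaded-regression keypoint fit
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```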
In step S320, image texture information is extracted from the captured image. As an example, fine-grained facial information, such as eyeball position information, mouth shape information and micro-expression information, may be extracted according to the pixel information in the captured image, for example the brightness information of pixels. Existing image texture information extraction algorithms may be used to perform the extraction in step S320. It should be appreciated that the liveness detection method 200 of the embodiments of the present disclosure is not limited to using existing image texture information extraction algorithms, and shall also cover extraction using image texture information extraction algorithms developed in the future.
It should be appreciated that either one of steps S310 and S320 may be performed, or both may be performed. When both steps S310 and S320 are performed, they may be performed synchronously or one after another.
In step S330, values of facial action attributes are obtained based on the located face keypoints and/or the image texture information. The facial action attributes obtained based on the located face keypoints may include, but are not limited to, the degree of eye opening, the degree of mouth opening, the face pitch degree, the face yaw degree, the distance between the face and the camera, and the like. The facial action attributes obtained based on the image texture information may include, but are not limited to, the degree of left-right eyeball deflection, the degree of up-down eyeball deflection, and the like.
Alternatively, the value of a facial action attribute may be obtained based on the current captured image and the previous captured image; or based on the first captured image and the current captured image; or based on the current captured image and several captured images preceding it.
Alternatively, the values of facial action attributes may be obtained based on the located face keypoints by means of geometric learning, machine learning or image processing. For example, for the degree of eye opening, multiple keypoints may be delineated around an eye, for example 8 to 20 keypoints, such as the inner corner, outer corner, upper-eyelid center and lower-eyelid center of the left eye, and the inner corner, outer corner, upper-eyelid center and lower-eyelid center of the right eye. Then, by locating these keypoints in the captured image and determining their coordinates there, the distance between the upper-eyelid center and the lower-eyelid center of the left (or right) eye is computed as the eyelid distance of that eye, the distance between the inner corner and the outer corner of the left (or right) eye is computed as the eye-corner distance of that eye, and the ratio of the eyelid distance to the eye-corner distance of the left (or right) eye is computed as a first distance ratio X, from which the degree of eye opening Y is determined. For example, a threshold Xmax of the first distance ratio X may be set, and it may be specified that Y = X/Xmax; a larger Y indicates that the user's eyes are opened wider.
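A minimal sketch of this eye-opening computation, assuming four keypoints per eye have already been located; the threshold value Xmax = 0.35 is an illustrative assumption, since the patent leaves the threshold unspecified:

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_openness(inner_corner, outer_corner, upper_lid, lower_lid, x_max=0.35):
    """Degree of eye opening Y for one eye, following the ratio described above."""
    lid_distance = _dist(upper_lid, lower_lid)           # upper/lower eyelid distance
    corner_distance = _dist(inner_corner, outer_corner)  # inner/outer eye-corner distance
    x = lid_distance / corner_distance                   # first distance ratio X
    return x / x_max                                     # Y = X / Xmax
```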
Returning to Fig. 2, in step S220, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected facial action, the display of the controlled object in the currently displayed first group of virtual objects is controlled, and the display of a second group of virtual objects is controlled.
The facial action attributes may comprise at least one action attribute, and the state parameters of a virtual object may comprise at least one state parameter. One action attribute may correspond to only one state parameter, or one action attribute may correspond to multiple state parameters in turn according to a time sequence.
Alternatively, the mapping relations between the facial action attributes and the state parameters of the virtual objects may be preset, or may be determined at random when the liveness detection method according to the embodiments of the present disclosure starts to be performed. The liveness detection method according to the embodiments of the present disclosure may further comprise: prompting the user with the mapping relations between the facial action attributes and the state parameters of the virtual objects.
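A sketch of such a randomly determined mapping; the attribute and parameter names are assumptions of this illustration, not terms fixed by the patent:

```python
import random

ACTION_ATTRIBUTES = ["eye_openness", "mouth_openness", "pitch", "yaw"]
STATE_PARAMETERS = ["size", "vertical_position", "horizontal_position", "color"]

def make_attribute_mapping(randomize=True):
    """Pair each facial action attribute with one virtual-object state parameter.

    With randomize=True the pairing is drawn when a detection run starts, as
    the paragraph above permits; with randomize=False a preset pairing is used.
    The resulting mapping would then be prompted to the user, e.g. "open your
    mouth to grow the ball".
    """
    params = STATE_PARAMETERS[:]
    if randomize:
        random.shuffle(params)
    return dict(zip(ACTION_ATTRIBUTES, params))
```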
As an example, the state of a virtual object displayed on the display screen may be changed under the control of the detected facial action. The display of at least one object in the first group of virtual objects on the display screen is updated according to the detected facial action. The initial display position and/or initial display form of at least some of the objects in the first group of virtual objects are predetermined or determined at random. Specifically, for example, the motion state, display position, size, shape and color of a virtual object may be changed.
Alternatively, displaying a new virtual object on the display screen, i.e. the second group of virtual objects, may be controlled according to the detected facial action. Alternatively, displaying the new virtual object, i.e. the second group of virtual objects, may be controlled according to the display situation of at least some of the objects in the first group of virtual objects. The initial display position and/or initial display form of at least some of the objects in the second group of objects are predetermined or determined at random.
As mentioned above, the virtual objects may comprise a first group of objects that is displayed on the display screen when the liveness detection method according to the embodiments of the present disclosure starts to be performed, and the display of at least one object in the first group of objects may be updated by a first group of facial action attributes. In addition, the virtual objects may further comprise a second group of objects, none of which is displayed on the display screen when the liveness detection method according to the embodiments of the present disclosure starts to be performed; whether to display at least one object in the second group of objects may be controlled by a second group of facial action attributes different from the first group, or according to the display situation of the first group of objects.
Specifically, the state parameters of at least one object in the first group of objects may be display position, size, shape, color, motion state and so on; the motion state, display position, size, shape, color and so on of at least one object in the first group of objects may thus be changed according to the values of the first group of facial action attributes.
Alternatively, the state parameters of each of at least one object in the second group of objects may at least comprise a visibility state, and may further comprise display position, size, shape, color, motion state and so on. Whether to display at least one object in the second group of objects, i.e. whether at least one object in the second group of objects is in the visible state, may be controlled according to the values of the second group of facial action attributes or the display situation of at least one object in the first group of objects; and the motion state, display position, size, shape, color and so on of at least one object in the second group of objects may be changed according to the values of the second group of facial action attributes and/or the values of the first group of facial action attributes.
The operation of step S220 is described with reference to Fig. 4. The facial action attributes at least comprise a first action attribute.
In step S410, the value of the state parameter of the controlled object in the first group of virtual objects is updated according to the value of the first action attribute.
Specifically, one kind of facial action attribute may be mapped to a certain state parameter of a virtual object. For example, the user's degree of eye opening or degree of mouth opening may be mapped to the size of a virtual object, and the size of the virtual object is updated according to the value of the user's degree of eye opening or mouth opening. As another example, the user's face pitch degree may be mapped to the vertical display position of a virtual object on the display screen, and that vertical display position is updated according to the value of the user's face pitch degree.
Alternatively, the ratio K1 between the degree of mouth opening in the current captured image and the degree of mouth opening in a previously saved first captured image may be computed, and the mouth-opening ratio K1 may be mapped to the size S of a virtual object; specifically, a linear function S = a*K1 + b may be adopted to realize the mapping. In addition, alternatively, the degree K2 to which the face position in the current captured image departs from the initial center position may be computed, and the face position may be mapped to the position W of the virtual object; specifically, a linear function W = c*K2 + d may be adopted to realize the mapping.
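The two linear mappings may be written directly as below; the coefficient values a, b, c and d are illustrative assumptions, since the patent fixes only the linear form:

```python
def mouth_ratio_to_size(k1, a=40.0, b=10.0):
    """Map the mouth-opening ratio K1 to the object size S via S = a*K1 + b."""
    return a * k1 + b

def face_offset_to_position(k2, c=3.0, d=0.0):
    """Map the offset K2 of the face from its initial center position to the
    object position W via W = c*K2 + d."""
    return c * k2 + d
```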
In step S420, the controlled object is displayed on the display screen according to the updated value of its state parameter.
Alternatively, in step S430, the values of the state parameters of the second group of virtual objects are updated according to the display state of the first group of virtual objects currently displayed on the display screen.
Alternatively, the facial action attributes may further comprise a second action attribute. In step S430, the values of the state parameters of the second group of virtual objects are updated according to the value of the second action attribute.
In step S440, the second group of virtual objects is displayed on the display screen according to the updated values of the state parameters of the second group of virtual objects.
Step S430 may be performed simultaneously with step S410 or after it, and step S440 may be performed simultaneously with step S420 or after it.
Returning to Fig. 2, in step S230 it is judged whether at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects.
Specifically, a controlled object overlapping a target object may comprise: the positions coincide; the positions coincide and the sizes are identical; the positions coincide and the shapes are identical; or the positions coincide and the colors are identical.
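A sketch of this overlap test; the object fields and the pixel tolerance for position coincidence are assumptions of the illustration:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float
    y: float
    size: float = 1.0
    shape: str = "circle"
    color: str = "red"

def objects_overlap(ctrl, target, pos_tol=5.0,
                    check_size=False, check_shape=False, check_color=False):
    """Overlap test in the sense of step S230.

    Position coincidence is always required; the size/shape/color equality
    checks correspond to the stricter variants listed above.
    """
    if abs(ctrl.x - target.x) > pos_tol or abs(ctrl.y - target.y) > pos_tol:
        return False                  # positions do not coincide
    if check_size and ctrl.size != target.size:
        return False
    if check_shape and ctrl.shape != target.shape:
        return False
    if check_color and ctrl.color != target.color:
        return False
    return True
```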
When at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects, it is determined in step S240 that the face in the captured image is a live face.
According to the liveness detection method of the embodiments of the present disclosure, by using various facial action parameters as state control parameters of the virtual objects, the display of the virtual objects on the display screen is controlled according to the facial action, and liveness detection can be performed according to whether the displayed controlled object overlaps the target object.
Below, the liveness detection method according to embodiments of the present disclosure is further described with reference to specific embodiments.
First embodiment
In this first embodiment, the virtual objects comprise a first group of objects and a second group of objects; the virtual objects currently displayed on the display screen are the first group of objects, and the virtual objects not currently displayed but displayed later based on the display situation of at least one object in the first group of objects are the second group of objects. The first group of objects comprises at least two objects, and the second group of objects comprises at least one object. Alternatively, the initial display position and/or initial display form of at least some of the objects in the first and second groups of objects are predetermined or determined at random.
In this embodiment, the first state parameter of each object in the first group of objects is the display position of that object, and the first and second state parameters of each object in the second group of objects are respectively the display position and the visibility state of that object.
Alternatively, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects; the first and third subgroups are controlled objects, and the second subgroup consists of target objects. The number of controlled objects may be preset, and when the preset number of controlled objects successively overlap the target object, it is determined that a live face is detected.
Alternatively, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects; the first subgroup consists of controlled objects, and the second and third subgroups consist of target objects. The number of target objects may be preset, and when the controlled object successively overlaps the preset number of target objects, it is determined that a live face is detected.
Alternatively, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects and a fourth subgroup of objects; the first and third subgroups consist of controlled objects, and the second and fourth subgroups consist of target objects.
The numbers of objects in the first and second subgroups and in the third and fourth subgroups may be preset. Object pairs may be defined, each pair comprising one controlled object and one target object. The number of object pairs may be predefined, and when the controlled objects of the predetermined number of object pairs overlap their target objects, it is determined that a live face is detected.
Fig. 5 shows an exemplary flowchart of a liveness detection method 500 according to an embodiment of the present disclosure.
In step S510, a timer is initialized. The timer may be initialized according to user input, or automatically when a face is detected in the captured image, or automatically when a predetermined facial action is detected in the captured image. In addition, after the timer is initialized, at least some of the objects in the first group of objects are displayed on the display screen.
In step S520, an image of the predetermined viewfinder range (a first image) is captured in real time as the captured image. Specifically, the image capture device 110 in the electronic device 100 for implementing the liveness detection method of an embodiment of the present disclosure as shown in Fig. 1, or another image capture device independent of the electronic device 100 that can transmit images to it, may be used to capture grayscale or color images of the predetermined viewfinder range as captured images; a captured image may be a photo or a frame of a video.
Step S530 corresponds to step S210 in Fig. 2 and is not repeated here.
In step S540, the display of the controlled object in the currently displayed first group of virtual objects is controlled based on the detected facial action, and the second group of virtual objects is displayed based on the display state of the first group of virtual objects.
Step S550 judges whether, within a predetermined time period, at least some of the controlled objects in the first and second groups of virtual objects successively overlap at least some of the target objects in the first and second groups of virtual objects; the time period may be predetermined. Specifically, step S550 may comprise judging whether the timer exceeds the predetermined time period and whether the controlled objects successively overlap the target objects. Alternatively, a timeout flag may be produced when the timer exceeds the predetermined time period, and in step S550 whether the timer exceeds the predetermined time period may be judged according to this flag.
When step S550 determines that the timer exceeds the predetermined time period and at least some of the controlled objects have not yet successively overlapped at least some of the target objects, it is determined in step S570 that no live face is detected. When step S550 determines that the timer does not exceed the predetermined time period and at least some of the controlled objects have successively overlapped at least some of the target objects, it is determined in step S560 that a live face is detected. When step S550 determines that the timer has not yet exceeded the predetermined time period and at least some of the controlled objects have not yet successively overlapped at least some of the target objects, the method returns to step S520.
When the method returns to step S520, an image of the predetermined viewfinder range (a second image) is captured in real time as the captured image, and steps S530-S550 are then performed. Here, to distinguish the successively captured images of the predetermined viewfinder range, the image captured first is called the first image and the image captured later is called the second image. It should be appreciated that the first image and the second image are images of the same viewfinder range and differ only in capture time. Steps S520-S550 shown in Fig. 5 are repeated until it is determined in step S560 that a live face is detected, or until it is determined in step S570 that no live face is detected.
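The S510-S570 control flow of Fig. 5 may be summarized as below; the four callables and the 20-second period stand in for steps the patent describes abstractly, so their names, signatures and the timeout value are assumptions of this sketch:

```python
import time

def run_liveness_check(capture_frame, detect_action, update_display,
                       targets_all_hit, timeout_s=20.0):
    """Sketch of the S510-S570 loop of Fig. 5."""
    deadline = time.monotonic() + timeout_s   # S510: initialize the timer
    while time.monotonic() < deadline:        # timer half of the S550 judgment
        frame = capture_frame()               # S520: capture the next image
        action = detect_action(frame)         # S530: detect the facial action
        update_display(action)                # S540: update controlled / second-group objects
        if targets_all_hit():                 # S550: successive overlaps complete?
            return True                       # S560: live face detected
    return False                              # S570: timeout, no live face detected
```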
Fig. 6 shows an example of the first group of objects and the second group of objects. In this example, the preset number of controlled objects is 1 and the preset number of target objects is 3.
As shown in Fig. 6, in the initial state the first group of objects comprises a first object A and a second object B1; the first object A is the controlled object, and the second object B1 is a background object, the background object being a target object.
In addition, Fig. 6 also shows a third object B2 and a fourth object B3, which are successively displayed as the second group of objects and are background objects, the background objects being target objects. Specifically, when the first object A overlaps the second object B1, the third object B2 is displayed as the second group of objects; when the first object A overlaps the third object B2, the fourth object B3 is displayed as the second group of objects.
The facial action attributes comprise a first action attribute; the state parameters of the first object A comprise a first state parameter of the first object A, the state parameters of the second object B1 comprise a first state parameter of the second object B1, the state parameters of the third object B2 comprise a first state parameter of the third object B2, and the state parameters of the fourth object B3 comprise a first state parameter of the fourth object B3.
First, the value of the first state parameter of the first object A is updated according to the value of the first action attribute, and the first object A is displayed on the display screen according to the updated value of its first state parameter.
After the first object A coincides with the display position of the second object B1, the value of the second state parameter of the third object B2 in the second group of objects is set to a value representing visibility, so that the third object B2 in the second group of objects is displayed. Alternatively, the value of the first state parameter of the first object A may continue to be updated according to the value of the first action attribute, and the first object A displayed on the display screen according to the updated value. Alternatively, the facial action attributes may further comprise a second action attribute different from the first action attribute; the value of the first state parameter of the first object A may continue to be updated according to the value of the second action attribute, and the first object A displayed on the display screen according to the updated value.
After the first object A coincides with the display position of the third object B2, the value of the second state parameter of the fourth object B3 in the second group of objects is set to a value representing visibility, so that the fourth object B3 in the second group of objects is displayed. Alternatively, the value of the first state parameter of the first object A may continue to be updated according to the value of the first or second action attribute, and the first object A displayed on the display screen according to the updated value. Alternatively, the facial action attributes may further comprise a third action attribute different from the first and second action attributes; the value of the first state parameter of the first object A may continue to be updated according to the value of the third action attribute, and the first object A displayed on the display screen according to the updated value.
When the first object A successively overlaps the second object B1, the third object B2 and the fourth object B3, it is determined that liveness detection succeeds. Alternatively, it is determined that liveness detection succeeds when the first object A successively overlaps the second object B1, the third object B2 and the fourth object B3 within the predetermined time period.
When the liveness detection method shown in Fig. 5 is applied, step S550 judges whether the timer exceeds the predetermined time period and whether the first object A successively overlaps the second object B1, the third object B2 and the fourth object B3.
When step S550 determines that the timer exceeds the predetermined time period and the first object A has overlapped none of the second object B1, the third object B2 and the fourth object B3, or neither the third object B2 nor the fourth object B3, or not the fourth object B3, it is determined in step S570 that no live face is detected.
When step S550 determines that the timer does not exceed the predetermined time period and the first object A successively overlaps the second object B1, the third object B2 and the fourth object B3, it is determined in step S560 that a live face is detected.
On the other hand, when step S550 determines that the timer does not exceed the predetermined time period and the first object A has overlapped none of the second object B1, the third object B2 and the fourth object B3, or neither the third object B2 nor the fourth object B3, or not the fourth object B3, the method returns to step S520.
More specifically, when returning from step S550 to step S520, the following steps may also be performed: judge whether the fourth object has been displayed; when the fourth object has not yet been displayed, judge whether the third object has been displayed; when the third object has not yet been displayed, judge whether the first object overlaps the second object, and when the first object overlaps the second object, display the third object and then return to step S520; when the fourth object has not yet been displayed but the third object has been displayed, judge whether the first object overlaps the third object, and when the first object overlaps the third object, display the fourth object and then return to step S520.
Alternatively, a total number of target objects may be set, and it is determined that liveness detection succeeds when the first object A successively overlaps each target object, or when the first object A successively overlaps a predetermined number of target objects, or when the first object A successively overlaps at least some of the predetermined number of target objects.
Fig. 7 shows another example of the first group of objects and the second group of objects. In this example, the preset number of controlled objects is 3 and the preset number of target objects is 1.
As shown in Fig. 7, in the initial state the first group of objects comprises a first object A1 and a second object B; the first object A1 is a controlled object, and the second object B is a background object, the background object being the target object.
In addition, Fig. 7 also shows a third object A2 and a fourth object A3, which are successively displayed as the second group of objects and are controlled objects. Specifically, when the first object A1 overlaps the second object B, the third object A2 is displayed as the second group of objects; when the third object A2 overlaps the second object B, the fourth object A3 is displayed as the second group of objects.
The facial action attributes comprise a first action attribute; the state parameters of the first object A1 comprise a first state parameter of the first object A1, the state parameters of the second object B comprise a first state parameter of the second object B, the state parameters of the third object A2 comprise a first state parameter of the third object A2, and the state parameters of the fourth object A3 comprise a first state parameter of the fourth object A3.
First, the value of the first state parameter of the first object A1 is updated according to the value of the first action attribute, and the first object A1 is displayed on the display screen according to the updated value of its first state parameter.
After the first object A1 coincides with the display position of the second object B, the value of the second state parameter of the third object A2 in the second group of objects is set to a value representing visibility, so that the third object A2 in the second group of objects is displayed. Alternatively, the value of the first state parameter of the third object A2 may continue to be updated according to the value of the first action attribute, and the third object A2 displayed on the display screen according to the updated value, with the display position of the first object A1 remaining unchanged. Alternatively, the facial action attributes may further comprise a second action attribute different from the first action attribute; the value of the first state parameter of the third object A2 may continue to be updated according to the value of the second action attribute, and the third object A2 displayed on the display screen according to the updated value.
After the third object A2 coincides with the display position of the second object B, the value of the second state parameter of the fourth object A3 in the second group of objects is set to a value representing visibility, so that the fourth object A3 in the second group of objects is displayed. Alternatively, the value of the first state parameter of the fourth object A3 may continue to be updated according to the value of the first or second action attribute, and the fourth object A3 displayed on the display screen according to the updated value, with the display positions of the objects A1 and A2 remaining unchanged. Alternatively, the facial action attributes may further comprise a third action attribute different from the first and second action attributes; the value of the first state parameter of the fourth object A3 may continue to be updated according to the value of the third action attribute, and the fourth object A3 displayed on the display screen according to the updated value.
When the first object A1, the third object A2 and the fourth object A3 successively overlap the second object B, it is determined that liveness detection succeeds. Alternatively, it is determined that liveness detection succeeds when the first object A1, the third object A2 and the fourth object A3 successively overlap the second object B within the predetermined time period.
When the liveness detection method shown in Fig. 5 is applied, step S550 judges whether the timer exceeds the predetermined time period and whether the first object A1, the third object A2 and the fourth object A3 successively overlap the second object B.
When step S550 determines that the timer exceeds the predetermined time period and the first object A1, or the third object A2, or the fourth object A3 has not overlapped the second object B, it is determined in step S570 that no live face is detected.
When step S550 determines that the timer does not exceed the predetermined time period and the first object A1, the third object A2 and the fourth object A3 successively overlap the second object B, it is determined in step S560 that a live face is detected.
On the other hand, when step S550 determines that the timer does not exceed the predetermined time period and the first object A1, or the third object A2, or the fourth object A3 has not overlapped the second object B, the method returns to step S520.
More specifically, when returning from step S550 to step S520, the following steps may also be performed: judge whether the fourth object has been displayed; when the fourth object has not yet been displayed, judge whether the third object has been displayed; when the third object has not yet been displayed, judge whether the first object overlaps the second object, and when the first object overlaps the second object, display the third object and then return to step S520; when the fourth object has not yet been displayed but the third object has been displayed, judge whether the third object overlaps the second object, and when the third object overlaps the second object, display the fourth object and then return to step S520.
Alternatively, a total number of controlled objects may be set, and it is determined that liveness detection succeeds when each controlled object successively overlaps the target object, or when a predetermined number of controlled objects successively overlap the target object, or when at least some of the predetermined number of controlled objects successively overlap the target object.
Fig. 8 shows the example of the first group objects and the second group objects.In this example, the quantity presetting controlled device is 3, and the quantity presetting destination object is 3.
As shown in Figure 8, in an initial condition, described first group objects comprises the first object A1 and the second object B 1, and described first object A1 is controlled device, and described second object B 1 is background object, and described background object is destination object.
In addition, also show the 3rd object A2 and the 4th object B 2 and the 5th object A3 and the 6th object B 3 in Fig. 8, described 3rd object A2 and the 5th object A3 is controlled device, and described 4th object B 2 and the 6th object B 3 are background object.Particularly, when the first object A1 and the second object B 1 overlap, display the 3rd object A2 and the 4th object B 2 are as described second group objects; When the 3rd object A2 and the 4th object B 2 overlap, display the 5th object A3 and the 6th object B 3 are as described second group objects.
The facial action attributes comprise a first action attribute. First, the value of the first state parameter of the first object A1 is updated according to the value of the first action attribute, and the first object A1 is displayed on the display screen according to the updated value of its first state parameter.
After the display position of the first object A1 coincides with that of the second object B1, the third object A2 and the fourth object B2 in the second group of objects are displayed. Optionally, the value of the first state parameter of the third object A2 may continue to be updated according to the value of the first action attribute, and the third object A2 is displayed on the display screen according to the updated value of its first state parameter. Alternatively, the facial action attributes may further comprise a second action attribute different from the first action attribute, in which case the value of the first state parameter of the third object A2 is updated according to the value of the second action attribute, and the third object A2 is displayed accordingly.
After the display position of the third object A2 coincides with that of the fourth object B2, the fifth object A3 in the second group of objects is displayed. Optionally, the value of the first state parameter of the fifth object A3 may continue to be updated according to the value of the first or second action attribute, and the fifth object A3 is displayed on the display screen according to the updated value of its first state parameter. Alternatively, the facial action attributes may further comprise a third action attribute different from the first and second action attributes, in which case the value of the first state parameter of the fifth object A3 is updated according to the value of the third action attribute, and the fifth object A3 is displayed accordingly.
When the first object A1, the third object A2, and the fifth object A3 coincide in sequence with the second object B1, the fourth object B2, and the sixth object B3, living body detection is determined to be successful. Optionally, living body detection is determined to be successful only when these coincidences all occur within a predetermined time period.
When the living body detection method shown in Fig. 5 is applied, step S550 judges whether the timer has exceeded the predetermined time period and whether the first object A1, the third object A2, and the fifth object A3 have coincided in sequence with the second object B1, the fourth object B2, and the sixth object B3.
When step S550 determines that the timer has exceeded the predetermined time period and the fifth object A3 has not coincided with the sixth object B3, or the third object A2 has not coincided with the fourth object B2, or the first object A1 has not coincided with the second object B1, it is determined in step S570 that no living body face is detected.
When step S550 determines that the timer has not exceeded the predetermined time period and the first object A1, the third object A2, and the fifth object A3 have coincided in sequence with the second object B1, the fourth object B2, and the sixth object B3, it is determined in step S560 that a living body face is detected.
On the other hand, when step S550 determines that the timer has not exceeded the predetermined time period and the fifth object A3 has not coincided with the sixth object B3, or the third object A2 has not coincided with the fourth object B2, or the first object A1 has not coincided with the second object B1, the method returns to step S520.
More specifically, when returning from step S550 to step S520, the following steps may also be performed. First, judge whether the fifth and sixth objects have been displayed. If the fifth and sixth objects have not yet been displayed, judge whether the third and fourth objects have been displayed. If the third and fourth objects have not yet been displayed, judge whether the first object coincides with the second object, display the third and fourth objects when they coincide, and then return to step S520. If the third and fourth objects have been displayed but the fifth and sixth have not, judge whether the third object coincides with the fourth object, display the fifth and sixth objects when they coincide, and then return to step S520.
Alternatively, the number of object pairs contained in the second group of objects may be set, where, for example, object A2 and object B2 may be regarded as one object pair, and living body detection is determined to be successful when each object Ai coincides in sequence with its corresponding object Bi. Optionally, living body detection is determined to be successful only when each object Ai coincides with its corresponding object Bi within a predetermined time period; a minimal sketch of this sequential pair check is given below.
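The sequential-coincidence condition on object pairs can be illustrated with a short Python sketch. This is an illustrative reading of the above, not code from the patent; the class names and the pixel tolerance are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Pair:
        controlled_pos: tuple   # current (x, y) of controlled object Ai
        target_pos: tuple       # (x, y) of target object Bi

    def coincides(p, q, tol=5.0):
        """Two display positions coincide if within `tol` pixels on both axes."""
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

    class SequentialCoincidence:
        """Succeeds only when pair 0, then pair 1, ... coincide in order."""
        def __init__(self, num_pairs):
            self.num_pairs = num_pairs
            self.next_index = 0     # index of the pair that must coincide next

        def update(self, index, pair):
            # Only the pair that is next in the sequence may advance the state.
            if index == self.next_index and coincides(pair.controlled_pos, pair.target_pos):
                self.next_index += 1
            return self.next_index >= self.num_pairs   # True = detection success

The checker is updated once per frame with the currently active pair; success is reported only after every pair (Ai, Bi) has coincided in the prescribed order.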
Alternatively, as shown in Figs. 6-8, the first object A and the second object B may differ in both horizontal and vertical position. In this case, the first action attribute may comprise a first sub-action attribute and a second sub-action attribute, and the first state parameter of the first object A may comprise a first sub-state parameter and a second sub-state parameter, whose values are the horizontal and vertical position coordinates of the first object A, respectively. The horizontal position coordinate of the first object A on the display screen may then be updated according to the value of the first sub-action attribute, and its vertical position coordinate according to the value of the second sub-action attribute.
For example, the first action attribute may be defined as the position of the face in the captured image, and the display position of the first object A on the display screen is updated according to the position coordinates of the face in the captured image. In this case, the first sub-action attribute may be defined as the horizontal position of the face in the captured image and the second sub-action attribute as its vertical position, so that the horizontal and vertical position coordinates of the first object A on the display screen are updated according to the horizontal and vertical position coordinates of the face in the captured image, respectively.
As another example, the first sub-action attribute may be defined as the face yaw degree and the second sub-action attribute as the face pitch degree; the horizontal position coordinate of the first object A on the display screen is then updated according to the value of the face yaw degree, and its vertical position coordinate according to the value of the face pitch degree.
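As a rough illustration of this yaw/pitch mapping, the following sketch assumes a pose estimator that reports yaw and pitch normalized to [-1, 1]; the normalization convention and screen size are assumptions, not part of the disclosure.

    def face_pose_to_position(yaw, pitch, screen_w=640, screen_h=480):
        """Map normalized face yaw in [-1, 1] to the horizontal coordinate
        and normalized face pitch in [-1, 1] to the vertical coordinate."""
        x = (yaw + 1.0) / 2.0 * (screen_w - 1)
        y = (pitch + 1.0) / 2.0 * (screen_h - 1)
        return x, y

    # Example: facing straight ahead centers the controlled object.
    assert face_pose_to_position(0.0, 0.0) == (319.5, 239.5)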
Second embodiment
In the second embodiment, the virtual objects comprise a first group of objects and a second group of objects: the first group of objects is currently displayed on the display screen, while the second group of objects is not currently displayed and is displayed according to the facial action. The first group of objects comprises at least two objects, and the second group of objects comprises at least one object. Optionally, the initial display positions and/or initial display forms of at least some of the objects in the first and second groups are predetermined or determined at random.
In this embodiment, the first state parameter of each object in the first group of objects is its display position, while each object in the second group of objects has first and second state parameters that are its display position and its visibility state, respectively.
Optionally, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects, where the first and third subgroups are controlled objects and the second subgroup consists of target objects. The number of controlled objects may be preset, and a living body face is determined to be detected when the predetermined number of controlled objects coincide with target objects in sequence.
Optionally, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects, where the first subgroup consists of controlled objects and the second and third subgroups consist of target objects. The number of target objects may be preset, and a living body face is determined to be detected when a controlled object coincides in sequence with the predetermined number of target objects.
Optionally, the first group of objects comprises a first subgroup of objects and a second subgroup of objects, and the second group of objects comprises a third subgroup of objects and a fourth subgroup of objects, where the first and third subgroups are controlled objects and the second and fourth subgroups are target objects.
The numbers of objects in the first and second subgroups and in the third and fourth subgroups may be preset. Object pairs may be defined, each pair comprising one controlled object and one target object. The number of object pairs may be predefined, and a living body face is determined to be detected when the controlled objects of the predetermined number of object pairs coincide with their target objects.
Fig. 9 shows an exemplary flowchart of a living body detection method 900 according to an embodiment of the present disclosure.
In step S910, a timer is initialized. The timer may be initialized according to a user input, or automatically when a face is detected in the captured image, or automatically when a predetermined action of the face is detected in the captured image. In addition, after the timer is initialized, at least some of the objects in the first group of objects are displayed on the display screen.
In step S920, an image (a first image) of a predetermined acquisition range is captured in real time as the captured image. Specifically, the image acquisition device 110 in the electronic apparatus 100 for implementing the face detection method of the embodiment of the present disclosure as shown in Fig. 1, or another image acquisition device that is independent of the electronic apparatus 100 and can transmit images to it, may be used to capture a grayscale or color image of the predetermined acquisition range as the captured image; the captured image may be a photograph or a frame of a video.
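For concreteness, step S920 could be realized with OpenCV as follows. This is one possible implementation choice for illustration; the patent does not mandate any particular capture API, and the camera index is an assumption.

    import cv2

    def capture_frame(camera_index=0, grayscale=False):
        """Capture one frame of the predetermined acquisition range."""
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("failed to capture a frame")
        if grayscale:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return frame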
Step S930 corresponds to step S530 in Fig. 5 and is not repeated here.
In step S940, the display of the controlled objects in the currently displayed first group of virtual objects is controlled based on the value of the first action attribute in the detected facial action, and the second group of virtual objects is displayed based on the value of the second action attribute in the detected facial action.
Step S950 judges whether, within the predetermined time period, at least some of the controlled objects in the first and second groups of virtual objects have coincided in sequence with at least some of the target objects in the first and second groups of virtual objects; the time period may be predetermined. Specifically, step S950 may comprise judging whether the timer has exceeded the predetermined time period and whether at least some of the controlled objects have coincided in sequence with at least some of the target objects. Optionally, a timeout flag may be generated when the timer exceeds the predetermined time period, and step S950 may judge whether the timer has timed out according to this flag.
When step S950 determines that the timer has exceeded the predetermined time period and at least some of the controlled objects have not yet coincided in sequence with at least some of the target objects, it is determined in step S970 that no living body face is detected. When step S950 determines that the timer has not exceeded the predetermined time period and at least some of the controlled objects have coincided in sequence with at least some of the target objects, it is determined in step S960 that a living body face is detected. When step S950 determines that the timer has not yet exceeded the predetermined time period and at least some of the controlled objects have not yet coincided in sequence with at least some of the target objects, the method returns to step S920.
When returning to step S920, an image (a second image) of the predetermined acquisition range is captured in real time as the captured image, and steps S930-S950 are then performed. Here, to distinguish the successively captured images of the predetermined acquisition range, the image captured first is called the first image and the image captured later is called the second image. It should be appreciated that the first image and the second image cover the same viewfinder range and differ only in capture time. Steps S920-S950 shown in Fig. 9 are repeated until step S960 determines that a living body face is detected or step S970 determines that no living body face is detected.
Fig. 10A shows an example of the first group of objects. In this example, the preset number of controlled objects is 2, and the preset number of target objects is 1.
As shown in Fig. 10A, in the initial state the first group of objects comprises a first object A1 and a second object B, where the first object A1 is a controlled object, the second object B is a background object, and the background object serves as the target object. The second group of objects, not shown in Fig. 10A, comprises a third object A2, which is a controlled object. The display positions of the first object A1, the third object A2, and/or the target object B are determined at random.
Specifically, the display position coordinates of the first object A1 are updated according to the value of the first action attribute, and the visibility state value of the third object A2 is updated according to the value of the second action attribute; for example, a visibility state value of 0 indicates invisible, i.e. the third object A2 is not displayed, while a visibility state value of 1 indicates visible, i.e. the third object A2 is displayed. Optionally, a living body face is determined to be detected when the display position of the third object A2 coincides with that of the second object B. Alternatively, a living body face is determined to be detected when the display positions of both the first object A1 and the third object A2 coincide with that of the target object B.
Specifically, the first object A1 is displayed initially and the third object A2 is not. The display position of the first object A1 is changed according to the first action attribute, the visibility state of the third object A2 is changed according to the second action attribute, and the display position of the third object A2 is determined by the display position of the first object A1 at the moment the value of the second action attribute changes. For example, the display position of the third object A2 is identical to that of the first object A1 when the second action attribute value changes, and living body detection is determined to be successful when the display position of the third object A2 coincides with that of the target object B.
For the example shown in Fig. 10A, living body detection is determined to be successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the target object B; a change of the second action attribute is then detected while the first object A1 is located at the target object B, whereupon the third object A2 is displayed at the target object B. Concretely, the first object A1 may, for example, be a crosshair, the second object B a bullseye, and the third object A2 a bullet.
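The crosshair/bullseye/bullet scenario can be sketched as below. This is a hedged illustration of the Fig. 10A logic; the pixel tolerance and the encoding of the "fire" event are assumptions.

    class CrosshairScenario:
        """A1 is a crosshair driven by the first action attribute; a change
        in the second action attribute fires the bullet A2 at the crosshair's
        current position; success when A2 lands on the bullseye B."""

        def __init__(self, bullseye_pos, tol=5.0):
            self.bullseye_pos = bullseye_pos
            self.tol = tol
            self.bullet_pos = None      # A2 is initially invisible

        def step(self, crosshair_pos, second_attr_changed):
            if second_attr_changed and self.bullet_pos is None:
                self.bullet_pos = crosshair_pos     # show A2 where A1 is
            if self.bullet_pos is None:
                return False
            dx = abs(self.bullet_pos[0] - self.bullseye_pos[0])
            dy = abs(self.bullet_pos[1] - self.bullseye_pos[1])
            return dx <= self.tol and dy <= self.tol   # bullet on bullseye

Note that success is only possible if the bullet is fired while the crosshair is over the bullseye, which is exactly the behavior a flat photograph or replayed video cannot produce on cue.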
When the living body detection method shown in Fig. 9 is applied, step S950 judges whether the timer has exceeded the predetermined time period and whether the third object A2 coincides with the second object B.
When step S950 determines that the timer has exceeded the predetermined time period and the third object A2 has not yet been displayed, or has been displayed but does not coincide with the second object B, it is determined in step S970 that no living body face is detected.
When step S950 determines that the timer has not exceeded the predetermined time period and the third object A2 coincides with the second object B, it is determined in step S960 that a living body face is detected.
On the other hand, when step S950 determines that the timer has not exceeded the predetermined time period and the third object A2 has not yet been displayed, the method returns to step S920.
Fig. 10B shows another example of the first group of objects and the second group of objects. In this example, the preset number of controlled objects is 2, and the preset number of target objects is 2.
As shown in Fig. 10B, in the initial state the first group of objects comprises a first object A1 and a second object B1, where the first object A1 is a controlled object, the second object B1 is a background object, and the background object serves as a target object.
Fig. 10B also shows a third object A2 and a fourth object B2, where the third object A2 is a controlled object and the fourth object B2 is a background object. Specifically, when the first object A1 coincides with the second object B1, the third object A2 and the fourth object B2 are displayed as the second group of objects.
The value of the state parameter of at least one of the first object A1, the second object B1, the third object A2, and the fourth object B2 may be determined at random; for example, their display positions are determined at random.
The facial action attributes comprise a first action attribute and a second action attribute. The display position coordinates of the first object A1 are updated according to the value of the first action attribute, and the visibility state values of the third and fourth objects are updated according to the value of the second action attribute; for example, a visibility state value of 0 indicates invisible, i.e. the third and fourth objects are not displayed, while a visibility state value of 1 indicates visible, i.e. the third and fourth objects are displayed. A minimal sketch of this update follows.
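The visibility update itself is simple; as a sketch, using the 0/1 encoding described above, with the trigger condition (for example a detected mouth-open event) being an assumption:

    def update_visibility(visibility, second_attr_triggered):
        """visibility: 0 = hidden, 1 = shown (per the encoding above)."""
        if visibility == 0 and second_attr_triggered:
            return 1    # reveal the third and fourth objects
        return visibility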
In addition, the display position coordinates of the third object may also be updated according to the value of the first action attribute. Alternatively, the facial action attributes may further comprise a third action attribute different from the first action attribute, and the display position coordinates of the third object are updated according to the value of the third action attribute.
Specifically, the first object A1 and the second object B1 are displayed initially, while the third object A2 and the fourth object B2 are not. The display position of the first object A1 is changed according to the first action attribute, and the visibility state of the third and fourth objects is changed according to the second action attribute. The initial display position of the third object A2 may be determined by the display position of the first object A1 at the moment the value of the second action attribute changes, or may be determined at random. In this example, living body detection is determined to be successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the second object B1; a change of the second action attribute is then detected while the first object A1 is located at the second object B1, whereupon the third object A2 is displayed at a random position or at a display position determined from that of the second object B1, and the fourth object B2 is displayed at random; the display position of the third object A2 is then changed according to the first action attribute, or according to a third action attribute different from the first action attribute, until the third object A2 is moved to the fourth object B2.
As mentioned above, the first action attribute may comprise a first sub-action attribute and a second sub-action attribute, and the first state parameter of the first object A1 may comprise a first sub-state parameter and a second sub-state parameter, whose values are the horizontal and vertical position coordinates of the first object A1, respectively. The horizontal and vertical position coordinates of the first object A1 on the display screen may be updated according to the values of the first and second sub-action attributes, respectively.
In addition, the third action attribute may comprise a third sub-action attribute and a fourth sub-action attribute, and the first state parameter of the third object A2 may comprise a first sub-state parameter and a second sub-state parameter, whose values are the horizontal and vertical position coordinates of the third object A2, respectively. The horizontal and vertical position coordinates of the third object A2 on the display screen may be updated according to the values of the third and fourth sub-action attributes, respectively.
For example, the first and second sub-action attributes may be defined as the face yaw degree and the face pitch degree, respectively, while the third and fourth sub-action attributes may be defined as the eyeball left-right rotation degree and the eyeball up-down rotation degree, respectively.
The specific implementations of the living body detection method according to the embodiments of the present disclosure have been described above in the first and second embodiments. It should be appreciated that the various specific operations in the first and second embodiments may be combined as needed.
Next, a living body detection apparatus according to an embodiment of the present disclosure is described with reference to Fig. 11 and Fig. 12. The living body detection apparatus may be an electronic apparatus integrated with a face image acquisition device, such as a smartphone, a tablet computer, a personal computer, or an identification apparatus based on face recognition. Alternatively, the living body detection apparatus may comprise a separate face image acquisition device and a separate detection processing device, where the detection processing device receives captured images from the face image acquisition device and performs living body detection on them. The detection processing device may be a server, a smartphone, a tablet computer, a personal computer, an identification apparatus based on face recognition, or the like.
Since the details of each operation performed by this living body detection apparatus are substantially the same as those of the living body detection method described above with reference to Figs. 2-4, only a brief description of the apparatus is given below, and descriptions of the same details are omitted to avoid repetition.
As shown in Fig. 11, a living body detection apparatus 1100 according to an embodiment of the present disclosure comprises a facial action detection device 1110, a virtual object control device 1120, and a living body determination device 1130, each of which may be implemented by the processor 102 shown in Fig. 1.
As shown in Fig. 12, a living body detection apparatus 1200 according to an embodiment of the present disclosure comprises an image acquisition device 1240, a facial action detection device 1110, a virtual object control device 1120, a living body determination device 1130, a display device 1250, and a storage device 1260. The image acquisition device 1240 may be implemented by the image acquisition device 110 shown in Fig. 1; the facial action detection device 1110, the virtual object control device 1120, and the living body determination device 1130 may be implemented by the processor 102 shown in Fig. 1; the display device 1250 may be implemented by the output unit 108 shown in Fig. 1; and the storage device 1260 may be implemented by the storage device 104 shown in Fig. 1.
The image acquisition device 1240 in the living body detection apparatus 1200, or another image acquisition device that is independent of the living body detection apparatus 1100 or 1200 and can transmit images to it, may be used to capture a grayscale or color image of the predetermined acquisition range as the captured image; the captured image may be a photograph or a frame of a video. The image acquisition device may be the camera of a smartphone, the camera of a tablet computer, the camera of a personal computer, or even a webcam.
The facial action detection device 1110 is configured to detect a facial action from the captured image.
As shown in Fig. 13, the facial action detection device 1110 may comprise a key point positioning device 1310, a texture information extraction device 1320, and an action attribute determination device 1330.
The key point positioning device 1310 is configured to locate face key points in the captured image. As an example, the key point positioning device 1310 may first determine whether the obtained image contains a face, and locate the face key points when a face is detected. The operational details of the key point positioning device 1310 are the same as those described for step S310 and are not repeated here.
The texture information extraction device 1320 is configured to extract image texture information from the captured image. As an example, the texture information extraction device 1320 may extract fine-grained facial information, such as eyeball position information, mouth shape information, and micro-expression information, from the pixel information in the captured image, for example the luminance information of the pixels.
The action attribute determination device 1330 obtains the values of the facial action attributes based on the located face key points and/or the image texture information. The facial action attributes obtained from the located face key points may include, but are not limited to, the eye open/close degree, mouth open/close degree, face pitch degree, face yaw degree, and the distance between the face and the camera. The facial action attributes obtained from the image texture information may include, but are not limited to, the eyeball left-right deflection degree and the eyeball up-down deflection degree. The operational details of the action attribute determination device 1330 are the same as those described for step S330 and are not repeated here.
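As one concrete illustration of how an action attribute value could be derived from located key points, the eye open/close degree can be approximated by an eye-aspect-ratio style measure. The six-point eye landmark layout is an assumption about the landmark scheme, not something specified by this disclosure.

    import math

    def eye_open_degree(eye_pts):
        """eye_pts: six (x, y) landmarks around one eye, ordered so that
        points 1/5 and 2/4 are vertical pairs and 0/3 are the eye corners."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        vertical = dist(eye_pts[1], eye_pts[5]) + dist(eye_pts[2], eye_pts[4])
        horizontal = dist(eye_pts[0], eye_pts[3])
        return vertical / (2.0 * horizontal)   # larger value = eyes more open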
The virtual object control device 1120 is configured to control, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected facial action, the display in the display device 1250 of the controlled objects in the currently displayed first group of virtual objects, and to control the display of the second group of virtual objects in the display device 1250.
As an example, the state of the virtual objects displayed on the display screen may be changed under the control of the detected facial action. The display of at least one object of the first group of virtual objects on the display screen is updated according to the detected facial action. The initial display positions and/or initial display forms of at least some of the objects in the first group of virtual objects are predetermined or determined at random. Specifically, the motion state, display position, size, shape, color, and so on of the virtual objects may be changed.
Optionally, the display of new virtual objects on the display screen, i.e. the second group of virtual objects, may be controlled according to the detected facial action. Alternatively, the display of the new virtual objects may be controlled according to the display situation of the first group of virtual objects. The initial display positions and/or initial display forms of at least some of the objects in the second group of objects are predetermined or determined at random.
The state parameters of the second group of virtual objects may at least comprise a visibility state. The display of at least one object in the first group of objects may be controlled according to the values of the first group of facial action attributes, and whether to display at least one object in the second group of objects may be controlled according to the values of the second group of facial action attributes or according to the display situation of at least one object in the first group of objects.
As shown in Fig. 14, the virtual object control device 1120 may comprise a facial action mapping device 1410 and a virtual object presentation device 1420.
Optionally, the facial action attributes comprise a first action attribute. In this case, the facial action mapping device 1410 updates the value of the state parameter of the controlled objects in the first group of virtual objects according to the value of the first action attribute, and may update the value of the state parameter of the second group of virtual objects according to the display state of the first group of virtual objects currently displayed on the display screen.
Optionally, the facial action attributes may comprise first and second action attributes. In this case, the facial action mapping device 1410 may update the value of the state parameter of the controlled objects in the first group of virtual objects according to the value of the first action attribute, and may update the value of the state parameter of the second group of virtual objects according to the value of the second action attribute.
Specifically, one facial action attribute may be mapped to one state parameter of a virtual object. For example, the user's eye open/close degree or mouth open/close degree may be mapped to the size of the virtual object, and the size of the virtual object is updated according to the value of the eye or mouth open/close degree. As another example, the user's face pitch degree may be mapped to the vertical display position of the virtual object on the display screen, and the vertical display position is updated according to the value of the face pitch degree. Optionally, the mapping relationship between facial action attributes and the state parameters of virtual objects may be preset.
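A preset mapping of this kind could be represented as a simple table; the attribute and parameter names below are illustrative, not fixed by the disclosure.

    # One facial action attribute drives one virtual-object state parameter.
    ATTRIBUTE_TO_PARAMETER = {
        "mouth_open_degree": "size",    # mouth open/close degree -> object size
        "face_pitch_degree": "pos_y",   # pitch -> vertical display position
        "face_yaw_degree":   "pos_x",   # yaw -> horizontal display position
    }

    def apply_mapping(attribute_values, object_state):
        """Copy each detected attribute value onto its mapped state parameter."""
        for attr, param in ATTRIBUTE_TO_PARAMETER.items():
            if attr in attribute_values:
                object_state[param] = attribute_values[attr]
        return object_state

Keeping the mapping in a table rather than in code makes it straightforward to randomize or reconfigure which facial action drives which parameter between detection sessions.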
The virtual object presentation device 1420 displays the controlled objects on the display screen according to the updated values of their state parameters. Optionally, the virtual object presentation device 1420 also displays the second group of virtual objects on the display screen according to the updated values of their state parameters.
The living body determination device 1130 is configured to determine that the face in the captured image is a living body face in a case where at least some of the controlled objects in the first and second groups of virtual objects coincide in sequence with at least some of the target objects in the first and second groups of virtual objects.
Specifically, the coincidence of a controlled object with a target object may comprise: coinciding in position; coinciding in position with identical size; coinciding in position with identical shape; or coinciding in position with identical color.
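These coincidence variants can be expressed as a single configurable predicate. This is a sketch; the pixel tolerance and the dictionary representation of an object are assumptions.

    def objects_coincide(a, b, tol=5.0, match_size=False,
                         match_shape=False, match_color=False):
        """a, b: dicts with 'pos' (x, y) and optional 'size', 'shape', 'color'."""
        if abs(a["pos"][0] - b["pos"][0]) > tol or abs(a["pos"][1] - b["pos"][1]) > tol:
            return False                       # positions must always coincide
        if match_size and a.get("size") != b.get("size"):
            return False
        if match_shape and a.get("shape") != b.get("shape"):
            return False
        if match_color and a.get("color") != b.get("color"):
            return False
        return True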
The facial action mapping device 1410 and the virtual object presentation device 1420 may perform the various operations in the first and second embodiments described above, which are not repeated here.
In addition, the living body detection apparatuses 1100 and 1200 according to the embodiments of the present disclosure may further comprise a timer for timing the predetermined time period; the timer may also be implemented by the processor 102. The timer may be initialized according to a user input, or automatically when a face is detected in the captured image, or automatically when a predetermined action of the face is detected in the captured image. In this case, the living body determination device 1130 is configured to judge whether, within the predetermined time period, at least some of the controlled objects in the first and second groups of virtual objects coincide in sequence with at least some of the target objects in the first and second groups of virtual objects, and to determine that the face in the captured image is a living body face when they do.
The storage device 1260 is used to store the captured image. In addition, the storage device 1260 also stores the state parameters and state parameter values of the virtual objects, the virtual objects presented by the virtual object presentation device 1420, and the background images to be displayed in the display device 1250.
In addition, the storage device 1260 may store computer program instructions that, when run by the processor 102, implement the living body detection method according to the embodiments of the present disclosure, and/or implement the key point positioning device 1310, the texture information extraction device 1320, and the action attribute determination device 1330 in the living body detection apparatus according to the embodiments of the present disclosure.
In addition, according to an embodiment of the present disclosure, a computer program product is also provided, comprising a computer-readable storage medium on which computer program instructions are stored. The computer program instructions, when run by a computer, can implement the living body detection method according to the embodiments of the present disclosure, and/or can implement all or part of the functions of the key point positioning device, the texture information extraction device, and the action attribute determination device in the living body detection apparatus according to the embodiments of the present disclosure.
According to the living body detection method and apparatus and the computer program product of the embodiments of the present disclosure, by controlling the display of virtual objects based on facial actions and performing living body detection according to that display, attacks in various forms such as photographs, videos, 3D face models, or masks can be effectively guarded against without relying on special hardware devices, thereby reducing the cost of living body detection. Furthermore, by recognizing multiple action attributes in the facial action, multiple state parameters of the virtual objects can be controlled, so that the virtual objects can change their display state in many respects, for example performing complex predetermined actions or achieving a display effect greatly different from the initial one. The accuracy of living body detection can therefore be further improved, and in turn the security of the application scenarios in which the living body detection method and apparatus and the computer program product of the embodiments of the present invention are applied can be improved.
The computer-readable storage medium may be any combination of one or more computer-readable storage media, for example a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
The example embodiments of the present invention described in detail above are merely illustrative and not restrictive. It should be appreciated by those skilled in the art that various modifications, combinations, or sub-combinations may be made to these embodiments without departing from the principles and spirit of the present invention, and that such modifications shall fall within the scope of the present invention.

Claims (20)

1. A living body detection method, comprising:
detecting a facial action from a captured image;
controlling, based on a display state of a first group of virtual objects currently displayed on a display screen and the detected facial action, display of a controlled object in the currently displayed first group of virtual objects, and controlling display of a second group of virtual objects; and
determining that a face in the captured image is a living body face in a case where at least part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in sequence with at least part of the target objects in the first group of virtual objects and the second group of virtual objects.
2. The living body detection method according to claim 1, further comprising:
capturing, in real time, a first image of a predetermined acquisition range as the captured image;
wherein the living body detection method further comprises: capturing, in real time, a second image of the predetermined acquisition range as the captured image when the detected facial action is a predetermined facial action and at least part of the controlled objects has not yet coincided in sequence with at least part of the target objects.
3. The living body detection method according to claim 1, wherein detecting the facial action from the captured image comprises:
locating face key points in the captured image, and/or extracting image texture information from the captured image; and
obtaining values of facial action attributes based on the located face key points and/or the extracted image texture information.
4. The living body detection method according to claim 3, wherein controlling, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected facial action, the display of the controlled object in the currently displayed first group of virtual objects and controlling the display of the second group of virtual objects comprises:
updating a value of a state parameter of the controlled object in the first group of virtual objects according to a value of a first action attribute;
updating a value of a state parameter of the second group of virtual objects according to the display state of the first group of virtual objects currently displayed on the display screen, or according to a value of a second action attribute; and
displaying the controlled object on the display screen according to the updated value of its state parameter, and displaying the second group of virtual objects on the display screen according to the updated value of their state parameter.
5. The living body detection method according to claim 3 or 4, wherein the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects are target objects, or the first subgroup of objects are controlled objects and the second subgroup of objects and the third subgroup of objects are target objects; or
the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects and a fourth subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects and the fourth subgroup of objects are target objects.
6. The living body detection method according to claim 3 or 4, wherein the facial action attributes comprise at least one of: an eye open/close degree, a mouth open/close degree, a face pitch degree, a face yaw degree, a distance between the face and the camera, an eyeball left-right rotation degree, and an eyeball up-down rotation degree.
7. The living body detection method according to claim 3, wherein the first group of objects comprises a first controlled object and a first target object, and the second group of objects comprises a second target object,
wherein, in a case where the first controlled object and the first target object are displayed on the display screen and do not coincide, the display position of the first controlled object is controlled according to the detected facial action;
in a case where the first controlled object and the first target object are displayed on the display screen and coincide, the second target object is displayed;
in a case where the first controlled object and the second target object are displayed on the display screen and do not coincide, the display position of the first controlled object is controlled according to the detected facial action; and
when the first controlled object coincides in sequence with the first target object and the second target object, it is determined that the face in the captured image is a living body face.
8. The living body detection method according to claim 3, wherein the first group of objects comprises a first controlled object and a first target object, and the second group of objects comprises a second controlled object,
wherein, in a case where the first controlled object and the first target object are displayed on the display screen and do not coincide, the display position of the first controlled object is controlled according to the detected facial action;
in a case where the first controlled object and the first target object are displayed on the display screen and coincide, the second controlled object is displayed;
in a case where the second controlled object and the first target object are displayed on the display screen and do not coincide, the display position of the second controlled object is controlled according to the detected facial action; and
when the first controlled object and the second controlled object coincide in sequence with the first target object, it is determined that the face in the captured image is a living body face.
9. The living body detection method according to claim 3, wherein the first group of objects comprises a first controlled object and a first target object, and the second group of objects comprises a second controlled object and a second target object,
wherein, in a case where the first controlled object and the first target object are displayed on the display screen and do not coincide, the display position of the first controlled object is controlled according to the detected facial action;
in a case where the first controlled object and the first target object are displayed on the display screen and coincide, the second controlled object and the second target object are displayed;
in a case where the second controlled object and the second target object are displayed on the display screen and do not coincide, the display position of the second controlled object is controlled according to the detected facial action; and
when the first controlled object coincides with the first target object and the second controlled object coincides with the second target object, it is determined that the face in the captured image is a living body face.
10. The living body detection method according to claim 3, wherein the first group of objects comprises a first controlled object and a first target object, the second group of objects comprises a second controlled object, and the facial action attributes comprise a first action attribute and a second action attribute,
wherein, when the first controlled object and the first target object are displayed on the display screen, the display position of the first controlled object is controlled according to the value of the first action attribute, and the display of the second controlled object is controlled according to the value of the second action attribute; and
when the first controlled object coincides with the first target object and the second controlled object coincides with the first target object, it is determined that the face in the captured image is a living body face.
11. The living body detection method according to claim 1, wherein, in a case where at least part of the controlled objects coincides in sequence with at least part of the target objects within a predetermined time period, it is determined that the face in the captured image is a living body face.
12. A living body detection apparatus, comprising:
one or more processors;
one or more memories; and
computer program instructions stored in the memories, which, when run by the processors, perform the following steps: detecting a facial action from a captured image; controlling, based on a display state of a first group of virtual objects currently displayed in a display device and the detected facial action, display of a controlled object in the currently displayed first group of virtual objects, and controlling display of a second group of virtual objects; and determining that a face in the captured image is a living body face in a case where at least part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in sequence with at least part of the target objects in the first group of virtual objects and the second group of virtual objects.
13. The living body detection apparatus according to claim 12, further comprising:
an image acquisition device for capturing, in real time, a first image of a predetermined acquisition range as the captured image; and
the display device,
wherein the image acquisition device also captures, in real time, a second image of the predetermined acquisition range as the captured image when the detected facial action is a predetermined facial action and at least part of the controlled objects has not yet coincided in sequence with at least part of the target objects.
14. The living body detection apparatus according to claim 12, wherein detecting the facial action from the captured image comprises:
locating face key points in the captured image, and/or extracting image texture information from the captured image; and
obtaining values of facial action attributes based on the located face key points and/or the extracted image texture information.
15. The living body detection apparatus according to claim 14, wherein controlling, based on the display state of the first group of virtual objects currently displayed in the display device and the detected facial action, the display of the controlled object in the currently displayed first group of virtual objects and controlling the display of the second group of virtual objects comprises:
updating a value of a state parameter of the controlled object in the first group of virtual objects according to a value of a first action attribute;
updating a value of a state parameter of the second group of virtual objects according to the display state of the first group of virtual objects currently displayed in the display device, or according to a value of a second action attribute; and
displaying the controlled object in the display device according to the updated value of its state parameter, and displaying the second group of virtual objects in the display device according to the updated value of their state parameter.
16. The living body detection apparatus according to claim 14 or 15, wherein the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects are target objects, or the first subgroup of objects are controlled objects and the second subgroup of objects and the third subgroup of objects are target objects; or
the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects and a fourth subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects and the fourth subgroup of objects are target objects.
17. The living body detection apparatus according to claim 12, wherein the computer program instructions, when run by the processors, further perform the following step: initializing a timer;
wherein determining that the face in the captured image is a living body face in a case where at least part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in sequence with at least part of the target objects in the first group of virtual objects and the second group of virtual objects comprises: determining that the face in the captured image is a living body face when the timer has not exceeded a predetermined time period and at least part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in sequence with at least part of the target objects in the first group of virtual objects and the second group of virtual objects.
18. A computer program product, comprising one or more computer-readable storage media on which computer program instructions are stored, the computer program instructions, when run by a computer, performing the following steps:
detecting a facial action from a captured image;
controlling, based on a display state of a first group of virtual objects currently displayed on a display screen and the detected facial action, display of a controlled object in the currently displayed first group of virtual objects, and controlling display of a second group of virtual objects; and
determining that a face in the captured image is a living body face in a case where at least part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in sequence with at least part of the target objects in the first group of virtual objects and the second group of virtual objects.
19. The computer program product according to claim 18, wherein controlling, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected facial action, the display of the controlled object in the currently displayed first group of virtual objects and controlling the display of the second group of virtual objects comprises:
updating a value of a state parameter of the controlled object in the first group of virtual objects according to a value of a first action attribute;
updating a value of a state parameter of the second group of virtual objects according to the display state of the first group of virtual objects currently displayed on the display screen, or according to a value of a second action attribute; and
displaying the controlled object on the display screen according to the updated value of its state parameter, and displaying the second group of virtual objects on the display screen according to the updated value of their state parameter.
20. The computer program product according to claim 19, wherein the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects are target objects, or the first subgroup of objects are controlled objects and the second subgroup of objects and the third subgroup of objects are target objects; or
the first group of virtual objects comprises a first subgroup of objects and a second subgroup of objects and the second group of virtual objects comprises a third subgroup of objects and a fourth subgroup of objects, wherein the first subgroup of objects and the third subgroup of objects are controlled objects and the second subgroup of objects and the fourth subgroup of objects are target objects.
CN201580000358.7A 2015-06-30 2015-06-30 Living body detection method, equipment and computer program product Pending CN105518715A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/082828 WO2017000217A1 (en) 2015-06-30 2015-06-30 Living-body detection method and device and computer program product

Publications (1)

Publication Number Publication Date
CN105518715A true CN105518715A (en) 2016-04-20

Family

ID=55725029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580000358.7A Pending CN105518715A (en) 2015-06-30 2015-06-30 Living body detection method, equipment and computer program product

Country Status (2)

Country Link
CN (1) CN105518715A (en)
WO (1) WO2017000217A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171211A (en) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 Biopsy method and device
CN111353842A (en) * 2018-12-24 2020-06-30 阿里巴巴集团控股有限公司 Processing method and system of push information
CN116452703B (en) * 2023-06-15 2023-10-27 深圳兔展智能科技有限公司 User head portrait generation method, device, computer equipment and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100592322C * 2008-01-04 2010-02-24 Zhejiang University An automatic computer authentication method for photographic faces and living faces
KR101660838B1 * 2009-04-01 2016-09-28 Samsung Electronics Co., Ltd. Imaging apparatus and control method thereof
CN103400122A * 2013-08-20 2013-11-20 Jiangsu Huishi Software Technology Co., Ltd. Method for rapid living face recognition
CN103593598B * 2013-11-25 2016-09-21 Shanghai Junyu Digital Technology Co., Ltd. Online user authentication method and system based on living body detection and face recognition

Also Published As

Publication number Publication date
WO2017000217A1 (en) 2017-01-05

Similar Documents

Publication Publication Date Title
CN105518582A In-vivo detection method and device, and computer program product
CN105518714A In-vivo detection method and equipment, and computer program product
US10339402B2 (en) Method and apparatus for liveness detection
CN109711243B (en) Static three-dimensional face in-vivo detection method based on deep learning
CN107609383B (en) 3D face identity authentication method and device
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
CN108805047B (en) Living body detection method and device, electronic equipment and computer readable medium
CN107590430A (en) Biopsy method, device, equipment and storage medium
EP3373202B1 (en) Verification method and system
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
CN105518715A (en) Living body detection method, equipment and computer program product
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN110223322B (en) Image recognition method and device, computer equipment and storage medium
CN108875468B (en) Living body detection method, living body detection system, and storage medium
CN105426827A (en) Living body verification method, device and system
CN105518711A (en) In-vivo detection method, in-vivo detection system, and computer program product
CN105718863A (en) Living-person face detection method, device and system
CN108109010A An intelligent AR advertising machine
CN110866454B (en) Face living body detection method and system and computer readable storage medium
CN110633664A (en) Method and device for tracking attention of user based on face recognition technology
WO2020020022A1 (en) Method for visual recognition and system thereof
CN113205057A (en) Face living body detection method, device, equipment and storage medium
CN110287848A (en) The generation method and device of video
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
CN111680546A (en) Attention detection method, attention detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 313, Block A, No. 2, Academy of Sciences South Road, Haidian District, Beijing 100190

Applicant after: MEGVII INC.

Applicant after: Beijing Megvii Technology Co., Ltd.

Address before: Room 313, Block A, No. 2, Academy of Sciences South Road, Haidian District, Beijing 100190

Applicant before: MEGVII INC.

Applicant before: Beijing Aperture Science and Technology Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160420