CN108734057A - Method, apparatus and computer storage medium for liveness detection - Google Patents
- Publication number
- CN108734057A CN108734057A CN201710253338.1A CN201710253338A CN108734057A CN 108734057 A CN108734057 A CN 108734057A CN 201710253338 A CN201710253338 A CN 201710253338A CN 108734057 A CN108734057 A CN 108734057A
- Authority
- CN
- China
- Prior art keywords
- detected
- face
- pictures
- liveness detection
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
An embodiment of the present invention provides a liveness detection method, apparatus and computer storage medium. The method includes: obtaining at least two images to be detected; judging whether each of the at least two images to be detected contains a face; and, if it is determined that each image to be detected contains a face, performing at least one of 3D-shape-based liveness detection, continuity-based liveness detection, gaze-angle-based liveness detection and screen-recapture-based liveness detection, and determining a final liveness detection result. It can thus be seen that embodiments of the present invention can perform, based on at least two images to be detected, at least one of these liveness detection checks to determine the final liveness detection result, effectively defending against attacks such as printed photographs and re-captured static images and ensuring the security of verification.
Description
Technical field
The present invention relates to the field of image recognition, and more particularly to a liveness detection method, apparatus and computer storage medium.
Background technology
At present, liveness detection systems are increasingly applied in fields that require identity verification, such as security, finance and social insurance. For example, when identity verification is performed using a face or the like, attacks such as photographs and screen recapture must be guarded against.
In particular, most online identity-verification procedures require pictures to be uploaded for verification, and current methods cannot defend against attacks such as photographs and screen recapture.
Summary of the invention
The present invention is proposed in view of the above problem. The present invention provides a liveness detection method, apparatus and computer storage medium that can effectively defend against attacks such as photographs and re-captured static images.
According to a first aspect of the present invention, a liveness detection method is provided, including:
obtaining at least two images to be detected;
judging whether each of the at least two images to be detected contains a face;
if one or more of the at least two images to be detected does not contain a face, determining that the result of the liveness detection is non-living;
if it is determined that each image to be detected contains a face, performing, based on the at least two images to be detected, at least one of 3D-shape-based liveness detection, continuity-based liveness detection, gaze-angle-based liveness detection and screen-recapture-based liveness detection, and determining a final liveness detection result according to the result of the at least one liveness detection.
Illustratively, judging whether each of the at least two images to be detected contains a face includes: using a pre-trained face detection algorithm to judge whether a face region exists in each image to be detected.
Illustratively, the 3D-shape judgment includes: according to the at least two images to be detected, performing three-dimensional reconstruction of the face using a pre-trained neural network to obtain a reconstruction result; and using a pre-trained classifier to judge whether the depth information of the face in the reconstruction result is consistent with a real face.
Illustratively, the pre-trained neural network is obtained by training a neural network with gradient descent using an image data set that includes depth information, the image data set including depth information being collected by a depth camera.
Illustratively, before the three-dimensional reconstruction of the face is performed according to the at least two images to be detected using the pre-trained neural network to obtain the reconstruction result, the method further includes: determining that the angle of the face in each of the at least two images to be detected is within a corresponding angular range.
Illustratively, using the pre-trained classifier to judge whether the depth information of the face in the reconstruction result is consistent with a real face includes: using the pre-trained classifier to judge whether the depth information of the frontal face in the reconstruction result is consistent with a real face.
Illustratively, the method further includes: if it is determined that the depth information of the face in the reconstruction result is consistent with a real face, determining that the face in the at least two images to be detected is a living body.
Illustratively, the continuity judgment includes: performing a face continuity judgment, a human-body continuity judgment and a background continuity judgment; and determining the result of the continuity judgment according to the result of the face continuity judgment, the result of the human-body continuity judgment and the result of the background continuity judgment.
Illustratively, the face continuity judgment includes: determining the face recognition features in each of the at least two images to be detected; and, according to the differences between the face recognition features, judging whether the faces in the at least two images to be detected belong to the same person.
Illustratively, the human-body continuity judgment includes: determining the human-body region image at the corresponding position in each of the at least two images to be detected; and, according to the similarity between the human-body region images, judging whether the human-body regions in the at least two images to be detected are consistent.
Illustratively, determining the human-body region image at the corresponding position in each of the at least two images to be detected includes: extracting, immediately below the face, the image in a square region whose size is a fixed proportion of the face size, as the human-body region image.
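As a concrete sketch of the clause above, the square human-body region can be computed from the face bounding box alone. The `(x, y, w, h)` box representation and the proportion value 1.2 are illustrative assumptions, not values given in this document:

```python
def body_region_box(face_box, scale=1.2):
    """Return a square crop box located immediately below a face box.

    face_box is (x, y, w, h) with (x, y) the top-left corner. The side of
    the square is a fixed proportion `scale` of the face width, as the
    clause above describes; the value 1.2 is an illustrative assumption.
    """
    x, y, w, h = face_box
    side = int(w * scale)
    # Centre the square horizontally under the face, starting at the chin.
    bx = x + w // 2 - side // 2
    by = y + h
    return (bx, by, side, side)
```

For a face box of width 80, this yields a 96-pixel square whose top edge starts at the chin, centred under the face.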
Illustratively, the background continuity judgment includes: selecting background images from the same regions in each of the at least two images to be detected; and, according to the similarity between the background images, judging whether the backgrounds in the at least two images to be detected belong to the same scene.
Illustratively, selecting the background images from the same regions in each of the at least two images to be detected includes: randomly selecting background images of a predetermined size and a predetermined quantity from the region outside the face region in each image to be detected.
Illustratively, if the proportion of the predetermined quantity of background images that are judged to belong to the same scene exceeds a predetermined ratio, the background continuity judgment determines that the backgrounds in the at least two images to be detected belong to the same scene.
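The ratio rule in this clause reduces to simple counting. In the sketch below, the per-patch same-scene judgments are assumed to come from some similarity measure; the 0.6 threshold is an illustrative assumption:

```python
def background_consistent(patch_same_scene, predetermined_ratio=0.6):
    """Decide background continuity from per-patch same-scene judgments.

    patch_same_scene is a list of booleans, one per sampled background
    patch, indicating whether that patch was judged to belong to the same
    scene across the images. The 0.6 threshold is an illustrative
    assumption, not a value from this document.
    """
    if not patch_same_scene:
        return False
    ratio = sum(patch_same_scene) / len(patch_same_scene)
    return ratio > predetermined_ratio
```

With 10 sampled patches of which 7 match, the ratio 0.7 exceeds 0.6 and the backgrounds are judged to belong to the same scene.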
Illustratively, the method further includes: if the face continuity judgment determines that the faces in the at least two images to be detected belong to the same person, and/or the human-body continuity judgment determines that the human-body regions in the at least two images to be detected are consistent, and/or the background continuity judgment determines that the backgrounds in the at least two images to be detected belong to the same scene, determining that the face in the at least two images to be detected is a living body.
Illustratively, the gaze-angle judgment includes: determining the gaze angle of the person in each of the at least two images to be detected; and judging whether the gaze angle of the person in each image to be detected satisfies a predefined angular range.
Illustratively, the method further includes: if the gaze-angle judgment determines that the gaze angle of the person in each image to be detected satisfies the predefined angular range, determining that the face in the at least two images to be detected is a living body.
Illustratively, the screen-recapture judgment includes: judging whether the at least two images to be detected are screen recaptures.
Illustratively, the method further includes: if the screen-recapture judgment determines that the at least two images to be detected are not screen recaptures, determining that the face in the at least two images to be detected is a living body.
Illustratively, determining the final liveness detection result according to the result of the at least one liveness detection includes: in the case where the result of the at least one liveness detection is a pass, determining that the final liveness detection result is that the liveness detection passes.
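The aggregation rule of this clause can be sketched as follows; the check names and the dictionary representation are illustrative assumptions, and only the checks actually performed appear in the input:

```python
def final_liveness_result(check_results):
    """Combine the results of the performed liveness checks.

    check_results maps a check name (e.g. '3d_shape', 'continuity',
    'gaze_angle', 'screen_recapture') to True/False for pass/fail. Only
    the checks actually performed are present. The final result is a pass
    only when every performed check passes, as the clause above states.
    """
    return bool(check_results) and all(check_results.values())
```

An empty input (no check performed) is conservatively treated as a fail in this sketch; the document does not address that case.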
In a second aspect, a liveness detection apparatus is provided, including an acquisition module, a judgment module and a determination module.
The acquisition module is configured to obtain at least two images to be detected.
The judgment module is configured to judge whether each of the at least two images to be detected contains a face.
The determination module is configured to: if one or more of the at least two images to be detected does not contain a face, determine that the result of the liveness detection is non-living; and, if it is determined that each image to be detected contains a face, perform, based on the at least two images to be detected, at least one of 3D-shape-based liveness detection, continuity-based liveness detection, gaze-angle-based liveness detection and screen-recapture-based liveness detection, and determine a final liveness detection result according to the result of the at least one liveness detection.
The apparatus described in the second aspect can be used to implement the liveness detection method of the foregoing first aspect and its various examples.
In a third aspect, a liveness detection apparatus is provided, including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method of the first aspect and its examples.
In a fourth aspect, a computer storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method of the first aspect and its examples.
It can thus be seen that the liveness detection method of the embodiments of the present invention can perform, based on at least two images to be detected, at least one of 3D-shape-based liveness detection, continuity-based liveness detection, gaze-angle-based liveness detection and screen-recapture-based liveness detection to determine the final liveness detection result, effectively defending against attacks such as printed photographs and re-captured static images and ensuring the security of verification.
Description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention taken in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention and constitute a part of the specification; together with the embodiments, they serve to explain the present invention and are not to be construed as limiting it. In the drawings, identical reference labels generally denote identical components or steps.
Fig. 1 is a schematic block diagram of an electronic device according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of a liveness detection method according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of the 3D-shape-based liveness detection method according to an embodiment of the present invention;
Fig. 4 is a schematic flow chart of the continuity-based liveness detection method according to an embodiment of the present invention;
Fig. 5 is a schematic flow chart of the gaze-angle-based liveness detection method according to an embodiment of the present invention;
Fig. 6 is a schematic block diagram of a liveness detection apparatus according to an embodiment of the present invention;
Fig. 7 is another schematic block diagram of a liveness detection apparatus according to an embodiment of the present invention.
Detailed description
In order to make the objects, technical solutions and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described herein without creative effort shall fall within the scope of the present invention.
Embodiments of the present invention can be applied to an electronic device. Fig. 1 shows a schematic block diagram of the electronic device of an embodiment of the present invention. The electronic device 10 shown in Fig. 1 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, an image sensor 110 and one or more non-image sensors 114, these components being interconnected through a bus system 112 and/or other forms of connection. It should be noted that the components and structure of the electronic device 10 shown in Fig. 1 are only exemplary and not restrictive; the electronic device may also have other components and structures as needed.
The processor 102 may include a CPU 1021 and a GPU 1022, or other forms of processing units having data-processing capability and/or instruction-execution capability, such as a Field-Programmable Gate Array (FPGA) or an Advanced RISC (Reduced Instruction Set Computer) Machine (ARM), and the processor 102 can control other components in the electronic device 10 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory 1041 and/or non-volatile memory 1042. The volatile memory 1041 may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory 1042 may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 102 can run the program instructions to realize various desired functions. Various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, etc.
The output device 108 can output various information (such as images or sounds) to the outside (for example, to the user), and may include one or more of a display, a loudspeaker, etc.
The image sensor 110 can capture images desired by the user (such as photographs, videos, etc.) and store the captured images in the storage device 104 for use by other components.
Note that the components and structure of the electronic device 10 shown in Fig. 1 are only exemplary; although the electronic device 10 shown in Fig. 1 includes a plurality of different devices, some of them may be unnecessary and the quantity of some of them may be larger, etc., as needed; the present invention does not limit this.
Embodiments of the present invention can also be applied to a server, which may be referred to as a cloud or a cloud server. The present invention does not limit this.
Fig. 2 is a schematic flow chart of the liveness detection method of an embodiment of the present invention. The method shown in Fig. 2 includes:
S101: obtain at least two images to be detected.
Assume that the method shown in Fig. 2 is executed by an electronic device having an image acquisition device and an image presentation device. The image acquisition device may be a camera or a video camera, etc.; the image presentation device may be a screen, such as a liquid crystal display or a touch screen.
In S101, the at least two images to be detected may be acquired by the image acquisition device; alternatively, a segment of video may be acquired by the image acquisition device and the at least two images to be detected obtained from that video. Illustratively, the at least two images to be detected may be at least two different frames of the video.
Assume the at least two images to be detected are N images, where N is a positive integer greater than or equal to 2. Then, in S101, N pictures may be taken by a camera; alternatively, a video may be shot by a video camera and N frames captured from it, for example N consecutive frames or N frames taken at certain intervals.
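Selecting the N frames from a captured video reduces to index arithmetic. The sketch below assumes frames are indexed from 0 and that an interval of `None` means consecutive frames; the interval value in the usage note is an arbitrary example:

```python
def sample_frame_indices(total_frames, n, interval=None):
    """Choose which N frames of a video to use as images to be detected.

    With interval=None the first N consecutive frames are used; otherwise
    frames are taken every `interval` frames, as the passage above allows.
    Returns a list of frame indices, raising if the video is too short.
    """
    step = 1 if interval is None else interval
    needed = (n - 1) * step + 1  # index of the last sampled frame, plus one
    if total_frames < needed:
        raise ValueError("video too short for requested sampling")
    return [i * step for i in range(n)]
```

For a 100-frame clip and N=2 with an interval of 30 frames, this selects frames 0 and 30.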
Illustratively, before S101, the electronic device may present a prompt message so that the user performs the corresponding operation according to the prompt, enabling the image acquisition device to acquire the at least two images to be detected.
Taking N=2 as an example, the prompt may first instruct the user to face the screen with his or her gaze fixed on the screen, so as to acquire the first image to be detected. A subsequent prompt may then instruct the user to turn the face by a certain angle (to the left or right) while keeping the gaze fixed on the screen, so as to acquire the second image to be detected.
Optionally, each of the at least two images to be detected contains a face, the faces in different images to be detected have different angles relative to the screen, and the gaze of the person in every image to be detected is fixed on the screen.
The angle in the embodiments of the present invention may refer to the angle between the plane of the face and the screen (image acquisition device); it may also be understood as the angle between the person's straight-ahead line of sight at eye level and the facing direction of the screen (image acquisition device), or as the angle between the line connecting the person's two eyes and the screen (image acquisition device), etc.
S102: judge whether each of the at least two images to be detected contains a face.
Illustratively, a pre-trained face detection algorithm is used to judge whether a face region exists in each image to be detected. If a face region is determined to exist in a given image to be detected, it can be determined that that image contains a face.
The pre-trained face detection algorithm may also be referred to as a face detector. Before S102, the face detection algorithm can be obtained by training. For example, it can be obtained by machine learning based on multiple pictures containing faces; for instance, multiple pictures containing faces can be input to a neural network, which learns the face detection algorithm. Here, the face regions in the multiple pictures containing faces may be indicated by annotated key points.
S1031: if one or more of the at least two images to be detected does not contain a face, the result of the liveness detection is non-living.
It can be understood that if it is determined in S102 that one or several of the at least two images to be detected do not contain a face, the liveness detection process cannot be passed and warning information can be issued. The warning information may indicate that the result of the liveness detection is non-living; alternatively, it may indicate that detection failed, so that the at least two images to be detected are re-acquired, or the one or several images not containing a face are re-acquired.
For example, if it is determined that the i-th of the N images to be detected does not contain a face, i.e. the face region in the i-th image cannot be determined, warning information can be issued so that the N images to be detected (or the i-th image) are re-acquired.
S1032: if it is determined that each image to be detected contains a face, perform, based on the at least two images to be detected, at least one of 3D-shape-based liveness detection, continuity-based liveness detection, gaze-angle-based liveness detection and screen-recapture-based liveness detection, and determine the final liveness detection result according to the result of the at least one liveness detection.
If more than one of the 3D-shape, continuity, gaze-angle and screen-recapture liveness detections is to be performed in S1032, their order of execution is not limited; for example, they may be performed sequentially, or in parallel at the same time.
It can be understood that the final liveness detection result is determined to be a pass in the case where the result of the at least one performed liveness detection, among the 3D-shape, continuity, gaze-angle and screen-recapture liveness detections, is a pass.
The 3D-shape-based, continuity-based, gaze-angle-based and screen-recapture-based liveness detections are described separately below.
As shown in Fig. 3, the 3D-shape judgment may include:
S302: according to the at least two images to be detected, perform three-dimensional reconstruction of the face using a pre-trained neural network to obtain a reconstruction result.
S303: use a pre-trained classifier to judge whether the depth information of the face in the reconstruction result is consistent with a real face.
Optionally, as shown in Fig. 3, the method may further include, before S302:
S301: determine that the angle of the face in each of the at least two images to be detected is within a corresponding angular range.
Specifically, a pre-trained face pose detection algorithm (or face pose detector) can be used to obtain the angle of the face in each image to be detected, and it is then judged whether each of those angles is within its corresponding angular range.
If the required angle of the face in the i-th of the N images to be detected is βi degrees, it is judged whether the angle of the face in the i-th image is within the angular range βi−θ to βi+θ, where θ denotes the allowed error.
For example, taking N=2, suppose the required angle of the face in the first image to be detected is 0 degrees and the required angle of the face in the second image to be detected is 20 degrees. It may then be judged whether the angle of the face in the first image is within the range of −5 to 5 degrees, and whether the angle of the face in the second image is within the range of 15 to 25 degrees, the allowed error being 5 degrees.
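The range test of S301 with allowed error θ is a simple comparison per image. In the sketch below, `measured` and `required` are assumed to be equal-length lists of angles in degrees:

```python
def face_angles_in_range(measured, required, theta=5.0):
    """Check S301: each face angle must lie within its required range.

    measured[i] is the detected face angle of the i-th image, required[i]
    the angle the prompt asked for (beta_i), and theta the allowed error,
    5 degrees as in the example above. Returns True only if every image's
    angle lies within [beta_i - theta, beta_i + theta].
    """
    return all(r - theta <= m <= r + theta
               for m, r in zip(measured, required))
```

With required angles of 0 and 20 degrees, measured angles of 2 and 18.5 pass, while 2 and 27 fail the second range.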
It can be understood that if the angle of the face in one or several of the at least two images to be detected is determined not to be within the corresponding angular range, the liveness detection process cannot be passed and warning information can be issued, so that the at least two images to be detected are re-acquired, or the one or several images whose face angle is out of range are re-acquired.
For example, if it is determined that the angle of the face in the i-th of the N images to be detected is not within its corresponding angular range, warning information can be issued so that the N images to be detected (or the i-th image) are re-acquired.
Illustratively, in S302, three-dimensional reconstruction may be performed, according to the at least two images to be detected and using the pre-trained neural network, on the face in a first image of the at least two images to be detected, the reconstruction result including the depth information of the face in that first image.
That is, the three-dimensional reconstruction can be based only on the face in the first image, without reconstructing every image to be detected, which ensures processing efficiency. The first image may be any one of the at least two images to be detected. Optionally, the first image may be the one of the at least two images in which the angle of the face is smallest; that is, the angle of the face in the first image is smaller than the angle of the face in any other image. For example, if the face in the first image is a frontal face, the reconstruction result in S302 includes the depth information of the frontal face.
For example, a pre-trained Convolutional Neural Network (CNN) can be used, with the RGB (red-green-blue) pixel values of the at least two images to be detected as its input, to predict the relative depth information of the face region in the first image to be detected.
Illustratively, the pre-trained neural network is obtained by training a neural network with gradient descent using an image data set that includes depth information, the data set being collected by a depth camera. Before the method of the present invention is performed, a depth camera can be used to collect a large amount of image data containing depth information as the image data set, or an image data set containing depth information can be obtained from other databases; methods such as gradient descent are then used to adjust the weights of the neural network so as to train the convolutional neural network. The specific training process may refer to related content in the prior art and is not repeated here.
It can be understood that if no depth information can be obtained in S302, or the obtained depth information is 0, the liveness detection process cannot be passed and warning information can be issued so that the at least two images to be detected are re-acquired.
Illustratively, the reconstruction result includes the depth information of the face in the first image to be detected. Then, in S303, a pre-trained classifier can be used to judge whether the depth information of the face in the first image conforms to the depth information of a real face. For example, a pre-trained classifier can be used to judge whether the depth information of the frontal face in the reconstruction result is consistent with a real face.
The classifier may be a CNN or a Support Vector Machine (SVM), etc. Illustratively, a depth camera or a three-dimensional face data set can be used beforehand to collect a large amount of image and depth data, and the classifier is then trained by the method of machine learning.
If it is determined that the depth information of the face in the reconstruction result is consistent with a real face, it is determined that the face in the at least two images to be detected is a living body. For example, if S303 determines by judgment that the depth information of the face in the first image conforms to the depth information of a real face, this shows that the detected person in the at least two images to be detected has a 3D shape consistent with a face. Conversely, it can be understood that if the depth information of the face in the first image does not conform to the depth information of a real face, the liveness detection process cannot be passed and warning information can be issued so that the at least two images to be detected are re-acquired.
Judged by 3D shape shown in Fig. 3, depth information can be based on and carry out In vivo detection, can effectively defend to beat
Print the attacks such as photo, static images reproduction.
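The depth check itself is delegated to a pre-trained classifier, which is not reproduced in the patent. As a rough illustration of why depth separates live faces from prints, the following sketch fits a plane to a face depth map and treats substantial off-plane relief as evidence of a live face; a printed photo or screen yields a near-planar depth map. The function name, the plane-fit heuristic, and the threshold are illustrative assumptions, not the patent's classifier.

```python
import numpy as np

def depth_looks_live(depth: np.ndarray, residual_thresh: float = 2.0) -> bool:
    """Heuristic stand-in for the pre-trained depth classifier:
    fit a plane to the face depth map by least squares and treat large
    residuals (genuine facial relief) as evidence of a live 3D face.
    A flat print produces a near-planar map with tiny residuals."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    residual = depth.ravel() - A @ coeffs
    return bool(residual.std() > residual_thresh)
```

A real system would train the classifier on depth-camera data as the patent describes; this plane-residual test only conveys the intuition.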
As shown in Fig. 4, the continuity judgement may include:
S401, performing a face continuity judgement, a human-body continuity judgement, and a background continuity judgement.
S402, determining the result of the continuity judgement according to the result of the face continuity judgement, the result of the human-body continuity judgement, and the result of the background continuity judgement.
Illustratively, the face continuity judgement in S401 may be as shown in S1 below, the human-body continuity judgement as shown in S2 below, and the background continuity judgement as shown in S3 below.
S1: determining a face recognition feature in each of the at least two pictures to be detected; and judging, according to the differences between the face recognition features, whether the faces in the at least two pictures to be detected belong to the same person.
For the face region in each picture to be detected, a pre-trained neural network may be used to compute its face recognition feature. The face recognition feature may be expressed in the form of a tensor.
For example, for N pictures to be detected, N face recognition features can be obtained, that is, at least two face recognition features in one-to-one correspondence with the at least two pictures to be detected. The N face recognition features can then be compared. For example, a reference face recognition feature may first be determined; if the differences between each of the N face recognition features and the reference feature are all less than or equal to some predetermined threshold, the S1 judgement passes. If the difference between the i-th face recognition feature and the reference feature exceeds the predetermined threshold, the face in the i-th picture to be detected does not belong to the same person as the other faces, and the S1 judgement fails. The reference face recognition feature may be one of the N face recognition features, or may be an average face recognition feature obtained from the N face recognition features. Here, "the S1 judgement passes" means: it is determined that the faces in the at least two pictures to be detected belong to the same person.
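The comparison against a reference feature described for S1 can be sketched as follows. The function name `same_person`, the use of the mean embedding as the reference, and the 0.6 distance threshold are illustrative assumptions; the features themselves would come from the pre-trained network.

```python
import numpy as np

def same_person(embeddings: np.ndarray, max_dist: float = 0.6) -> bool:
    """Sketch of the S1 check: `embeddings` is an (N, D) array of face
    recognition features, one per picture to be detected. The reference
    feature is taken as the average embedding; every embedding must lie
    within `max_dist` of it for the faces to count as the same person."""
    ref = embeddings.mean(axis=0)                 # average reference feature
    dists = np.linalg.norm(embeddings - ref, axis=1)
    return bool(np.all(dists <= max_dist))
```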
S2: determining, in each of the at least two pictures to be detected, a human-body region image at a corresponding position; and judging, according to the similarity between the human-body region images, whether the human-body regions in the at least two pictures to be detected are consistent.
Illustratively, "human-body region images at corresponding positions" refers to images of the same region of the human body, for example, the human-body region image centred on the neck.
For the face region in each picture to be detected, a human-body region image may be selected at a first position outside the face region. Illustratively, a human-body region image may be obtained below the face region according to the position and size of that face region. For example, the image in a square region directly below the face, whose side length is a fixed proportion of the face size, may be extracted as the human-body region image. The human-body region images extracted from the at least two pictures to be detected may then be input into a pre-trained neural network to judge whether they are sufficiently similar.
For example, for N pictures to be detected, N human-body region images can be obtained, that is, at least two human-body region images in one-to-one correspondence with the at least two pictures to be detected. The N human-body region images can then be compared. For example, a reference human-body region image may first be determined; if the similarities between each of the N human-body region images and the reference image are all greater than or equal to some preset threshold, the S2 judgement passes. If the similarity between the i-th human-body region image and the reference image is less than the preset threshold, the human body in the i-th picture to be detected is inconsistent with the other human bodies, and the S2 judgement fails. The reference human-body region image may be one of the N human-body region images, or may be obtained from the N human-body region images. Here, "the S2 judgement passes" means: it is determined that the human-body regions in the at least two pictures to be detected are consistent.
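The square-crop rule described above might be sketched like this, given a face bounding box in (x, y, w, h) form. The function name and the 1.2 side-length proportion are assumptions for illustration; the patent only says the side is a fixed proportion of the face size.

```python
def body_crop_box(face_box, scale=1.2):
    """Sketch of the S2 body-region selection: given a face bounding box
    (x, y, w, h), return a square crop directly below the face whose
    side is a fixed proportion (`scale`) of the face width."""
    x, y, w, h = face_box
    side = int(w * scale)
    cx = x + w // 2                   # keep the crop centred under the face
    bx = cx - side // 2
    by = y + h                        # start immediately below the face
    return bx, by, side, side
```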
S3: selecting a background image of the same position region in each of the at least two pictures to be detected; and judging, according to the similarity between the background images, whether the backgrounds in the at least two pictures to be detected belong to the same scene.
Specifically, multiple background images from multiple regions of each picture to be detected may be selected, and each background image may be compared for similarity with the background image of the corresponding region in the other pictures to be detected.
Illustratively, background images of a predetermined size and a predetermined quantity may be randomly selected from the region outside the face region in each picture to be detected. If the proportion of the predetermined quantity of background images judged to belong to the same scene exceeds a predetermined ratio, the background continuity judgement determines that the backgrounds in the at least two pictures to be detected belong to the same scene.
For example, M (M being a positive integer) regions may be determined, and for each picture to be detected, M background images in one-to-one correspondence with the M regions are selected. M groups of comparisons can then be carried out, where the j-th group of comparisons is: for the j-th of the M regions, the j-th background image corresponding to that region is obtained from the M background images of each picture to be detected. If the at least two pictures to be detected are N pictures, N background images are obtained in total. These background images can be input into a pre-trained neural network to obtain a similarity value.
Suppose a similarity value of 1 indicates the highest similarity and a value of 0 the lowest. Then, when the similarity value obtained for the j-th of the M regions exceeds a predefined threshold, the background images of the j-th region of the at least two pictures to be detected are sufficiently similar.
If the number or proportion of sufficiently similar regions among the M regions reaches a predefined region-count threshold or a predetermined ratio, it can be determined that the backgrounds in the at least two pictures to be detected belong to the same scene.
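The region-counting decision of S3 can be sketched as follows, assuming the per-region similarity values in [0, 1] have already been produced by the pre-trained network. Both thresholds are assumed placeholders, not values from the patent.

```python
import numpy as np

def same_scene(similarities, sim_thresh=0.8, region_ratio=0.5):
    """Sketch of the S3 decision: `similarities` holds one similarity
    value per compared background region (M values). A region counts as
    matching when its value exceeds `sim_thresh`; the scene counts as
    the same when the fraction of matching regions reaches `region_ratio`."""
    sims = np.asarray(similarities, dtype=float)
    matching = np.count_nonzero(sims > sim_thresh)
    return matching / sims.size >= region_ratio
```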
The continuity judgement process may be configured according to the needs of the business scenario. In scenes with variable backgrounds, such as a railway station, the predefined region-count threshold or predetermined ratio in the S3 judgement may be set smaller, thereby relaxing the pass criterion of the S3 process.
It should be noted that the embodiment of the present invention does not limit the execution order of S1, S2, and S3 in S401. As one implementation, S1, S2, and S3 may be executed in parallel, so as to make full use of the processing capability of the electronic device and improve the processing speed.
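A parallel arrangement of S1, S2, and S3 as suggested here could look like the following sketch, where the three continuity checks are passed in as callables; the patent does not specify concrete function signatures, so `s1`, `s2`, `s3` are stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def continuity_judgement(pictures, s1, s2, s3):
    """Sketch of parallel S401: the face, body, and background
    continuity checks are independent, so they can run concurrently
    and their boolean results be combined afterwards (S402)."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(check, pictures) for check in (s1, s2, s3)]
        results = [f.result() for f in futures]
    return all(results)   # S402: combine the three judgement results
```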
Further, in S402, if the face continuity judgement determines that the faces in the at least two pictures to be detected belong to the same person, and/or the human-body continuity judgement determines that the human-body regions in the at least two pictures to be detected are consistent, and/or the background continuity judgement determines that the backgrounds in the at least two pictures to be detected belong to the same scene, it is determined that the faces in the at least two pictures to be detected are a live body.
As shown in Fig. 5, the sight-angle judgement may include:
S501, determining the angle of the sight of the person in each of the at least two pictures to be detected;
S502, judging whether the angle of the sight of the person in each picture to be detected meets a predefined angle range.
If the sight-angle judgement determines that the angle of the sight of the person in each picture to be detected meets the predefined angle range, it is determined that the faces in the at least two pictures to be detected are a live body.
Specifically, for each of the at least two pictures to be detected, a pre-trained classifier may be used to compute the angle of the eyes' line of sight relative to the screen. From the value of this angle, it can be judged whether the eyes are looking at the screen. For example, if the computed angle between the gaze direction and the direction from the eyes to the camera is within a predefined angle range (e.g. 10 degrees), the eyes are looking at the screen.
This process confirms that the eyes are looking at the screen by determining the angle of the person's sight, and can thus defend against attacks that rotate a face with a simple 3D model viewer.
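The angle test between the gaze direction and the eye-to-camera direction can be sketched as a vector-angle comparison. The gaze vectors themselves would come from the pre-trained classifier, which is not reproduced here; the 10-degree default follows the patent's example.

```python
import math

def gazing_at_screen(gaze_vec, camera_vec, max_deg=10.0):
    """Sketch of the S501/S502 check: pass when the angle between the
    estimated gaze direction and the eye-to-camera direction is within
    the predefined range."""
    dot = sum(g * c for g, c in zip(gaze_vec, camera_vec))
    ng = math.sqrt(sum(g * g for g in gaze_vec))
    nc = math.sqrt(sum(c * c for c in camera_vec))
    # clamp to [-1, 1] to guard against floating-point drift in acos
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (ng * nc)))))
    return angle <= max_deg
```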
The screen-recapture judgement may include: judging whether the at least two pictures to be detected are screen recaptures.
If the screen-recapture judgement determines that the at least two pictures to be detected are not screen recaptures, it is determined that the faces in the at least two pictures to be detected are a live body.
Specifically, for each of the at least two pictures to be detected, a pre-trained neural network may be used to judge whether the picture is a screen recapture according to information such as sharpness, screen reflections, moiré patterns, and screen borders. This process can effectively defend against attacks carried out with screen recaptures.
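The patent delegates recapture detection to a trained network; as a hand-crafted illustration of one of the listed cues (moiré), the following sketch measures the fraction of spectral energy at high spatial frequencies, which a recaptured screen tends to inflate. The 0.25 cutoff and any decision threshold applied to the returned ratio are assumptions.

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Simple stand-in for one recapture cue: moiré patterns add energy
    at high spatial frequencies, so measure the fraction of spectral
    energy beyond a normalised frequency radius `cutoff`."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot((ys - h / 2) / h, (xs - w / 2) / w)  # normalised radius
    total = spec.sum()
    return float(spec[r > cutoff].sum() / total) if total else 0.0
```

A production check would combine several such cues (sharpness, reflections, borders) inside the trained network rather than threshold a single hand-crafted statistic.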
It can be seen that the liveness detection of S1032 can be carried out by at least one of the three-dimensional shape judgement, the continuity judgement, the sight-angle judgement, and the screen-recapture judgement. For example, for scenes with a higher security level, the liveness detection process may be required to pass the three-dimensional shape judgement, the continuity judgement, the sight-angle judgement, and the screen-recapture judgement simultaneously. Specifically, only if all of the three-dimensional shape judgement, the continuity judgement, the sight-angle judgement, and the screen-recapture judgement pass can it be determined that the faces in the at least two pictures to be detected are a live body. This effectively prevents attacks such as printed photographs, 3D models, and screen recaptures, thereby effectively ensuring the security of verification.
Fig. 6 is a schematic block diagram of the liveness detection device of an embodiment of the present invention. The device 60 shown in Fig. 6 includes: an acquisition module 601, a judgement module 602, and a determination module 603.
The acquisition module 601 is configured to obtain at least two pictures to be detected.
The judgement module 602 is configured to judge whether each of the at least two pictures to be detected includes a face.
The determination module 603 is configured to: if the judgement module 602 determines that one or more of the at least two pictures to be detected do not include a face, the result of the liveness detection is non-live; if the judgement module 602 determines that each picture to be detected includes a face, perform, based on the at least two pictures to be detected, at least one liveness detection among three-dimensional shape judgement liveness detection, continuity judgement liveness detection, sight-angle judgement liveness detection, and screen-recapture judgement liveness detection, and determine the final liveness detection result according to the result of the at least one liveness detection.
Illustratively, the judgement module 602 is specifically configured to judge, using a pre-trained face detection algorithm, whether a face region exists in each picture to be detected.
Illustratively, as shown in Fig. 7, the determination module 603 may include a three-dimensional shape judgement liveness detection submodule 6031, a continuity judgement liveness detection submodule 6032, a sight-angle judgement liveness detection submodule 6033, and a screen-recapture judgement liveness detection submodule 6034.
Illustratively, the three-dimensional shape judgement liveness detection submodule 6031 includes a three-dimensional reconstruction unit 701 and a first judgement unit 702.
The three-dimensional reconstruction unit 701 is configured to perform, according to the at least two pictures to be detected and using a pre-trained neural network, three-dimensional reconstruction of the face to obtain a reconstruction result.
The first judgement unit 702 is configured to judge, using a pre-trained classifier, whether the depth information of the face in the reconstruction result is consistent with a real human face.
Illustratively, the pre-trained neural network is obtained by training a neural network using an image data set containing depth information and a gradient-descent method, the image data set containing depth information being collected by a depth camera.
Illustratively, the three-dimensional shape judgement liveness detection submodule 6031 further includes a first determination unit configured to determine that the angle of the face in the at least two pictures to be detected is within a corresponding angle range.
Illustratively, the first judgement unit 702 is specifically configured to judge, using a pre-trained classifier, whether the depth information of the frontal face in the reconstruction result is consistent with a real human face.
Illustratively, the first determination unit in the three-dimensional shape judgement liveness detection submodule 6031 may be further configured to: if it is determined that the depth information of the face in the reconstruction result is consistent with a real human face, determine that the faces in the at least two pictures to be detected are a live body.
Illustratively, the continuity judgement liveness detection submodule 6032 may include a second judgement unit 801 and a second determination unit 802.
The second judgement unit 801 is configured to perform a face continuity judgement, a human-body continuity judgement, and a background continuity judgement; and
the second determination unit 802 is configured to determine the result of the continuity judgement according to the result of the face continuity judgement, the result of the human-body continuity judgement, and the result of the background continuity judgement.
The second judgement unit 801 includes a face continuity judgement subunit, a human-body continuity judgement subunit, and a background continuity judgement subunit.
The face continuity judgement subunit is configured to determine a face recognition feature in each of the at least two pictures to be detected, and judge, according to the differences between the face recognition features, whether the faces in the at least two pictures to be detected belong to the same person.
The human-body continuity judgement subunit is configured to determine, in each of the at least two pictures to be detected, a human-body region image at a corresponding position, and judge, according to the similarity between the human-body region images, whether the human-body regions in the at least two pictures to be detected are consistent.
Optionally, the human-body continuity judgement subunit is specifically configured to extract, as the human-body region image, the image in a square region directly below the face whose side length is a fixed proportion of the face size.
The background continuity judgement subunit is configured to select a background image of the same position region in each of the at least two pictures to be detected, and judge, according to the similarity between the background images, whether the backgrounds in the at least two pictures to be detected belong to the same scene.
Illustratively, the background continuity judgement subunit is specifically configured to randomly select, from the region outside the face region in each picture to be detected, background images of a predetermined size and a predetermined quantity. Optionally, if the proportion of the predetermined quantity of background images judged to belong to the same scene exceeds a predetermined ratio, the background continuity judgement determines that the backgrounds in the at least two pictures to be detected belong to the same scene.
Illustratively, the continuity judgement liveness detection submodule 6032 may be configured to: if the face continuity judgement determines that the faces in the at least two pictures to be detected belong to the same person, and/or the human-body continuity judgement determines that the human-body regions in the at least two pictures to be detected are consistent, and/or the background continuity judgement determines that the backgrounds in the at least two pictures to be detected belong to the same scene, determine that the faces in the at least two pictures to be detected are a live body.
Illustratively, the sight-angle judgement liveness detection submodule 6033 may include:
a third determination unit 901, configured to determine the angle of the sight of the person in each of the at least two pictures to be detected; and
a third judgement unit 902, configured to judge whether the angle of the sight of the person in each picture to be detected meets a predefined angle range.
Illustratively, the third determination unit 901 may be further configured to: if the sight-angle judgement determines that the angle of the sight of the person in each picture to be detected meets the predefined angle range, determine that the faces in the at least two pictures to be detected are a live body.
Illustratively, the screen-recapture judgement liveness detection submodule 6034 may be configured to judge whether the at least two pictures to be detected are screen recaptures.
Illustratively, the screen-recapture judgement liveness detection submodule 6034 may be further configured to: if the screen-recapture judgement determines that the at least two pictures to be detected are not screen recaptures, determine that the faces in the at least two pictures to be detected are a live body.
Illustratively, the determination module 603 is specifically configured to: when the results of the at least one liveness detection are all passes, determine that the final liveness detection result is that the liveness detection passes.
The device 60 shown in Fig. 6 or Fig. 7 is configured to implement the aforementioned liveness detection method shown in Fig. 2 to Fig. 5.
In addition, an embodiment of the present invention further provides another liveness detection device, including a memory, a processor, and a computer program stored on the memory and run on the processor, wherein the processor, when executing the program, implements the steps of the method shown in Fig. 2 to Fig. 5. For example, the liveness detection device is a computer device.
In addition, an embodiment of the present invention further provides an electronic device, which may include the device 60 shown in Fig. 6 or Fig. 7. The electronic device can implement the aforementioned liveness detection method shown in Fig. 2 to Fig. 5.
In addition, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the method shown in Fig. 2 to Fig. 5 can be implemented. For example, the computer storage medium is a computer-readable storage medium.
The liveness detection process in the embodiments of the present invention can effectively prevent attacks such as printed photographs, 3D models, and screen recaptures, thereby effectively ensuring the security of verification.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. A person of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to fall within the scope of the present invention claimed in the appended claims.
A person of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention the features of the present invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that fewer than all features of a single disclosed embodiment may be used to solve the corresponding technical problem. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where features are mutually exclusive, any combination may be used to combine all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules of the device according to embodiments of the present invention. The present invention may also be implemented as a program of a device (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The present invention may be implemented by means of hardware including several different elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order; these words may be interpreted as names.
The above is merely the specific embodiments of the present invention, or an explanation of the specific embodiments, and the protection scope of the present invention is not limited thereto. Any change or substitution that can be easily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (22)
1. A method of liveness detection, characterized in that it comprises:
obtaining at least two pictures to be detected;
judging whether each of the at least two pictures to be detected includes a face;
if one or more of the at least two pictures to be detected do not include a face, the result of the liveness detection being non-live;
if it is determined that each picture to be detected includes a face, performing, based on the at least two pictures to be detected, at least one liveness detection among three-dimensional shape judgement liveness detection, continuity judgement liveness detection, sight-angle judgement liveness detection and screen-recapture judgement liveness detection, and determining a final liveness detection result according to the result of the at least one liveness detection.
2. The method of claim 1, characterized in that the judging whether each of the at least two pictures to be detected includes a face comprises:
judging, using a pre-trained face detection algorithm, whether a face region exists in each picture to be detected.
3. The method of claim 1, characterized in that the three-dimensional shape judgement comprises:
performing, according to the at least two pictures to be detected and using a pre-trained neural network, three-dimensional reconstruction of the face to obtain a reconstruction result;
judging, using a pre-trained classifier, whether the depth information of the face in the reconstruction result is consistent with a real human face.
4. The method of claim 3, characterized in that the pre-trained neural network is obtained by training a neural network using an image data set containing depth information and a gradient-descent method, the image data set containing depth information being collected by a depth camera.
5. The method of claim 3, characterized in that, before performing the three-dimensional reconstruction of the face according to the at least two pictures to be detected using the pre-trained neural network to obtain the reconstruction result, the method further comprises:
determining that the angle of the face in the at least two pictures to be detected is within a corresponding angle range.
6. The method of claim 3, characterized in that the judging, using the pre-trained classifier, whether the depth information of the face in the reconstruction result is consistent with a real human face comprises:
judging, using the pre-trained classifier, whether the depth information of the frontal face in the reconstruction result is consistent with a real human face.
7. The method of claim 3, characterized by further comprising:
if it is determined that the depth information of the face in the reconstruction result is consistent with a real human face, determining that the faces in the at least two pictures to be detected are a live body.
8. the method as described in claim 1, which is characterized in that continuity judgement includes:
Carry out the judgement of face continuity, human body continuity judges and background continuity judges;And
The result and the background continuity that result, the human body continuity judged according to the face continuity judges judge
Result determine the result that the continuity judges.
9. method as claimed in claim 8, which is characterized in that face continuity judgement includes:
Face recognition features in each picture to be detected of at least two pictures to be detected described in determining;
According to the difference between the face recognition features, whether the face at least two pictures to be detected described in judgement is same
One people.
10. method as claimed in claim 8, which is characterized in that human body continuity judgement includes:
The corresponding human region image in position in each picture to be detected of at least two pictures to be detected described in determining;
According to the similarity between the human region image, the human region at least two pictures to be detected described in judgement is
It is no consistent.
11. method as claimed in claim 10, which is characterized in that each of at least two pictures to be detected described in determining waits for
The corresponding human region image in position in picture is detected, including:
The image that extraction counterpart is bold in the square region of small fixed proportion immediately below face is as the human region figure
Picture.
12. The method of claim 8, wherein the background continuity judgment comprises:
selecting a background image from the same position region in each of the at least two pictures to be detected; and
judging, according to the similarity between the background images, whether the backgrounds in the at least two pictures to be detected belong to the same scene.
13. The method of claim 12, wherein selecting the background image from the same position region in each of the at least two pictures to be detected comprises:
randomly selecting a predetermined number of background images of a predetermined size from the region outside the face region in each picture to be detected.
14. The method of claim 13, wherein if the proportion of the predetermined number of background images that are judged to belong to the same scene exceeds a predetermined ratio, the background continuity judgment determines that the backgrounds in the at least two pictures to be detected belong to the same scene.
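Claims 13 and 14 together describe sampling background patches outside the face region and voting on their similarities. A sketch under assumed parameters — the patch size, patch count, similarity measure, and both thresholds are illustrative, since the patent leaves them unspecified:

```python
import random

def sample_patches(width, height, face_box, size=32, count=8, rng=None):
    """Randomly pick top-left corners of size x size background patches
    that do not overlap the face box (fx, fy, fw, fh)."""
    rng = rng or random.Random(0)
    fx, fy, fw, fh = face_box
    patches = []
    while len(patches) < count:
        x = rng.randrange(0, width - size)
        y = rng.randrange(0, height - size)
        # keep only patches lying fully outside the face region
        if x + size <= fx or x >= fx + fw or y + size <= fy or y >= fy + fh:
            patches.append((x, y))
    return patches

def backgrounds_same_scene(patch_similarities, sim_threshold=0.8, ratio_threshold=0.7):
    """Claim-14-style vote: same scene when the fraction of patch pairs
    whose similarity exceeds sim_threshold is above ratio_threshold."""
    hits = sum(1 for s in patch_similarities if s > sim_threshold)
    return hits / len(patch_similarities) > ratio_threshold
```

The per-patch similarity itself could be any image metric (e.g. normalised cross-correlation); only the sampling and the ratio vote are what the claims fix.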
15. The method of claim 8, further comprising:
determining that the face in the at least two pictures to be detected is a live body if the face continuity judgment determines that the faces in the at least two pictures to be detected belong to the same person, and/or the human body continuity judgment determines that the human body regions in the at least two pictures to be detected are consistent, and/or the background continuity judgment determines that the backgrounds in the at least two pictures to be detected belong to the same scene.
16. The method of claim 1, wherein the sight angle judgment comprises:
determining the angle of the line of sight of the person in each of the at least two pictures to be detected; and
judging whether the angle of the line of sight of the person in each picture to be detected falls within a predefined angle range.
17. The method of claim 16, further comprising:
determining that the face in the at least two pictures to be detected is a live body if the sight angle judgment determines that the angle of the line of sight of the person in each picture to be detected falls within the predefined angle range.
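The gaze check of claims 16 and 17 reduces to verifying that the estimated sight angles stay inside a predefined range in every picture. A sketch assuming (yaw, pitch) angles in degrees; the ±15° bounds are an illustrative choice, not from the patent:

```python
def gaze_within_range(gaze_angles, lo=-15.0, hi=15.0):
    """Pass when the (yaw, pitch) gaze angles estimated for every
    picture fall inside the predefined range [lo, hi] degrees."""
    return all(lo <= a <= hi for yaw_pitch in gaze_angles for a in yaw_pitch)

# Two pictures, each with an estimated (yaw, pitch) pair.
looking_at_camera = [(-5.0, 3.0), (10.0, -2.0)]
looking_away = [(-5.0, 3.0), (30.0, -2.0)]  # yaw outside the range
```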
18. The method of claim 1, wherein the screen recapture judgment comprises:
judging whether the at least two pictures to be detected are recaptured from a screen.
19. The method of claim 18, further comprising:
determining that the face in the at least two pictures to be detected is a live body if the screen recapture judgment determines that the at least two pictures to be detected are not recaptured from a screen.
20. The method of claim 1, wherein determining the final liveness detection result according to the result of the at least one liveness detection comprises:
determining that the final liveness detection result is a pass if the result of at least one of the liveness detections is a pass.
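Claim 20's aggregation rule is a disjunction over the individual checks: one passing check suffices for a final pass. A minimal sketch (the check names are illustrative labels, not terms from the patent):

```python
def final_liveness_result(check_results):
    """Final liveness result is a pass when at least one of the
    individual liveness checks passes."""
    return any(check_results.values())

checks = {
    "three_d_structure": False,
    "continuity": True,       # e.g. face/body/background continuity passed
    "gaze_angle": False,
    "screen_recapture": False,
}
```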
21. A liveness detection apparatus, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method of any one of claims 1 to 20.
22. A computer storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method of any one of claims 1 to 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710253338.1A CN108734057A (en) | 2017-04-18 | 2017-04-18 | The method, apparatus and computer storage media of In vivo detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108734057A (en) | 2018-11-02 |
Family
ID=63924213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710253338.1A Pending CN108734057A (en) | 2017-04-18 | 2017-04-18 | The method, apparatus and computer storage media of In vivo detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108734057A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080095680A (en) * | 2007-04-25 | 2008-10-29 | 포항공과대학교 산학협력단 | Method for recognizing face gesture using 3-dimensional cylinder head model |
US8958607B2 (en) * | 2012-09-28 | 2015-02-17 | Accenture Global Services Limited | Liveness detection |
CN104102866A (en) * | 2013-04-15 | 2014-10-15 | 欧姆龙株式会社 | Authentication device and authentication method |
CN105118082A (en) * | 2015-07-30 | 2015-12-02 | 科大讯飞股份有限公司 | Personalized video generation method and system |
CN106557726A (en) * | 2015-09-25 | 2017-04-05 | 北京市商汤科技开发有限公司 | Face identity authentication system with silent living body detection, and method thereof
CN105335722A (en) * | 2015-10-30 | 2016-02-17 | 商汤集团有限公司 | Detection system and detection method based on depth image information |
CN105426827A (en) * | 2015-11-09 | 2016-03-23 | 北京市商汤科技开发有限公司 | Living body verification method, device and system |
CN105718863A (en) * | 2016-01-15 | 2016-06-29 | 北京海鑫科金高科技股份有限公司 | Living-person face detection method, device and system |
CN106067190A (en) * | 2016-05-27 | 2016-11-02 | 俞怡斐 | Fast single-image-based three-dimensional face model generation and substitution method
CN106407914A (en) * | 2016-08-31 | 2017-02-15 | 北京旷视科技有限公司 | Face detection method and device, and remote teller machine system
Non-Patent Citations (6)
Title |
---|
ANDREA LAGORIO ET AL.: "Liveness detection based on 3D face shape analysis", 2013 International Workshop on Biometrics and Forensics (IWBF) *
ANH TUAN TRAN ET AL.: "Regressing Robust and Discriminative 3D Morphable Models with a Very Deep Neural Network", arXiv:1612.04904v1 *
CHRISTOPHER B. CHOY ET AL.: "3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction", European Conference on Computer Vision *
ZHENYAO ZHU ET AL.: "Deep Learning Multi-View Representation for Face Recognition", arXiv:1406.6947v1 *
WU DEFENG: "Computational Intelligence and Its Application in a 3D Surface Scanning Robot System", Dalian Maritime University Press, 30 November 2012 *
XU XIAO: "Research on Live Face Detection Algorithms Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875468B (en) * | 2017-06-12 | 2022-03-01 | 北京旷视科技有限公司 | Living body detection method, living body detection system, and storage medium |
CN108875468A (en) * | 2017-06-12 | 2018-11-23 | 北京旷视科技有限公司 | Living body detection method, living body detection system and storage medium |
CN109829434A (en) * | 2019-01-31 | 2019-05-31 | 杭州创匠信息科技有限公司 | Anti-counterfeiting method and device based on living body texture |
CN110110597A (en) * | 2019-04-02 | 2019-08-09 | 北京旷视科技有限公司 | Living body detection method and device and living body detection terminal |
CN110110597B (en) * | 2019-04-02 | 2021-08-27 | 北京旷视科技有限公司 | Living body detection method and device and living body detection terminal |
CN110520865A (en) * | 2019-06-27 | 2019-11-29 | 深圳市汇顶科技股份有限公司 | Face recognition method and apparatus, and electronic device |
CN110598571A (en) * | 2019-08-15 | 2019-12-20 | 中国平安人寿保险股份有限公司 | Living body detection method, living body detection device and computer-readable storage medium |
CN110688946A (en) * | 2019-09-26 | 2020-01-14 | 上海依图信息技术有限公司 | Public-cloud silent living body detection device and method based on image recognition |
CN111091063A (en) * | 2019-11-20 | 2020-05-01 | 北京迈格威科技有限公司 | Living body detection method, device and system |
CN111091063B (en) * | 2019-11-20 | 2023-12-29 | 北京迈格威科技有限公司 | Living body detection method, device and system |
CN111144425A (en) * | 2019-12-27 | 2020-05-12 | 五八有限公司 | Method and device for detecting screen shot picture, electronic equipment and storage medium |
CN111144425B (en) * | 2019-12-27 | 2024-02-23 | 五八有限公司 | Method and device for detecting shot screen picture, electronic equipment and storage medium |
CN111177681A (en) * | 2019-12-31 | 2020-05-19 | 联想(北京)有限公司 | Identification verification method and device |
WO2021169616A1 (en) * | 2020-02-27 | 2021-09-02 | 深圳壹账通智能科技有限公司 | Method and apparatus for detecting face of non-living body, and computer device and storage medium |
CN111914769B (en) * | 2020-08-06 | 2024-01-26 | 腾讯科技(深圳)有限公司 | User validity determination method, device, computer readable storage medium and equipment |
CN111914769A (en) * | 2020-08-06 | 2020-11-10 | 腾讯科技(深圳)有限公司 | User validity determination method, device, computer readable storage medium and equipment |
CN112906587A (en) * | 2021-02-26 | 2021-06-04 | 上海云从企业发展有限公司 | Data processing method and device, machine readable medium and equipment |
CN113609959A (en) * | 2021-04-16 | 2021-11-05 | 六度云计算有限公司 | Face living body detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734057A (en) | The method, apparatus and computer storage media of In vivo detection | |
CN108319953B (en) | Target object occlusion detection method and device, electronic device and storage medium | |
CN106407914A (en) | Face detection method and device, and remote teller machine system | |
US11238568B2 (en) | Method and system for reconstructing obstructed face portions for virtual reality environment | |
CN105117695B (en) | Living body detection device and living body detection method | |
US10168778B2 (en) | User status indicator of an augmented reality system | |
CN104978548B (en) | Gaze estimation method and device based on a three-dimensional active shape model | |
US20160343168A1 (en) | Virtual personification for augmented reality system | |
US20170177941A1 (en) | Threat identification system | |
US20160341961A1 (en) | Context-based augmented reality content delivery | |
CN108876833A (en) | Image processing method, image processing apparatus and computer readable storage medium | |
CN108804884A (en) | Identity authentication method, device and computer storage media | |
BR112019009219A2 (en) | Facial recognition method, apparatus and electronic device | |
CN110046546A (en) | Adaptive gaze tracking method, device, system and storage medium | |
CN108875452A (en) | Face identification method, device, system and computer-readable medium | |
CN109558764A (en) | Face identification method and device, computer equipment | |
CN109740491A (en) | Human eye gaze recognition method, device, system and storage medium | |
CN108875546A (en) | Face authentication method, system and storage medium | |
CN108932456A (en) | Face identification method, device and system and storage medium | |
CN108875723A (en) | Object detection method, device and system, and storage medium | |
CN108875469A (en) | Living body detection and identity authentication method, device and computer storage medium | |
CN109522790A (en) | Human body attribute recognition approach, device, storage medium and electronic equipment | |
CN108961149A (en) | Image processing method, device and system and storage medium | |
CN109766785A (en) | Face living body detection method and device | |
US20210233314A1 (en) | Systems, Methods, and Media for Automatically Triggering Real-Time Visualization of Physical Environment in Artificial Reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-11-02