CN109766785A - Face liveness detection method and apparatus - Google Patents



Publication number
CN109766785A
Authority
CN
China
Prior art date
Legal status: Granted
Application number
CN201811572285.0A
Other languages
Chinese (zh)
Other versions
CN109766785B (en)
Inventor
侯晓楠
Current Assignee
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date
Filing date
Publication date
Application filed by China Unionpay Co Ltd
Priority to CN201811572285.0A
Publication of CN109766785A
Application granted
Publication of CN109766785B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a face liveness detection method and apparatus. The method includes: obtaining feature vectors of a face to be detected at different moments, and position information of preset key points at those moments; determining a face change degree of the face to be detected from the feature vectors and the position information at the different moments; and, after determining that the face change degree is greater than a preset threshold, determining that the face to be detected passes liveness detection. In this way, whether the face to be detected is a living body can be determined by judging whether the preset key points change position between the different moments. Because a forged face model is static, the liveness detection method provided by the embodiments of the present invention can effectively identify forged face models, improving the security of face recognition and, in turn, the reliability of a face recognition system.

Description

Face liveness detection method and apparatus
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face liveness detection method and apparatus.
Background
At present, biometric identification technology is widely used in the security field and is one of the main means of authenticating user identity. Biometric identification, and face recognition in particular, has been applied in many fields, such as financial payment and access control security. Because face recognition is easy to use, user-friendly, and contactless, it has developed rapidly in recent years.
However, conventional face recognition usually processes only the image captured by the camera, without considering whether the captured subject is a real person. As a result, forged face models such as photographs and masks can pass the checks of a face recognition system, compromising the security of face recognition.
Accordingly, a face liveness detection method is needed to solve the prior-art problem that face recognition cannot identify forged face models, which affects the security of face recognition.
Summary of the invention
Embodiments of the present invention provide a face liveness detection method and apparatus to solve the technical problem that prior-art face recognition cannot identify forged face models, which affects the security of face recognition.
An embodiment of the present invention provides a face liveness detection method, the method comprising:
obtaining feature vectors of a face to be detected at different moments;
obtaining position information of preset key points in the face to be detected at the different moments, the position information being the positions of the preset key points within the face to be detected, and the preset key points being regions capable of characterizing facial expression;
determining a face change degree of the face to be detected according to the feature vectors and the position information at the different moments;
if the face change degree of the face to be detected is greater than a preset threshold, determining that the face to be detected passes liveness detection.
In this way, whether the face to be detected is a living body can be determined by judging whether the preset key points in the face change position between the different moments. Because a forged face model is static, the liveness detection method provided by the embodiments of the present invention can effectively identify forged face models, improving the security of face recognition and, in turn, the reliability of a face recognition system.
In a possible implementation, determining the face change degree of the face to be detected according to the feature vectors and the position information at the different moments comprises:
determining a feature similarity according to the feature vectors at the different moments;
determining a position change degree according to the position information at the different moments;
determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
In a possible implementation, obtaining the feature vectors of the face to be detected at different moments comprises:
obtaining feature vectors of each segmented region of the face to be detected at the different moments, each segmented region being determined according to the facial features of the face;
determining the feature similarity according to the feature vectors at the different moments comprises:
determining a feature similarity of each segmented region according to that region's feature vectors at the different moments;
and determining the face change degree of the face to be detected according to the feature similarity and the position change degree comprises:
determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
By segmenting the face to be detected, the expression sensitivity of each segmented region can be taken into account, improving the accuracy of liveness detection.
In a possible implementation, determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree comprises:
for any preset key point, determining the segmented region to which the preset key point belongs;
determining a face change degree of that segmented region according to the feature similarity of the region and the position change degree of the preset key point;
determining the face change degree of the face to be detected according to the face change degrees of the segmented regions.
In a possible implementation, the segmented regions include a mouth region, a nose region, a cheek region, an eyebrow region, an eye region, and a forehead region.
In a possible implementation, obtaining the position information of the preset key points in the face to be detected at the different moments comprises:
obtaining the position information of the preset key points at the different moments using time-of-flight (TOF) technology;
or
obtaining the position information of the preset key points at the different moments using 3D face reconstruction technology.
Using TOF technology to obtain face data allows the data to be collected without the user's awareness, placing fewer cooperation demands on the user and providing a better user experience.
In a possible implementation, after determining that the face to be detected passes liveness detection, the method further comprises:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector;
and, according to the feature vector corresponding to the face to be detected and the pre-stored feature vectors of at least one previously detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, determining that the face to be detected passes identity verification.
An embodiment of the present invention provides a face liveness detection apparatus, the apparatus comprising:
an obtaining unit, configured to obtain feature vectors of a face to be detected at different moments, and to obtain position information of preset key points in the face to be detected at the different moments, the position information being the positions of the preset key points within the face to be detected, and the preset key points being regions capable of characterizing facial expression;
a processing unit, configured to determine a face change degree of the face to be detected according to the feature vectors and the position information at the different moments, and, if the face change degree of the face to be detected is greater than a preset threshold, to determine that the face to be detected passes liveness detection.
In a possible implementation, the processing unit is specifically configured to:
determine a feature similarity according to the feature vectors at the different moments; determine a position change degree according to the position information at the different moments; and determine the face change degree of the face to be detected according to the feature similarity and the position change degree.
In a possible implementation, the obtaining unit is specifically configured to:
obtain feature vectors of each segmented region of the face to be detected at the different moments, each segmented region being determined according to the facial features of the face;
and the processing unit is specifically configured to:
determine a feature similarity of each segmented region according to that region's feature vectors at the different moments;
and determine the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
In a possible implementation, the processing unit is specifically configured to:
for any preset key point, determine the segmented region to which the preset key point belongs; determine a face change degree of that segmented region according to the feature similarity of the region and the position change degree of the preset key point; and determine the face change degree of the face to be detected according to the face change degrees of the segmented regions.
In a possible implementation, the segmented regions include a mouth region, a nose region, a cheek region, an eyebrow region, an eye region, and a forehead region.
In a possible implementation, the obtaining unit is specifically configured to:
obtain the position information of the preset key points in the face to be detected at the different moments using time-of-flight (TOF) technology;
or
obtain the position information of the preset key points in the face to be detected at the different moments using 3D face reconstruction technology.
In a possible implementation, after determining that the face to be detected passes liveness detection, the processing unit is further configured to:
determine a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and, according to the feature vector corresponding to the face to be detected and the pre-stored feature vectors of at least one previously detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, determine that the face to be detected passes identity verification.
An embodiment of the present application further provides an apparatus having the function of implementing the face liveness detection method described above. The function may be implemented by hardware executing corresponding software. In a possible design, the apparatus includes a processor, a transceiver, and a memory. The memory stores computer-executable instructions; the transceiver enables the apparatus to communicate with other communication entities; and the processor is connected to the memory via a bus. When the apparatus runs, the processor executes the computer-executable instructions stored in the memory, causing the apparatus to perform the face liveness detection method described above.
An embodiment of the present invention further provides a computer storage medium storing a software program that, when read and executed by one or more processors, implements the face liveness detection method described in the various possible implementations above.
An embodiment of the present invention further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the face liveness detection method described in the various possible implementations above.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below.
Fig. 1 is a schematic flowchart of a face liveness detection method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of segmented regions of a face according to an embodiment of the present invention;
Fig. 3a is a schematic diagram of preset key points corresponding to an eye;
Fig. 3b is a schematic diagram of preset key points corresponding to a mouth;
Fig. 4 is a schematic diagram of the membership between preset key points and segmented regions;
Fig. 5 is an overall schematic flowchart of face liveness detection according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of an identity verification process using the liveness detection technology of an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a face liveness detection apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
The application is described in detail below with reference to the accompanying drawings; the specific operations in the method embodiments may also be applied to the apparatus embodiments.
Fig. 1 is a schematic flowchart of a face liveness detection method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step 101: obtain feature vectors of a face to be detected at different moments.
Step 102: obtain position information of preset key points in the face to be detected at the different moments.
Step 103: determine a face change degree of the face to be detected according to the feature vectors and the position information at the different moments.
Step 104: if the face change degree of the face to be detected is greater than a preset threshold, determine that the face to be detected passes liveness detection.
In this way, whether the face to be detected is a living body can be determined by judging whether the preset key points in the face change position between the different moments. Because a forged face model is static, the liveness detection method provided by the embodiments of the present invention can effectively identify forged face models, improving the security of face recognition and, in turn, the reliability of a face recognition system.
Specifically, in steps 101 and 102, "different moments" may refer to two different moments, three different moments, or N different moments (N being an integer greater than 1). For ease of description, two different moments are used as an example below: in step 101, feature vectors of the face to be detected at a first moment and a second moment are obtained; in step 102, position information of the preset key points in the face to be detected at the first moment and the second moment is obtained, the first moment and the second moment being two different moments.
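Under the two-moment example, the flow of steps 101 to 104 can be sketched as follows. This is a minimal sketch under assumptions: the embodiments leave the feature extractor and the change-degree computation open, so the feature vectors are passed in unused and the change-degree function shown (mean key-point displacement) is a hypothetical stand-in.

```python
import numpy as np

def liveness_check(feats, keypoints, threshold, change_degree_fn):
    """Steps 101-104 for two moments: feats and keypoints each hold the
    values at moment 1 and moment 2; the face passes liveness detection
    when the face change degree exceeds the preset threshold."""
    (f1, f2), (p1, p2) = feats, keypoints
    delta = change_degree_fn(f1, f2, p1, p2)   # step 103
    return delta > threshold                   # step 104

def mean_displacement(f1, f2, p1, p2):
    """Hypothetical change degree: mean Euclidean movement of the key points."""
    return float(np.linalg.norm(p1 - p2, axis=1).mean())

pts = np.array([[0.0, 0.0], [1.0, 1.0]])
# A static (forged) face: identical key points at both moments -> fails.
print(liveness_check((None, None), (pts, pts), 0.1, mean_displacement))        # False
# A live face: the key points move between moments -> passes.
print(liveness_check((None, None), (pts, pts + 0.5), 0.1, mean_displacement))  # True
```

Passing the change-degree computation in as a function mirrors the patent's structure, where several alternative formulas (see formulas (1) to (5) below in the source) can fill that slot.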
Based on the above description of the different moments, in step 101, a preset neural network model may extract the features of the face data of the face to be detected at any moment, and the feature vector of the face at that moment is obtained from the extracted features.
Further, the preset neural network model may be of various types, for example a 2D deep neural network model or a 3D deep neural network model, without specific limitation.
Considering that different regions of the face to be detected differ in their sensitivity to facial expression — regions such as the eyes and mouth are relatively sensitive, while regions such as the cheeks and forehead are relatively insensitive — an embodiment of the present invention may divide the face to be detected into multiple segmented regions to improve the accuracy of liveness detection. Each segmented region may be determined according to the facial features of the face. Fig. 2 is a schematic diagram of segmented regions of a face according to an embodiment of the present invention. As shown in Fig. 2, the face may be divided into multiple segmented regions, for example a mouth region, a nose region, a cheek region, an eyebrow region, an eye region, or a forehead region, without specific limitation.
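One way to realize such a segmentation is with per-region bounding boxes. The boxes below are hypothetical normalized coordinates chosen only for illustration — the embodiments determine the regions from the facial-feature layout, not from fixed boxes.

```python
import numpy as np

# Hypothetical segmented regions as (x_min, y_min, x_max, y_max) boxes in
# normalized face coordinates (0,0 = top-left corner, 1,1 = bottom-right).
REGIONS = {
    "forehead": (0.0, 0.00, 1.0, 0.20),
    "eyebrow":  (0.0, 0.20, 1.0, 0.30),
    "eye":      (0.0, 0.30, 1.0, 0.45),
    "nose":     (0.3, 0.45, 0.7, 0.65),
    "cheek":    (0.0, 0.45, 1.0, 0.80),
    "mouth":    (0.2, 0.80, 0.8, 1.00),
}

def crop_region(face, name):
    """Crop one segmented region out of a face image (an H x W array)."""
    x0, y0, x1, y1 = REGIONS[name]
    h, w = face.shape[:2]
    return face[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
```

Each crop can then be fed to the preset neural network model independently to obtain that region's feature vector at each moment.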
Based on the segmented regions of the face shown in Fig. 2, an embodiment of the present invention may also obtain the feature vector of each segmented region of the face to be detected at any moment. Specifically, a preset neural network model may extract the features of the face data of a segmented region at any moment, and the feature vector of that segmented region at that moment is obtained from the extracted features.
In step 102, a preset key point may refer to a region capable of characterizing facial expression. For example, when a person smiles, the eyes usually curve upward, so an eye may correspond to multiple preset key points; Fig. 3a shows an example in which the inner corner of the eye, the outer corner, the center of the upper eyelid, the center of the lower eyelid, and the center of the eyeball serve as the preset key points of the eye. As another example, when a person cries, the mouth usually purses, so the mouth may also correspond to multiple preset key points; Fig. 3b shows an example in which the corners of the mouth, the center of the upper lip, the center of the lower lip, and the front teeth serve as the preset key points of the mouth.
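The Fig. 3a/3b examples can be written down as a small lookup table. The label strings are illustrative names, not identifiers from the patent.

```python
# Preset key points per facial feature, following the Fig. 3a/3b examples.
PRESET_KEYPOINTS = {
    "eye":   ["inner_corner", "outer_corner", "upper_eyelid_center",
              "lower_eyelid_center", "eyeball_center"],
    "mouth": ["left_corner", "right_corner", "upper_lip_center",
              "lower_lip_center", "front_teeth"],
}

def keypoints_for(feature):
    """Return the preset key-point labels that characterize a feature's expression."""
    return PRESET_KEYPOINTS.get(feature, [])
```

A landmark detector would assign each of these labels a coordinate at each moment; only the labels are fixed here.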
In an embodiment of the present invention, there are many ways to obtain the position information of the preset key points in the face to be detected at any moment. One possible implementation uses time-of-flight (TOF) technology. Specifically, TOF technology continuously sends light pulses toward the target and receives the light returned from the object with a sensor; the distance to the object is obtained by measuring the round-trip flight time of the emitted and received light pulses. In an embodiment of the present invention, TOF technology may be applied to the camera to obtain the position information (for example, coordinate data) of the preset key points at any moment. Obtaining face data with TOF technology allows the data to be collected without the user's awareness, placing fewer cooperation demands on the user and providing a better user experience.
In another possible implementation, face reconstruction technology may be used to obtain the position information of the preset key points in the face to be detected at any moment. Specifically, for a captured face image (for example, each frame of a surveillance video), a cascaded regression (CR) method may be used, in which multiple weak regressors are cascaded to form a strong regressor, combined with a deep learning algorithm to achieve end-to-end reconstruction. In this way, given one face image, the 3D model of the face can be output directly, and the position information (for example, coordinate data) of the preset key points at any moment can then be determined from the 3D model of the face.
In other possible implementations, the position information of the preset key points at any moment may be obtained by other methods, for example by manual input from the user to be authenticated, without specific limitation.
Also considering the different sensitivities of different regions of the face to be detected to facial expression, on the basis of the segmented regions shown in Fig. 2, in an embodiment of the present invention, after the position information of the preset key points at any moment is obtained, the segmented region to which each preset key point belongs may further be determined from the positions of the segmented regions. For example, Fig. 4 is a schematic diagram of the membership between preset key points and segmented regions. Suppose the face to be detected has 30 preset key points numbered 1 to 30 as shown in Fig. 4, and the dashed lines in Fig. 4 outline six segmented regions: the forehead region, eyebrow region, eye region, nose region, mouth region, and cheek region. Combining the position information of each preset key point with the positions of the segmented regions, the segmented region to which each preset key point belongs can be determined.
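The membership determination can be sketched as a point-in-box test; the region boxes below are hypothetical normalized coordinates for two of the six Fig. 4 regions, chosen only for illustration.

```python
def region_of(point, regions):
    """Return the name of the first region whose box contains the key point;
    point is (x, y) in the same normalized coordinates as the boxes."""
    x, y = point
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical boxes for two of the six segmented regions.
demo_regions = {
    "eye":   (0.0, 0.30, 1.0, 0.45),
    "mouth": (0.2, 0.80, 0.8, 1.00),
}
```

Applying `region_of` to each of the 30 key-point coordinates groups the points by region, which is what the per-region formulas below operate on.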
In step 103, the face change degree of the face to be detected may be determined from the feature vectors at the different moments and the position information of the preset key points at the different moments. There are many specific ways to determine it. One possible implementation is to determine a feature similarity from the feature vectors of the face at the different moments, determine a position change degree from the position information of the preset key points at the different moments, and determine the face change degree of the face to be detected from the feature similarity and the position change degree.
That is, the face change degree of the face to be detected may be determined according to formula (1):
Δ = λ1·S − λ2·D    formula (1)
In formula (1), Δ is the face change degree of the face to be detected; S is the feature change degree; D is the position change degree; λ1 is the weight corresponding to the feature change degree; λ2 is the weight corresponding to the position change degree.
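Formula (1) translates directly into code. Here the feature change degree S is derived from cosine similarity — one plausible choice, since the embodiments do not prescribe how S is computed — and the weights are placeholders.

```python
import numpy as np

def feature_change_degree(f1, f2):
    """One plausible S: 1 minus the cosine similarity of the two feature vectors,
    so identical vectors give S = 0 and dissimilar vectors give larger S."""
    cos = float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
    return 1.0 - cos

def face_change_degree(S, D, lam1=0.6, lam2=0.4):
    """Formula (1): delta = lambda1 * S - lambda2 * D."""
    return lam1 * S - lam2 * D
```

The signs and weights follow formula (1) as stated; in practice λ1 and λ2 would be tuned so that a live, moving face scores above the preset threshold.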
Considering that, in an embodiment of the present invention, the feature vectors of each segmented region of the face to be detected at the different moments may be obtained by segmenting the face: if what is obtained are the feature vectors of each segmented region at the different moments, the feature similarity of each segmented region may be determined from that region's feature vectors at the different moments; the position change degree of each segmented region may be determined from the position change degrees of the preset key points contained in that region; and the face change degree of each segmented region, and in turn of the face to be detected, may be determined from each region's feature similarity and position change degree.
Further, the position change degree of each segmented region may be determined according to formula (2):
Di = (1/n)·Σj dij    formula (2)
In formula (2), Di is the position change degree of the i-th segmented region, 1 ≤ i ≤ M, where M is the number of segmented regions in the face to be detected and M is an integer greater than 1; dij is the Euclidean distance between the positions of the j-th preset key point in the i-th segmented region at the different moments, 1 ≤ j ≤ n, where n is the number of preset key points in the i-th segmented region and n is an integer greater than 1.
It should be noted that formula (2) is only an example; those skilled in the art may also compute the position change degree of each segmented region in other ways, for example using vectors, without specific limitation.
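A sketch of formula (2), assuming the per-region aggregate is the mean of the key-point displacements dij between the two moments:

```python
import numpy as np

def position_change_degree(pts_t1, pts_t2):
    """Position change degree D_i of one segmented region: the mean Euclidean
    distance d_ij moved by the region's n preset key points between moments.
    pts_t1 and pts_t2 are (n, 2) arrays of key-point coordinates."""
    d = np.linalg.norm(pts_t1 - pts_t2, axis=1)  # one d_ij per key point
    return float(d.mean())
```

A static forged face yields Di = 0 for every region, which is what makes the position term discriminative.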
In turn, the face change degree of each segmented region may be determined from that region's feature change degree and position change degree, according to formula (3):
δi = λ1·Si − λ2·Di    formula (3)
In formula (3), δi is the face change degree of the i-th segmented region, 1 ≤ i ≤ M, where M is the number of segmented regions in the face to be detected and M is an integer greater than 1; Si is the feature change degree of the i-th segmented region; Di is the position change degree of the i-th segmented region; λ1 is the weight corresponding to the feature change degree; λ2 is the weight corresponding to the position change degree.
In turn, the face change degree of the face to be detected may be determined according to formula (4):
Δ = Σi ωi·δi    formula (4)
In formula (4), Δ is the face change degree of the face to be detected; δi is the face change degree of the i-th segmented region, 1 ≤ i ≤ M, where M is the number of segmented regions in the face to be detected and M is an integer greater than 1; ωi is the weight corresponding to the i-th segmented region.
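Formulas (3) and (4) can be sketched together; the default weights are placeholders, since the embodiments leave them to be chosen per region (for example, higher ωi for the expression-sensitive eye and mouth regions).

```python
def region_change_degree(S_i, D_i, lam1=0.6, lam2=0.4):
    """Formula (3): delta_i = lambda1 * S_i - lambda2 * D_i for one region."""
    return lam1 * S_i - lam2 * D_i

def overall_change_degree(deltas, weights):
    """Formula (4): Delta = sum over the M regions of omega_i * delta_i."""
    return sum(w * d for w, d in zip(weights, deltas))
```

The overall Δ is then compared against the preset threshold in step 104.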
It should be noted that formula (4) is only an example; on the basis of the face change degrees of the segmented regions, those skilled in the art may determine the face change degree of the face to be detected in other ways. For example, formula (5) shows another way of determining the face change degree of the face to be detected:
Δ = Σi (Thri − δi)+    formula (5)
In formula (5), Δ is the face change degree of the face to be detected; if δi ≤ Thri, then (Thri − δi)+ = 1; otherwise, (Thri − δi)+ = 0.
It should be noted that, in formula (5), a different threshold Thri is set for each segmented region according to that region's sensitivity to expression change: the higher the sensitivity, the more pronounced the face change, the smaller the similarity, and therefore the smaller the threshold that is set. In other words, if the similarity δi of a segmented region satisfies δi ≤ Thri, the expression in that region has changed; otherwise, the expression in that region has not changed.
In other possible implementations, the feature vectors of the face to be detected at the different moments and the position information of the preset key points at the different moments may also be fed into a pre-trained similarity model to determine the face change degree of the face to be detected, without specific limitation.
In step 104, whether the face to be detected passes liveness detection may be determined by judging whether its face change degree is greater than the preset threshold: if it is, the face to be detected passes liveness detection; otherwise, it does not. The preset threshold may be determined by those skilled in the art based on experience and practical conditions, without specific limitation.
For example, taking the face change degree computed with formula (4): if the face change degree Δ of the face to be detected is greater than the preset threshold, it is determined that the face changed between the different moments, the facial expression change is considered successfully captured, and the person to be detected passes liveness detection; otherwise, it is determined that the face did not change between the different moments, the capture of the facial expression change is considered to have failed, and the person to be detected does not pass liveness detection.
As another example, taking the face change degree computed with formula (5): if Δ ≥ M/2, at least half of the segmented regions of the face to be detected changed between the different moments, the facial expression change is considered successfully captured, and the person to be detected passes liveness detection; otherwise, fewer than half of the segmented regions changed between the different moments, the capture of the facial expression change is considered to have failed, and the person to be detected does not pass liveness detection.
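The formula (5) counting rule and the Δ ≥ M/2 decision can be sketched as follows (the per-region thresholds Thri are placeholders):

```python
def changed_region_count(deltas, thresholds):
    """Formula (5): Delta counts the regions whose similarity delta_i falls at
    or below that region's threshold Thr_i, i.e. whose expression changed."""
    return sum(1 for d, t in zip(deltas, thresholds) if d <= t)

def passes_liveness(deltas, thresholds):
    """Decision rule: pass when at least half of the M segmented regions changed."""
    M = len(deltas)
    return changed_region_count(deltas, thresholds) >= M / 2
```

Lower thresholds for the expression-sensitive regions make those regions harder to count as "changed", matching the per-region tuning described above.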
To present the above face liveness detection method more clearly, the liveness detection process of a face involved in an embodiment of the present invention is described as a whole below with reference to Fig. 5. See the content shown in Fig. 5 for details, which are not repeated here.
In an embodiment of the present invention, after step 104 is performed, face recognition may further be carried out to determine whether the face to be detected passes identity verification. Specifically, face recognition may determine the feature vector corresponding to the face to be detected from the first feature vector and the second feature vector; then, according to the feature vector corresponding to the face to be detected and the pre-stored feature vectors of at least one previously detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, it is determined that the face to be detected passes identity verification.
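The matching step can be sketched as a search over the stored feature vectors; cosine similarity and the 0.8 threshold are assumptions, since the embodiments do not fix the similarity measure.

```python
import numpy as np

def verify_identity(query, gallery, sim_threshold=0.8):
    """Identity verification after liveness detection: the query feature vector
    passes if any stored ('detected') face vector is similar enough to it."""
    q = query / np.linalg.norm(query)
    for vec in gallery:
        v = vec / np.linalg.norm(vec)
        if float(np.dot(q, v)) >= sim_threshold:
            return True
    return False
```

A face thus passes authentication only if it first passes liveness detection and then matches a stored face.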
In other implementations, face recognition may also be performed by identifying the face to be detected using an existing deep-neural-network-based model; this is not specifically limited here.
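To make the matching step concrete, the following is a minimal sketch of identity verification by comparing the feature vector of the face to be detected against pre-stored vectors of already-detected faces; cosine similarity and the 0.8 threshold are illustrative assumptions, since the patent does not fix a particular similarity measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def passes_identity_authentication(probe_vector, stored_vectors, threshold=0.8):
    """Return True if any pre-stored detected-face vector is sufficiently
    similar to the vector of the face to be detected."""
    return any(cosine_similarity(probe_vector, v) >= threshold for v in stored_vectors)
```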
Below, taking identity authentication using liveness detection technology as an example, the identity authentication process using liveness detection technology involved in the embodiment of the present invention is described as a whole with reference to Fig. 6. For details, refer to what is shown in Fig. 6; it is not described in detail again here.
Based on the same inventive concept, Fig. 7 exemplarily shows a schematic structural diagram of a liveness detection apparatus for a face provided by an embodiment of the present invention. As shown in Fig. 7, the apparatus includes an acquiring unit 201 and a processing unit 202, wherein:
the acquiring unit 201 is configured to obtain feature vectors of a face to be detected corresponding to different moments, and to obtain location information of preset key points in the face to be detected corresponding to the different moments, the location information being the positions of the preset key points in the face to be detected, and the preset key points being regions capable of characterizing facial expression; and
the processing unit 202 is configured to determine a face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the location information corresponding to the different moments, and, if the face change degree of the face to be detected is greater than a preset threshold, to determine that the face to be detected passes liveness detection.
In one possible implementation, the processing unit 202 is specifically configured to:
determine a feature similarity according to the feature vectors corresponding to the different moments; determine a location change degree according to the location information corresponding to the different moments; and determine the face change degree of the face to be detected according to the feature similarity and the location change degree.
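One way to combine the two quantities is sketched below, under the assumption of a cosine feature similarity, a mean key-point displacement as the location change degree, and illustrative equal weights; the patent does not prescribe these particular choices:

```python
import math

def face_change_degree(feat_t1, feat_t2, pts_t1, pts_t2, alpha=0.5, beta=0.5):
    """Combine feature dissimilarity across the two moments with the mean
    key-point displacement into a single face change degree.
    alpha/beta are illustrative weights, not taken from the patent."""
    dot = sum(a * b for a, b in zip(feat_t1, feat_t2))
    norm1 = math.sqrt(sum(a * a for a in feat_t1))
    norm2 = math.sqrt(sum(b * b for b in feat_t2))
    feature_similarity = dot / (norm1 * norm2)
    location_change = sum(math.dist(p, q) for p, q in zip(pts_t1, pts_t2)) / len(pts_t1)
    return alpha * (1.0 - feature_similarity) + beta * location_change
```

An identical face at both moments yields a change degree of zero; any key-point motion or feature drift raises it toward the preset threshold.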
In one possible implementation, the acquiring unit 201 is specifically configured to:
obtain the feature vectors of each segmented region of the face to be detected corresponding to the different moments, each segmented region being determined according to the facial features of the face.
The processing unit 202 is specifically configured to:
determine the feature similarity of each segmented region according to the feature vectors of each segmented region corresponding to the different moments; and
determine the face change degree of the face to be detected according to the feature similarity of each segmented region and the location change degree.
In one possible implementation, the processing unit 202 is specifically configured to:
for any preset key point, determine the segmented region to which the preset key point belongs; determine the face change degree of that segmented region according to the feature similarity of the segmented region and the location change degree of the preset key point; and determine the face change degree of the face to be detected according to the face change degrees of the segmented regions.
In one possible implementation, the segmented regions include a mouth region, a nose region, a cheek region, a brow region, an eye region, and a forehead region.
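A minimal sketch of assigning a preset key point to one of these six segmented regions follows; the normalised region centres below are invented for illustration — the patent derives the segmentation from the facial features themselves:

```python
import math

# Illustrative region centres in normalised face coordinates (x, y);
# these values are assumptions, not taken from the patent.
REGION_CENTRES = {
    "forehead": (0.5, 0.10),
    "brows": (0.5, 0.25),
    "eyes": (0.5, 0.35),
    "nose": (0.5, 0.55),
    "cheeks": (0.25, 0.60),
    "mouth": (0.5, 0.80),
}

def region_of_key_point(point):
    """Assign a key point to the nearest region centre, as a stand-in for
    the facial-feature-based segmentation described above."""
    return min(REGION_CENTRES, key=lambda name: math.dist(point, REGION_CENTRES[name]))
```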
In one possible implementation, the acquiring unit 201 is specifically configured to:
obtain the location information of the preset key points in the face to be detected corresponding to the different moments using time-of-flight (TOF) technology;
or
obtain the location information of the preset key points in the face to be detected corresponding to the different moments using 3D face reconstruction technology.
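Once 3D key-point coordinates are available from either source — a TOF depth camera or a 3D face reconstruction — the location change degree reduces to displacement between the two moments. A minimal sketch, where using the mean Euclidean displacement is an illustrative choice rather than the patent's prescription:

```python
import math

def location_change_degree(points_t1, points_t2):
    """Mean 3D displacement of the preset key points between two moments,
    each point being an (x, y, z) tuple as a TOF camera or 3D face
    reconstruction might supply."""
    displacements = [math.dist(p, q) for p, q in zip(points_t1, points_t2)]
    return sum(displacements) / len(displacements)
```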
In one possible implementation, after determining that the face to be detected passes liveness detection, the processing unit 202 is further configured to:
determine a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and, according to the feature vector corresponding to the face to be detected and pre-stored feature vectors corresponding to at least one detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, determine that the face to be detected passes identity authentication.
An embodiment of the present application further provides an apparatus having the function of implementing the liveness detection method for a face described above. The function may be implemented by hardware executing corresponding software. In one possible design, the apparatus includes a processor, a transceiver, and a memory; the memory is configured to store computer-executable instructions, the transceiver is configured to enable the apparatus to communicate with other communication entities, and the processor is connected to the memory by a bus. When the apparatus runs, the processor executes the computer-executable instructions stored in the memory, so that the apparatus performs the liveness detection method for a face described above.
An embodiment of the present invention further provides a computer storage medium storing a software program which, when read and executed by one or more processors, implements the liveness detection method for a face described in the various possible implementations above.
An embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, causes the computer to perform the liveness detection method for a face described in the various possible implementations above.
It should be understood by those skilled in the art that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these modifications and variations.

Claims (16)

1. A liveness detection method for a face, wherein the method comprises:
obtaining feature vectors of a face to be detected corresponding to different moments;
obtaining location information of preset key points in the face to be detected corresponding to the different moments, the location information being the positions of the preset key points in the face to be detected, and the preset key points being regions capable of characterizing facial expression;
determining a face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the location information corresponding to the different moments; and
if the face change degree of the face to be detected is greater than a preset threshold, determining that the face to be detected passes liveness detection.
2. The method according to claim 1, wherein determining the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the location information corresponding to the different moments comprises:
determining a feature similarity according to the feature vectors corresponding to the different moments;
determining a location change degree according to the location information corresponding to the different moments; and
determining the face change degree of the face to be detected according to the feature similarity and the location change degree.
3. The method according to claim 1, wherein obtaining the feature vectors of the face to be detected corresponding to the different moments comprises:
obtaining feature vectors of each segmented region of the face to be detected corresponding to the different moments, each segmented region being determined according to the facial features of the face;
determining the feature similarity according to the feature vectors corresponding to the different moments comprises:
determining the feature similarity of each segmented region according to the feature vectors of each segmented region corresponding to the different moments; and
determining the face change degree of the face to be detected according to the feature similarity and the location change degree comprises:
determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the location change degree.
4. The method according to claim 3, wherein determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the location change degree comprises:
for any preset key point, determining the segmented region to which the preset key point belongs;
determining the face change degree of the segmented region according to the feature similarity of the segmented region to which the key point belongs and the location change degree of the preset key point; and
determining the face change degree of the face to be detected according to the face change degrees of the segmented regions.
5. The method according to claim 3, wherein the segmented regions comprise a mouth region, a nose region, a cheek region, a brow region, an eye region, and a forehead region.
6. The method according to claim 1, wherein obtaining the location information of the preset key points in the face to be detected corresponding to the different moments comprises:
obtaining the location information of the preset key points in the face to be detected corresponding to the different moments using time-of-flight (TOF) technology;
or
obtaining the location information of the preset key points in the face to be detected corresponding to the different moments using 3D face reconstruction technology.
7. The method according to any one of claims 1 to 6, wherein, after determining that the face to be detected passes liveness detection, the method further comprises:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and
according to the feature vector corresponding to the face to be detected and pre-stored feature vectors corresponding to at least one detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, determining that the face to be detected passes identity authentication.
8. A liveness detection apparatus for a face, wherein the apparatus comprises:
an acquiring unit, configured to obtain feature vectors of a face to be detected corresponding to different moments, and to obtain location information of preset key points in the face to be detected corresponding to the different moments, the location information being the positions of the preset key points in the face to be detected, and the preset key points being regions capable of characterizing facial expression; and
a processing unit, configured to determine a face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the location information corresponding to the different moments, and, if the face change degree of the face to be detected is greater than a preset threshold, to determine that the face to be detected passes liveness detection.
9. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
determine a feature similarity according to the feature vectors corresponding to the different moments; determine a location change degree according to the location information corresponding to the different moments; and determine the face change degree of the face to be detected according to the feature similarity and the location change degree.
10. The apparatus according to claim 8, wherein the acquiring unit is specifically configured to:
obtain feature vectors of each segmented region of the face to be detected corresponding to the different moments, each segmented region being determined according to the facial features of the face;
and the processing unit is specifically configured to:
determine the feature similarity of each segmented region according to the feature vectors of each segmented region corresponding to the different moments; and
determine the face change degree of the face to be detected according to the feature similarity of each segmented region and the location change degree.
11. The apparatus according to claim 10, wherein the processing unit is specifically configured to:
for any preset key point, determine the segmented region to which the preset key point belongs; determine the face change degree of the segmented region according to the feature similarity of the segmented region to which the key point belongs and the location change degree of the preset key point; and determine the face change degree of the face to be detected according to the face change degrees of the segmented regions.
12. The apparatus according to claim 10, wherein the segmented regions comprise a mouth region, a nose region, a cheek region, a brow region, an eye region, and a forehead region.
13. The apparatus according to claim 8, wherein the acquiring unit is specifically configured to:
obtain the location information of the preset key points in the face to be detected corresponding to the different moments using time-of-flight (TOF) technology;
or
obtain the location information of the preset key points in the face to be detected corresponding to the different moments using 3D face reconstruction technology.
14. The apparatus according to any one of claims 8 to 13, wherein, after determining that the face to be detected passes liveness detection, the processing unit is further configured to:
determine a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and, according to the feature vector corresponding to the face to be detected and pre-stored feature vectors corresponding to at least one detected face, if it is determined that a face similar to the face to be detected exists among the at least one detected face, determine that the face to be detected passes identity authentication.
15. A computer-readable storage medium, wherein the storage medium stores instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 7.
16. A computer device, comprising:
a memory, configured to store program instructions; and
a processor, configured to call the program instructions stored in the memory and to perform the method according to any one of claims 1 to 7 in accordance with the obtained program instructions.
CN201811572285.0A 2018-12-21 2018-12-21 Living body detection method and device for human face Active CN109766785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811572285.0A CN109766785B (en) 2018-12-21 2018-12-21 Living body detection method and device for human face

Publications (2)

Publication Number Publication Date
CN109766785A true CN109766785A (en) 2019-05-17
CN109766785B CN109766785B (en) 2023-09-01

Family

ID=66450831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811572285.0A Active CN109766785B (en) 2018-12-21 2018-12-21 Living body detection method and device for human face

Country Status (1)

Country Link
CN (1) CN109766785B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458098A (en) * 2019-08-12 2019-11-15 上海天诚比集科技有限公司 A kind of face comparison method of facial angle measurement
CN110728215A (en) * 2019-09-26 2020-01-24 杭州艾芯智能科技有限公司 Face living body detection method and device based on infrared image
CN111274879A (en) * 2020-01-10 2020-06-12 北京百度网讯科技有限公司 Method and device for detecting reliability of in-vivo examination model
CN111783644A (en) * 2020-06-30 2020-10-16 百度在线网络技术(北京)有限公司 Detection method, device, equipment and computer storage medium
CN112132996A (en) * 2019-06-05 2020-12-25 Tcl集团股份有限公司 Door lock control method, mobile terminal, door control terminal and storage medium
CN112395902A (en) * 2019-08-12 2021-02-23 北京旷视科技有限公司 Face living body detection method, image classification method, device, equipment and medium
CN112819986A (en) * 2021-02-03 2021-05-18 广东共德信息科技有限公司 Attendance system and method
CN112927383A (en) * 2021-02-03 2021-06-08 广东共德信息科技有限公司 Cross-regional labor worker face recognition system and method based on building industry
CN112927382A (en) * 2021-02-03 2021-06-08 广东共德信息科技有限公司 Face recognition attendance system and method based on GIS service

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104348778A (en) * 2013-07-25 2015-02-11 信帧电子技术(北京)有限公司 Remote identity authentication system, terminal and method carrying out initial face identification at handset terminal
CN104361326A (en) * 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for distinguishing living human face
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment
CN105447432A (en) * 2014-08-27 2016-03-30 北京千搜科技有限公司 Face anti-fake method based on local motion pattern
CN106850648A (en) * 2015-02-13 2017-06-13 腾讯科技(深圳)有限公司 Auth method, client and service platform
CN107220590A (en) * 2017-04-24 2017-09-29 广东数相智能科技有限公司 A kind of anti-cheating network research method based on In vivo detection, apparatus and system
CN107330914A (en) * 2017-06-02 2017-11-07 广州视源电子科技股份有限公司 Face position method for testing motion and device and vivo identification method and system
CN107346422A (en) * 2017-06-30 2017-11-14 成都大学 A kind of living body faces recognition methods based on blink detection
US20170345146A1 (en) * 2016-05-30 2017-11-30 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and liveness detection system
CN107886070A (en) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 Verification method, device and the equipment of facial image
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium
CN108805047A (en) * 2018-05-25 2018-11-13 北京旷视科技有限公司 A kind of biopsy method, device, electronic equipment and computer-readable medium



Also Published As

Publication number Publication date
CN109766785B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN109766785A (en) A kind of biopsy method and device of face
CN105518711B (en) Biopsy method, In vivo detection system and computer program product
CN105612533B (en) Living body detection method, living body detection system, and computer program product
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
EP2546782B1 (en) Liveness detection
CN108124486A (en) Face living body detection method based on cloud, electronic device and program product
CN102945366B (en) A kind of method and device of recognition of face
CN104966070B (en) Biopsy method and device based on recognition of face
CN108140123A (en) Face living body detection method, electronic device and computer program product
CN107590430A (en) Biopsy method, device, equipment and storage medium
CN106570489A (en) Living body determination method and apparatus, and identity authentication method and device
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN105160318A (en) Facial expression based lie detection method and system
US11682236B2 (en) Iris authentication device, iris authentication method and recording medium
US11756338B2 (en) Authentication device, authentication method, and recording medium
CN106682473A (en) Method and device for identifying identity information of users
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
CN108932774A (en) information detecting method and device
CN111178233A (en) Identity authentication method and device based on living body authentication
US20220277311A1 (en) A transaction processing system and a transaction method based on facial recognition
KR102530141B1 (en) Method and apparatus for face authentication using face matching rate calculation based on artificial intelligence
US11961329B2 (en) Iris authentication device, iris authentication method and recording medium
KR102616230B1 (en) Method for determining user's concentration based on user's image and operating server performing the same
Lakshmi et al. Efficient log-based iris detection and image sharpness enhancement (l-IDISE) using artificial neural network
Jagadeesh et al. Software implementation procedure of the development of an iris-biometric identification system using image processing techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant