CN111783501A - Living body detection method and device and corresponding electronic equipment - Google Patents

Info

Publication number
CN111783501A
Authority
CN
China
Prior art keywords
image
plane
determining
living body
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910266411.8A
Other languages
Chinese (zh)
Inventor
高鹏 (Gao Peng)
任伟强 (Ren Weiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201910266411.8A priority Critical patent/CN111783501A/en
Publication of CN111783501A publication Critical patent/CN111783501A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements

Abstract

A living body detection method and apparatus and a corresponding electronic device are disclosed. The living body detection method comprises the following steps: determining at least three key points of a detection object in an image; determining at least one plane in space from the at least three key points; determining a normal vector for each of the at least one plane; and determining whether the detection object is a living body according to the normal vector of each plane. With the living body detection method and apparatus and the corresponding electronic device according to the embodiments of the present disclosure, living body detection can be carried out more accurately.

Description

Living body detection method and device and corresponding electronic equipment
Technical Field
The present disclosure relates generally to the field of pattern recognition, and in particular to a method and apparatus for detecting a living body and a corresponding electronic device.
Background
With the development of biometric technology, face recognition technology has become stable and mature. However, an illegitimate user can spoof a face recognition system with photos, videos, face masks, and the like, and the accuracy of face detection and recognition also degrades under poor face pose and illumination. Therefore, accurate face liveness detection is very important for application scenarios with high security requirements, such as access control and login.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a method of living body detection, including: determining at least three key points of a detection object in an image; determining at least one plane in space from the at least three keypoints; determining a normal vector for each of the at least one plane; and determining whether the detection object is a living body according to the normal vector of each plane.
According to another aspect of the present disclosure, there is provided a living body detection apparatus including: a keypoint locating unit configured to determine at least three keypoints of a detected object in an image; a normal vector determination unit configured to determine at least one plane in space from the at least three keypoints and determine a normal vector for each plane of the at least one plane; and a classification unit configured to determine whether the detection object is a living body from a normal vector of each plane.
According to another aspect of the present disclosure, an electronic device is provided that includes a processor and a memory for storing instructions executable by the processor. The processor may be configured to read the instructions from the memory and execute the instructions to implement the living body detection method described above.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon program instructions that, when executed by a computer, perform the above-described living body detection method.
With the living body detection method and apparatus and the corresponding electronic device according to the embodiments of the present disclosure, living body detection can be carried out more accurately.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is an example of a living body detection method according to an embodiment of the present disclosure.
Fig. 2 is an example of a living body detecting device according to an embodiment of the present disclosure.
Fig. 3 is an example of a living body detecting device according to an embodiment of the present disclosure.
Fig. 4 is an example of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
SUMMARY
As previously mentioned, an illegal user can spoof a face recognition system by means of a photo, a video, a face mask, and the like. To effectively counter such attacks and improve the discrimination between living and non-living features, it is desirable to provide enhanced depth information so that finer-grained features can be extracted, thereby improving the accuracy of living body detection.
Exemplary method
Fig. 1 illustrates an example of a living body detection method according to an embodiment of the present disclosure, which may be applied to an electronic device, and may include steps S110, S120, S130, and S140.
As shown in fig. 1, in step S110, at least three key points of the detection object in the image may be determined.
According to various embodiments, the image in step S110 may be a single-channel or multi-channel image obtained by any suitable means and carrying any suitable information.
For example, the image in step S110 may be an image captured by a suitable binocular image capturing device, for example one of the left image and the right image captured by a binocular camera for the detection object; or it may be one of two or more images usable for stereo matching obtained by any suitable means; or it may be a single image captured by a monocular camera operating in conjunction with a depth image acquisition device such as an infrared depth camera or a lidar. In addition, each pixel in the image in step S110 may itself carry depth information; for example, the image may include, in addition to its color channels, a channel representing the depth value of each pixel.
According to different embodiments, the detection object may comprise any type of object of interest, such as a person, a certain part/parts of a human body, an animal, etc. For example, the detection object may be a human face or a human body part including a human face region.
According to various embodiments, the keypoints of the detection object may be at least three markers (landmarks) in any form capable of locating or characterizing a key part or key region of the detection object on the image.
For example, in the case where the detection object is a human face, the key points may be at least three markers capable of locating, in a given image, the positions of key regions or key parts on the human face, such as eyebrows, eyes, a nose, a mouth, a face contour, and the like.
All of the key points, or one or more shape vectors formed from them, can characterize or represent the geometry of the detection object or the geometry of one or more of its key parts or key regions.
Then, in step S120, at least one plane in space may be determined from the determined at least three keypoints.
According to different embodiments, one of the determined at least three key points, or a point other than them, may be used as an origin, and a three-dimensional coordinate system may be established according to the relative positional relationship between each key point and the origin. In such a three-dimensional coordinate space, mathematically, every three non-collinear points define a plane, so at least one plane can be determined from the at least three key points. More details will be described later.
Then, in step S130, a normal vector of each plane may be determined.
Mathematically, a vector perpendicular to a plane is a normal vector of that plane, and each plane has an infinite number of normal vectors. In one embodiment, one normal vector of each plane may be determined for subsequent processing.
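As an illustration, a normal vector of the plane through three keypoints in space can be obtained with a cross product. The following is a minimal sketch using NumPy; the (x, y, depth) point layout is an assumption for illustration rather than a requirement of the disclosure.

    import numpy as np

    def plane_normal(p1, p2, p3):
        # Unit normal vector of the plane through three non-collinear 3-D points
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        n = np.cross(p2 - p1, p3 - p1)   # perpendicular to both in-plane edge vectors
        return n / np.linalg.norm(n)     # normalize; any nonzero multiple is also a normal

    # Example: three keypoints given as (x, y, depth) in camera space
    n = plane_normal((0.0, 0.0, 1.0), (1.0, 0.0, 1.2), (0.0, 1.0, 0.9))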
Then, in step S140, it may be determined whether the detection object is a living body from the normal vector of each plane.
According to various embodiments, the normal vector of each plane may be provided as feature data to a living body classifier built on any suitable type of model, such as a support vector machine, a convolutional neural network, or logistic regression, with any suitable structure. The present disclosure is not limited to a particular type and/or structure of classifier or model for living body detection using plane normal vectors.
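For instance, the per-plane normal vectors could be concatenated into a single feature vector and passed to an off-the-shelf classifier. The sketch below uses scikit-learn's SVC purely as an illustration; the training data are random placeholders rather than anything from the disclosure.

    import numpy as np
    from sklearn.svm import SVC

    def normals_to_feature(normals):
        # Concatenate the (num_planes, 3) array of normal vectors into one feature vector
        return np.asarray(normals, dtype=float).reshape(-1)

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 12))    # placeholder features: 4 planes x 3 components
    y_train = rng.integers(0, 2, size=100)  # placeholder labels: 1 = living body, 0 = spoof

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    sample = normals_to_feature(rng.normal(size=(4, 3))).reshape(1, -1)
    is_living = bool(clf.predict(sample)[0])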
In the method according to the embodiments of the present disclosure, at least three key points of the detection object are first determined, at least one plane in space is then determined based on those key points, and a normal vector of each plane is provided as input feature data to the living body detection model. Because the plane normal vector features fuse the structural features of the detection object, the method can significantly improve the accuracy of living body identification.
Further details of the method according to an embodiment of the present disclosure are described below, taking face liveness detection as an example. However, it should be understood that objects that can be detected by the method according to embodiments of the present disclosure are not limited to human faces.
According to various embodiments, in step S110, face detection may first be performed on the input image using any suitable method and/or model, such as a convolutional neural network, and key point positioning/labeling may then be performed on the detected face partial image using any suitable method and/or model. Alternatively, a method and/or model that directly labels or locates the key points of the face from the input image may be adopted.
For example, in step S110, a keypoint detection method/model such as an Active Appearance Model (AAM), a Constrained Local Model (CLM), Cascaded Shape Regression (CSR), or a Deep Alignment Network (DAN) may be adopted to determine the keypoints of the detection object in the input image.
According to various embodiments, the number of keypoints determined may be any suitable number of at least three, such as 3, 5, 14, 34, or 68. For example, 51 interior keypoints relating to the facial features and 17 contour keypoints relating to the outer face contour may be determined, giving 68 keypoints in total.
It should be understood that the present disclosure is not limited to methods and/or models for face detection and keypoint localization/labeling, nor to the number of keypoints determined.
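As one concrete option (an assumption for illustration; the disclosure mandates no particular toolkit), dlib's pretrained 68-point shape predictor produces exactly such a set of keypoints. The model file and image path below are placeholders.

    import dlib

    detector = dlib.get_frontal_face_detector()
    # Placeholder path to dlib's pretrained 68-point landmark model
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = dlib.load_rgb_image("face.jpg")   # placeholder input image
    faces = detector(img, 1)                # upsample once to catch small faces
    shape = predictor(img, faces[0])        # landmarks of the first detected face
    keypoints = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]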
The keypoints determined in step S110 have relative positional relationships on the image plane of the input image. For example, when one of the determined at least three keypoints, or some point other than them, is adopted as the origin of coordinates on the image plane, each keypoint has a corresponding two-dimensional coordinate on the image plane.
Then, the method according to an embodiment of the present disclosure may proceed to step S120.
According to one embodiment, in step S120, a depth value of each of the at least three keypoints may be determined; the coordinates of each keypoint in space may then be determined from its depth value and its coordinates in the image; and one plane of the at least one plane may then be determined from the coordinates in space of each group of three keypoints.
In this embodiment, the three-dimensional coordinates of each keypoint in space are determined from its depth value and its two-dimensional coordinates on the image plane, and at least one plane in space is then determined. The plane normal vectors therefore reflect depth features while fusing structural features, providing more and finer-grained feature information, so that living body identification can be performed more accurately.
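A minimal sketch of this step, assuming the 2-D keypoints and their depth values are already available: each keypoint is lifted to 3-D using its depth value, and every triple of keypoints yields one candidate plane.

    import numpy as np
    from itertools import combinations

    def lift_to_3d(keypoints_2d, depths):
        # Combine image-plane coordinates (x, y) with per-point depth into (x, y, z)
        return [np.array([x, y, z], dtype=float)
                for (x, y), z in zip(keypoints_2d, depths)]

    def candidate_planes(points_3d):
        # Every unordered triple of keypoints defines one candidate plane
        return list(combinations(points_3d, 3))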
According to different embodiments, the depth value of each keypoint may be determined in various suitable ways, depending on the situation.
For example, as described above, while a regular image of the detection object is obtained in step S110 by, for example, a camera, a corresponding depth image may be obtained using a depth image acquisition device such as an infrared depth camera or a lidar, and the pixels of the regular image and the depth image may then be associated. In this way, for each keypoint of the detection object determined from the regular image, the depth value can be obtained via the association between the pixels of the regular image and the depth image.
In further embodiments, determining the depth value of each of the at least three keypoints may comprise: determining a disparity value of each keypoint according to the coordinate difference of that keypoint between the image and another image, wherein the image and the other image are from a binocular image acquisition device; and determining the depth value of each keypoint according to its disparity value.
For example, the image in step S110 may be an image captured by a suitable binocular image capturing device (e.g., a binocular camera); accordingly, the disparity value of each of the at least three keypoints may be determined from the coordinate difference of that keypoint between the image and another image from the binocular image capturing device, e.g., one and the other of the left and right images captured by the binocular camera for the detection object. The depth value of each keypoint may then be determined from its disparity value.
To facilitate obtaining the coordinate difference of each keypoint between the image and the other image, in one embodiment the method according to an embodiment of the present disclosure may further include correcting the binocular image capturing device such that each point on the same photographic subject has the same ordinate value in the two images. The image and the other image of the detection object may then be captured using the corrected binocular image capturing device.
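In OpenCV terms (an assumed toolchain; the disclosure names no library), this correction corresponds to stereo rectification based on a prior stereo calibration. All calibration values below are placeholders.

    import cv2
    import numpy as np

    # Placeholder calibration: shared intrinsics K, zero distortion, 6 cm baseline
    K1 = K2 = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
    D1 = D2 = np.zeros(5)
    R, T = np.eye(3), np.array([[-0.06], [0.0], [0.0]])

    size = (640, 480)
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    left = np.zeros((480, 640, 3), np.uint8)           # placeholder left image
    left_rectified = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
    # After remapping both views, matching points share the same ordinate (row)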
Then, for example, a keypoint at the center of a rigid region, such as the tip of the nose or the center of the brow, may be chosen as the origin of coordinates, and the two-dimensional coordinates of each keypoint on the image plane of image IMG1 and/or IMG2 may be determined from the relative positional relationships between the keypoints.
If the coordinates of a keypoint P on images IMG1 and IMG2 are (x1, y1) and (x2, y2) respectively, then, since the binocular image capturing device has undergone correction processing such as epipolar rectification, each point on the same photographic subject has the same ordinate value in IMG1 and IMG2, i.e., y1 = y2. The disparity value V_PAX of P can then be determined according to the following equation (1):
V_PAX = x1 - x2    (1)
Then, the depth value V_DEP of P can be determined from the disparity value V_PAX according to the following equation (2):
V_DEP = B * F / V_PAX    (2)
where B may be the baseline length of the binocular image capturing device, and F may be the focal length of the binocular image capturing device.
Then, for example, for the above-described keypoint P, its three-dimensional coordinates in space can be determined as (x1, y1, V_DEP).
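Equations (1) and (2) translate directly into code; the baseline B and focal length F below are placeholder calibration values.

    def keypoint_to_3d(x1, y1, x2, B=0.06, F=700.0):
        # Eq. (1): disparity from the column difference in the rectified image pair
        v_pax = x1 - x2
        # Eq. (2): depth from the baseline (metres) and focal length (pixels)
        v_dep = B * F / v_pax
        return (x1, y1, v_dep)     # three-dimensional coordinates of the keypoint

    p = keypoint_to_3d(x1=352.0, y1=210.0, x2=340.0)   # disparity 12 px -> depth 3.5 m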
In this embodiment, the disparity value of each keypoint is determined from the coordinate difference of the keypoint between the two images from the binocular image acquisition device. This avoids both the complex alignment and coordination needed when a separate depth image acquisition device such as an infrared depth camera or a lidar is used to obtain a corresponding depth image, and the need for a complex stereo matching network, thereby improving the processing efficiency of living body detection and reducing hardware cost and software control complexity.
In another embodiment, correction may instead make each point on the same photographic subject have the same abscissa value in IMG1 and IMG2, and the difference between the ordinate values may then be used as the coordinate difference of the point between IMG1 and IMG2.
According to various embodiments, the depth value of each point (or keypoint) may be an absolute depth value or a relative depth value.
For example, in step S120, an initial depth value of each keypoint may be determined according to the disparity value of each keypoint, and then a depth value of each keypoint may be determined according to a difference between the initial depth value of each keypoint and an average value of the initial depth values of all keypoints.
For example, assume that keypoints P1, P2, …, Pn (where n may be an integer greater than or equal to 3) are determined in step S110, and that the initial depth value of each keypoint, obtained for example by equation (2) above, is V1, V2, …, Vn respectively. For any keypoint Pi (1 ≤ i ≤ n), the final depth value Di can be determined according to the following equation (3):
Di = Vi - (Σ1≤j≤n Vj) / n    (3)
by using the relative depth values, the influence of different distances from a camera lens to a shot detection object on a detection result can be removed, so that the robustness of the living body detection at different distances can be effectively enhanced.
Then, in step S130, one normal vector for each plane may be selected for subsequent processing.
For example, if the equation of a plane determined in step S120 is ax + by + cz = d, then in step S130 the normal vector (a, b, c) may be determined for subsequent processing.
Then, in step S140, the normal vector of each plane determined in step S130 may be provided to a classifier of a type such as a support vector machine, a neural network, a logistic regression, or the like, and it may be determined whether the detection object is a living body according to a classification result of the classifier.
In another embodiment, in step S140, whether the detection object is a living body may be determined according to the normal vector of each plane together with the depth value of each keypoint and/or information such as a histogram of oriented gradients generated from the image.
For example, a feature vector including the normal vector of each plane, the depth value of each keypoint, and a histogram of oriented gradients generated from the image may be provided to a classifier such as a support vector machine, a neural network, or a logistic regression model, and whether the detection object is a living body may be determined according to the classification result of the classifier. By providing more feature information of higher dimensionality, the accuracy of living body detection can be further enhanced.
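One way to assemble such a combined feature vector (the scikit-image dependency and the HOG parameters are assumptions for illustration):

    import numpy as np
    from skimage.feature import hog

    def combined_features(normals, rel_depths, gray_face):
        # Histogram of oriented gradients of the grayscale face crop
        hog_vec = hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))
        # Concatenate plane normals, relative keypoint depths and the HOG descriptor
        return np.concatenate([np.ravel(normals), np.ravel(rel_depths), hog_vec])

    face = np.zeros((64, 64))   # placeholder grayscale face crop
    feat = combined_features(np.ones((4, 3)), np.zeros(4), face)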
As described above, in the living body detection method according to the embodiments of the present disclosure, planes can be constructed from keypoints and plane normal vector features extracted, so that the spatial positional relationships of the detection object are expressed more richly and the distinctiveness of depth information features is enhanced. In addition, a spatial coordinate system can be established based on relative depth, giving good robustness at different distances. Moreover, the living body detection method according to the embodiments of the present disclosure need not rely on a complex stereo matching algorithm and/or an additional depth information acquisition device, but can recover depth information using keypoint localization alone, thereby significantly improving computation speed and real-time performance and reducing implementation cost.
Exemplary devices
Fig. 2 illustrates an example of a living body detection apparatus according to an embodiment of the present disclosure, which may implement a living body detection method according to an embodiment of the present disclosure, and may include a keypoint locating unit 210, a normal vector determining unit 220, and a classifying unit 230.
The keypoint locating unit 210 may be configured to determine at least three keypoints of the detection object in the image, i.e., to implement step S110 of the living body detection method according to an embodiment of the present disclosure.
According to various embodiments, the keypoint locating unit 210 may include a general-purpose processor such as a central processing unit or a graphics processor, or may be a special-purpose processor developed based on, for example, a field programmable gate array. For example, in the case of performing object-of-interest detection and/or keypoint labeling based on a convolutional neural network, the keypoint locating unit 210 may also include elements such as multiply-add cell arrays, adder arrays, and twist operators for accelerating operations such as convolution, pooling, point-by-point addition, and activation, as well as static random access memory for caching of data, and the like.
The normal vector determination unit 220 may be configured to determine at least one plane in space from the at least three keypoints and to determine a normal vector for each plane of the at least one plane, i.e., to implement steps S120 and S130 of the living body detection method according to an embodiment of the present disclosure.
According to various embodiments, the normal vector determination unit 220 may also include a general-purpose processor such as a central processing unit and a graphics processor, or may be a special-purpose processor developed based on a device such as a field programmable gate array.
In one embodiment, as shown in fig. 3, the normal vector determination unit 220 may include a depth value calculation module 2210 and a plane determination module 2220. The depth value calculation module 2210 may be configured to determine the depth value of each keypoint, and the plane determination module 2220 may be configured to determine the coordinates of each keypoint in space from its depth value and its coordinates in the image, and to determine one plane of the at least one plane from the coordinates in space of each group of three keypoints.
The classification unit 230 may be configured to determine whether the detection object is a living body from a normal vector of each plane, i.e., configured to implement step S140 of the living body detection method according to the embodiment of the present disclosure.
Similar to the keypoint locating unit 210, the classification unit 230 may also include general-purpose processors such as a central processing unit and a graphics processor, or may be a special-purpose processor developed based on a field programmable gate array, for example, according to different embodiments. For example, in the case of performing liveness detection based on a convolutional neural network, the classification unit 230 may further include a multiply-add unit array, an adder array, a twist operator, and other elements for accelerating operations such as convolution, pooling, point-by-point addition, activation, and the like, and a static random access memory for caching of data, and the like.
In one embodiment, the various units and/or modules described above may multiplex one or more compute acceleration components. In other embodiments, the functions of the various units and/or modules described above may be implemented by one or more general or special purpose processors, such as a central processing unit, a graphics processor, a field programmable gate array, or the like.
The various units and/or modules described above may be interconnected in any suitable manner, such as by a bus, cross bar, shared memory, etc., according to various embodiments.
It should be understood that fig. 2 or 3 are merely examples of an apparatus according to an embodiment of the present disclosure, and the present disclosure is not limited thereto. For example, in further examples, an apparatus according to embodiments of the present disclosure may further include a memory for storing intermediate or result data and/or one or more interfaces for receiving data or transmitting detection results to the outside.
Exemplary electronic device
As shown in fig. 4, an embodiment of the present disclosure may also take the form of an electronic device; the electronic device 400 may include one or more processors 410 and a memory 420.
The processor 410 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 400 to perform desired functions.
Memory 420 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 410 to implement the living body detection methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 400 may further include: an input device 430 and an output device 440, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 430 may include, for example, a keyboard, a mouse, and the like.
The output device 440 may output various information including the determined distance information, direction information, etc. to the outside. The output devices 440 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 400 relevant to the present application are shown in fig. 4, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the methods according to the various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, which may include an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, such as a computer-readable non-transitory storage medium, having stored thereon program instructions that, when executed by a processor, cause the processor to perform steps in methods according to various embodiments of the present disclosure as described in the "exemplary methods" section above of this specification.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
In this document, modifiers such as "first," "second," etc., without quantity, are intended to distinguish between different elements/components/circuits/modules/devices/steps and are not intended to emphasize order, positional relationships, importance, priority, etc. In contrast, modifiers such as "first," "second," and the like with quantitative terms may be used to emphasize different elements/components/circuits/modules/devices/steps in order, location, degree of importance, priority, and the like.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A living body detection method, comprising:
determining at least three key points of a detection object in an image;
determining at least one plane in space from the at least three keypoints;
determining a normal vector for each of the at least one plane; and
determining whether the detection object is a living body according to the normal vector of each plane.
2. The living body detection method of claim 1, wherein determining at least one plane in space from the at least three keypoints comprises:
determining a depth value for each keypoint of the at least three keypoints;
determining the coordinates of each key point in the space according to the depth value of each key point and the coordinates of each key point in the image; and
determining one of the at least one plane from the coordinates of every three keypoints in the space.
3. The living body detection method of claim 2, wherein determining a depth value of each keypoint of the at least three keypoints comprises:
determining a disparity value of each of the at least three keypoints according to the coordinate difference of that keypoint between the image and another image, wherein the image and the other image are from a binocular image acquisition device; and
determining the depth value of each keypoint according to the disparity value of each keypoint.
4. The living body detection method of claim 3, wherein determining the depth value of each keypoint from the disparity value of each keypoint comprises:
determining an initial depth value of each keypoint according to the disparity value of each keypoint; and
determining the depth value of each keypoint according to the difference between the initial depth value of each keypoint and the average of the initial depth values of all the keypoints.
5. The living body detection method of claim 3, further comprising:
correcting the binocular image acquisition device so that each point on the same photographic subject has the same ordinate value in the image and the other image; and
capturing the image and the other image of the detection object with the corrected binocular image acquisition device.
6. The living body detection method according to any one of claims 1 to 5, wherein determining whether the detection object is a living body from the normal vector of each plane comprises:
determining whether the detection object is a living body according to the normal vector of each plane together with the depth value of each keypoint and/or a histogram of oriented gradients generated from the image.
7. A living body detection apparatus comprising:
a keypoint locating unit configured to determine at least three keypoints of a detected object in an image;
a normal vector determination unit configured to determine at least one plane in space from the at least three keypoints and determine a normal vector for each plane of the at least one plane; and
a classification unit configured to determine whether the detection object is a living body from a normal vector of each plane.
8. The living body detecting apparatus according to claim 7, wherein the normal vector determining unit includes:
a depth value calculation module configured to determine a depth value for each keypoint; and
a plane determination module configured to determine coordinates of each keypoint in the space based on the depth value of each keypoint and the coordinates in the image and to determine one of the at least one plane based on coordinates of each three keypoints in the space.
9. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
the processor being configured to read the instructions from the memory and execute the instructions to implement the living body detection method of any one of claims 1 to 6.
10. A computer-readable storage medium on which program instructions are stored, which when executed by a computer perform the living body detection method according to any one of claims 1 to 6.
CN201910266411.8A 2019-04-03 2019-04-03 Living body detection method and device and corresponding electronic equipment Pending CN111783501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910266411.8A CN111783501A (en) 2019-04-03 2019-04-03 Living body detection method and device and corresponding electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910266411.8A CN111783501A (en) 2019-04-03 2019-04-03 Living body detection method and device and corresponding electronic equipment

Publications (1)

Publication Number Publication Date
CN111783501A (en) 2020-10-16

Family

ID=72754929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910266411.8A Pending CN111783501A (en) 2019-04-03 2019-04-03 Living body detection method and device and corresponding electronic equipment

Country Status (1)

Country Link
CN (1) CN111783501A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160196467A1 (en) * 2015-01-07 2016-07-07 Shenzhen Weiteshi Technology Co. Ltd. Three-Dimensional Face Recognition Device Based on Three Dimensional Point Cloud and Three-Dimensional Face Recognition Method Based on Three-Dimensional Point Cloud
KR20170080126A (en) * 2015-12-31 2017-07-10 동의대학교 산학협력단 Access Control System using Depth Information based Face Recognition
CN105740775A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Three-dimensional face living body recognition method and device
CN108875468A (en) * 2017-06-12 2018-11-23 北京旷视科技有限公司 Biopsy method, In vivo detection system and storage medium
CN108764069A (en) * 2018-05-10 2018-11-06 北京市商汤科技开发有限公司 Biopsy method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
彭思江; 戴厚平; 周成富; 刘倩: "Cross-age face recognition algorithm based on HOG/PCA/SVM" (基于HOG/PCA/SVM的跨年龄人脸识别算法), Journal of Jishou University (Natural Science Edition), no. 05, 25 September 2018 (2018-09-25) *
陈正兵: "Research on indoor three-dimensional plane segmentation based on depth images" (基于深度图像的室内三维平面分割方法研究), Electronic Design Engineering, no. 24, 20 December 2016 (2016-12-20) *

Similar Documents

Publication Publication Date Title
US10198623B2 (en) Three-dimensional facial recognition method and system
JP6844038B2 (en) Biological detection methods and devices, electronic devices and storage media
KR101874494B1 (en) Apparatus and method for calculating 3 dimensional position of feature points
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
JP5631086B2 (en) Information processing apparatus, control method therefor, and program
KR101647803B1 (en) Face recognition method through 3-dimension face model projection and Face recognition system thereof
CN111563924B (en) Image depth determination method, living body identification method, circuit, device, and medium
WO2013159686A1 (en) Three-dimensional face recognition for mobile devices
KR101510312B1 (en) 3D face-modeling device, system and method using Multiple cameras
CN110956114A (en) Face living body detection method, device, detection system and storage medium
US10853631B2 (en) Face verification method and apparatus, server and readable storage medium
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN106937532A (en) System and method for detecting actual user
WO2022218161A1 (en) Method and apparatus for target matching, device, and storage medium
CN115035546A (en) Three-dimensional human body posture detection method and device and electronic equipment
JP2013218605A (en) Image recognition device, image recognition method, and program
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN111383256A (en) Image processing method, electronic device, and computer-readable storage medium
CN111383255A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111783501A (en) Living body detection method and device and corresponding electronic equipment
KR20230060029A (en) Planar surface detection apparatus and method
CN115482285A (en) Image alignment method, device, equipment and storage medium
CN111767940A (en) Target object identification method, device, equipment and storage medium
JP2016194847A (en) Image detection device, image detection method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination