CN111160233A - Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance - Google Patents

Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance Download PDF

Info

Publication number
CN111160233A
CN111160233A CN201911374651.6A
Authority
CN
China
Prior art keywords
dimensional point
dimensional
living body
point cloud
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911374651.6A
Other languages
Chinese (zh)
Other versions
CN111160233B (en)
Inventor
程诚
汪浩源
王旭光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Nano Tech and Nano Bionics of CAS
Original Assignee
Suzhou Institute of Nano Tech and Nano Bionics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Nano Tech and Nano Bionics of CAS filed Critical Suzhou Institute of Nano Tech and Nano Bionics of CAS
Priority to CN201911374651.6A priority Critical patent/CN111160233B/en
Publication of CN111160233A publication Critical patent/CN111160233A/en
Application granted granted Critical
Publication of CN111160233B publication Critical patent/CN111160233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The invention discloses a human face in-vivo detection method based on three-dimensional imaging assistance, which comprises the following steps: S01, acquiring a face image shot by a binocular camera; S02, generating a three-dimensional point cloud data set; S03, randomly selecting N three-dimensional point clouds from the three-dimensional point cloud data set and performing plane fitting to obtain a fitting plane Z; S04, randomly selecting M three-dimensional point clouds from the data set and respectively calculating the distance d_n from each to the fitting plane Z; S05, counting the number Q of three-dimensional point clouds whose distance d_n is smaller than a threshold, and calculating the false point ratio R = Q/M; S06, repeating steps S03-S05 a preset number of times and calculating the average of the false point ratios; S07, judging whether the average is within the living body detection standard threshold R: if so, the face is judged to be a living body; otherwise, it is not. The invention has the advantages of high speed and good adaptability, is little affected by the application environment or by the behavior of the living body itself, and can effectively resist common attack means such as photos, videos, occlusions and screen replay.

Description

Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
Technical Field
The invention relates to the field of computer face in-vivo detection, in particular to a face in-vivo detection method based on three-dimensional imaging assistance, a computer readable medium and a system.
Background
With the development of the times, face recognition systems are more widely used than ever, in applications ranging from face-recognition unlocking on smartphones to face-recognition attendance and access control systems. However, face recognition systems are easily fooled by "fake" faces. For example, holding a photograph of a person up to the face recognition camera can deceive the system into recognizing the photograph as that person's face. To make face recognition safer, living body detection is needed: the system must not only recognize the face but also determine whether it is a real, live face.
The existing in vivo detection technologies are mainly divided into the following types:
(1) action instruction living body detection: generally, a method of matching instruction actions, such as left turning, right turning, mouth opening, blinking and the like, is adopted, the method judges the living body through the actions of organs, and if video attack is adopted, a larger vulnerability may exist.
(2) Detecting the near-infrared human face living body: the near-infrared human face living body detection is mainly realized based on an optical flow method. The method can only realize living body judgment at night or under the condition of no natural light, and has larger error under the special condition of stronger ambient light (such as strong outdoor sunlight).
(3) Three-dimensional living body detection: a method of projecting structured light by a laser is adopted in the mainstream, based on a 3D structured light imaging principle, a depth image is constructed by reflecting light on the surface of a human face, whether a target object is a living body is judged, and attacks such as pictures, videos, screens and molds can be defended effectively. However, the method adopts structured light, which is greatly affected by ambient light, for example, in the outdoor environment, the structured light emitted by the laser is easily submerged by the sunlight, the error of the collected pattern is large, the judgment result may be wrong, the method can only be used in cloudy days, and the time consumption of model matching is long.
The methods have the defects of poor robustness, detection result errors caused by fine actions of objects or changes of ambient light distance and the like, and the method is sensitive to an illumination environment, long in sampling time and low in speed, or is easily attacked by manual operations such as photos, videos and shielding in the process of in-vivo detection.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a human face living body detection method based on three-dimensional imaging assistance, together with a computer-readable medium and a system. The method is fast, is little affected by the application environment or by the behavior of the living body itself, and can effectively resist common attack means such as photos, videos, occlusions and screen replay.
In order to achieve the purpose, the invention adopts the following technical scheme:
a human face in-vivo detection method based on three-dimensional imaging assistance comprises the following steps:
s01, acquiring a face image shot by a binocular camera;
s02, generating a three-dimensional point cloud data set of the face according to the acquired face image, wherein the three-dimensional point cloud data comprises three-dimensional coordinates corresponding to three-dimensional point clouds on the face image;
s03, performing living body detection, including:
S031, randomly selecting N three-dimensional point clouds from the three-dimensional point cloud data set and performing plane fitting to obtain a fitting plane Z, where N > 3;
S032, randomly selecting M three-dimensional point clouds from the three-dimensional point cloud data set, and respectively calculating the distances d_n between the M three-dimensional point clouds and the fitting plane Z, where M ≥ 1000;
S033, counting the number Q of three-dimensional point clouds whose distance d_n is smaller than a threshold D, and calculating the false point ratio R = Q/M;
S034, repeating steps S031-S033 a preset number of times K, and calculating the average value Avg(R) of the false point ratios;
S035, judging whether the average value Avg(R) is within a living body detection standard threshold R: if so, the face is judged to be a living body; otherwise, it is not.
As one embodiment, in step S031, N satisfies: 5 < N < 20.
As one embodiment, in step S032, M satisfies: 8000 ≤ M ≤ 12000.
As one embodiment, in step S034, the predetermined number K satisfies: 3000 ≤ K ≤ 10000.
As one embodiment, the step S02 includes:
s021, epipolar line correction: correcting the binocular image acquired in the step S01 so that the matching points in the left and right images are on the same line;
s022, stereo matching: carrying out stereo matching on the corrected binocular images to generate a disparity map of the three-dimensional point cloud;
S023, reconstructing three-dimensional point cloud: obtaining a depth map of the three-dimensional point cloud from the disparity map produced by stereo matching, thereby obtaining the three-dimensional coordinates (X, Y, Z) of all the three-dimensional point clouds:

Z = f·T / d,  X = x_l·T / d,  Y = y_l·T / d,

where f represents the focal length of the binocular camera, T represents the baseline length between the two cameras, x_l and x_r are respectively the abscissas of a pair of matching points, y_l is the ordinate of the matching point in the left image, and d = x_l − x_r is the disparity.
As one embodiment, the step S02 further includes: s020, calibrating a camera, namely calibrating the binocular camera to obtain internal parameters and external parameters of the binocular camera;
in the step S021, distortion correction and epipolar parallel correction are performed on the binocular image by using a Bouguet epipolar line correction method according to the internal parameters and the external parameters obtained in the step S020.
The internal parameters comprise principal points of the left camera and the right camera, distortion vectors of the left camera and the right camera, and the external parameters comprise a rotation matrix and a translation matrix between the left camera and the right camera.
In one embodiment, in step S022, a disparity map is obtained by performing stereo matching using a BM algorithm or an SGBM algorithm.
Another object of the present invention is to provide a computer readable medium, which has stored therein a plurality of instructions, the instructions being adapted to be loaded by a processor and to execute the steps of the above-mentioned method for detecting a living human face based on three-dimensional imaging assistance.
It is a further object of the invention to provide a computing device comprising the computer readable medium described above and a processor adapted to implement the instructions.
The invention realizes human face living body detection by binocular stereo matching. The method is fast and little affected by the application environment or by the behavior of the living body itself; the multi-plane fitting algorithm it adopts is adaptable and robust, and its parameters can be adjusted to the requirements of a specific application scene to improve speed and accuracy. It can effectively resist common attack means such as photos, videos, occlusions and screen replay, and solves the problems of existing living body detection systems: long sampling time, low speed, and detection errors caused by slight movements of the subject or by changes in ambient light or distance.
Drawings
FIG. 1 is a flowchart of a face liveness detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a binocular stereo vision matching algorithm for three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of generating a three-dimensional point cloud of a human face by stereo matching a binocular image according to an embodiment of the present invention;
FIG. 4 is a flowchart of in vivo detection by the multi-plane fitting algorithm according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the human face in-vivo detection method based on three-dimensional imaging assistance according to the embodiment of the present invention includes:
s01, acquiring a face image shot by a binocular camera;
specifically, the binocular camera of this embodiment is formed by fixing a left color camera and a right color camera of the same model on the same horizontal plane by using a stable bracket, and the left color camera and the right color camera respectively acquire images of the same face at different viewing angles and send the images to the upper computer system for storage.
The upper computer system then runs a face recognition algorithm on the image; if no face is recognized, the display screen prompts the user to shoot again. Any existing face detection algorithm may be used, such as MTCNN (Multi-Task Cascaded Convolutional Neural Networks), R-CNN (Regions with CNN features, candidate-region-based object detection), Faster R-CNN, or SSD (Single Shot MultiBox Detector). This step mainly detects whether the current image contains a face and, if so, outputs a rectangular frame containing the face contour.
S02, generating a three-dimensional point cloud data set of the face according to the acquired face image, wherein the three-dimensional point cloud data comprises three-dimensional coordinates (X, Y and Z) corresponding to each three-dimensional point cloud on the face image;
in the step, the upper computer system adopts a binocular stereo vision matching algorithm, and generates a three-dimensional point cloud data set of the human face based on the extracted human face image.
Binocular stereo vision is a common method in three-dimensional reconstruction; its main principle is shown in Fig. 2. Ideally, the optical centers O_l and O_r of the two cameras are collinear and the imaging planes of the left and right cameras are parallel and coplanar. The left and right cameras simultaneously record a certain point P in three-dimensional space, forming imaging points P1 and P2 respectively; these two points P1 and P2 are called homonymous points (also called a matching point pair). The homonymous points P1 and P2 together with the target point P determine a plane, and by the principle of similar triangles formula (1) can be obtained, which after rearrangement gives formula (2):
(T − (x_l − x_r)) / T = (Z − f) / Z    (1)

Z = f·T / (x_l − x_r) = f·T / d    (2)
where x_l and x_r are the abscissas of the homonymous points, d = x_l − x_r is called the disparity, Z is the depth value (i.e. the Z coordinate of the point P), f is the focal length of the camera, and T is the baseline length between the two cameras.
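As a quick numerical check of formula (2) (the numbers below are illustrative, not taken from the patent): with a focal length of 1000 px, a baseline of 60 mm and a disparity of 12 px, the depth is Z = 1000 × 60 / 12 = 5000 mm, i.e. the target is 5 m away:

```python
def depth_from_disparity(f_px, baseline, disparity):
    """Formula (2): Z = f*T/d. Focal length f in pixels, baseline T in a
    length unit, disparity d in pixels; Z comes out in the baseline's unit."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline / disparity

z = depth_from_disparity(1000.0, 60.0, 12.0)  # 5000.0 mm
```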
Once the homonymous points of the left and right images have been determined by the binocular stereo vision algorithm, the Z value of the target point P in three-dimensional space is obtained from formula (2), and the full three-dimensional coordinates (X, Y, Z) of P follow. In practice, however, there is a certain angle between the imaging planes of the left and right cameras and the homonymous points do not lie on the same row, so epipolar rectification is required to correct for this.
Therefore, as shown in fig. 3, the generation of the three-dimensional point cloud data needs to be implemented through a series of processes, which specifically include:
s020, calibrating the camera, and calibrating the binocular camera to obtain internal parameters and external parameters of the binocular camera, wherein the internal parameters comprise principal points of the left camera and the right camera and distortion vectors of the left camera and the right camera, the external parameters comprise a rotation matrix and a translation matrix between the left camera and the right camera, and the calibration method in a matlab calibration tool box or the Zhang calibration method can be referred specifically.
S021, epipolar line correction: the binocular image acquired in step S01 is corrected so that the matching points in the left and right images are on the same line.
This embodiment preferably adopts the Bouguet epipolar rectification method to perform distortion correction and epipolar-parallel correction on the binocular images. Specifically, the left and right images are first undistorted using the internal and external parameters obtained from calibrating the two cameras, and epipolar rectification is then applied to the undistorted images, so that the epipolar lines of the two cameras' images become parallel and corresponding pixels in the left and right images lie on the same row. In the rectified images, matching points can then be searched along the same row.
S022, stereo matching: carrying out stereo matching on the corrected binocular images to generate a disparity map of the three-dimensional point cloud;
after the epipolar line correction is completed, the corresponding relation between the matching points can be conveniently found in the left image and the right image. The parallax between corresponding pixel points of the left camera and the right camera can be rapidly calculated by using the BM algorithm or the SGBM algorithm for stereo matching, so that a parallax image is obtained.
S023, reconstructing the three-dimensional point cloud.
The disparity d is read from the disparity map obtained in step S022 and substituted into formula (2) to obtain the Z value of each three-dimensional point; the Z coordinates of all three-dimensional points constitute a depth map. The full three-dimensional coordinates (X, Y, Z) of all the three-dimensional point clouds are then:
Z = f·T / d,  X = x_l·T / d,  Y = y_l·T / d,
where y_l is the ordinate of the matching point in the left image and d is the disparity.
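The reconstruction formulas can be applied per pixel as follows (an illustrative NumPy sketch; `cv2.reprojectImageTo3D` performs the equivalent computation from the Q matrix produced by rectification). Image coordinates are taken relative to an assumed principal point (cx, cy):

```python
import numpy as np

def reproject(disparity, f, T, cx, cy):
    """Per-pixel (X, Y, Z) from a disparity map, following
    Z = f*T/d, X = x_l*T/d, Y = y_l*T/d, with image coordinates
    measured relative to the principal point (cx, cy)."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    with np.errstate(divide="ignore", invalid="ignore"):
        # pixels with no valid disparity (d <= 0) are zeroed out
        Z = np.where(disparity > 0, f * T / disparity, 0.0)
        X = np.where(disparity > 0, xs * T / disparity, 0.0)
        Y = np.where(disparity > 0, ys * T / disparity, 0.0)
    return X, Y, Z
```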
And S03, performing living body detection.
As shown in fig. 4, the in-vivo detection by the multi-plane fitting algorithm in this embodiment specifically includes:
S031, randomly selecting N three-dimensional point clouds from the three-dimensional point cloud data set and performing plane fitting to obtain a fitting plane Z = AX + BY + C, where N > 3; preferably 5 < N < 20. The larger N is, the less the fit is affected by noise and the more accurate the result, but the slower the computation; experiments show that N = 10 gives good accuracy and efficiency;
S032, randomly selecting M three-dimensional point clouds from the three-dimensional point cloud data set, and respectively calculating the distances d_n between the M three-dimensional point clouds and the fitting plane Z, where M ≥ 1000 and M, N are integers; preferably 8000 ≤ M ≤ 12000. The larger M is, the smaller the influence of noise and the more accurate the result, at a corresponding cost in speed; experiments show that M = 10000 gives good accuracy and efficiency;
S033, counting the number Q of three-dimensional point clouds whose distance d_n is smaller than the threshold D, and calculating the false point ratio R = Q/M;
S034, repeating steps S031-S033 a preset number of times K, and calculating the average value Avg(R) of the false point ratios; K satisfies 3000 ≤ K ≤ 10000, and experiments show that K = 5000 gives good accuracy and efficiency.
S035, judging whether the average value Avg(R) is within the living body detection standard threshold R (the threshold value itself excluded): if so, the face is judged to be a living body; otherwise, it is not. After the judgment is completed, the display screen outputs the result of the living body detection.
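Steps S031-S035 can be sketched as follows (a minimal illustration of the multi-plane fitting loop, not the patented implementation; the distance threshold D and the standard threshold R are not fixed numerically by the patent, so `dist_thresh` and `ratio_thresh` below are assumed placeholder values):

```python
import numpy as np

def liveness_by_plane_fitting(points, N=10, M=10000, K=5000,
                              dist_thresh=5.0, ratio_thresh=0.8):
    """Multi-plane-fitting liveness test sketched from steps S031-S035.

    points: (P, 3) face point cloud. dist_thresh (the threshold D, in
    the cloud's length unit) and ratio_thresh (the standard threshold R)
    are illustrative values. Returns True (living body) when the
    averaged false point ratio stays below ratio_thresh, i.e. the cloud
    is NOT well explained by a single plane.
    """
    rng = np.random.default_rng()
    ratios = []
    for _ in range(K):
        # S031: least-squares plane z = A*x + B*y + C through N points
        sample = points[rng.choice(len(points), size=N, replace=False)]
        G = np.column_stack([sample[:, 0], sample[:, 1], np.ones(N)])
        (A, B, C), *_ = np.linalg.lstsq(G, sample[:, 2], rcond=None)
        # S032: distances d_n of M random points to the fitted plane
        probe = points[rng.choice(len(points), size=min(M, len(points)),
                                  replace=False)]
        num = np.abs(A * probe[:, 0] + B * probe[:, 1] - probe[:, 2] + C)
        d_n = num / np.sqrt(A * A + B * B + 1.0)
        # S033: false point ratio R = Q / M
        ratios.append(np.mean(d_n < dist_thresh))
    # S034-S035: average over K rounds and compare with the standard threshold
    return float(np.mean(ratios)) < ratio_thresh
```

A flat cloud from a photo or screen yields ratios near 1 and is rejected; a genuinely three-dimensional face yields small ratios and passes.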
In addition, the invention provides a computer-readable medium and a human face living body detection system based on three-dimensional imaging assistance. The computer-readable medium, which is part of the detection system, stores a plurality of instructions adapted to be loaded by a processor to execute the steps of the human face living body detection method described above. In some embodiments the processor may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data processing chip; the processor typically controls the overall operation of the computing device and is here configured to execute the program code stored in the computer-readable medium or to process data.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
In conclusion, the invention realizes human face living body detection by binocular stereo matching. The method is fast and little affected by the application environment or by the behavior of the living body itself; the multi-plane fitting algorithm it adopts is adaptable and robust, and its parameters can be adjusted to the requirements of a specific application scene to improve speed and accuracy. It can effectively resist common attack means such as photos, videos, occlusions and screen replay, and solves the problems of existing living body detection systems: long sampling time, low speed, and detection errors caused by slight movements of the subject or by changes in ambient light or distance.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (10)

1. A human face living body detection method based on three-dimensional imaging assistance is characterized by comprising the following steps:
s01, acquiring a face image shot by a binocular camera;
s02, generating a three-dimensional point cloud data set of the face according to the acquired face image, wherein the three-dimensional point cloud data comprises three-dimensional coordinates corresponding to three-dimensional point clouds on the face image;
s03, performing living body detection, including:
S031, randomly selecting N three-dimensional point clouds from the three-dimensional point cloud data set and performing plane fitting to obtain a fitting plane Z, where N > 3;
S032, randomly selecting M three-dimensional point clouds from the three-dimensional point cloud data set, and respectively calculating the distances d_n between the M three-dimensional point clouds and the fitting plane Z, where M ≥ 1000;
S033, counting the number Q of three-dimensional point clouds whose distance d_n is smaller than a threshold D, and calculating the false point ratio R = Q/M;
S034, repeating steps S031-S033 a preset number of times K, and calculating the average value Avg(R) of the false point ratios;
S035, judging whether the average value Avg(R) is within a living body detection standard threshold R: if so, the face is judged to be a living body; otherwise, it is not.
2. The method for detecting the living human face based on the three-dimensional imaging assistance of claim 1, wherein in step S031, N satisfies: 5 < N < 20.
3. The method for detecting the living human face based on the three-dimensional imaging assistance of claim 1, wherein in step S032, M satisfies: 8000 ≤ M ≤ 12000.
4. The method for detecting the living human face based on the three-dimensional imaging assistance of claim 1, wherein in step S034, the predetermined number K satisfies: 3000 ≤ K ≤ 10000.
5. The method for detecting the living human face based on the three-dimensional imaging assistance as claimed in any one of claims 1 to 4, wherein the step S02 comprises:
s021, epipolar line correction: correcting the binocular image acquired in the step S01 so that the matching points in the left and right images are on the same line;
s022, stereo matching: carrying out stereo matching on the corrected binocular images to generate a disparity map of the three-dimensional point cloud;
S023, reconstructing three-dimensional point cloud: obtaining a depth map of the three-dimensional point cloud from the disparity map produced by stereo matching, thereby obtaining the three-dimensional coordinates (X, Y, Z) of all the three-dimensional point clouds:

Z = f·T / d,  X = x_l·T / d,  Y = y_l·T / d,

where f represents the focal length of the binocular camera, T represents the baseline length between the two cameras, x_l and x_r are respectively the abscissas of a pair of matching points, y_l is the ordinate of the matching point in the left image, and d = x_l − x_r is the disparity.
6. The human face living body detection method based on the three-dimensional imaging assistance as claimed in claim 5,
the step S02 further includes: s020, calibrating a camera, namely calibrating the binocular camera to obtain internal parameters and external parameters of the binocular camera;
in the step S021, distortion correction and epipolar parallel correction are performed on the binocular image by using a Bouguet epipolar line correction method according to the internal parameters and the external parameters obtained in the step S020.
7. The three-dimensional imaging assistance-based face in-vivo detection method according to claim 6, wherein the internal parameters comprise principal points of left and right cameras, distortion vectors of the left and right cameras, and the external parameters comprise a rotation matrix and a translation matrix between the left and right cameras.
8. The three-dimensional imaging assistance-based face in-vivo detection method according to claim 6, wherein in the step S022, a disparity map is obtained by performing stereo matching using a BM algorithm or an SGBM algorithm.
9. A computer-readable medium, wherein a plurality of instructions are stored in the computer-readable medium, and the instructions are adapted to be loaded by a processor and execute the steps of the method for detecting a living human face based on three-dimensional imaging assistance according to any one of claims 1 to 8.
10. A face liveness detection system based on three-dimensional imaging assistance, comprising the computer-readable medium of claim 9 and a processor adapted to implement the instructions.
CN201911374651.6A 2019-12-27 2019-12-27 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance Active CN111160233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911374651.6A CN111160233B (en) 2019-12-27 2019-12-27 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911374651.6A CN111160233B (en) 2019-12-27 2019-12-27 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance

Publications (2)

Publication Number Publication Date
CN111160233A true CN111160233A (en) 2020-05-15
CN111160233B CN111160233B (en) 2023-04-18

Family

ID=70558422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374651.6A Active CN111160233B (en) 2019-12-27 2019-12-27 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance

Country Status (1)

Country Link
CN (1) CN111160233B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398886A (en) * 2008-03-17 2009-04-01 杭州大清智能技术开发有限公司 Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
US20110299741A1 (en) * 2010-06-08 2011-12-08 Microsoft Corporation Distinguishing Live Faces from Flat Surfaces
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system
CN108171211A (en) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 Biopsy method and device
CN108319901A (en) * 2018-01-17 2018-07-24 百度在线网络技术(北京)有限公司 Biopsy method, device, computer equipment and the readable medium of face
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN110059579A (en) * 2019-03-27 2019-07-26 北京三快在线科技有限公司 For the method and apparatus of test alive, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pan Jingguang; Zhao Yimin; Wu Guofeng; Su Fang; Zhang Hui: "Three-dimensional reconstruction of a digital facial model with a unilateral orbital defect", Journal of Oral and Maxillofacial Prosthetics *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365582A (en) * 2020-11-17 2021-02-12 电子科技大学 Countermeasure point cloud generation method, storage medium and terminal
CN113158892A (en) * 2021-04-20 2021-07-23 南京大学 Face recognition method irrelevant to textures and expressions
CN113158892B (en) * 2021-04-20 2024-01-26 南京大学 Face recognition method irrelevant to textures and expressions

Also Published As

Publication number Publication date
CN111160233B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111091063B (en) Living body detection method, device and system
CN111652086B (en) Face living body detection method and device, electronic equipment and storage medium
CN108764071B (en) Real face detection method and device based on infrared and visible light images
CN107392958B (en) Method and device for determining object volume based on binocular stereo camera
US9031315B2 (en) Information extraction method, information extraction device, program, registration device, and verification device
CN112150528A (en) Depth image acquisition method, terminal and computer readable storage medium
CN111160232B (en) Front face reconstruction method, device and system
WO2021063128A1 (en) Method for determining pose of active rigid body in single-camera environment, and related apparatus
CN110956114A (en) Face living body detection method, device, detection system and storage medium
US10853631B2 (en) Face verification method and apparatus, server and readable storage medium
WO2016070300A1 (en) System and method for detecting genuine user
CN112198963A (en) Immersive tunnel type multimedia interactive display method, equipment and storage medium
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
KR20150031085A (en) 3D face-modeling device, system and method using Multiple cameras
CN111046845A (en) Living body detection method, device and system
CN115035235A (en) Three-dimensional reconstruction method and device
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112580434A (en) Face false detection optimization method and system based on depth camera and face detection equipment
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN111833441A (en) Face three-dimensional reconstruction method and device based on multi-camera system
CN112164099A (en) Self-checking and self-calibrating method and device based on monocular structured light
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN111383255A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant