CN108875468B - Living body detection method, living body detection system, and storage medium


Info

Publication number
CN108875468B
Authority
CN
China
Prior art keywords
image
living body
angle
preset
body detection
Legal status
Active
Application number
CN201710439294.1A
Other languages
Chinese (zh)
Other versions
CN108875468A
Inventor
范浩强
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Application filed by Beijing Kuangshi Technology Co Ltd and Beijing Megvii Technology Co Ltd
Priority claimed from application CN201710439294.1A
Publication of CN108875468A
Application granted
Publication of CN108875468B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

A living body detection method, a living body detection system, and a storage medium. The living body detection method includes the following steps: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that living body detection has succeeded when the three-dimensional information is determined to match the three-dimensional shape of a real human face. The method performs living body detection based on face images captured at different shooting angles, which reduces the cost of living body detection, simplifies and speeds up its decision process, and improves the safety and reliability of identity verification systems based on face recognition.

Description

Living body detection method, living body detection system, and storage medium
Technical Field
Embodiments of the present disclosure relate to the field of artificial intelligence, and in particular to a living body detection method, a living body detection system, and a storage medium.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information, and it has the advantages of requiring no forced cooperation and no physical contact. With improvements in the accuracy of face recognition algorithms and the development of large-scale parallel computing, applications based on face recognition have gradually been commercialized, and face recognition systems are increasingly used in security, finance, e-commerce, and other scenarios requiring identity verification, such as remote bank account opening, access control systems, and remote transaction verification.
To improve the safety and reliability of face recognition systems, living body detection is gradually becoming one of their core technologies. Living body detection determines whether the detected object is a living individual rather than an inanimate object such as a photo or a video, and can thereby prevent malicious attackers from mounting attacks with recorded videos, photographs, 3D face models, forged masks, and the like.
Disclosure of Invention
At least one embodiment of the present disclosure provides a living body detection method, a living body detection system, and a storage medium. The living body detection method performs living body detection based on face images captured at different shooting angles, which reduces the cost of living body detection, simplifies and speeds up its decision process, and improves the safety and reliability of identity verification systems based on face recognition.
At least one embodiment of the present disclosure provides a living body detection method, including: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that living body detection has succeeded when the three-dimensional information is determined to match the three-dimensional shape of a real human face.
At least one embodiment of the present disclosure also provides a living body detection system, including an image acquisition device, at least one processor, and at least one memory. The image acquisition device is used to acquire images; the memory stores a computer program adapted to be executed by the processor, and the computer program, when executed by the processor, performs the following steps: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that living body detection has succeeded when the three-dimensional information is determined to match the three-dimensional shape of a real human face.
At least one embodiment of the present disclosure also provides a storage medium storing a computer program adapted to be executed by a processor, the computer program, when executed by the processor, performing the following steps: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that living body detection has succeeded when the three-dimensional information is determined to match the three-dimensional shape of a real human face.
It is to be understood that both the foregoing general description and the following detailed description of the present disclosure are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
FIG. 1 is a schematic flowchart of a living body detection method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating timeout determination in one example of a living body detection method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating quality determination in one example of a living body detection method according to an embodiment of the present disclosure;
FIG. 4 is a schematic block diagram of a living body detection system according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components have been omitted from the present disclosure.
At present, face recognition technology is gradually attracting attention in the field of biometric identification and has become a research hotspot in that field. Face recognition technology can be applied in many fields; in applications requiring a high security level, besides verifying that the face of the detected object matches the face information stored in a database, it is first necessary to determine whether the detected object is a living body. That is, the face recognition system needs to be able to prevent malicious attackers from mounting attacks using photos, videos, 3D face models, or forged masks. Living body detection has therefore become a research hotspot within face recognition, and it can improve the safety and reliability of face recognition.
During research, the inventor noticed that existing living body detection technology mainly relies on two detection modes. One mode issues instructions to the user through a terminal, such as opening the mouth, blinking, or shaking the head left and right, and then determines by image processing whether the detected object is a living body. The other mode uses special hardware for the living body judgment; for example, a 3D face image carrying a structured-light pattern is collected by a binocular camera, and the living body judgment is then made according to the degree of sub-surface scattering of the structured-light pattern on the 3D face. However, the former approach requires defining multiple face-interaction modes and training complex classifiers, while the latter increases hardware cost; both are inconvenient in practical applications.
At least one embodiment of the present disclosure provides a living body detection method, a living body detection system, and a storage medium. The living body detection method includes: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that living body detection has succeeded when the three-dimensional information is determined to match the three-dimensional shape of a real human face. The method performs living body detection based on face images captured at different shooting angles, which reduces the cost of living body detection, simplifies and speeds up its decision process, and improves the safety and reliability of identity verification systems based on face recognition.
It should be noted that, in the embodiments of the present disclosure, the plane in which the face lies may be defined as a first plane, and the image-forming surface of the image acquisition device (for example, the imaging plane of a camera) may be defined as a second plane; when the first plane and the second plane are parallel to each other, the deflection angle of the face is 0°. In the first plane, the line connecting the person's two eyes is defined as a first straight line, and the perpendicular to the first straight line passing midway between the two eyes is defined as a second straight line. The "left or right deflection angle" is the angle through which the face rotates leftward or rightward about the second straight line.
Several embodiments of the present disclosure are described in detail below, but the present disclosure is not limited to these specific embodiments.
Example one
This embodiment provides a living body detection method. Fig. 1 shows a schematic flowchart of the living body detection method provided in this embodiment.
For example, as shown in fig. 1, one example of the living body detection method of the embodiment may include the following operations:
s1: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle;
s2: constructing three-dimensional information of the detected face based on the first image and the second image;
s3: and prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face.
The living body detection method provided by this embodiment can perform living body detection based on face images captured at different shooting angles, which reduces the cost of living body detection, simplifies and speeds up its decision process, and improves the safety and reliability of identity verification systems based on face recognition.
For example, the only external input required by the living body detection method provided by this embodiment is an ordinary face image, and interaction with the detected object takes place on a display screen, so no special hardware is needed and the cost of living body detection can be reduced. The living body detection method can be deployed at the face-image acquisition end; for example, in the security field it can be deployed in access control systems and face-recognition-based identity recognition systems, and in the financial field it can be deployed at personal terminals such as smartphones, tablet computers, and personal computers.
For example, the living body detection method may also be deployed in a distributed manner across a server side (or cloud side) and a face-image acquisition end. For example, the server (or cloud) may issue a control signal that is transmitted to the face-image acquisition end to trigger living body detection; the acquisition end then issues indication information according to the received control signal, acquires the first image and the second image of the detected face at different shooting angles, performs the living body verification based on the two images, and finally transmits the verification result to the server (or cloud). As another example, the indication information may be issued at the server (or cloud) and transmitted to the acquisition end; the acquisition end acquires the first image and the second image at different shooting angles and transmits them to the server (or cloud), which then performs the living body verification based on the received images and, if necessary, transmits the verification result back to the acquisition end. A sketch of this second variant follows.
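As a rough illustration of the second deployment variant (server-side verification), the exchange might look as follows; the JSON message format and the send, recv, and verify callables are invented for this sketch and are not part of the disclosure.

```python
# Hedged sketch of the server-side-verification deployment described above.

import json

def server_side_verification(send, recv, verify):
    # server (or cloud): ask the acquisition end for the two images
    send(json.dumps({"type": "indication", "angles": ["first", "second"]}))
    # acquisition end replies with the two face images (encoding not shown)
    reply = json.loads(recv())
    # server performs the living body verification on the received images
    alive = bool(verify(reply["first_image"], reply["second_image"]))
    # if necessary, return the verification result to the acquisition end
    send(json.dumps({"type": "result", "alive": alive}))
    return alive
```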
It should be noted that the control signal may be issued automatically by the server (or cloud), or a background user may issue it manually.
For example, the first image may be a frontal view of the measured face and the second image may be a lateral view of the measured face. The first image may include one front view or a plurality of front views; accordingly, the second image may include one side view or may include multiple side views.
For example, the number of the first images and/or the second images may be preset, or may be randomly generated by the face image acquisition end or the server end during the living body examination. For example, it may be preset that the first image includes only one front view and the second image includes only one side view.
For example, the side view may include a left side view or a right side view of the detected face, and may also include an upper left or lower left side view, an upper right or lower right side view, and the like.
For example, the first shooting angle is different from the second shooting angle. The first shooting angle may be a left or right deflection angle of the detected face within a first preset shooting-angle range, and the second shooting angle may be a left or right deflection angle of the detected face within a second preset shooting-angle range. For example, the first preset shooting-angle range may be 0° to 5°, i.e., the first shooting angle may be an angle between 0° and 5° through which the detected face rotates leftward or rightward about the second straight line; the second preset shooting-angle range may be 10° to 30°, i.e., the second shooting angle may be an angle between 10° and 30° through which the detected face rotates leftward or rightward about the second straight line.
As another example, the first shooting angle may be an angle between -5° and 5° through which the detected face rotates about the first straight line, and the second shooting angle may be an angle between 10° and 30° or between -30° and -10° through which the detected face rotates about the first straight line. The first shooting angle and/or the second shooting angle may also combine rotations of the detected face about both the first straight line and the second straight line.
It should be noted that, if the deflection angle is defined as positive when the detected face rotates clockwise around the first straight line, the deflection angle is defined as negative when the detected face rotates counterclockwise around the first straight line.
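The angle-range checks above can be made concrete with a small sketch; the numeric ranges are the illustrative values from the text, and folding both rotation directions into an unsigned magnitude is an assumption made here for brevity.

```python
# Sketch of the preset shooting-angle range checks. angle_deg is the face's
# rotation about the relevant straight line, in degrees, signed per the
# convention noted above.

def in_first_range(angle_deg):
    # first preset shooting-angle range: within 5 degrees of frontal
    return abs(angle_deg) <= 5.0

def in_second_range(angle_deg):
    # second preset shooting-angle range: 10 to 30 degrees to either side,
    # i.e. 10..30 or -30..-10 under the signed convention
    return 10.0 <= abs(angle_deg) <= 30.0
```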
For example, the first image and the second image may be grayscale images or color images.
For example, the information that living body detection has succeeded can be fed back to the detected object in the form of text and/or sound; meanwhile, it can also be fed back to the background server (or cloud).
For example, one example of the living body detection method provided by this embodiment may further include gaze living body detection. In operation S3, the gaze angle of the human eyes may be combined with the constructed three-dimensional information of the detected face to perform living body detection; gaze living body detection can effectively prevent attacks that use a 3D face model, further improving the success rate of living body detection.
For example, in one example, gaze living body detection may include the following operations:
s41: detecting a first gaze angle of the human eye in the first image and a second gaze angle of the human eye in the second image;
s42: judging whether the first sight angle is within a first preset sight angle range or not and whether the second sight angle is within a second preset sight angle range or not, and determining that the sight living body detection is passed under the condition that the first sight angle is within the first preset sight angle range and the second sight angle is within the second preset sight angle range;
s43: and prompting that the living body detection is successful under the condition that the sight living body detection is determined to pass and the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face.
For example, the first preset gaze-angle range and the second preset gaze-angle range may be preset; when the first gaze angle of the eyes in the first image is determined to be within the first preset gaze-angle range and the second gaze angle of the eyes in the second image is within the second preset gaze-angle range, it is prompted that gaze living body detection has passed.
For example, the first gaze angle and the second gaze angle may represent the angle between the direction of the eyes' gaze and the normal of the plane in which the face lies. In the first image and the second image, the gaze angle can be determined from the position of the eyeball within the eye socket. For example, when the eyeball is at the midpoint of the socket, the gaze angle may be defined as 0°; along the direction of the first straight line, movement of the eyeball to the left or right indicates a change in gaze angle. For example, if the eyeball moves leftward the gaze angle becomes negative, and if it moves rightward the gaze angle becomes positive.
For example, a model for estimating the gaze angle may be trained in advance. When a neural network is used, it is trained on the gaze angles of a large number of face images so as to learn, for faces within different deflection-angle ranges, the correspondence between gaze angle and eyeball position. After the first image and the second image are obtained, the feature points of the eyes are extracted by the neural network and the eyeball positions are predicted, thereby determining the gaze angle. It should be noted that other methods (e.g., deep learning algorithms) may also be used to learn the correspondence between gaze range and eyeball position; this embodiment does not limit this.
For example, the first preset gaze-angle range may be -5° to 5°, and the second preset gaze-angle range may be 10° to 30° or -30° to -10°.
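Operations S41 and S42 can be sketched as below, assuming a hypothetical estimate_gaze function that returns the gaze angle in degrees under the sign convention just described; the numeric ranges are the illustrative values above.

```python
# Sketch of gaze living body detection (S41-S42). estimate_gaze is a
# hypothetical gaze-angle estimator, e.g. the neural network described above.

def gaze_liveness(first_image, second_image, estimate_gaze):
    g1 = estimate_gaze(first_image)    # S41: first gaze angle (degrees)
    g2 = estimate_gaze(second_image)   # S41: second gaze angle (degrees)
    in_first = -5.0 <= g1 <= 5.0                             # first preset range
    in_second = 10.0 <= g2 <= 30.0 or -30.0 <= g2 <= -10.0   # second preset range
    return in_first and in_second      # S42: passes only if both ranges hold
```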
For example, in operation S1, acquiring the first image and the second image of the detected face may include the following operations:
s11: sending a first instruction to acquire a first image;
s12: and sending a second instruction to acquire a second image.
For example, the first instruction and the second instruction may be generated by a face image acquisition side or a server side (or a cloud side). The first and second instructions may be generated randomly or may be generated in a certain order. For example, a first instruction is generated first, and then a second instruction is generated; or first generate the second instruction and then the first instruction. In the description of the embodiments of the present disclosure, a description is given taking as an example that the first instruction is generated first and then the second instruction is generated.
For example, operation S11 may include: prompting to acquire a first indication of a first image; detecting a first trigger operation, starting the image acquisition equipment according to the first trigger operation, detecting a second trigger operation, shooting according to the second trigger operation, detecting a third trigger operation, and taking a shot image as the first image according to the third trigger operation.
For example, the first instruction may include information of human-computer interaction such as a first instruction, a first trigger operation, a second trigger operation, and a third trigger operation.
For example, the first indication may include information prompting the subject to take a frontal view, such as information prompting the type of angle (i.e., the first photographing angle) at which the image is taken and a photographing request prompt.
For example, in one example, when the display screen is used as the interaction means, after the first instruction is issued, a first instruction may be displayed on the display screen to prompt the tested object to shoot a front view of a human face.
The display screen may also display a first icon corresponding to the first trigger operation; for example, the first icon may display "start". When the detected object clicks the first icon, the first trigger operation is detected as valid and the image acquisition device is started. A photographing frame then appears on the display screen, for example, to indicate that the detected object can begin shooting the face image; the face may also be shown inside the frame, so that the detected object can adjust the shooting position, shooting angle, and so on of the face according to the first indication, thereby obtaining a high-quality face image.
The display screen may further display a second icon corresponding to the second trigger operation; for example, the second icon may display "take a picture". When the detected object clicks the second icon, the second trigger operation is detected as valid and the image acquisition device captures a face image.
A third icon corresponding to the third trigger operation may also be displayed on the screen; for example, it may include two icons, "confirm" and "cancel", and the captured face image may be shown on the display screen at this point. The detected object can judge whether the captured face image is acceptable: if not, clicking the "cancel" icon returns to the second-trigger-operation state so the image can be captured again; if acceptable, clicking the "confirm" icon causes the third trigger operation to be detected as valid, and the captured face image can be used as the first image.
For example, operation S12 may include: prompting for a second indication to acquire a second image; detecting a fourth trigger operation, starting the image acquisition equipment according to the fourth trigger operation, detecting a fifth trigger operation, shooting according to the fifth trigger operation, detecting a sixth trigger operation, and taking the shot image as a second image according to the sixth trigger operation.
For example, the second instruction may include information of human-computer interaction such as a second instruction, a fourth trigger operation, a fifth trigger operation, and a sixth trigger operation.
For example, the second indication may include information prompting the subject to take a side view, such as information prompting the type of angle (i.e., the second photographing angle) at which the image is taken and information prompting a photographing request.
For example, the fourth trigger operation, the fifth trigger operation, and the sixth trigger operation may refer to the descriptions of the first trigger operation, the second trigger operation, and the third trigger operation, respectively, and the man-machine interaction manner of the fourth trigger operation, the fifth trigger operation, and the sixth trigger operation may be the same as or different from the man-machine interaction manner of the first trigger operation, the second trigger operation, and the third trigger operation, respectively, and is not limited thereto.
It should be noted that the first trigger operation, the second trigger operation, the third trigger operation, the fourth trigger operation, the fifth trigger operation, and the sixth trigger operation may be implemented by touch control, a key, and the like, so as to implement human-computer interaction.
It should be noted that the first image and/or the second image may also be obtained in other human-computer interaction manners, which is not limited in this embodiment.
For another example, the first indication and the second indication may further include information prompting the subject to adjust the angle of the line of sight, that is, prompting the subject to aim the line of sight at an image acquisition device (e.g., a camera) when taking a picture.
It should be noted that the first instruction and/or the second instruction may further include information for prompting the detected object to remove a blocking object on the detected face, where the blocking object may include glasses, a mask, and the like.
For example, the first instruction and/or the second instruction may use one or more of images, text, voice, and the like to prompt the detected object to adjust the shooting angle, shooting distance, and so on between the detected face and the image acquisition device, so as to acquire a first image and/or second image of the detected face at the specified shooting angle and, at the same time, with good image quality.
For example, images, character information, and the like may be presented in the form of a pop-up window or the like, and audio information, and the like may be presented in the form of a speaker or the like.
For example, the first image and/or the second image may be preprocessed to facilitate detecting the face image within them. For example, when the first image and/or the second image are photographs, the preprocessing may include operations such as scaling, gamma correction, image enhancement, or noise-reduction filtering.
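A minimal preprocessing sketch using OpenCV is shown below; it assumes an 8-bit color input image, and the gamma value and output size are illustrative placeholders, not values from the disclosure.

```python
import cv2
import numpy as np

def preprocess(img, gamma=1.2, size=(256, 256)):
    img = cv2.resize(img, size)                     # scaling
    # gamma correction via a 256-entry lookup table
    table = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype("uint8")
    img = cv2.LUT(img, table)
    img = cv2.fastNlMeansDenoisingColored(img)      # noise-reduction filtering
    return img
```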
For example, when the detected human face is in a poor light or dark environment, the living body detection method may further include illuminating the detected human face with a light source so that the quality of the acquired first image and/or second image is good. The light source may for example be a programmable LED light source, which may emit visible light; besides using a separately arranged light source, the light emitted by the display screen can be used for illumination, for example, the brightness of the display screen can be increased as required to make up for the shortage of ambient light.
For example, in one example, the living body detection method may further include a timeout determination. For example, the timeout determination may include: prompting that living body detection has failed if the first image is not acquired within a first preset time after the first instruction is issued; and/or prompting that living body detection has failed if the second image is not acquired within a second preset time after the second instruction is issued. In this example, a timeout determination is performed after each instruction is issued; if the image of the detected face is not acquired within the preset time, failure of living body detection is prompted, which improves the efficiency of living body detection.
For example, fig. 2 shows a flowchart of the timeout determination in one example of the living body detection method.
S110: a first instruction is issued.
After the first instruction is issued, operation S111 is performed: determining whether the first preset time has been exceeded. Operation S111 may determine whether, within the first preset time, no image was acquired, or only images containing no recognizable face image were acquired. An image containing no recognizable face image may mean that the acquired image contains no face at all (e.g., an image of a flower was acquired), or that the face in the acquired image cannot be recognized (e.g., the face is severely deformed, or the image is too blurred to recognize).
In case it is determined that the first preset time is exceeded, operation S4 is performed: and prompting the failure of the living body detection. In case it is determined that the first preset time is not exceeded, operation S112 is performed: a first image is acquired and stored.
After acquiring the first image, operation S120 is performed: a second instruction is issued.
After the second instruction is issued, operation S121 is performed: determining whether the second preset time has been exceeded. Operation S121 may determine whether no image was acquired within the second preset time; for details, refer to the description of operation S111, which is not repeated here.
In case it is determined that the second preset time is exceeded, operation S4 is performed: and prompting the failure of the living body detection. In case it is determined that the second preset time is not exceeded, operation S122 is performed: a second image is acquired and stored. After the first image and the second image are acquired, operation S2 is performed.
For example, the first preset time and the second preset time may be the same or different. For example, each may range from 10 seconds to 30 seconds. Both may be 10 seconds, for instance; that is, if the currently issued instruction is the first instruction and a valid first image is not acquired within 10 seconds, living body detection has failed. It should be noted that the first preset time and the second preset time are not limited to the above ranges and may be set according to the actual situation.
For example, the first preset time and the second preset time may be set automatically in advance and remain unchanged; alternatively, they may be generated randomly during living body detection, i.e., a time range for the preset times is set in advance, and the preset times are then drawn at random from that range during detection. For example, in operation S111 the first preset time is set automatically and randomly when the first instruction is issued, and in operation S121 the second preset time is set automatically and randomly when the second instruction is issued. It should be noted that the first preset time and the second preset time may also be set manually; this is not limited here.
For example, when the first instruction and the second instruction are issued, the detected object may simultaneously be informed of the first preset time and the second preset time. For example, the first preset time and/or the second preset time may be displayed on the display screen with a countdown, so that the detected object captures the first image and/or the second image within the specified time. As another example, the detected object may be alerted as the first preset time and/or the second preset time is about to run out, to avoid a living body detection failure caused by timeout.
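The timeout loop of FIG. 2 can be sketched as follows; issue_instruction and capture are hypothetical callables (capture returns an image or None), and the 10-second default is one of the illustrative values above.

```python
import time

# Sketch of the timeout determination (S110/S120 then S111/S121 in FIG. 2).

def acquire_with_timeout(issue_instruction, capture, preset_seconds=10):
    issue_instruction()                          # S110 / S120: issue instruction
    deadline = time.monotonic() + preset_seconds
    while time.monotonic() < deadline:           # S111 / S121: timeout check
        image = capture()                        # hypothetical: image or None
        if image is not None:
            return image                         # S112 / S122: acquire and store
        time.sleep(0.1)                          # poll until the deadline
    raise TimeoutError("living body detection failed")   # S4
```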
For example, in one example, the living body detection method may further include quality determination, which may include: after the first image is acquired, performing quality determination on it and, if it does not meet a first quality requirement, reissuing the first instruction and reacquiring the first image; and/or, after the second image is acquired, performing quality determination on it and, if it does not meet a second quality requirement, reissuing the second instruction and reacquiring the second image.
For example, the quality determination includes at least one of face detection, lighting determination, face-angle determination, and screen-recapture determination (i.e., determining whether the image was re-shot from a screen) for the first image and/or the second image.
For example, the first quality requirement may include at least one of face pose, lighting level, image contrast, face key-point positions, and the like; the second quality requirement may likewise include at least one of face pose, lighting level, image contrast, face key-point positions, and the like.
It should be noted that the first quality requirement and the second quality requirement may be the same or different. For example, the first quality requirement may include determining whether the first image is in a front view, and the second quality requirement may include determining whether the second image is in a side view.
For example, the quality determination may be performed at a face image acquisition end, or may be performed at a server end (or a cloud end).
For example, in one example, the quality determination may further include: prompting that the living body detection fails under the condition that the number of the acquired first images which do not meet the first quality requirement is determined to exceed a first preset number; or prompting that the living body detection fails under the condition that the number of the acquired second images which do not meet the second quality requirement is determined to exceed a second preset number.
For example, fig. 3 shows a flowchart of quality judgment in one example of the living body detecting method.
S11: and sending a first instruction to acquire a first image.
After the first image is acquired, operation S13 is performed: performing quality determination on the first image, i.e., determining whether the first image meets the first quality requirement. If it does not, operation S11 may be executed again to reissue the first instruction and reacquire the first image.
If the first image does not meet the first quality requirement, operation S14 may also be performed: determining whether the number of first images failing the first quality requirement exceeds a first preset number. Operation S14 may, for example, use a first counter that is cleared before living body detection begins. Each time the first image fails the first quality requirement, the first counter is incremented. If the value recorded by the first counter exceeds the first preset number, i.e., the number of acquired first images failing the first quality requirement exceeds the first preset number, operation S4 is performed: prompting that living body detection has failed. If the value does not exceed the first preset number, no action need be taken, or the flow may return to operation S11.
In case the first image meets the first quality requirement, then operation S12 is performed: and sending a second instruction to acquire a second image.
After the second image is acquired, operation S15 is performed: performing quality determination on the second image, i.e., determining whether the second image meets the second quality requirement. If it does not, operation S12 is executed again to reissue the second instruction and reacquire the second image.
For example, if the second image does not meet the second quality requirement, operation S16 may also be performed: determining whether the number of second images failing the second quality requirement exceeds a second preset number. Operation S16 may, for example, use a second counter that is cleared before living body detection begins. Each time the second image fails the second quality requirement, the second counter is incremented. If the value recorded by the second counter exceeds the second preset number, operation S4 is performed: prompting that living body detection has failed. If the value does not exceed the second preset number, no action need be taken, or the flow may return to operation S12.
In case the first image meets the first quality requirement and the second image meets the second quality requirement, operation S2 is performed.
For example, a background user may manually clear the first counter and the second counter, or may automatically clear the first counter and the second counter while issuing the first instruction and the second instruction.
For example, the first counter and the second counter may be the same counter or different counters. When they are the same counter, the counter may be cleared when operation S12 is performed. The counter may be, for example, an up counter or an up-down counter.
For example, the first predetermined number and the second predetermined number may be the same or different. For example, the first predetermined number may range from 5 to 10, and the second predetermined number may range from 5 to 10. The first preset number and the second preset number may each be, for example, 6.
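The quality-judgment loop of FIG. 3 can be sketched for the first image as below (the second image is handled the same way with its own counter); meets_quality is a hypothetical predicate for the first quality requirement, and the default preset count of 6 is the illustrative value above.

```python
# Sketch of the quality determination loop (S11, S13, S14, S4 in FIG. 3).

def acquire_with_quality(issue_instruction, capture, meets_quality,
                         preset_count=6):
    failures = 0                       # counter, cleared before detection starts
    while True:
        issue_instruction()            # S11: (re)issue the instruction
        image = capture()
        if meets_quality(image):       # S13: quality determination
            return image
        failures += 1                  # S14: count the non-conforming image
        if failures > preset_count:    # exceeds the preset number
            raise RuntimeError("living body detection failed")   # S4
```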
For example, in one example, the quality determination may further include: when the first image does not meet the first quality requirement, issuing a first error prompt and indicating which part of the first quality requirement was not met; and/or, when the second image does not meet the second quality requirement, issuing a second error prompt and indicating which part of the second quality requirement was not met.
For example, in one example, the first error prompt and/or the second error prompt may be issued in the form of a pop-up window on the display screen, while the type of non-compliant first quality requirement and/or the type of non-compliant second quality requirement is displayed in the pop-up window. For example, text information, picture information, video information, or the like may be displayed in the popup window.
For example, the first error prompt and the second error prompt may be the same or different.
For example, the first error prompt, the second error prompt, the type of the first quality requirement, and the type of the second quality requirement may be one or more of images, characters, sounds, and the like, so as to perform human-computer interaction.
For example, in the examples shown in figs. 2 and 3, operation S4 may also prompt, along with the failure of living body detection, the reason for the failure. Reasons may include, for example, exceeding a preset time or an image failing the quality requirements. An image may fail the quality requirements because, for example, the face region is too bright or too dark, no face image is detected, the face image is not a front view, or the face image is not a side view. It should be noted that, when the number of living body detection failures exceeds a predetermined count, the device using this living body detection method can be locked and an alarm signal issued, effectively preventing malicious attacks by an attacker.
For example, after the first image and the second image are obtained, operation S2 is performed. In operation S2, a pre-trained convolutional neural network (CNN) may process the first image and the second image to obtain the positions of the feature points of the detected face and to predict the depth information of the detected face, thereby constructing its three-dimensional information.
For example, methods such as SIFT feature extraction and HOG feature extraction may be used to extract the feature points of the detected face.
For example, in operation S2, the feature points of the face in the first image and/or the second image may first be located, and the three-dimensional information of the detected face then reconstructed from these feature points. The feature points may be key points of the face with strong discriminative power, for example the position coordinates of the eyes, eye corners, eyebrows, cheekbone peaks, nose, mouth, chin, and the outer contour of the face. The feature points may be located with a traditional face key-point localization method based on a parametric shape model: a parametric model is learned from the appearance near the key points and, at run time, the key-point positions are iteratively optimized to obtain their final coordinates. Alternatively, a cascaded-regression-based method may be used, which can locate accurate face key points in real time from the input face image. As further examples, methods such as cascaded deep-learning algorithms or active-shape-model algorithms may be used. This embodiment does not limit this.
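A schematic of operation S2 is given below, with the CNN left abstract: landmark_net is assumed to map an image to (x, y) key points and depth_net to predict a depth per key point, as the text describes; neither is a real library call.

```python
import numpy as np

# Sketch of operation S2: key points plus predicted depths give 3D information.

def build_face_3d(first_image, second_image, landmark_net, depth_net):
    keypoints = landmark_net(first_image)            # (N, 2) keypoint positions
    depths = depth_net(first_image, second_image)    # (N,) predicted depths
    # combine (x, y) positions with predicted depth z into (N, 3) 3D information
    return np.column_stack([keypoints, depths])
```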
For example, after the three-dimensional information of the detected face is constructed, operation S3, i.e., the living body judgment, is performed. In operation S3, success of living body detection is prompted when the three-dimensional information is determined to match the three-dimensional shape of a real human face, and failure of living body detection is prompted when it is determined not to match.
For example, a classifier may be trained in advance using a large number of face images; the training data may be the face key-point information of N face images. After the classifier has been trained, the constructed three-dimensional information of the detected face can be input to it, and the classifier judges whether that three-dimensional information matches the three-dimensional shape of a real human face.
For example, the classifier may be a deep neural network (DNN), a convolutional neural network (CNN), a support vector machine (SVM) classifier, a Haar classifier, a k-nearest-neighbor (KNN) classifier, or the like.
For example, a matching-proportion threshold for the three-dimensional face information may be preset. Success of living body detection is prompted when the matching proportion between the constructed three-dimensional information of the detected face and the three-dimensional shape of a real face is determined to exceed the threshold; failure of living body detection is prompted when it does not exceed the threshold.
For example, matching between the three-dimensional information of the detected face and the three-dimensional shape of a real face may mean, on the one hand, that the reconstructed feature points of the detected face correspond to the feature points of a real face, i.e., the reconstructed face possesses the feature points a real face has (eyes, nose, mouth, etc.); and, on the other hand, that the relative positional relationships among the reconstructed feature points match those among the corresponding feature points of a real face. The relative positional relationships may include, for example, the relative position of the nose and mouth and the distance between the two eyes.
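One way to realize the matching-proportion test is sketched below; the tolerance and threshold values are invented placeholders, and comparing against a single canonical real-face shape is an assumption of this sketch rather than the classifier-based matching described above.

```python
import numpy as np

# Sketch of the matching-proportion check against a reference real-face shape.

def matches_real_face(face_3d, reference_3d, tolerance=0.05, threshold=0.9):
    # per-keypoint distance between reconstructed and reference shapes,
    # assuming both are (N, 3) arrays with corresponding rows
    dist = np.linalg.norm(face_3d - reference_3d, axis=1)
    ratio = float(np.mean(dist < tolerance))   # matching proportion
    return ratio > threshold                   # success when above the threshold
```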
Example two
The present embodiment provides a living body detection system. Fig. 4 shows a schematic block diagram of a living body detection system provided in the present embodiment.
For example, as shown in FIG. 4, the liveness detection system includes at least one processor 100, at least one memory 101, an image acquisition device 102, and an output device 103. These components are interconnected by a bus system 200 and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of the liveness detection system shown in FIG. 4 are exemplary only, and not limiting, as the liveness detection system may have other components and configurations as desired.
The living body detection system provided by the embodiment can be applied to a special device for living body detection, and can also be applied to a handheld terminal with a living body detection function or any other electronic device with a living body detection function. The handheld terminal can be a smart phone, a tablet computer and the like.
It should be noted that, in this embodiment, the image acquisition device 102, the output device 103, and the like are disposed at the face-image acquisition end, while the processor 100, the memory 101, and the like are disposed at the server end (or cloud end). The disclosure is not limited thereto, however: the processor 100, the memory 101, the image acquisition device 102, and the output device 103 may also all be disposed at the face-image acquisition end.
For example, the face image acquisition end can be arranged at an image acquisition end of a living body detection system which needs identity recognition, such as an access control system, a monitoring system and the like; the server end can be arranged at the personal terminal so as to realize remote control, and the server end can also be arranged at a monitoring room and other places.
For example, the image capturing device 102 may be configured to capture facial images (e.g., the first image and the second image) and transmit the facial images to the server (or the cloud), and the facial images may be stored in the memory 101 of the server (or the cloud) for use by other components of the server (or the cloud).
For example, image capture device 102 may be a camera of a smartphone, a camera of a tablet, a camera of a personal computer, or may even be a webcam.
For example, the output device 103 may output various information (e.g., image and/or sound information) to the outside (e.g., a background user or a subject). The output device 103 may include one or more of a display, a speaker, and the like. The display can prompt the interactive information in an image or text mode, and the loudspeaker can prompt the interactive information in a sound mode. The interaction information may include, for example, a first instruction, a second instruction, a first error prompt, a second error prompt, a type of the first quality requirement, a type of the second quality requirement, a success of the liveness detection, a failure of the liveness detection, and the like.
For example, the image capture device 102 and the output means 103 may be hardware, software, firmware, or any feasible combination thereof.
For example, the processor 100 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or program execution capability, such as a Graphic Processing Unit (GPU), a Field Programmable Gate Array (FPGA), or a Tensor Processing Unit (TPU), etc., and the processor 100 may control other components in the server side to perform desired functions. Also for example, the Central Processing Unit (CPU) may be an X86 or ARM architecture or the like.
For example, memory 101 may include any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, Erasable Programmable Read Only Memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer programs may be stored on the computer-readable storage medium and executed by the processor 100 to implement various functions. Various applications and various data may also be stored in the computer-readable storage medium, such as the first instruction, the second instruction, training data for the convolutional neural network, training data for the classifier, the first quality requirement, the second quality requirement, and various data used and/or generated by the applications.
For example, as shown in FIG. 4, in one example, the liveness detection system may further include an input device 105. At the face image acquisition end, the input device 105 may be a device used by the object to be tested to execute instructions, and may include a touch screen, mechanical keys, and the like. The measured object can perform information interaction with the living body detecting system through the input device 105. On the server side, the input device 105 may also be a device used by a background user to input instructions, which may include one or more of a keyboard, a mouse, a microphone, and the like. The instruction may be, for example, a first instruction, a second instruction, or the like, for prompting the subject to take a face image.
For example, the living body detection system may further include a communication device (not shown), through which the face image acquisition end and the server end (or the cloud) transmit information. The communication device may use wired transmission, such as twisted pair, coaxial cable, or optical fiber, or wireless transmission, such as a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi.
For example, in one example, the living body detection system may further include a timer, e.g., a pulse timer, an on-delay timer, or an off-delay timer. Alternatively, the memory 101 may store a timing program, and when a timing operation is required, the processor 100 may run the timing program to implement the timing function.
For example, the computer program may be run by the processor 100 to perform the steps of: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face.
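For example, these three steps can be illustrated with the following minimal, self-contained Python sketch. The landmark detector is stubbed out, and the helper names (detect_landmarks, reconstruct_depth) and the depth-variation threshold are illustrative assumptions, not the actual implementation of this disclosure:

    import numpy as np

    def detect_landmarks(image):
        # Stub: a real system would run a facial landmark detector here.
        h, w = image.shape[:2]
        rng = np.random.default_rng(0)
        return rng.uniform([0, 0], [w, h], size=(68, 2))

    def reconstruct_depth(first_pts, second_pts):
        # Toy reconstruction: horizontal disparity between the two views
        # serves as a relative depth proxy for each landmark.
        disparity = first_pts[:, 0] - second_pts[:, 0]
        return disparity - disparity.mean()

    def detect_liveness(first_image, second_image):
        # Build three-dimensional information from the two shooting angles.
        depth = reconstruct_depth(detect_landmarks(first_image),
                                  detect_landmarks(second_image))
        # A flat spoof (printed photo or screen) shows almost no depth
        # variation across landmarks; 1.0 is an arbitrary threshold.
        is_live = float(np.std(depth)) > 1.0
        print("living body detection", "succeeded" if is_live else "failed")
        return is_live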
For example, when the three-dimensional information is determined to match the three-dimensional shape of a real human face, gaze living body detection may further be performed, i.e., the gaze angle of the human eyes is detected, to improve the accuracy of the living body detection.
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: detecting a first gaze angle of the human eyes in the first image and a second gaze angle of the human eyes in the second image; judging whether the first gaze angle is within a first preset gaze angle range and whether the second gaze angle is within a second preset gaze angle range, and determining that the gaze living body detection passes when the first gaze angle is within the first preset gaze angle range and the second gaze angle is within the second preset gaze angle range; and prompting that the living body detection is successful when the gaze living body detection is determined to pass and the three-dimensional information is determined to match the three-dimensional shape of a real human face.
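For example, the gaze check reduces to two range tests, as in the short Python sketch below; the angle ranges are illustrative values, not ones specified by this embodiment:

    FIRST_GAZE_RANGE = (-5.0, 5.0)    # degrees; e.g. looking toward the camera
    SECOND_GAZE_RANGE = (10.0, 30.0)  # degrees; e.g. after turning the head

    def gaze_liveness_passes(first_angle, second_angle):
        # Both gaze angles must fall inside their preset ranges.
        lo1, hi1 = FIRST_GAZE_RANGE
        lo2, hi2 = SECOND_GAZE_RANGE
        return lo1 <= first_angle <= hi1 and lo2 <= second_angle <= hi2

    def prompt_result(gaze_ok, shape_ok):
        # Success is prompted only when both checks pass.
        print("living body detection",
              "succeeded" if gaze_ok and shape_ok else "failed")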
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: sending a first instruction to acquire a first image; and sending a second instruction to acquire a second image.
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: prompting, by a first indication, to acquire the first image; detecting a first trigger operation and starting the image capture device 102 accordingly, detecting a second trigger operation and shooting accordingly, and detecting a third trigger operation and taking the shot image as the first image accordingly; and/or prompting, by a second indication, to acquire the second image; detecting a fourth trigger operation and starting the image capture device 102 accordingly, detecting a fifth trigger operation and shooting accordingly, and detecting a sixth trigger operation and taking the shot image as the second image accordingly.
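For example, the three trigger operations can be modeled as a small event loop; the sketch below uses assumed event names ("start", "shoot", "confirm") to stand in for the actual trigger operations:

    def capture_via_triggers(events):
        # events: an iterable of trigger names arriving from the input device.
        camera_on, shot = False, None
        for event in events:
            if event == "start":                  # first/fourth trigger operation
                camera_on = True                  # start the image capture device
            elif event == "shoot" and camera_on:  # second/fifth trigger operation
                shot = "captured-frame"           # placeholder for a real frame
            elif event == "confirm" and shot:     # third/sixth trigger operation
                return shot                       # taken as the first/second image
        return None

    # Usage: capture_via_triggers(["start", "shoot", "confirm"])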
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: prompting that the living body detection fails when the first image is not acquired within a first preset time after the first instruction is issued; and/or prompting that the living body detection fails when the second image is not acquired within a second preset time after the second instruction is issued.
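For example, the timeout rule might be sketched as follows; wait_for_image stands in for the real capture callback, and the 10-second preset time is an assumed value:

    import time

    def acquire_with_timeout(wait_for_image, preset_time=10.0):
        deadline = time.monotonic() + preset_time
        while time.monotonic() < deadline:
            image = wait_for_image()   # returns None until a frame arrives
            if image is not None:
                return image
            time.sleep(0.05)           # avoid busy-waiting
        print("living body detection failed: image not acquired in time")
        return None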
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: judging the quality of the first image and, when the first image does not meet the first quality requirement, re-issuing the first instruction and re-acquiring the first image; and/or judging the quality of the second image and, when the second image does not meet the second quality requirement, re-issuing the second instruction and re-acquiring the second image.
For example, the quality judgment may include at least one of face detection, light judgment, face angle judgment, and screen recapture judgment (i.e., detecting whether the image is a re-shot of a screen).
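For example, the four judgments could be combined as in the sketch below; the thresholds and the simple brightness/contrast statistics are assumptions for illustration only:

    import numpy as np

    def judge_quality(gray, face_found, yaw_degrees):
        # gray: a grayscale image array; returns the list of failed checks.
        failures = []
        if not face_found:                        # face detection
            failures.append("no face detected")
        if not 40 <= gray.mean() <= 220:          # light judgment
            failures.append("lighting out of range")
        if abs(yaw_degrees) > 45:                 # face angle judgment
            failures.append("face angle too large")
        if gray.std() < 10:                       # crude screen-recapture cue
            failures.append("low contrast, possible screen recapture")
        return failures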
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: when the first image does not meet the first quality requirement, issuing a first error prompt and indicating the type of the unmet first quality requirement; and/or, when the second image does not meet the second quality requirement, issuing a second error prompt and indicating the type of the unmet second quality requirement.
For example, in one example, the first error prompt and/or the second error prompt may be issued as a pop-up window on the display screen, with the type of the unmet first quality requirement and/or the unmet second quality requirement displayed in the pop-up window. For example, text, picture, or video information may be displayed in the pop-up window.
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: prompting that the living body detection fails when the number of acquired first images that do not meet the first quality requirement is determined to exceed a first preset number; and/or prompting that the living body detection fails when the number of acquired second images that do not meet the second quality requirement is determined to exceed a second preset number.
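For example, the re-acquisition loop with a preset failure limit might look like this sketch; capture and meets_quality stand in for the real instruction/capture and quality-judgment calls, and the limit of 3 is an assumed value:

    def acquire_until_quality(capture, meets_quality, preset_number=3):
        rejects = 0
        while True:
            image = capture()          # re-issue the instruction and capture again
            if meets_quality(image):
                return image
            rejects += 1
            if rejects > preset_number:
                print("living body detection failed: too many low-quality images")
                return None
            print(f"image rejected ({rejects}/{preset_number}); please retry")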
For example, the first quality requirement may include at least one of face pose, light intensity, image contrast, and face key point location; the second quality requirement may also include at least one of face pose, light intensity, image contrast, and face key point location.
For example, in one example, the computer program may also be run by the processor 100 to perform the steps of: and prompting that the living body detection fails under the condition that the three-dimensional information is determined not to be matched with the three-dimensional shape of the real human face.
It should be noted that, for descriptions of the first shooting angle, the second shooting angle, the first preset shooting angle range, the second preset shooting angle range, the first gaze angle, the second gaze angle, the first preset gaze angle range, the second preset gaze angle range, the first instruction, the second instruction, the first image, the second image, the first preset time, the second preset time, the first preset number, the second preset number, and the like, reference may be made to the related description in Embodiment I; repeated points are not described again here.
Embodiment III
The present embodiment provides a storage medium storing a computer program adapted to be executed by a processor.
For example, in one example of the present embodiment, the storage medium may be applied to the living body detection system described in any example of Embodiment II, and may be, for example, the memory in the living body detection system.
For example, the computer program may be run by a processor to perform the steps of: acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle; constructing three-dimensional information of the detected face based on the first image and the second image; and prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face.
For example, for a description of the storage medium, reference may be made to the description of the memory in Embodiment II; repeated descriptions are omitted.
For the present disclosure, the following points are further noted:
(1) The drawings of the embodiments of the present disclosure relate only to the structures involved in these embodiments; other structures may follow common designs.
(2) Without conflict, the embodiments of the present disclosure and the features therein may be combined with each other to obtain new embodiments.
The above descriptions are only specific embodiments of the present disclosure, but the scope of protection of the present disclosure is not limited thereto; the scope of the present disclosure should be subject to the scope of the claims.

Claims (17)

1. A living body detection method, comprising:
acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle;
constructing three-dimensional information of the detected human face based on the first image and the second image; and
prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face;
wherein the living body detection method further comprises:
detecting a first gaze angle of the human eye in the first image and a second gaze angle of the human eye in the second image;
judging whether the first gaze angle is within a first preset gaze angle range and whether the second gaze angle is within a second preset gaze angle range, wherein the first preset gaze angle range and the second preset gaze angle range are different;
determining that the gaze living body detection passes when it is determined that the first gaze angle is within the first preset gaze angle range and the second gaze angle is within the second preset gaze angle range; and
prompting that the living body detection is successful when the gaze living body detection is determined to pass and the three-dimensional information is determined to match the three-dimensional shape of the real human face,
wherein the acquiring of the first image and the second image of the detected face comprises:
sending a first instruction to acquire the first image;
sending a second instruction to acquire the second image,
the living body detection method further comprises:
prompting that the living body detection fails when the first image is not acquired within a first preset time after the first instruction is issued; and/or,
prompting that the living body detection fails when the second image is not acquired within a second preset time after the second instruction is issued;
the living body detection method further comprises:
judging the quality of the first image and, when the first image does not meet a first quality requirement, re-issuing the first instruction and re-acquiring the first image; and/or,
judging the quality of the second image and, when the second image does not meet a second quality requirement, re-issuing the second instruction and re-acquiring the second image;
the living body detection method further comprises:
prompting that the living body detection fails when the number of acquired first images that do not meet the first quality requirement is determined to exceed a first preset number; and/or,
prompting that the living body detection fails when the number of acquired second images that do not meet the second quality requirement is determined to exceed a second preset number;
the living body detection method further comprises: when the number of living body detection failure prompts exceeds a preset failure number, locking the device that adopts the living body detection method and issuing an alarm signal.
2. The living body detection method according to claim 1, wherein the first shooting angle is an angle at which the deflection angle of the detected face to the left or right is within a first preset shooting angle range, and the second shooting angle is an angle at which the deflection angle of the detected face to the left or right is within a second preset shooting angle range.
3. The living body detection method according to claim 2, wherein the first preset shooting angle range is 0°–5°, and the second preset shooting angle range is 10°–30°.
4. The living body detection method according to claim 1, wherein
the sending of the first instruction to acquire the first image comprises:
issuing the first instruction prompting acquisition of the first image, wherein the first instruction comprises an angle category prompt and a shooting requirement prompt for the image to be shot; and,
detecting a first trigger operation and starting an image acquisition device accordingly, detecting a second trigger operation and shooting accordingly, and detecting a third trigger operation and taking the shot image as the first image accordingly;
the sending of the second instruction to acquire the second image comprises:
issuing the second instruction prompting acquisition of the second image, wherein the second instruction comprises an angle category prompt and a shooting requirement prompt for the image to be shot; and,
detecting a fourth trigger operation and starting the image acquisition device accordingly, detecting a fifth trigger operation and shooting accordingly, and detecting a sixth trigger operation and taking the shot image as the second image accordingly.
5. The living body detection method according to claim 1, further comprising:
when the first image does not meet the first quality requirement, issuing a first error prompt and indicating the type of the unmet first quality requirement; and/or,
when the second image does not meet the second quality requirement, issuing a second error prompt and indicating the type of the unmet second quality requirement.
6. The living body detection method according to claim 5, wherein the first error prompt and/or the second error prompt is issued in the form of a pop-up window, and the type of the unmet first quality requirement and/or the type of the unmet second quality requirement is displayed in the pop-up window.
7. The living body detection method according to claim 1, wherein the quality judgment comprises at least one of face detection, light judgment, face angle judgment, and screen recapture judgment.
8. The living body detection method according to claim 1, further comprising:
prompting that the living body detection fails when the three-dimensional information is determined not to match the three-dimensional shape of the real human face.
9. A living body detection system comprising:
an image acquisition device configured to acquire images;
at least one processor;
at least one memory storing a computer program adapted to be executed by the processor, the computer program being executed by the processor to perform the steps of:
acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle;
constructing three-dimensional information of the detected human face based on the first image and the second image; and
prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face;
wherein the computer program when executed by the processor further performs the steps of:
detecting a first gaze angle of the human eye in the first image and a second gaze angle of the human eye in the second image;
judging whether the first gaze angle is within a first preset gaze angle range and whether the second gaze angle is within a second preset gaze angle range, wherein the first preset gaze angle range and the second preset gaze angle range are different;
determining that the gaze living body detection passes when it is determined that the first gaze angle is within the first preset gaze angle range and the second gaze angle is within the second preset gaze angle range; and
prompting that the living body detection is successful when the gaze living body detection is determined to pass and the three-dimensional information is determined to match the three-dimensional shape of the real human face,
the computer program when executed by the processor further performs the steps of:
sending a first instruction to acquire the first image;
sending a second instruction to acquire the second image,
the computer program when executed by the processor further performs the steps of:
prompting that the living body detection fails when the first image is not acquired within a first preset time after the first instruction is issued; and/or
prompting that the living body detection fails when the second image is not acquired within a second preset time after the second instruction is issued;
the computer program when executed by the processor further performs the steps of:
judging the quality of the first image and, when the first image does not meet a first quality requirement, re-issuing the first instruction and re-acquiring the first image; and/or
judging the quality of the second image and, when the second image does not meet the second quality requirement, re-issuing the second instruction and re-acquiring the second image,
the computer program when executed by the processor further performs the steps of:
prompting that the living body detection fails when the number of acquired first images that do not meet the first quality requirement is determined to exceed a first preset number; and/or
prompting that the living body detection fails when the number of acquired second images that do not meet the second quality requirement is determined to exceed a second preset number;
the computer program when executed by the processor further performs the steps of: when the number of living body detection failure prompts exceeds a preset failure number, locking the device that adopts the living body detection method and issuing an alarm signal.
10. The living body detection system according to claim 9, wherein the first shooting angle is an angle at which the deflection angle of the detected human face to the left or right is within a first preset shooting angle range, and the second shooting angle is an angle at which the deflection angle of the detected human face to the left or right is within a second preset shooting angle range.
11. The living body detection system according to claim 10, wherein the first preset shooting angle range is 0°–5°, and the second preset shooting angle range is 10°–30°.
12. The living body detection system according to claim 9, wherein
the sending of the first instruction to acquire the first image comprises:
issuing the first instruction prompting acquisition of the first image, wherein the first instruction comprises an angle category prompt and a shooting requirement prompt for the image to be shot; and,
detecting a first trigger operation and starting the image acquisition device accordingly, detecting a second trigger operation and shooting accordingly, and detecting a third trigger operation and taking the shot image as the first image accordingly;
the sending of the second instruction to acquire the second image comprises:
issuing the second instruction prompting acquisition of the second image, wherein the second instruction comprises an angle category prompt and a shooting requirement prompt for the image to be shot; and,
detecting a fourth trigger operation and starting the image acquisition device accordingly, detecting a fifth trigger operation and shooting accordingly, and detecting a sixth trigger operation and taking the shot image as the second image accordingly.
13. The living body detection system according to claim 9, wherein the computer program when executed by the processor further performs the steps of:
when the first image does not meet the first quality requirement, issuing a first error prompt and indicating the type of the unmet first quality requirement; and/or
when the second image does not meet the second quality requirement, issuing a second error prompt and indicating the type of the unmet second quality requirement.
14. The living body detection system according to claim 13, wherein the first error prompt and/or the second error prompt is issued in the form of a pop-up window, and the type of the unmet first quality requirement and/or the type of the unmet second quality requirement is displayed in the pop-up window.
15. The living body detection system according to claim 9, wherein the quality judgment comprises at least one of face detection, light judgment, face angle judgment, and screen recapture judgment.
16. The living body detection system according to claim 9, wherein the computer program when executed by the processor further performs the steps of:
prompting that the living body detection fails when the three-dimensional information is determined not to match the three-dimensional shape of the real human face.
17. A storage medium storing a computer program adapted to be executed by a processor, the computer program being executed by the processor to perform the steps of:
acquiring a first image of a detected face at a first shooting angle and a second image of the detected face at a second shooting angle, wherein the first shooting angle is different from the second shooting angle;
constructing three-dimensional information of the detected human face based on the first image and the second image; and
prompting that the living body detection is successful under the condition that the three-dimensional information is determined to be matched with the three-dimensional shape of the real human face;
wherein the computer program is further executable by the processor to perform the steps of:
detecting a first gaze angle of the human eye in the first image and a second gaze angle of the human eye in the second image;
judging whether the first gaze angle is within a first preset gaze angle range and whether the second gaze angle is within a second preset gaze angle range, wherein the first preset gaze angle range and the second preset gaze angle range are different;
determining that the gaze living body detection passes when it is determined that the first gaze angle is within the first preset gaze angle range and the second gaze angle is within the second preset gaze angle range; and
prompting that the living body detection is successful when the gaze living body detection is determined to pass and the three-dimensional information is determined to match the three-dimensional shape of the real human face,
wherein the computer program when executed by the processor further performs the steps of:
sending a first instruction to acquire the first image;
sending a second instruction to acquire the second image,
the computer program when executed by the processor further performs the steps of:
prompting that the living body detection fails when the first image is not acquired within a first preset time after the first instruction is issued; and/or
prompting that the living body detection fails when the second image is not acquired within a second preset time after the second instruction is issued;
the computer program when executed by the processor further performs the steps of:
judging the quality of the first image and, when the first image does not meet a first quality requirement, re-issuing the first instruction and re-acquiring the first image; and/or
judging the quality of the second image and, when the second image does not meet the second quality requirement, re-issuing the second instruction and re-acquiring the second image,
the computer program when executed by the processor further performs the steps of:
prompting that the living body detection fails when the number of acquired first images that do not meet the first quality requirement is determined to exceed a first preset number; and/or
prompting that the living body detection fails when the number of acquired second images that do not meet the second quality requirement is determined to exceed a second preset number;
the computer program when executed by the processor further performs the steps of: when the number of living body detection failure prompts exceeds a preset failure number, locking the device that adopts the living body detection method and issuing an alarm signal.
CN201710439294.1A 2017-06-12 2017-06-12 Living body detection method, living body detection system, and storage medium Active CN108875468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710439294.1A CN108875468B (en) 2017-06-12 2017-06-12 Living body detection method, living body detection system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710439294.1A CN108875468B (en) 2017-06-12 2017-06-12 Living body detection method, living body detection system, and storage medium

Publications (2)

Publication Number Publication Date
CN108875468A CN108875468A (en) 2018-11-23
CN108875468B true CN108875468B (en) 2022-03-01

Family

ID=64321066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710439294.1A Active CN108875468B (en) 2017-06-12 2017-06-12 Living body detection method, living body detection system, and storage medium

Country Status (1)

Country Link
CN (1) CN108875468B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783501A (en) * 2019-04-03 2020-10-16 北京地平线机器人技术研发有限公司 Living body detection method and device and corresponding electronic equipment
CN110544335B (en) * 2019-08-30 2020-12-29 北京市商汤科技开发有限公司 Object recognition system and method, electronic device, and storage medium
CN110688946A (en) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public cloud silence in-vivo detection device and method based on picture identification
CN110826440B (en) * 2019-10-28 2022-05-24 华南理工大学 Face changing video tampering detection method and system based on eye movement characteristics
CN111652086B (en) * 2020-05-15 2022-12-30 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN111949961A (en) * 2020-07-20 2020-11-17 上海淇馥信息技术有限公司 Human face authentication interaction method and system based on action prompt and electronic equipment
CN113627267A (en) * 2021-07-15 2021-11-09 中汽创智科技有限公司 Sight line detection method, device, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106401A (en) * 2013-02-06 2013-05-15 北京中科虹霸科技有限公司 Mobile terminal iris recognition device with human-computer interaction mechanism and method
CN103678984A (en) * 2013-12-20 2014-03-26 湖北微模式科技发展有限公司 Method for achieving user authentication by utilizing camera
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7668350B2 (en) * 2003-04-04 2010-02-23 Lumidigm, Inc. Comparative texture analysis of tissue for biometric spoof detection
WO2008036897A1 (en) * 2006-09-22 2008-03-27 Global Rainmakers, Inc. Compact biometric acquisition system and method
CN102800017A (en) * 2012-07-09 2012-11-28 高艳玲 Identity verification system based on face recognition
JP2014206932A (en) * 2013-04-15 2014-10-30 オムロン株式会社 Authentication device, authentication method, control program, and recording medium
CN104348778A (en) * 2013-07-25 2015-02-11 信帧电子技术(北京)有限公司 Remote identity authentication system, terminal and method carrying out initial face identification at handset terminal
CN105243386B (en) * 2014-07-10 2019-02-05 汉王科技股份有限公司 Face living body judgment method and system
WO2016127437A1 (en) * 2015-02-15 2016-08-18 北京旷视科技有限公司 Live body face verification method and system, and computer program product
CN105718863A (en) * 2016-01-15 2016-06-29 北京海鑫科金高科技股份有限公司 Living-person face detection method, device and system
CN106599772B (en) * 2016-10-31 2020-04-28 北京旷视科技有限公司 Living body verification method and device and identity authentication method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106401A (en) * 2013-02-06 2013-05-15 北京中科虹霸科技有限公司 Mobile terminal iris recognition device with human-computer interaction mechanism and method
CN103678984A (en) * 2013-12-20 2014-03-26 湖北微模式科技发展有限公司 Method for achieving user authentication by utilizing camera
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on the Security and User Privacy of Biometric Identity Authentication Systems"; Long Wei et al.; Secrecy Science and Technology; 2014-09-10 (No. 09); pp. 29-35 *

Also Published As

Publication number Publication date
CN108875468A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108875468B (en) Living body detection method, living body detection system, and storage medium
US10339402B2 (en) Method and apparatus for liveness detection
TWI751161B (en) Terminal equipment, smart phone, authentication method and system based on face recognition
US10546183B2 (en) Liveness detection
CN106407914B (en) Method and device for detecting human face and remote teller machine system
CN105184246B (en) Living body detection method and living body detection system
EP3872689B1 (en) Liveness detection method and device, electronic apparatus, storage medium and related system using the liveness detection method
CN105612533B (en) Living body detection method, living body detection system, and computer program product
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
CN104537292B (en) The method and system detected for the electronic deception of biological characteristic validation
WO2017181769A1 (en) Facial recognition method, apparatus and system, device, and storage medium
WO2016127437A1 (en) Live body face verification method and system, and computer program product
WO2017000213A1 (en) Living-body detection method and device and computer program product
WO2017000218A1 (en) Living-body detection method and device and computer program product
US10254831B2 (en) System and method for detecting a gaze of a viewer
EP4033458A2 (en) Method and apparatus of face anti-spoofing, device, storage medium, and computer program product
CN108647504B (en) Method and system for realizing information safety display
WO2016172923A1 (en) Video detection method, video detection system, and computer program product
WO2020020022A1 (en) Method for visual recognition and system thereof
KR101640014B1 (en) Iris recognition apparatus for detecting false face image
CN109977846B (en) Living body detection method and system based on near-infrared monocular photography
CN108629278B (en) System and method for realizing information safety display based on depth camera
WO2017000217A1 (en) Living-body detection method and device and computer program product
CN112949467B (en) Face detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant