CN113128258B - Living body detection method, living body detection device, electronic apparatus, and storage medium - Google Patents

Living body detection method, living body detection device, electronic apparatus, and storage medium

Info

Publication number
CN113128258B
CN113128258B (application CN201911396238.XA)
Authority
CN
China
Prior art keywords
area
region
image
imaging system
sample
Prior art date
Legal status
Active
Application number
CN201911396238.XA
Other languages
Chinese (zh)
Other versions
CN113128258A (en)
Inventor
任志浩
华丛一
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911396238.XA
Publication of CN113128258A
Application granted
Publication of CN113128258B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive

Abstract

The embodiment of the application provides a living body detection method, a living body detection device, an electronic device, and a storage medium. A first imaging system is used to collect a first image and a second imaging system is used to collect a second image; face detection is performed on the first image to determine a first area containing the object to be recognized; a second area is determined from the second image according to the positional relationship of the two imaging systems of the binocular device and the first area; and the face features of the two areas are then input into a pre-trained living body detection model to determine whether the object to be recognized is a living body, after which prompt information is output. With this scheme, the face area can be matched from the other image only according to the positional relationship of the two imaging systems in the binocular device and the distance between the object to be recognized and the binocular device, and living body detection is then performed according to the face areas. The method works whether or not the binocular device is calibrated, and the process is simple and the cost is low.

Description

Living body detection method, living body detection device, electronic apparatus, and storage medium
Technical Field
The embodiment of the application relates to the technical field of machine vision, in particular to a living body detection method and device, electronic equipment and a storage medium.
Background
In the prior art, under good illumination and pose conditions, a face recognition system can perform fairly accurate face detection and recognition. However, in some face recognition systems, such as access control and login systems, a user may deceive the system by illegal means such as photographs. For such systems, therefore, the higher the recognition rate, the greater the security risk. On the basis of face recognition, face living body detection has therefore become a means of ensuring the security of such systems.
In a common face living body detection process, a calibration procedure similar to Zhang's calibration method is used to calibrate the binocular device, and information such as the optical imaging parameters between the cameras is used to accurately pair the two cameras, so that the position coordinates of the same object are the same in the images respectively collected by the two cameras of the binocular device. In the face living body detection process, two images, a first image and a second image, are acquired with the calibrated binocular device; the area of the object to be recognized in the first image is the first area, and a second area is matched in the second image according to the first area. Then, whether the object to be recognized is a living body is determined from the first area and the second area.
In the above living body detection process, the binocular device needs to be calibrated in advance. However, adding a calibration station to the production line increases the production cost while reducing the production efficiency of the product. In addition, the cost is enormous if devices already in use have to be returned to the factory for calibration.
Disclosure of Invention
The embodiment of the application provides a living body detection method, a living body detection device, an electronic device, and a storage medium. According to the positional relationship of the two imaging systems of the binocular device and a pre-fitted formula, the first area of the object to be recognized in the first image is used to deduce, from the second image, a third area where the object to be recognized may appear; the second area where the object to be recognized is actually located is then determined from the third area; and finally, the face features of the first area and the face features of the second area are input into a pre-trained living body detection model to determine the living body recognition result of the object to be recognized.
In a first aspect, an embodiment of the present application provides a method for detecting a living body, including:
when an object to be recognized appears in the visual range of binocular equipment, acquiring a first image of the visual range by using a first imaging system of the binocular equipment, and acquiring a second image of the visual range by using a second imaging system of the binocular equipment;
determining a first area where the object to be recognized is located from the first image;
determining a second area from the second image by using the position relationship between the first imaging system and the second imaging system and the first area, wherein the second area contains the object to be identified;
inputting the face features of the first region and the face features of the second region into a pre-trained living body detection model to determine whether the object to be recognized is a living body;
and outputting prompt information, wherein the prompt information is used for indicating whether the object to be identified is a living body.
In one possible design, the determining, by using the position relationship between the first imaging system and the second imaging system and the first region, a second region from the second image includes:
determining a third region from the second image by using the position relation between the first imaging system and the second imaging system and the first region, wherein the area of the third region is larger than that of the first region;
and determining the second area from the third area according to the first area.
In one possible design, the determining, by using the positional relationship between the first imaging system and the second imaging system and the first region, a third region from the second image includes:
determining a first condition and a second condition by using the positional relationship of the first imaging system and the second imaging system, the first condition being the condition satisfied by the positional relationship between the abscissa x of a first coordinate point and the abscissa x' of a second coordinate point, the second condition being the condition satisfied by the positional relationship between the ordinate y of the first coordinate point and the ordinate y' of the second coordinate point, the first coordinate point being located in the first area and the second coordinate point being located in the third area;
determining the abscissa x 'of the second coordinate point according to the abscissa x of the first coordinate point and the first condition, and determining the ordinate y' of the second coordinate point according to the ordinate y of the first coordinate point and the second condition;
determining a third region from the second image based on the first region, the abscissa x of the first coordinate, the ordinate y of the first coordinate, the abscissa x 'of the second coordinate, and the ordinate y' of the second coordinate.
In one possible design, when the first imaging system and the second imaging system are horizontally arranged, the first condition is x' = x + k_x·d + Δc and the second condition is y' = y + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular device and the object to be recognized, and k_x is the coefficient relating the x-axis deviation of the binocular device to the distance.
In a possible design, the method further includes:
acquiring a first sample image set and a second sample image set which are respectively acquired by sample equipment, wherein the sample equipment has the same equipment model as the binocular equipment, the first sample images in the first sample image set correspond to the second sample images in the second sample image set one by one, the corresponding distances of the different first sample images are different, and the distance is the distance between the sample equipment and a sample object in the first sample image;
determining k_x according to the distances and the deviations of the abscissa of the object to be identified, corresponding to the distances, in the first sample image and the second sample image, so as to obtain the first condition.
In one possible design, when the positional relationship between the first imaging system and the second imaging system is a vertical arrangement, the first condition is x' = x + Δc and the second condition is y' = y + k_y·d + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular device and the object to be identified, and k_y is the coefficient relating the y-axis deviation of the binocular device to the distance.
In one possible design, the method further includes:
acquiring a first sample image set and a second sample image set which are respectively acquired by sample equipment, wherein the sample equipment has the same equipment model as the binocular equipment, the first sample images in the first sample image set correspond to the second sample images in the second sample image set one by one, the corresponding distances of the different first sample images are different, and the distance is the distance between the sample equipment and a sample object in the first sample image;
determining k_y according to the distances and the ordinates of the object to be recognized, corresponding to the distances, in the first sample image and the second sample image, so as to obtain the second condition.
In one possible design, the determining the second region from the third region according to the first region includes:
moving the first region in the third region according to a preset step length to determine a plurality of fourth regions with the same area as the first region from the third region;
and determining the similarity between the first area and each fourth area to obtain a plurality of similarities, and taking the fourth area corresponding to the maximum value among the similarities as the second area.
In one possible design, the determining the similarity between the first area and each fourth area includes:
compressing the first area and the fourth area to a size of 63 × 63;
convolving the 63 × 63 first area and fourth area to obtain the similarity between the first area and the fourth area.
In a second aspect, an embodiment of the present application provides a living body detection apparatus, including:
the system comprises a first imaging system, a second imaging system and a control system, wherein the first imaging system is used for acquiring a first image of a visual range when an object to be identified appears in the visual range of binocular equipment;
the second imaging system is used for acquiring a second image of the visual range when the object to be recognized appears in the visual range of the binocular equipment;
and the processor is used for determining a first area where the object to be recognized is located from the first image, determining a second area from the second image by using the position relationship between the first imaging system and the second imaging system and the first area, wherein the second area contains the object to be recognized, inputting the face characteristics of the first area and the face characteristics of the second area into a pre-trained living body detection model to determine a living body recognition result of the object to be recognized, and outputting prompt information, wherein the prompt information is used for indicating whether the object to be recognized passes through living body detection.
In a possible design, the processor is configured to determine a third region from the second image by using a positional relationship between the first imaging system and the second imaging system and the first region, where an area of the third region is larger than an area of the first region, and determine the second region from the third region according to the first region.
In one possible design, the processor is configured to determine a first condition and a second condition by using the positional relationship of the first imaging system and the second imaging system, the first condition being the condition satisfied by the positional relationship between the abscissa x of a first coordinate point and the abscissa x' of a second coordinate point, the second condition being the condition satisfied by the positional relationship between the ordinate y of the first coordinate point and the ordinate y' of the second coordinate point, the first coordinate point being located in the first area and the second coordinate point in the third area; to determine the abscissa x' of the second coordinate point according to the abscissa x of the first coordinate point and the first condition, and the ordinate y' of the second coordinate point according to the ordinate y of the first coordinate point and the second condition; and to determine the third region from the second image according to the first area, the abscissa x, the ordinate y, the abscissa x', and the ordinate y'.
In one possible design, when the first imaging system and the second imaging system are horizontally arranged, the first condition is x' = x + k_x·d + Δc and the second condition is y' = y + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular device and the object to be recognized, and k_x is the coefficient relating the x-axis deviation of the binocular device to the distance.
In a possible design, the processor is configured to obtain a first sample image set and a second sample image set respectively acquired by a sample device, where the sample device has the same device model as the binocular device, the first sample images in the first sample image set correspond one to one to the second sample images in the second sample image set, different first sample images correspond to different distances, and the distance is the distance between the sample device and the sample object in the first sample image; and to determine k_x according to each distance and the deviation of the abscissa of the object to be recognized in the first sample image and the second sample image corresponding to that distance, so as to obtain the first condition.
In one possible design, when the positional relationship between the first imaging system and the second imaging system is a vertical arrangement, the first condition is x' = x + Δc and the second condition is y' = y + k_y·d + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular device and the object to be identified, and k_y is the coefficient relating the y-axis deviation of the binocular device to the distance.
In a feasible design, the processor is configured to obtain a first sample image set and a second sample image set respectively acquired by a sample device, where the sample device has the same device model as the binocular device, the first sample images in the first sample image set correspond one to one to the second sample images in the second sample image set, different first sample images correspond to different distances, and the distance is the distance between the sample device and the sample object in the first sample image; and to determine k_y according to each distance and the ordinates of the object to be recognized in the first sample image and the second sample image corresponding to that distance, so as to obtain the second condition.
In a possible design, the processor is configured to move the first region within the third region according to a preset step size to determine a plurality of fourth regions having the same area as the first region from the third region, determine the similarity between the first region and each fourth region to obtain a plurality of similarities, and use the fourth region corresponding to the maximum value among the plurality of similarities as the second region.
In one possible design, the processor is configured to compress the first area and the fourth area to a size of 63 × 63, and to convolve the 63 × 63 first area and fourth area to obtain the similarity between the first area and the fourth area.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor implements the method according to the first aspect or various possible implementations of the first aspect when executing the program.
In a fourth aspect, embodiments of the present application provide a storage medium, where instructions are stored, and when the storage medium is executed on an electronic device, the storage medium causes the electronic device to perform the method according to the first aspect or the various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the method according to the first aspect or the various possible implementations of the first aspect.
According to the living body detection method, the living body detection device, the electronic device, and the storage medium provided above, when the binocular device finds that an object to be recognized appears in its visual range, the first imaging system collects a first image and the second imaging system collects a second image; face detection is performed on the first image to determine a first area containing the object to be recognized; a second area is then determined from the second image according to the positional relationship of the two imaging systems of the binocular device and the first area; and the face features of the first area and of the second area are input into a pre-trained living body detection model to determine whether the object to be recognized is a living body, after which prompt information is output. With this scheme, the binocular device does not need to be calibrated: the face area can be matched from the other image only according to the positional relationship of the two imaging systems in the binocular device and the distance between the object to be recognized and the binocular device, and living body detection is then performed according to the face areas. The method works whether or not the binocular device is calibrated, and the process is simple and the cost is low.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of a method of in vivo detection provided by an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a relationship between a deviation and a distance in a living body detecting method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a third area in a living body detecting method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a fourth region being determined in the in-vivo detection method according to the embodiment of the present application;
FIG. 5 is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
At present, on the basis of face recognition, face living body detection serves as a security guarantee for access control, login systems, and the like; living body detection prevents a user from deceiving the system by illegal means such as photographs. In a common living body detection process, two images are simultaneously acquired with a calibrated binocular device, a face area is extracted from one image and projected into the other image to determine the other face area, and whether the object is a living body is determined according to the depth information of the feature points of the two face areas and the like.
In this living body detection process, the binocular device needs to be calibrated in advance with a procedure similar to Zhang's calibration method, so that the position coordinates of the same object are the same in the images respectively acquired by the two cameras of the binocular device. However, most binocular devices already deployed are not calibrated, and the cost is huge if they are returned to the factory or calibrated in the field. Meanwhile, if binocular devices are calibrated before leaving the factory, calibration stations need to be added on the production line, which increases the production cost while reducing the production efficiency of the product.
In view of this, the embodiment of the present application provides a living body detection method, which can match a face region from another image only according to a position relationship between two imaging systems in a binocular device and a distance between an object to be recognized and the binocular device without calibrating the binocular device, and further perform living body detection according to the face region. The method does not limit whether the binocular equipment is calibrated or not, and is simple in process and low in cost.
The living body detection method provided by the embodiment of the application can be applied to electronic equipment. In specific application, the electronic equipment can be equipment such as an attendance machine, access control equipment and the like with the function of simultaneously acquiring two images; alternatively, the electronic device may be a device, such as a server, a personal computer, or the like, that communicates with a device having a function of simultaneously capturing two images.
Wherein, the two images are images collected aiming at the same visual range. Taking binocular equipment as an example, the binocular equipment can be integrated on an attendance machine or an entrance guard device, or can also be communicated with a server, a personal computer and the like. The binocular device is a device which has two independent imaging systems by simulating the imaging principle of human eyes, and the two imaging systems can be both visible light (VIS) systems; alternatively, the two imaging systems may both be Near Infrared (NIR) systems; alternatively, the two imaging systems may be a VIS and NIR hybrid system or otherwise. The binocular devices may include binocular devices horizontally arranged left and right and binocular devices vertically arranged up and down, in terms of positional relationship.
The applicant found that the depth values of the feature points of a real human face vary noticeably, whereas the depth values of the feature points of a face photograph vary little. Therefore, whether the object to be recognized is a living face or a photograph can be judged from the depth information values. In addition, after the face regions in the two images are detected, whether the object to be recognized is a living face or a photograph can also be distinguished according to other image features, which is not limited in the embodiment of the application.
In addition, in the embodiment of the present application, the two imaging systems of the binocular device may be separately arranged, as long as the mutual positional relationship satisfies the requirements of horizontal arrangement or vertical arrangement, and the two imaging systems may establish communication connection. In addition, besides the binocular device, other vision systems, such as a multi-view device, may also determine in advance a positional relationship of points in images acquired by each imaging system in the multi-view device, so as to perform living body identification.
The method for detecting a living body according to the embodiment of the present application will be described in detail below, taking electronic devices as devices such as an attendance machine and an entrance guard device having a function of simultaneously acquiring two images. For example, see fig. 1.
Fig. 1 is a flowchart of a method for detecting a living body according to an embodiment of the present application, where the embodiment includes:
101. when an object to be recognized appears in the visual range of binocular equipment, a first image of the visual range is acquired by using a first imaging system of the binocular equipment, and a second image of the visual range is acquired by using a second imaging system of the binocular equipment.
In the embodiment of the application, the visual range of the binocular device refers to an area that can be captured by the binocular device, and the object to be recognized is usually a human face. When the object to be recognized appears in the visual range of the binocular device, the first imaging system of the binocular device acquires the visual range to obtain a first image containing the object to be recognized, and meanwhile, the second imaging system of the binocular device acquires the visual range to obtain a second image containing the object to be recognized. For example, when attendance is performed, the employee a and the employee B are simultaneously present in the visual range of the attendance machine, the first image acquired by the first imaging system of the binocular device integrated on the attendance machine includes the employee a and the employee B, the second image acquired by the second imaging system includes the employee a and the employee B, and the employee a and the employee B are the objects to be identified.
102. And determining a first area where the object to be identified is located from the first image.
In an exemplary embodiment, after the electronic device obtains the first image and the second image, a face detection algorithm is used to detect the area where the object to be identified is located from either of the two images. For example, the electronic device detects a first region containing the object to be recognized from the first image using a face detection algorithm. After the first area is identified, the face image in the first area is compared with pre-stored face images: if the similarity between the face image in the first area and some pre-stored face image is greater than a preset threshold, face recognition passes and living body detection can then be performed; if the similarity between the face image in the first area and every pre-stored image is lower than the preset threshold, face recognition fails. In addition, the scheme of the embodiment of the application can also perform living body detection directly using the first image and the second image, without performing face recognition; alternatively, the scheme does not limit the order of face recognition and living body detection.
103. And determining a second area from the second image by using the position relationship between the first imaging system and the second imaging system and the first area, wherein the second area contains the object to be identified.
104. And inputting the face characteristics of the first region and the face characteristics of the second region into a pre-trained living body detection model so as to determine a living body recognition result of the object to be recognized.
In step 102 above, the electronic device performs face detection, and then face recognition, on the first image only. For the other image, the second image, the electronic device can detect the face area according to the positional relationship between the first imaging system and the second imaging system of the binocular device and the first area, and then detect whether the object to be recognized is a living body according to the two face areas. When the first image and the second image are relatively large, for example 1080p, i.e., about 2 megapixels, performing face detection on both images consumes considerable resources such as the CPU of the electronic device and wastes time. Therefore, in the embodiment of the application, face detection is performed on only one image, which reduces the consumption of resources such as the CPU and saves time.
In addition, when the number of faces in the first image is multiple, the electronic device may determine a pair of the first region and the second region for each face, so as to obtain multiple pairs of the first region and the second region, and thus, a matching relationship between the faces in the two images can be obtained, and further, face living body detection can be accurately performed.
In the living body detection process, face features are extracted from the first area and from the second area, and the two sets of face features are input into a pre-trained model to obtain a score. The score indicates whether the object to be recognized is a living body; for example, if the score exceeds a preset value, the object to be recognized is considered a living body.
105. And outputting prompt information, wherein the prompt information is used for indicating whether the object to be identified passes the living body detection.
For example, the electronic device may output the prompt information by voice, animation, or text. During attendance checking, for instance, if the object to be identified is detected not to be a living body, a voice prompt such as "authentication failed, attendance invalid" is played; if the object to be recognized is detected to be a living body, an "OK" or "authentication passed" voice is played. Authentication passed indicates that the object to be recognized is a living face; authentication failed indicates that it is not.
According to the living body detection method provided by the embodiment of the application, when the binocular device finds that an object to be recognized appears in its visual range, the first imaging system collects a first image and the second imaging system collects a second image; face detection is performed on the first image to determine a first area containing the object to be recognized; a second area is then determined from the second image according to the positional relationship of the two imaging systems of the binocular device and the first area; and the face features of the first area and of the second area are input into a pre-trained living body detection model to determine whether the object to be recognized is a living body, after which prompt information is output. With this scheme, the binocular device does not need to be calibrated: the face area can be matched from the other image only according to the positional relationship of the two imaging systems in the binocular device and the distance from the object to be recognized to the binocular device, and living body detection is then performed according to the face areas. The method works whether or not the binocular device is calibrated, and the process is simple and the cost is low.
In the above-described embodiment, living body detection is performed based on the face region in the first image and the face region in the second image, so both face regions need to be determined. When determining the face in the second image, a third region is first determined from the second image; the area of the third region is larger than that of the first region, and the third region is the region where the object to be recognized is most likely to appear. Then, the electronic device determines the second region from the third region by using a Local Binary Pattern Hash (LBP-Hash) algorithm, where the second region is the accurate position of the object to be recognized in the second image. In this process, the face region in the first image (the first region) can be determined by a face detection algorithm, and the face region in the second image is determined according to the positional relationship of the two imaging systems and the first region. This is because the inventors found the following:
when the positional relationship of the two imaging systems is fixed, the positional relationship of the coordinates of the point in the first area and the coordinates of the corresponding point in the face area in the second image satisfies a certain condition. If the coordinates of the first coordinate point in the first area are (x, y), and the coordinates of the second coordinate point corresponding to the first coordinate point in the second image are (x ', y '), when the first imaging system and the second imaging system are horizontally arranged, the first condition is that x ' = x + k x d +. DELTA.c, with the second condition that y' = y +. DELTA.y; when the first imaging system and the second imaging system are vertically aligned, the first condition is x '= x +. DELTA.c, and the second condition is y' = y + k y d +. DELTA.y. Wherein, Δ y is a preset y-axis tolerance, Δ c is a preset x-axis tolerance, d is a distance between the binocular device and the object to be recognized, and k is x K is a relation coefficient between the x-axis deviation of the binocular device and the distance y And the relation coefficient between the y-axis deviation of the binocular equipment and the distance is obtained.
In the following, how to obtain the above-mentioned first condition and second condition will be described in detail by taking an example of a horizontal arrangement of the first imaging system and the second imaging system.
First, the second condition.
Illustratively, an important concept involved in the embodiments of the present application is the epipolar constraint, which refers to the correspondence between points on two epipolar lines. Assuming that line l is the epipolar line corresponding to point p' and line l' is the epipolar line corresponding to point p, the epipolar constraint means that point p' must lie on the line l' corresponding to point p, and point p must lie on the line l corresponding to point p'. It follows that when the first imaging system and the second imaging system of the binocular device are horizontally arranged and the first image acquired by the first imaging system is the same size as the second image acquired by the second imaging system, ideally the ordinates of the same object in the two images are the same; similarly, when the first imaging system and the second imaging system of the binocular device are vertically arranged and the first image and the second image are the same size, the abscissas of the same object in the two images are the same.
When the coordinates of the first coordinate point in the first region are (x, y) and the coordinates of the corresponding second coordinate point in the second image are (x', y'), it follows from the epipolar constraint described above that: when the first imaging system and the second imaging system are horizontally arranged, ideally the ordinate y of the first coordinate point is the same as the ordinate y' of the second coordinate point; introducing the y-axis tolerance Δy to account for factors such as assembly errors gives y' = y + Δy. Similarly, when the first imaging system and the second imaging system are vertically arranged, ideally the abscissa x of the first coordinate point is the same as the abscissa x' of the second coordinate point; introducing the x-axis tolerance Δc to account for factors such as assembly errors gives the first condition in the vertical arrangement: x' = x + Δc.
In the above embodiment, the y-axis tolerance Δy and the x-axis tolerance Δc are obtained in advance. For example, 10 sample devices of the same model are taken, with the two imaging systems of each arranged horizontally; for each device, images are collected every 10 cm within the depth-of-field range, giving several groups of images, for example 10 groups; the ordinates of 5 pairs of feature points are extracted from each group of images, the difference of the ordinates of each pair is calculated, and the average is taken to obtain the y-axis tolerance Δy. Similarly, the x-axis tolerance Δc of vertically arranged binocular devices is calculated in the same manner. The feature points are, for example, the left-eye pupil, the right-eye pupil, the left mouth corner, the right mouth corner, and the nose.
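A sketch of this averaging is given below; `pairs` is assumed to hold the ordinates of corresponding feature points from a first and second sample image:

# Sketch of the tolerance measurement: average the ordinate differences of
# corresponding feature points (pupils, mouth corners, nose) over the image
# pairs collected from same-model sample devices.
def estimate_y_tolerance(pairs):
    # pairs: list of (y_in_first_image, y_in_second_image) ordinates
    return sum(abs(y2 - y1) for y1, y2 in pairs) / len(pairs)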
Next, a first condition.
When the first imaging system and the second imaging system are horizontally arranged, the positional relationship of the abscissas of corresponding coordinate points in the two images depends on the deviation and on the distance between the sample object and the sample device when the images were taken. Therefore, the electronic device obtains a first sample image set and a second sample image set respectively acquired by a sample device, where the sample device has the same device model as the binocular device, the first sample images in the first sample image set correspond one to one to the second sample images in the second sample image set, different first sample images correspond to different distances, and the distance is the distance between the sample device and the sample object in the first sample image. The electronic device then determines k_x, the coefficient relating the x-axis deviation of the binocular device to the distance, according to each distance and the deviations of the abscissa of the object corresponding to each distance in the first and second sample images, so as to obtain the first condition.
Illustratively, 10 sample devices of the same model are taken in advance, the two imaging systems of the 10 sample devices are horizontally arranged, and for each device, images are acquired every 10cm within the depth of field range to obtain multiple groups of images, such as 10 groups of images, so as to obtain a first sample image set and a second sample image set. For each pair of sample images, 5 feature points are marked on the sample image respectively, the deviation of the abscissa of the corresponding feature point is calculated, and the distance corresponding to the deviation is marked on the coordinate system. For example, refer to fig. 2, and fig. 2 is a schematic diagram illustrating a relationship between a deviation and a distance in a living body detection method provided in an embodiment of the present application.
Referring to fig. 2, the abscissa is the distance in centimeters (cm), representing the distance between the sample object and the sample device when the image was taken; the ordinate is the deviation, in pixels. For example, of the points P1 to P6, the abscissa of the point P1 is 70 and the ordinate is about 18 pixels, which indicates that the distance between the sample object and the sample device was 70 cm when the image was captured, and the difference between the abscissas of the corresponding points of the two imaging systems of the sample device is 18 pixels. As another example, the abscissa of the point P2 is about 95 and the ordinate is about 30, indicating that the distance between the sample object and the sample device was 95 cm and the difference between the abscissas of the corresponding points is 30 pixels.
Wherein the deviation may be an average of deviations of a plurality of pairs of points. For example, when the distance is 70cm, in the image acquired by the sample device, the feature points include a left eye pupil, a right eye pupil, a left mouth corner, a right mouth corner and a nose, and the deviation of each feature point may be calculated to obtain 5 deviations, and the 5 deviations are averaged to obtain the deviation corresponding to the point P1, that is, the ordinate of the point P1.
As can be seen from fig. 2 above, the x-axis deviation and the distance have a definite correspondence, namely x' = x + k_x·d + Δc. From this, k_x can be determined. In the subsequent living body detection process, when the two imaging systems are horizontally arranged, d, k_x, x, and Δc are all known, so x' can be determined.
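For example, k_x can be obtained by a straight-line fit over the sampled points; the sketch below uses illustrative values loosely based on points P1 and P2 above, not measurements from the patent:

import numpy as np

# Sketch of fitting k_x from (distance, deviation) samples like those
# plotted in fig. 2. The data values are illustrative placeholders.
distances = np.array([70.0, 95.0, 120.0, 150.0])   # cm
deviations = np.array([18.0, 30.0, 42.0, 56.0])    # pixels, x' - x

# Model: deviation = k_x * d + delta_c, i.e. a straight line in d.
k_x, delta_c = np.polyfit(distances, deviations, 1)
print(f"k_x = {k_x:.3f} px/cm, delta_c = {delta_c:.2f} px")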
With this scheme, for a binocular device whose two imaging systems are horizontally arranged, the positional relationship of the abscissas of corresponding points in the images acquired by the two imaging systems is determined.
It should be noted that, for a binocular device in which the two imaging systems are vertically arranged, the second condition is y' = y + k_y·d + Δy. Its derivation is similar to that of the first condition x' = x + k_x·d + Δc described above; the deviation of the abscissa only needs to be replaced by the deviation of the ordinate, so it is not repeated here.
According to the above, when the first imaging system and the second imaging system are horizontally arranged, the first condition is x' = x + k_x·d + Δc and the second condition is y' = y + Δy; when they are vertically arranged, the first condition is x' = x + Δc and the second condition is y' = y + k_y·d + Δy. Therefore, the electronic device obtains the positional relationship of the two imaging systems by reading the hardware configuration, and the first condition and the second condition can be determined from that relationship. Next, the abscissa x' of the second coordinate point is determined according to the abscissa x of the first coordinate point and the first condition, and the ordinate y' of the second coordinate point is determined according to the ordinate y of the first coordinate point and the second condition. Then, according to the first area, the abscissa x, the ordinate y, the abscissa x', and the ordinate y', a third area is determined from the second image, that is, the area where the object to be recognized is most likely to appear. How to determine the third area is described in detail below, taking the horizontal arrangement of the two imaging systems of the binocular device as an example. See fig. 3.
Fig. 3 is a schematic diagram of the third area in the living body detection method according to the embodiment of the present application. Referring to fig. 3, the left side is the first image and the right side is the second image. The face in the first image is obtained by a face detection algorithm and carries detailed key information of the face. After the first region is detected from the first image by the face detection algorithm, the region where the face to be recognized is most likely to appear, i.e., the third region, needs to be found in the second image by using the first region and the positional relationship of the two imaging systems. The process generally includes the following stages:
the first stage is as follows: a first region is determined from the first image.
In this stage, for the first image, the electronic device extracts the feature points of the face, where the feature points include the left-eye pupil, the right-eye pupil, the left mouth corner, the right mouth corner, and the nose, and determines the first region according to the coordinates of the feature points. In the process of determining the first region, the center point of the first region, that is, the face center point, is determined first. Assuming the left-eye pupil coordinates are elc(eyel_x, eyel_y), the right-eye pupil coordinates are erc(eyer_x, eyer_y), the left mouth-corner coordinates are (mouthl_x, mouthl_y), and the right mouth-corner coordinates are (mouthr_x, mouthr_y), the center point fc(fc_x, fc_y) is calculated from these four coordinates:
fc_x=(eyel_x+eyer_x+mouthl_x+mouthr_x)/4
fc_y=(eyel_y+eyer_y+mouthl_y+mouthr_y)/4
In the embodiment of the present application, the width of the first region (the face width) may be set to twice the interpupillary distance, that is, faceWidth = 2 × (eyer_x - eyel_x), and the height of the first region (the face height) is equal to its width.
According to the above, after the coordinates of the center point of the first region, the height of the first region, and the width of the first region are determined, the first region can be determined; this is region A in fig. 3.
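As an informal illustration of this stage, the following sketch derives the first region from the landmark coordinates; treating the region as a square box centered on fc is an assumed convention, since the text only fixes the center, width, and height:

# Sketch of stage one: derive the first region from the four landmark
# coordinates named above.
def first_region(eyel, eyer, mouthl, mouthr):
    fc_x = (eyel[0] + eyer[0] + mouthl[0] + mouthr[0]) / 4
    fc_y = (eyel[1] + eyer[1] + mouthl[1] + mouthr[1]) / 4
    face_width = 2 * (eyer[0] - eyel[0])  # twice the interpupillary distance
    face_height = face_width              # height equals width
    return (fc_x - face_width / 2, fc_y - face_height / 2,
            face_width, face_height)      # (left, top, width, height)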
And a second stage of determining a third area from the second image.
Referring to fig. 3, the electronic device projects the first region as a template onto the second image to obtain the A' region. The A' region is then extended to obtain the third region, shown as the Ax region in the figure. The extension process is as follows. First, a first coordinate point (x, y) is chosen in the first region, such as the vertex of its upper left corner in the figure. Next, the first condition and the second condition are determined; for example, when the electronic device reads the hardware configuration and finds that the two imaging systems of the binocular device are horizontally arranged, the first condition is determined to be x' = x + k_x·d + Δc and the second condition to be y' = y + Δy. Then, the second coordinate point (x', y') corresponding to the first coordinate point is determined in the second image according to the first condition and the second condition, where k_x, the coefficient relating the x-axis deviation of the binocular device to the distance, is obtained in advance according to fig. 2. Finally, the A' region is extended to obtain the third region: during the extension, the A' region is extended upward and downward by Δy/2 each, and leftward and rightward by (x - x')/2 each.
It should be noted that the above extension process is described by taking the example of extending the A' region upward and downward by Δy/2 each and leftward and rightward by (x - x')/2 each, but the embodiment of the present application is not limited thereto. For example, in other possible implementations, the A' region may be extended upward and downward by Δy and leftward and rightward by (x - x'); or upward and downward by Δy/3 and leftward and rightward by (x - x')/3, and so on.
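A sketch of stage two under these definitions is given below; the sign handling of the horizontal shift and the (left, top, width, height) box convention are assumptions made for illustration:

# Sketch of stage two: project the first region into the second image (the
# A' region) and pad it into the search region Ax, per the Δy/2 and
# half-shift example above.
def third_region(first_box, d, k_x, delta_c, delta_y):
    left, top, w, h = first_box
    dx = k_x * d + delta_c            # expected horizontal shift, x' - x
    pad_x, pad_y = abs(dx) / 2, delta_y / 2
    return (left + dx - pad_x, top - pad_y, w + 2 * pad_x, h + 2 * pad_y)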
And a third stage of determining the second area from the third area.
In this stage, the electronic device moves the first region, starting from the start-point coordinates of the third region, according to a preset step length, to determine a plurality of fourth regions with the same area as the first region from the third region; determines the similarity between the first area and each of the fourth areas to obtain a plurality of similarities; and takes the fourth area corresponding to the maximum value among the similarities as the second area. For example, refer to fig. 4, which is a schematic diagram illustrating how a fourth area is determined in the living body detection method according to the embodiment of the present application.
Referring to fig. 4, the thick black region is the third region and the dotted frame shows a fourth region; the area of the third region is larger than that of the first region, and the area of each fourth region is the same as that of the first region. In the process of determining the fourth regions, the electronic device moves the first region from a certain coordinate point of the third region, such as the vertex at the top left corner in the drawing, from left to right and from top to bottom according to a preset step length, traversing the third region, so as to determine a series of fourth regions whose areas are the same as that of the first region. Then, the electronic device determines the similarity between the first area and each fourth area to obtain a plurality of similarities, and takes the fourth area corresponding to the maximum value among them as the second area.
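A minimal sketch of this traversal is shown below; `similarity` stands for any score over two equally sized grayscale patches (for example, the LBP-based score sketched after the next paragraph), and the step length of 2 pixels is an assumed value:

import numpy as np

# Sketch of stage three: slide a first-region-sized window over the third
# region, left to right and top to bottom, and keep the best-scoring window
# as the second region. Patches are assumed to be 2-D grayscale arrays.
def best_match(first_patch, third_patch, similarity, step=2):
    fh, fw = first_patch.shape
    th, tw = third_patch.shape
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(0, th - fh + 1, step):
        for x in range(0, tw - fw + 1, step):
            score = similarity(first_patch, third_patch[y:y + fh, x:x + fw])
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score  # offset of the second region within the third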
In the similarity calculation process, the electronic device compresses the first region and the fourth region to a size of 63 × 63, meaning that the compressed first region and fourth region are each 63 pixels in both width and height. Then, the 63 × 63 first region and fourth region are convolved with a 3 × 3 LBP kernel to obtain the similarity between the first region and the fourth region. The embodiment of the present application does not limit the size of the LBP kernel; a 3 × 3 LBP kernel, a 5 × 5 LBP kernel, and the like may be used. Compared with the traditional perceptual hash algorithm, which compresses the image to a size of 64 × 64 and extracts its low-frequency information, compressing the image to 63 × 63 in the present application makes the obtained similarity more accurate.
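The following sketch makes this concrete: both patches are compressed to 63 × 63, a 3 × 3 LBP code is computed per pixel, and the similarity is taken as the fraction of matching codes. That final matching metric is an assumption, since the text specifies only the 63 × 63 size and the LBP kernel; OpenCV (cv2) is assumed available for resizing:

import numpy as np
import cv2  # assumed available for resizing

def lbp63(patch):
    # Compress to 63 x 63, then compute a 3 x 3 local-binary-pattern code:
    # each of the 8 neighbours contributes one bit (1 if >= the center pixel).
    p = cv2.resize(patch, (63, 63)).astype(np.int16)
    center = p[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = p[1 + dy:62 + dy, 1 + dx:62 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code

def lbp_similarity(a, b):
    # Fraction of pixels whose LBP codes agree (assumed matching metric).
    return float(np.mean(lbp63(a) == lbp63(b)))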
And a fourth stage, a living body detection process.
For example, after determining the first region and the second region, the electronic device preprocesses the two images, for example by denoising and histogram equalization. Then, image features are extracted from the preprocessed images and input into the pre-trained living body detection model to obtain an output result, the output result being either a living face or not a living face. Which image features are specifically extracted from the first region and the second region for living body detection is not limited in the embodiment of the present application.
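An end-to-end sketch of this stage is given below; the preprocessing choices and the `extract_features` / `liveness_model` interfaces are placeholders, since the embodiment does not fix the feature type or model:

import cv2  # assumed preprocessing backend

def denoise_and_equalize(img):
    # Denoising plus histogram equalization, as mentioned above;
    # assumes an 8-bit single-channel (grayscale) image.
    return cv2.equalizeHist(cv2.GaussianBlur(img, (3, 3), 0))

def is_live(first_region_img, second_region_img,
            extract_features, liveness_model, threshold=0.5):
    feats = [extract_features(denoise_and_equalize(img))
             for img in (first_region_img, second_region_img)]
    score = liveness_model(*feats)  # pre-trained model outputs a score
    return score > threshold        # above the preset value: living body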
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present application. The living body detection apparatus 100 may be implemented by software and/or hardware. As shown in fig. 5, the living body detection apparatus 100 includes:
the first imaging system 11 is used for acquiring a first image of a visual range when an object to be recognized appears in the visual range of the binocular equipment;
the second imaging system 12 is used for acquiring a second image of a visual range when the object to be recognized appears in the visual range of the binocular equipment;
a processor 13, configured to determine a first region where the object to be recognized is located from the first image; determine a second region from the second image by using the positional relationship between the first imaging system and the second imaging system, and the first region, where the second region contains the object to be recognized; input the face features of the first region and the face features of the second region into a pre-trained living body detection model to determine a living body recognition result of the object to be recognized; and output prompt information, where the prompt information is used to indicate whether the object to be recognized passes the living body detection.
In a possible design, the processor 13 is configured to determine a third region from the second image by using the position relationship between the first imaging system and the second imaging system, and the first region, where the area of the third region is larger than the area of the first region, and determine the second region from the third region according to the first region.
In one possible design, the processor 13 is configured to: determine a first condition and a second condition by using the positional relationship between the first imaging system and the second imaging system, the first condition being a condition satisfied by the positional relationship between an abscissa x of a first coordinate point and an abscissa x' of a second coordinate point, and the second condition being a condition satisfied by the positional relationship between an ordinate y of the first coordinate point and an ordinate y' of the second coordinate point, where the first coordinate point is located in the first region and the second coordinate point is located in the third region; determine the abscissa x' of the second coordinate point according to the abscissa x of the first coordinate point and the first condition; determine the ordinate y' of the second coordinate point according to the ordinate y of the first coordinate point and the second condition; and determine the third region from the second image according to the first region, the abscissa x of the first coordinate point, the ordinate y of the first coordinate point, the abscissa x' of the second coordinate point, and the ordinate y' of the second coordinate point.
In one possible design, when the first imaging system and the second imaging system are horizontally arranged, the first condition is x' = x + k_x · d + Δc and the second condition is y' = y + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular equipment and the object to be recognized, and k_x is the relation coefficient between the x-axis deviation of the binocular equipment and the distance.
In one possible design, the processor 13 is configured to obtain a first sample image set and a second sample image set respectively acquired by a sample device, where the sample device has the same device model as the binocular equipment, the first sample images in the first sample image set correspond to the second sample images in the second sample image set one to one, and different first sample images correspond to different distances, where the distance is the distance between the sample device and the sample object in the first sample image; and determine k_x according to the distances and the deviations of the abscissa of the object to be recognized, corresponding to those distances, between the first sample image and the second sample image, to obtain the first condition.
In one possible design, when the positional relationship between the first imaging system and the second imaging system is a vertical arrangement, the first condition is x' = x + Δc and the second condition is y' = y + k_y · d + Δy, where Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular equipment and the object to be recognized, and k_y is the relation coefficient between the y-axis deviation of the binocular equipment and the distance.
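Taken together with the horizontal case above, both conditions amount to shifting the first region's coordinates by a distance-dependent offset and widening the result by the tolerances to obtain the third region. The sketch below assumes the tolerances are applied symmetrically around the shifted position, which the embodiment does not state explicitly.

```python
def third_region_bounds(x, y, w, h, d, layout="horizontal",
                        k_x=0.0, k_y=0.0, delta_c=8, delta_y=8):
    """Map the first region (top-left x, y; width w; height h) into the
    second image. Horizontal rig: x' = x + k_x * d + delta_c, y' = y + delta_y;
    vertical rig: x' = x + delta_c, y' = y + k_y * d + delta_y. The tolerances
    grow the third region around the shifted position so that it safely
    contains the face (symmetric growth is our assumption)."""
    if layout == "horizontal":
        cx, cy = x + k_x * d, y      # distance-dependent shift on the x axis
    else:
        cx, cy = x, y + k_y * d      # distance-dependent shift on the y axis
    return (int(cx - delta_c), int(cy - delta_y),
            int(w + 2 * delta_c), int(h + 2 * delta_y))
```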
In one possible design, the processor 13 is configured to obtain a first sample image set and a second sample image set respectively acquired by a sample device, where the sample device has the same device model as the binocular equipment, a first sample image in the first sample image set corresponds to a second sample image in the second sample image set one to one, and different first sample images correspond to different distances, where the distance is the distance between the sample device and the sample object in the first sample image; and determine k_y according to the distances and the deviations of the ordinate of the object to be recognized, corresponding to those distances, between the first sample image and the second sample image, to obtain the second condition.
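Because k_x and k_y relate a coordinate deviation linearly to the distance d, they could be estimated from the sample image pairs by a least-squares fit through the origin, as sketched below; the fitting method itself is an assumption, since the embodiment only states that the coefficient is determined from the distances and deviations. For a horizontal rig this yields k_x; for a vertical rig, k_y.

```python
import numpy as np

def fit_relation_coefficient(distances, offsets):
    """Estimate k by minimizing ||offsets - k * distances||^2, i.e. a
    least-squares line through the origin. `distances` are the sample-object
    distances d; `offsets` are the measured x-axis (or y-axis) deviations of
    the same face between each first and second sample image."""
    d = np.asarray(distances, dtype=np.float64)
    o = np.asarray(offsets, dtype=np.float64)
    return float(np.dot(d, o) / np.dot(d, d))  # closed-form solution
```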
In one possible design, the processor 13 is configured to move the first region within the third region according to a preset step length to determine, from the third region, a plurality of fourth regions with the same area as the first region; determine the similarity between the first region and each fourth region to obtain a plurality of similarities; and take the fourth region corresponding to the maximum value among the plurality of similarities as the second region.
In one possible design, the processor 13 is configured to compress the first region and the fourth region to reduce their sizes to 63 × 63, and to convolve the first region and the fourth region of size 63 × 63 to obtain the similarity of the first region and the fourth region.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic apparatus 200 includes:
at least one processor 21 and memory 22;
the memory 22 stores computer-executable instructions;
the at least one processor 21 executes the computer-executable instructions stored in the memory 22, so that the at least one processor 21 performs the living body detection method as described above.
Optionally, the electronic device 200 further comprises a communication component 23. The processor 21, the memory 22, and the communication component 23 may be connected by a bus 24.
The embodiment of the present application further provides a readable storage medium in which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement the living body detection method as described above.
Embodiments of the present application also provide a computer program product, which, when run on an electronic device, causes the electronic device to execute the above-mentioned living body detection method.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be implemented by hardware related to program instructions. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A living body detection method, comprising:
when an object to be recognized appears in the visual range of binocular equipment, acquiring a first image of the visual range by using a first imaging system of the binocular equipment, and acquiring a second image of the visual range by using a second imaging system of the binocular equipment;
determining a first area where the object to be recognized is located from the first image;
determining a second area from the second image by using the position relation between the first imaging system and the second imaging system and the first area, wherein the second area contains the object to be identified; the first imaging system and the second imaging system are arranged horizontally or vertically, the first area is provided with a horizontal and vertical coordinate area, the second area is provided with a horizontal and vertical coordinate area, and the first image and the second image have a horizontal and vertical coordinate corresponding relation based on the position relation of the first imaging system and the second imaging system;
inputting the face features of the first region and the face features of the second region into a pre-trained living body detection model to determine a living body recognition result of the object to be recognized;
and outputting prompt information, wherein the prompt information is used for indicating whether the object to be identified passes the living body detection.
2. The method of claim 1, wherein determining a second region from the second image using the positional relationship of the first imaging system and the second imaging system, and the first region, comprises:
determining a third region from the second image by using the position relation between the first imaging system and the second imaging system and the first region, wherein the area of the third region is larger than that of the first region, and the third region comprises an object to be identified;
and determining the second area from the third area according to the first area.
3. The method of claim 2, wherein determining a third region from the second image using the positional relationship of the first imaging system and the second imaging system, and the first region comprises:
determining a first condition and a second condition by using the positional relationship of the first imaging system and the second imaging system, the first condition being a condition that the positional relationship of an abscissa x of a first coordinate point and an abscissa x 'of a second coordinate point is satisfied, the second condition being a condition that the positional relationship of an ordinate y of the first coordinate point and an ordinate y' of the second coordinate point is satisfied, the first coordinate point being located in the first area, the second coordinate point being located in the third area;
determining the abscissa x 'of the second coordinate point according to the abscissa x of the first coordinate point and the first condition, and determining the ordinate y' of the second coordinate point according to the ordinate y of the first coordinate point and the second condition;
determining a third region from the second image according to the first region, the abscissa x of the first coordinate point, the ordinate y of the first coordinate point, the abscissa x' of the second coordinate point, and the ordinate y' of the second coordinate point.
4. The method of claim 3,
when the first imaging system and the second imaging system are horizontally arranged, the first condition is x' = x + k_x · d + Δc and the second condition is y' = y + Δy, wherein Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular equipment and the object to be recognized, and k_x is the relation coefficient between the x-axis deviation of the binocular equipment and the distance.
5. The method of claim 4, further comprising:
acquiring a first sample image set and a second sample image set which are respectively acquired by sample equipment, wherein the sample equipment has the same equipment model as the binocular equipment, the first sample images in the first sample image set correspond to the second sample images in the second sample image set one by one, the distances corresponding to different first sample images are different, and the distance is the distance between the sample equipment and a sample object in the first sample image;
determining the k_x according to the distances and the deviations of the abscissa of the object to be recognized corresponding to the distances in the first sample image and the second sample image, to obtain the first condition.
6. The method of claim 3,
when the positional relationship between the first imaging system and the second imaging system is a vertical arrangement, the first condition is x' = x + Δc and the second condition is y' = y + k_y · d + Δy, wherein Δy is a preset y-axis tolerance, Δc is a preset x-axis tolerance, d is the distance between the binocular equipment and the object to be recognized, and k_y is the relation coefficient between the y-axis deviation of the binocular equipment and the distance.
7. The method of claim 6, further comprising:
acquiring a first sample image set and a second sample image set which are respectively acquired by sample equipment, wherein the sample equipment has the same equipment model as the binocular equipment, the first sample images in the first sample image set correspond to the second sample images in the second sample image set one by one, the distances corresponding to different first sample images are different, and the distance is the distance between the sample equipment and a sample object in the first sample image;
determining the k_y according to the distances and the deviations of the ordinate of the object to be recognized corresponding to the distances in the first sample image and the second sample image, to obtain the second condition.
8. The method according to any one of claims 2 to 7, wherein said determining the second region from the third region based on the first region comprises:
moving the first region in the third region according to a preset step length to determine a plurality of fourth regions with the same area as the first region from the third region;
determining the similarity between the first region and each fourth region to obtain a plurality of similarities, and taking the fourth region corresponding to the maximum value among the plurality of similarities as the second region.
9. The method of claim 8, wherein determining the similarity between the first region and each fourth region comprises:
compressing the first region and the fourth region to compress the first region and the fourth region to a size of 63 × 63;
and convolving the first region and the fourth region of size 63 × 63 to obtain the similarity of the first region and the fourth region.
10. A living body detection device, comprising:
the system comprises a first imaging system, a second imaging system and a control system, wherein the first imaging system is used for acquiring a first image of a visual range when an object to be identified appears in the visual range of binocular equipment;
the second imaging system is used for acquiring a second image of the visual range when the object to be recognized appears in the visual range of the binocular equipment;
a processor, configured to determine a first region where the object to be recognized is located from the first image, and determine a second region from the second image by using a positional relationship between the first imaging system and the second imaging system and the first region, where the second region includes the object to be recognized; the first imaging system and the second imaging system are arranged horizontally or vertically, the first area is provided with a horizontal and vertical coordinate area, the second area is provided with a horizontal and vertical coordinate area, and the first image and the second image have a horizontal and vertical coordinate corresponding relation based on the position relation of the first imaging system and the second imaging system;
and inputting the face characteristics of the first region and the face characteristics of the second region into a pre-trained living body detection model to determine a living body recognition result of the object to be recognized, and outputting prompt information, wherein the prompt information is used for indicating whether the object to be recognized passes through living body detection.
11. An electronic device, comprising: a processor, a memory, and a computer program; wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method of any of claims 1-9.
12. A readable storage medium having stored therein instructions that, when executed on an electronic device, cause the electronic device to perform the method of any one of claims 1-9.
CN201911396238.XA 2019-12-30 2019-12-30 Living body detection method, living body detection device, electronic apparatus, and storage medium Active CN113128258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911396238.XA CN113128258B (en) 2019-12-30 2019-12-30 Living body detection method, living body detection device, electronic apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911396238.XA CN113128258B (en) 2019-12-30 2019-12-30 Living body detection method, living body detection device, electronic apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN113128258A CN113128258A (en) 2021-07-16
CN113128258B true CN113128258B (en) 2022-10-04

Family

ID=76767818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911396238.XA Active CN113128258B (en) 2019-12-30 2019-12-30 Living body detection method, living body detection device, electronic apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN113128258B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868733A (en) * 2016-04-21 2016-08-17 腾讯科技(深圳)有限公司 Face in-vivo validation method and device
WO2018133791A1 (en) * 2017-01-19 2018-07-26 腾讯科技(深圳)有限公司 Living body discrimination method and system based on video analysis, and storage medium
CN109241832A (en) * 2018-07-26 2019-01-18 维沃移动通信有限公司 A kind of method and terminal device of face In vivo detection
CN109858375A (en) * 2018-12-29 2019-06-07 深圳市软数科技有限公司 Living body faces detection method, terminal and computer readable storage medium
CN110443192A (en) * 2019-08-01 2019-11-12 中国科学院重庆绿色智能技术研究院 A kind of non-interactive type human face in-vivo detection method and system based on binocular image

Also Published As

Publication number Publication date
CN113128258A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US11048953B2 (en) Systems and methods for facial liveness detection
EP2580711B1 (en) Distinguishing live faces from flat surfaces
US8578299B2 (en) Method and computing device in a system for motion detection
CN106372629B (en) Living body detection method and device
US9866820B1 (en) Online calibration of cameras
US20180165511A1 (en) Three-dimensional facial recognition method and system
CN104933389B (en) Identity recognition method and device based on finger veins
US10423848B2 (en) Method, system, and computer-readable recording medium for long-distance person identification
CN107392137B (en) Face recognition method and device
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN108537131B (en) Face recognition living body detection method based on face characteristic points and optical flow field
CN111652082B (en) Face living body detection method and device
KR101640014B1 (en) Iris recognition apparatus for detecting false face image
CN110956114A (en) Face living body detection method, device, detection system and storage medium
US10496874B2 (en) Facial detection device, facial detection system provided with same, and facial detection method
US9036920B2 (en) Method of detecting feature points of an object in a system for motion detection
WO2019061659A1 (en) Method and device for removing eyeglasses from facial image, and storage medium
US8615136B2 (en) Computing device and method for motion detection
CN108509868B (en) Face recognition system and method based on light field camera
CN112507767A (en) Face identification method and related computer system
KR20210136092A (en) Image processing apparatus, image processing method and image processing program
CN113128258B (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN112232121A (en) Living body detection method, apparatus, device, and medium
CN111241870A (en) Terminal device and face image recognition method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant