CN116152932A - Living body detection method and related equipment - Google Patents


Info

Publication number
CN116152932A
Authority
CN
China
Prior art keywords
light image
infrared light
target
brightness
face
Prior art date
Legal status
Pending
Application number
CN202111361322.5A
Other languages
Chinese (zh)
Inventor
洪哲鸣
张晓翼
王少鸣
郭润增
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111361322.5A priority Critical patent/CN116152932A/en
Publication of CN116152932A publication Critical patent/CN116152932A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses a living body detection method and related equipment; the related embodiments can be applied to various scenes such as cloud technology, artificial intelligence, and intelligent traffic. At least one group of face images of an object to be detected can be acquired; for each group of face images, face region recognition is performed on the visible light image in the group to obtain a recognition result; a target detection region of the infrared light image in the group is determined based on the recognition result; statistical analysis is performed on the brightness values of the pixels in the target detection region to obtain a target brightness value for the infrared light image; a target infrared light image is selected from the groups of face images according to the target brightness values of their infrared light images; and living body detection is performed on the object to be detected based on the target infrared light image. The method and the device can select a target infrared light image that is more favorable for living body detection, thereby improving the accuracy and efficiency of living body detection.

Description

Living body detection method and related equipment
Technical Field
The application relates to the technical field of computers, in particular to a living body detection method and related equipment.
Background
With the development of computer technology, image processing technology is being applied in more and more fields; for example, face recognition technology is widely used in access control and attendance, information security, electronic certificates, surveillance security, and other fields. Specifically, face recognition automatically extracts facial features from a face image and then performs identity authentication based on those features. The security of face recognition technology is receiving increasing attention: lawbreakers may pass face recognition with a forged face and then endanger property, personnel, and public safety. To prevent such illegal attacks, living body detection within face recognition technology is particularly important; living body detection is typically performed based on infrared light images.
However, in the related art there is no comprehensive scheme for selecting infrared light images during living body detection, so living body detection must be performed on every acquired infrared light image, resulting in low detection efficiency and accuracy.
Disclosure of Invention
The embodiments of the present application provide a living body detection method and related equipment; the related equipment may include a living body detection apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method can select a target infrared light image that is more favorable for living body detection, thereby improving the accuracy and efficiency of living body detection.
The embodiment of the application provides a living body detection method, which comprises the following steps:
acquiring at least one group of face images of an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image;
for each group of face images, performing face region recognition on the visible light image in the group to obtain a recognition result;
determining a target detection region of the infrared light image in the group based on the recognition result;
performing statistical analysis on the brightness values of the pixels in the target detection region to obtain a target brightness value for the infrared light image;
selecting a target infrared light image for living body detection from the groups of face images according to the target brightness values of their infrared light images; and
performing living body detection on the object to be detected based on the target infrared light image.
Accordingly, an embodiment of the present application provides a living body detection apparatus, including:
an acquisition unit, configured to acquire at least one group of face images of an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image;
a recognition unit, configured to perform, for each group of face images, face region recognition on the visible light image in the group to obtain a recognition result;
a determining unit, configured to determine a target detection region of the infrared light image in the group based on the recognition result;
an analysis unit, configured to perform statistical analysis on the brightness values of the pixels in the target detection region to obtain a target brightness value for the infrared light image;
a selecting unit, configured to select a target infrared light image for living body detection from the groups of face images according to the target brightness values of their infrared light images; and
a living body detection unit, configured to perform living body detection on the object to be detected based on the target infrared light image.
Optionally, in some embodiments of the present application, the determining unit may be specifically configured to determine, when the recognition result is that a face area of the visible light image is recognized, a target detection area corresponding to an infrared light image in the face image according to the face area of the visible light image.
Optionally, in some embodiments of the present application, the determining unit may include an alignment subunit and a mapping subunit as follows:
the alignment subunit is configured to perform position alignment processing on the pixel points of the infrared light image in the face image based on the position information of the pixel points in the visible light image, so as to obtain an aligned infrared light image;
And the mapping subunit is used for mapping the face area of the visible light image to a corresponding position in the aligned infrared light image to obtain a target detection area corresponding to the aligned infrared light image.
Optionally, in some embodiments of the present application, the analysis unit may be specifically configured to perform a mean value operation on a luminance value of a pixel point in a target detection area corresponding to the infrared light image, so as to obtain a target luminance value corresponding to the infrared light image.
Optionally, in some embodiments of the present application, the determining unit may include a selecting subunit and a constructing subunit as follows:
the selecting subunit is configured to select, when the recognition result is that the face area of the visible light image is not recognized, a target detection pixel point from the infrared light image based on a brightness value of a pixel point in the infrared light image in the face image;
and the construction subunit is used for constructing a target detection area corresponding to the infrared light image according to the target detection pixel point.
Optionally, in some embodiments of the present application, the analysis unit may include an interval determination subunit, a calculation subunit, and an analysis subunit, as follows:
an interval determining subunit, configured to determine a target brightness interval according to the brightness values of the pixels in the target detection region corresponding to the infrared light image, wherein the target brightness interval comprises at least one brightness subinterval;
a calculating subunit, configured to calculate, for each brightness subinterval, the proportion of pixels in the target detection region whose brightness values belong to that subinterval, according to the subinterval and the brightness values of the pixels in the target detection region corresponding to the infrared light image;
and an analysis subunit, configured to perform brightness analysis on the target detection region according to the pixel proportion corresponding to each brightness subinterval, to obtain the target brightness value corresponding to the infrared light image.
Optionally, in some embodiments of the present application, the calculating subunit may be specifically configured to, for each brightness subinterval, count the pixels in the target detection region whose brightness values fall within that subinterval, to obtain the number of target pixels corresponding to the subinterval; and to calculate, from that number, the proportion of pixels in the target detection region belonging to the subinterval.
Optionally, in some embodiments of the present application, the analysis subunit may be specifically configured to determine a reference brightness value for each brightness subinterval; and to fuse the reference brightness values of the subintervals, weighted by their corresponding pixel proportions, to obtain the target brightness value corresponding to the infrared light image.
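The subinterval-based analysis described above amounts to a histogram-weighted brightness estimate. A minimal sketch in Python (NumPy) follows; the choice of eight equal subintervals over the 8-bit range and of the subinterval midpoint as the reference brightness value are assumptions for illustration, not details fixed by the embodiment.

```python
import numpy as np

def target_brightness(region, bin_edges=None):
    """Subinterval-weighted target brightness (illustrative sketch):
    split the brightness range into subintervals, take the proportion of
    region pixels falling into each subinterval, and fuse each
    subinterval's reference brightness weighted by that proportion."""
    if bin_edges is None:
        # Assumed subintervals: eight equal bins over the 8-bit range [0, 256).
        bin_edges = np.linspace(0, 256, 9)
    counts, edges = np.histogram(region, bins=bin_edges)
    proportions = counts / max(region.size, 1)  # pixel proportion per subinterval
    reference = (edges[:-1] + edges[1:]) / 2    # assumed reference value: bin midpoint
    return float(np.sum(proportions * reference))
```

A uniform region whose pixels all fall into one subinterval simply returns that subinterval's reference value, while mixed regions return a proportion-weighted blend.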
An electronic device provided in an embodiment of the present application includes a processor and a memory, where the memory stores a plurality of instructions, and the processor loads the instructions to perform steps in the living body detection method provided in the embodiment of the present application.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps in the living body detection method provided by the embodiment of the present application.
In addition, the embodiment of the application further provides a computer program product, which comprises a computer program or instructions, and the computer program or instructions realize the steps in the living body detection method provided by the embodiment of the application when being executed by a processor.
The embodiments of the present application provide a living body detection method and related equipment, which can acquire at least one group of face images of an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image; for each group of face images, perform face region recognition on the visible light image to obtain a recognition result; determine a target detection region of the infrared light image in the group based on the recognition result; perform statistical analysis on the brightness values of the pixels in the target detection region to obtain a target brightness value for the infrared light image; select a target infrared light image for living body detection from the groups of face images according to the target brightness values of their infrared light images; and perform living body detection on the object to be detected based on the target infrared light image. In this way, the face region recognition result of the visible light image is used to determine the target detection region of the infrared light image; then, based on the brightness analysis of the target detection region, infrared light images that are too dark or overexposed are filtered out and a target infrared light image more favorable for living body detection is selected, thereby improving the accuracy and efficiency of living body detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1a is a scene schematic diagram of the living body detection method provided by an embodiment of the present application;
Fig. 1b is a flow chart of the living body detection method provided by an embodiment of the present application;
Fig. 1c is another flow chart of the living body detection method provided by an embodiment of the present application;
Fig. 1d is another flow chart of the living body detection method provided by an embodiment of the present application;
Fig. 2 is another flow chart of the living body detection method provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of the living body detection apparatus provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the protection scope of the present application.
The embodiment of the application provides a living body detection method and related equipment, wherein the related equipment can comprise a living body detection device, an electronic device, a computer readable storage medium and a computer program product. The living body detection apparatus may be integrated in an electronic device, which may be a terminal or a server or the like.
It will be appreciated that the living body detection method of the present embodiment may be executed on the terminal, may be executed on the server, or may be executed by both the terminal and the server. The above examples should not be construed as limiting the present application.
As shown in Fig. 1a, an example in which the terminal and the server jointly perform living body detection is described. The living body detection system provided by the embodiment of the present application includes a terminal 10, a server 11, and the like; the terminal 10 and the server 11 are connected via a network, for example a wired or wireless network connection, and the living body detection apparatus may be integrated in the terminal.
The terminal 10 may be configured to: acquire at least one group of face images of an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image; for each group of face images, perform face region recognition on the visible light image to obtain a recognition result; determine a target detection region of the infrared light image in the group based on the recognition result; perform statistical analysis on the brightness values of the pixels in the target detection region to obtain a target brightness value for the infrared light image; select a target infrared light image for living body detection from the groups of face images according to the target brightness values of their infrared light images; and transmit the target infrared light image to the server 11, so that the server 11 performs living body detection on the object to be detected based on the target infrared light image. The terminal 10 may include a mobile phone, a smart TV, a tablet computer, a notebook computer, or a personal computer (PC). A client may also be provided on the terminal 10, such as an application client or a browser client.
The server 11 may be configured to: receive the target infrared light image transmitted by the terminal 10 and perform living body detection on the object to be detected based on the target infrared light image. The server 11 may be a single server, or a server cluster or cloud server composed of a plurality of servers. In the disclosed living body detection method and apparatus, a plurality of servers may be organized into a blockchain, with each server being a node on the blockchain.
The step of selecting an infrared light image, described above as performed by the terminal 10, may alternatively be performed by the server 11.
The living body detection method provided by the embodiment of the application relates to the computer vision technology in the field of artificial intelligence.
Artificial intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent traffic, and other directions.
Computer vision (CV) is a science that studies how to make machines "see": it uses cameras and computers, in place of human eyes, to identify, track, and measure targets, and further performs graphics processing so that the resulting image is more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, automatic driving, intelligent traffic, and the like, as well as common biometric technologies such as face recognition and fingerprint recognition.
The embodiments are described in detail below; the order of description is not intended to limit the preferred order of the embodiments.
The present embodiment will be described from the viewpoint of a living body detection apparatus, which may be integrated in an electronic device, which may be a device such as a server or a terminal.
The living body detection method can be applied to various scenes needing living body detection, such as face-brushing payment, entrance guard attendance and the like. The embodiment can be applied to various scenes such as cloud technology, artificial intelligence, intelligent traffic, auxiliary driving and the like.
As shown in fig. 1b, the specific flow of the living body detection method may be as follows:
101. at least one group of face images for an object to be detected is acquired, each group of face images comprising an infrared light image and a visible light image.
The object to be detected is the object on which living body detection is to be performed; it may be a real face or a fake face, such as a printed paper face photo or a silicone mask.
The infrared light image may specifically be an infrared image formed by flood infrared light imaging and acquired by an infrared sensor; it can be used for living body detection of the object to be detected.
The visible light image may be a color image formed under natural light and collected by a color sensor; it can be used to verify the identity of the object to be detected.
Specifically, each group of face images may further include a depth image; the depth image may be obtained by having an infrared sensor collect infrared light with a speckle structure and having a depth unit resolve the speckle. In three-dimensional (3D) computer graphics and computer vision, a depth map is an image or image channel containing information about the distance from the surfaces of scene objects to a viewpoint. Each pixel of the depth map represents the vertical distance between the depth camera plane and the plane of the photographed object, usually stored as a 16-bit value in millimeters. Depth images are generally used in face recognition scenarios for living body detection and auxiliary identification; using depth images to assist identification can greatly improve the accuracy and robustness of face recognition.
In general, living body detection determines whether the object to be detected (i.e., the face-brushing object) is a real person, a photo, a head model, or the like: whether it is a photo can be determined from the depth image, and whether it is a silicone head model can be determined from the brightness of the infrared light image.
In some embodiments, the face images may be acquired by a camera, which may be a three-dimensional (3D) camera; a 3D camera adds software and hardware related to living body detection, such as a depth camera and an infrared camera, and living body detection can further improve information security.
In the related art, face recognition is generally performed using RGB-D (Red Green Blue-Depth) data.
RGB is an industry color standard: a wide range of colors is obtained by adjusting and superimposing the three color channels Red, Green, and Blue, i.e., the colors of the red, green, and blue channels.
D is short for Depth Map. In three-dimensional computer graphics, a depth map is an image or image channel containing information about the distance from the surfaces of scene objects to a viewpoint. A depth map is similar to a grayscale image, except that the value of each pixel is the actual distance from the sensor to the object. Typically, the RGB image (i.e., the visible light image) and the depth image are registered, so that there is a one-to-one correspondence between the pixels of the RGB image and the pixels of the depth image.
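The one-to-one pixel correspondence of registered RGB-D data can be illustrated with a minimal sketch; the image sizes and the depth value below are made-up placeholders, not values from the embodiment.

```python
import numpy as np

# Registered RGB-D data: the visible light (RGB) image and the depth map
# have a one-to-one pixel correspondence, so the same (row, col) indexes
# a color and its distance. Shapes and values are assumed for illustration.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # visible light image
depth = np.zeros((480, 640), dtype=np.uint16)    # 16-bit depth map, in millimeters
depth[240, 320] = 550                            # assumed sensor-to-surface distance

row, col = 240, 320
color_at_pixel = rgb[row, col]       # RGB value at (row, col)
distance_mm = int(depth[row, col])   # depth at the same coordinates
```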
102. And carrying out face region recognition on visible light images in the face images aiming at each group of face images to obtain a recognition result.
Face region recognition identifies the specific position of the face in the visible light image, i.e., the face region in the visible light image that contains the face of the object to be detected. The recognition result falls into two cases: a face region is recognized in the visible light image, or no face region is recognized.
The living body detection method can determine the target detection area of the infrared light image based on the face area recognition result of the visible light image, and then select the target infrared light image capable of being used for living body detection according to the brightness value of the target detection area of the infrared light image. The embodiment can provide a complete infrared light image selection scheme to comprehensively support the situations that the face in the visible light image is recognized and the face in the visible light image is not recognized.
103. And determining a target detection area corresponding to the infrared light image in the face image based on the identification result.
The infrared light image and the visible light image corresponding to a recognition result belong to the same group of face images; the visible light image and the infrared light image in the same group are acquired at the same moment.
The target detection area can be specifically an area for brightness analysis in the infrared light image; according to the embodiment, the target brightness value corresponding to the infrared light image can be determined according to the brightness value of the target detection area, and then the target infrared light image for living body detection is selected from the groups of face images based on the target brightness value.
Optionally, in this embodiment, the step of determining, based on the recognition result, a target detection area corresponding to the infrared light image in the face image may include:
and when the identification result is that the face area of the visible light image is identified, determining a target detection area corresponding to the infrared light image in the face image according to the face area of the visible light image.
Specifically, when the recognition result is that the face region of the visible light image is recognized, the face region of the visible light image can be directly mapped to a corresponding position in the infrared light image, so as to obtain a target detection region corresponding to the infrared light image.
In some embodiments, the pixels of the visible light image and the infrared light image may not be aligned, and then the alignment processing is required to be performed on the pixels of the visible light image and the infrared light image, and then the target detection area corresponding to the infrared light image is determined based on the face area of the visible light image.
Optionally, in this embodiment, the step of determining, according to the face area of the visible light image, a target detection area corresponding to the infrared light image in the face image may include:
based on the position information of the pixel points in the visible light image, performing position alignment processing on the pixel points of the infrared light image in the face image to obtain an aligned infrared light image;
and mapping the face region of the visible light image to a corresponding position in the aligned infrared light image to obtain a target detection region corresponding to the aligned infrared light image.
The position information of the pixel point in the visible light image may be a position of the pixel point in the whole visible light image, for example, the position information may be represented in a coordinate form, and the specific position of a certain pixel point in the visible light image in the whole visible light image may be represented by the coordinate.
The visible light image may be used as a reference: each pixel in the infrared light image is position-aligned with the corresponding pixel in the visible light image, i.e., pixels in the infrared light image that represent the same target object as pixels in the visible light image are aligned, so that after alignment the same target object occupies the same pixel positions in both images.
The face region of the visible light image is mapped to a corresponding position in the aligned infrared light image, specifically, may be mapped to a position in the aligned infrared light image, which is the same as the coordinates of the face region of the visible light image, so that the position is determined as a target detection region corresponding to the aligned infrared light image.
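The mapping step above can be sketched as follows, assuming (purely for illustration) that the face region of the visible light image is represented as an (x, y, w, h) bounding box; because the infrared image has been aligned to the visible image, the same coordinates select the target detection region.

```python
import numpy as np

def map_face_region(ir_aligned, face_bbox):
    """Map a face bounding box from the visible light image onto the
    aligned infrared image (sketch; bbox representation is assumed).
    face_bbox is (x, y, w, h) in visible-image coordinates; after
    alignment the same coordinates index the infrared image."""
    x, y, w, h = face_bbox
    return ir_aligned[y:y + h, x:x + w]  # target detection region

# Toy aligned infrared image and a bbox that a face detector might return.
ir = np.arange(100, dtype=np.uint8).reshape(10, 10)
region = map_face_region(ir, (2, 3, 4, 5))  # rows 3..7, columns 2..5
```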
Optionally, in this embodiment, the step of determining, based on the recognition result, a target detection area corresponding to the infrared light image in the face image may include:
when the recognition result is that the face area of the visible light image is not recognized, selecting a target detection pixel point from the infrared light image based on the brightness value of the pixel point in the infrared light image in the face image;
and constructing a target detection area corresponding to the infrared light image according to the target detection pixel points.
The pixels in the infrared light image whose brightness values meet a preset brightness condition may be determined as target detection pixels. The preset brightness condition may be set according to actual needs and is not limited in this embodiment; for example, it may require the brightness value to be greater than a preset brightness value, which may itself be set according to the actual situation.
Specifically, a point set formed by the target detection pixel points may be used as the target detection region corresponding to the infrared light image. For example, each selected target detection pixel point may be added to a preset blank set to obtain a point set including each target detection pixel point, and then a region corresponding to the point set in the infrared light image is used as a target detection region.
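The fallback construction described above (used when no face region is found in the visible light image) can be sketched as follows; the threshold value is an assumed placeholder for the preset brightness condition.

```python
import numpy as np

def build_detection_region(ir_image, min_brightness=40):
    """Construct the target detection region from target detection pixels
    (sketch): select pixels whose brightness exceeds a preset threshold
    (the value 40 is an assumption) and treat the resulting point set as
    the target detection region."""
    mask = ir_image > min_brightness   # preset brightness condition
    points = np.argwhere(mask)         # (row, col) of each target detection pixel
    values = ir_image[mask]            # brightness values of the region's pixels
    return points, values

ir = np.array([[10, 50], [60, 20]], dtype=np.uint8)
points, values = build_detection_region(ir)  # keeps the two bright pixels
```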
104. And carrying out statistical analysis on the brightness value of the pixel point in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image.
The target brightness value corresponding to the infrared light image can be determined according to the statistical analysis result of the brightness value of the pixel point in the target detection area. Specifically, based on recognition results of face regions of different visible light images, statistical analysis methods for luminance values of pixel points in the target detection region may be different.
Optionally, in this embodiment, the step of performing statistical analysis on the brightness value of the pixel point in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image may include:
and carrying out average value operation on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image.
The embodiment specifically refers to a case of performing statistical analysis on brightness values of pixel points in a target detection area when the recognition result is that a face area of a visible light image is recognized.
The average value operation result of the brightness values of the pixels in the target detection area may be used as the target brightness value corresponding to the infrared light image, specifically, the average brightness value of the pixels in the target detection area is used as the target brightness value corresponding to the infrared light image.
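The mean-value operation can be sketched as follows, assuming the target detection region has already been cropped into a 2D array of brightness values (names are illustrative):

```python
def target_brightness_mean(region):
    """Average brightness of the pixels in the target detection region,
    used as the target brightness value of the infrared light image."""
    values = [v for row in region for v in row]
    return sum(values) / len(values)
```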
Optionally, in this embodiment, the step of performing statistical analysis on the brightness value of the pixel point in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image may include:
determining a target brightness interval according to the brightness value of the pixel point in the target detection area corresponding to the infrared light image, wherein the target brightness interval comprises at least one brightness subinterval;
for each brightness subinterval, calculating the pixel point duty ratio of the brightness subinterval in the target detection area according to the brightness values of the brightness subinterval and the pixel points in the target detection area corresponding to the infrared light image;
and carrying out brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval to obtain a target brightness value corresponding to the infrared light image.
The embodiment specifically refers to a case where, when a face region of a visible light image is not recognized, a statistical analysis is performed on luminance values of pixel points in a target detection region.
The target brightness interval can be set according to actual conditions; for example, the target brightness interval may be a brightness interval corresponding to a minimum brightness value to a maximum brightness value of the pixel points in the target detection area, and the target brightness interval may also be a preset numerical range. The luminance sub-section may be obtained by dividing the target luminance section, and the dividing manner may be equal division or unequal division, which is not limited in this embodiment.
In the present embodiment, the target luminance section and each luminance sub-section may be continuous or discrete, and the present embodiment is not limited thereto. Discrete intervals mean that the values contained in the intervals are discrete and not continuous, e.g., integer intervals contain integer values, are discrete intervals, etc.
In one embodiment, the brightness values of the pixels in the target detection area corresponding to the infrared light image include 160, 173, 185, 196, 207, and 222, and the target brightness interval may be a brightness interval including 160, 173, 185, 196, 207, and 222, where each brightness sub-interval corresponds to one of the values, for example, the brightness value 160 is regarded as a brightness sub-interval, and the brightness value 173 is regarded as a brightness sub-interval. The pixel ratio belonging to each luminance subinterval in the target detection area is thus calculated, that is, the probability that the pixel in the target detection area appears at each luminance value is calculated.
Optionally, in this embodiment, the step of "calculating, for each luminance subinterval, a pixel point duty ratio of a pixel point in the target detection area, which belongs to the luminance subinterval, according to the luminance subinterval and a luminance value of the pixel point in the target detection area corresponding to the infrared light image" may include:
counting the pixel points of which the brightness values fall into the brightness subintervals in the target detection areas corresponding to the infrared light images aiming at each brightness subinterval to obtain the number of the target pixel points corresponding to the brightness subintervals;
and calculating the pixel point duty ratio belonging to the brightness subinterval in the target detection area according to the number of the target pixel points corresponding to the brightness subinterval.
For example, for a certain brightness subinterval, if the number of pixels in the target detection area with brightness values falling into the brightness subinterval is 20, the number of target pixels corresponding to the brightness subinterval is 20, and if the number of pixels in the target detection area is 400, the duty ratio of pixels belonging to the brightness subinterval in the target detection area is 20/400=0.05.
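The counting and ratio calculation above can be sketched as follows; representing each brightness subinterval as a half-open [low, high) range is an assumption for illustration:

```python
def subinterval_ratios(brightness_values, subintervals):
    """For each half-open subinterval [low, high), compute the ratio of
    pixels in the target detection region whose brightness falls in it."""
    n = len(brightness_values)
    return [sum(1 for v in brightness_values if low <= v < high) / n
            for low, high in subintervals]
```

With 20 of 400 pixels falling into a subinterval, this reproduces the 20/400 = 0.05 duty ratio of the example above.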
Optionally, in this embodiment, the step of performing luminance analysis on the target detection area according to the pixel point duty ratio corresponding to each luminance subinterval to obtain the target luminance value corresponding to the infrared light image may include:
Determining a reference brightness value corresponding to each brightness subinterval;
and according to the pixel point duty ratio corresponding to each brightness subinterval, fusing the reference brightness values corresponding to each brightness subinterval to obtain the target brightness value corresponding to the infrared light image.
The reference brightness value of each brightness subinterval can be set according to actual conditions. For example, if the luminance subinterval includes only one discrete value, the reference luminance value may be the discrete value; if the luminance subinterval is a continuous interval, the reference luminance value may be an average value of two end values of the luminance subinterval.
The fusion manner of the reference luminance values corresponding to each luminance subinterval is various, which is not limited in this embodiment. For example, the fusion mode may be weighted fusion, in which the pixel point duty ratio corresponding to each brightness subinterval is used as a weight, so that the weighted summation operation is performed on the reference brightness value corresponding to each brightness subinterval, and the operation result obtained by the weighted summation operation is used as the target brightness value corresponding to the infrared light image.
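A minimal sketch of the weighted fusion described above, assuming (as in the example) that the reference value of a continuous subinterval is the average of its two end values:

```python
def reference_value(subinterval):
    """Reference brightness of a continuous subinterval: the average of
    its two end values. A single discrete value is its own reference."""
    low, high = subinterval
    return (low + high) / 2

def fuse_reference_brightness(ref_values, ratios):
    """Weighted fusion: each subinterval's pixel duty ratio acts as the
    weight of its reference brightness value; the weighted sum is the
    target brightness value of the infrared light image."""
    return sum(ref * ratio for ref, ratio in zip(ref_values, ratios))
```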
In a specific scenario, as shown in fig. 1c, when a face region in a visible light image is not recognized, a process of acquiring a target brightness value corresponding to an infrared light image may be as follows:
A1, filtering an infrared light image, namely determining pixel points with brightness values larger than a preset brightness value in the infrared light image as target detection pixel points, and taking a point set formed by the target detection pixel points as a target detection area corresponding to the infrared light image;
a2, carrying out statistical analysis on brightness values of the pixel points in the target detection area, and particularly calculating the probability of occurrence of the pixel points in the target detection area on each brightness subinterval;
for example, the point set obtained in step A1 is denoted as G, and the pixel points in the target detection area (i.e., the target detection pixel points) are denoted as k. For the target detection pixel points in the point set G, the histogram distribution of their brightness values is obtained. The horizontal axis of the histogram represents the brightness value and may include a plurality of brightness subintervals, the brightness of each brightness subinterval being denoted as L(k); the vertical axis may represent the number of target detection pixel points whose brightness values fall within the corresponding brightness subinterval. According to the histogram, the probability P(L(k)) that a pixel point in the target detection area appears in each brightness subinterval L(k) (specifically, at each brightness value) can thus be calculated;
a3, calculating an expected brightness value of the pixel point in the target detection area based on the probability of the pixel point in the target detection area on each brightness subinterval and the reference brightness value corresponding to each brightness subinterval, and taking the expected brightness value as a target brightness value corresponding to the infrared light image;
Specifically, the reference brightness value corresponding to each brightness subinterval may be denoted as L(k). In this embodiment, the probability P(L(k)) that a pixel point in the target detection area appears in each brightness subinterval L(k) may be used as the weight of that subinterval, and the reference brightness values L(k) may be weighted on this basis; the corresponding formula is Σ L(k) × P(L(k)), yielding the expected brightness value E of the pixel points in the target detection area, i.e., E = Σ L(k) × P(L(k)).
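Steps A1 to A3 can be sketched end to end as follows (a non-normative illustration: the preset brightness value is arbitrary, and each distinct brightness value is treated as its own subinterval, matching the discrete case described earlier):

```python
from collections import Counter

def expected_brightness(ir_image, preset_brightness):
    # A1: filter -- keep pixels brighter than the preset value (point set G)
    g = [v for row in ir_image for v in row if v > preset_brightness]
    # A2: histogram -> probability P(L(k)) of each brightness value L(k)
    counts = Counter(g)
    n = len(g)
    # A3: expectation E = sum over k of L(k) * P(L(k))
    return sum(l * c / n for l, c in counts.items())
```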
105. And selecting a target infrared light image for living body detection from the face images according to the target brightness value corresponding to the infrared light image in the face images.
Specifically, according to the target brightness value corresponding to the infrared light image, it can be determined whether the infrared light image is too dark or overexposed; an infrared light image that is too dark or overexposed is not conducive to living body detection. Therefore, infrared light images whose pictures are too dark or overexposed need to be screened out according to their target brightness values.
The infrared light images with target brightness values meeting preset conditions in the face images of each group can be selected as target infrared light images for living body detection. The preset conditions may be set according to actual situations, which is not limited in this embodiment; for example, the preset condition may be that the target brightness value of the infrared light image is within a preset brightness range, and the preset brightness range may be set according to the actual situation.
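A sketch of this screening step, assuming a hypothetical preset brightness range [low, high] in which images below low are treated as too dark and images above high as overexposed:

```python
def select_target_ir_images(candidates, low, high):
    """candidates: list of (image, target_brightness) pairs.
    Returns the images whose target brightness value lies within the
    preset brightness range [low, high]; the rest are intercepted."""
    return [img for img, brightness in candidates if low <= brightness <= high]
```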
In one embodiment, as shown in fig. 1d, a process for selecting an infrared light image of a target for living body detection is described as follows:
b1, acquiring a plurality of groups of face images through a camera, wherein each group of face images comprises a visible light image and an infrared light image with aligned pixel points, and the face region recognition result of the visible light image can be used for representing the face region recognition result of the infrared light image due to the alignment of the pixel points of the images; then, aiming at each group of face images, carrying out face area recognition on visible light images in the face images to obtain a recognition result;
b2, when the recognition result is that the face region of the visible light image is recognized, if the average brightness value of the pixel points in the face region of the infrared light image is within a preset brightness range, selecting the infrared light image as a target infrared light image for living body detection;
specifically, since the positions of the pixels of the visible light image and the infrared light image in the same group of face images are aligned, for each group of face images, the face area identified in the group of visible light images can be mapped into the infrared light image of the group to obtain the face area of the infrared light image, the face area of the infrared light image is taken as a target detection area, and the average brightness value of the pixels in the target detection area is taken as a target brightness value corresponding to the infrared light image;
When the target brightness value of the infrared light image is greater than a preset overexposure brightness threshold, the infrared light image is judged to be overexposed and is intercepted (filtered out); when the target brightness value of the infrared light image is smaller than a preset over-dark brightness threshold, the infrared light image is judged to be too dark and is likewise intercepted; an infrared light image that is not intercepted is selected as the target infrared light image for living body detection;
b3, when the recognition result is that the face area of the visible light image cannot be recognized, determining pixel points with brightness values larger than a preset brightness value in the infrared light image as target detection pixel points, taking a point set formed by the target detection pixel points as a target detection area corresponding to the infrared light image, and taking the expected brightness value of the pixel points in the target detection area as a target brightness value corresponding to the infrared light image; if the target brightness value is within the preset brightness range, selecting the infrared light image as a target infrared light image for living body detection;
specifically, when the target brightness value of the infrared light image is greater than a preset overexposure brightness threshold, the infrared light image is judged to be overexposed and is intercepted (filtered out); when the target brightness value is smaller than a preset over-dark brightness threshold, the infrared light image is judged to be too dark and is likewise intercepted; an infrared light image that is not intercepted is selected as the target infrared light image for living body detection.
The infrared light image selection method provided by this embodiment can comprehensively support both the case where a face is recognized in the visible light image and the case where it is not; and through verification, overexposed or excessively dark infrared light images can be intercepted with 100% accuracy, so that a better infrared light image selection effect is obtained.
106. And performing living body detection on the object to be detected based on the target infrared light image.
Living body detection is mainly used for judging whether the face appearing in front of the lens is real or fake. A face presented by another medium can be defined as a false face, including a printed paper face photo, a face on an electronic display screen, a silica gel mask, a three-dimensional (3D) portrait, and the like. The living body detection technology can effectively resist common attack means such as photos, face swapping, masks, occlusion, and screen re-shooting.
Specifically, after living body detection, face recognition can be performed on the object to be detected based on the visible light image, where face recognition is a technology for identifying a person's identity through facial multimedia information. In some embodiments, associated payment operations may also be performed based on the identified face feature information.
The visible light image and the target infrared light image for face recognition are acquired at the same time, namely, the visible light image and the target infrared light image belong to the same group of face images. For example, after selecting the target infrared light image for living body detection, a face image including the target infrared light image may be selected from the face images of each group as a reference face image group, and face recognition may be performed for visible light images in each reference face image group.
Specifically, in some embodiments, the step of "face recognition of the object to be detected based on the visible light image" may include:
and extracting features of the visible light image to obtain face feature information corresponding to the visible light image, and identifying the identity of the object to be detected according to the face feature information.
The face feature information may be feature string information that uniquely identifies a user, obtained by converting the visible light image information; it can be extracted from the visible light image through a face recognition model, which may be a deep neural network (DNN, Deep Neural Network), a visual geometry group network (VGGNet, Visual Geometry Group Network), a residual network (ResNet, Residual Network), a densely connected convolutional network (DenseNet, Dense Convolutional Network), or the like; however, it should be understood that the face recognition model of this embodiment is not limited to the types listed above.
In some embodiments, a plurality of visible light images may be obtained, image analysis of at least one dimension is then performed on each visible light image so as to select a target visible light image from the plurality of visible light images, and feature extraction is performed on the target visible light image to obtain the face feature information corresponding to the target visible light image, thereby performing identity recognition. Identity recognition can determine which user the face-brushing object is.
Specifically, at least one dimension of image analysis is performed on each visible light image to ensure that the quality of the image meets the operation requirement of subsequent business, wherein the at least one dimension of image analysis can comprise a face shielding range, an illumination environment, a face size, a face centering degree, a face angle, an image contrast, brightness and definition of the image and the like; and selecting a target visible light image from the acquired visible light images based on the image analysis result.
Then, feature extraction can be performed on the preferably selected target visible light image to obtain face feature information. Specifically, the position information and the shape information of the facial features in the target visible light image can be extracted, and the face feature information of the target visible light image can be acquired based on this position information and shape information. After the face feature information is obtained, it can be matched against the face features stored in a preset face feature database to determine the user identity corresponding to the face feature information.
In a specific face recognition scenario, multiple groups of face images for an object to be detected are generally collected, where each group of face images includes an infrared light image, a visible light image, and a depth image: the infrared light image is used for living body detection to defend against silica gel head models and the like, the visible light image is used for face recognition, and the depth image is used for living body detection to defend against photos and the like. After the plurality of groups of face images are acquired, a group whose infrared light image, visible light image, and depth image all meet the preconditions of the living body detection and recognition algorithms needs to be selected. For the selection of the infrared light image and the visible light image, reference may be made to the description of the above embodiments, which is not repeated here. In addition, it should be noted that the selected infrared light image, visible light image, and depth image are acquired at the same time; if one image (such as the infrared light image) in a group of face images does not meet the preconditions of the living body detection and recognition algorithms, the group of face images can be discarded.
The process of selecting the depth image may specifically be as follows: and aiming at each group of face images, carrying out face area identification on the visible light images to obtain face areas of the visible light images, and carrying out position alignment processing on the visible light images and the pixels of the depth images in the group of face images, so that the face areas of the visible light images can be mapped to corresponding positions in the depth images to obtain face areas of the depth images, and the depth images are selected based on the depth image integrity in the face areas of the depth images.
The depth image completeness specifically refers to the proportion of valid points in the face region of the depth image. The value of some pixel points in the depth image is 0; usually 0 represents an invalid point and the remaining values represent valid points. The completeness is the ratio of valid points to all points in the face region of the depth image.
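The completeness calculation can be sketched as follows, assuming the face region of the depth image has been cropped into a 2D array in which a value of 0 marks an invalid point (names are illustrative):

```python
def depth_completeness(depth_face_region):
    """Ratio of valid (non-zero) points to all points in the face
    region of the depth image."""
    values = [v for row in depth_face_region for v in row]
    valid = sum(1 for v in values if v != 0)
    return valid / len(values)
```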
As can be seen from the above, the present embodiment may collect at least one group of face images for an object to be detected, where each group of face images includes an infrared light image and a visible light image; aiming at each group of face images, carrying out face area recognition on visible light images in the face images to obtain a recognition result; determining a target detection area corresponding to an infrared light image in the face image based on the identification result; carrying out statistical analysis on brightness values of pixel points in a target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; selecting a target infrared light image for living body detection from each group of face images according to a target brightness value corresponding to the infrared light image in each group of face images; and performing living body detection on the object to be detected based on the target infrared light image. According to the method and the device, the face region recognition result of the visible light image can be utilized to determine the target detection region of the infrared light image, further, based on the brightness analysis result of the target detection region, the infrared light image with too dark or overexposed picture is intercepted, the target infrared light image which is more beneficial to living body detection is selected, and therefore living body detection accuracy and detection efficiency are improved.
The method according to the previous embodiment will be described in further detail below with the living body detection device specifically integrated in the terminal.
The embodiment of the application provides a living body detection method, as shown in fig. 2, the specific flow of the living body detection method may be as follows:
201. the terminal acquires at least one group of face images aiming at an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image.
The object to be detected is an object to be detected in living body, which may be a real face or a false face, such as a printed paper face photo, a silica gel mask, etc.
In general, living body detection needs to determine whether the object to be detected (i.e., the face-brushing object) is a real person, a photo, a head model, or the like; whether the object to be detected is a photo can be determined through the depth image, and whether it is a silica gel head model can be determined through the brightness of the infrared light image.
202. The terminal carries out face area recognition on visible light images in the face images aiming at each group of face images to obtain a recognition result; when the recognition result is that the face area of the visible light image is recognized, entering a step 203; and when the recognition result is that the face area of the visible light image is not recognized, entering step 205.
The face region identification may be to identify a specific position of a face in the visible light image, that is, a face region in the visible light image, where the face region includes a face of an object to be detected. The recognition result may be divided into two cases, namely, a face region in which the visible light image is recognized and a face region in which the visible light image is not recognized.
The living body detection method can determine the target detection area of the infrared light image based on the face area recognition result of the visible light image, and then select the target infrared light image capable of being used for living body detection according to the brightness value of the target detection area of the infrared light image. The embodiment can provide a complete infrared light image selection scheme to comprehensively support the situations that the face in the visible light image is recognized and the face in the visible light image is not recognized.
203. The terminal determines a target detection area corresponding to an infrared light image in the face image according to the face area of the visible light image; step 204 is entered.
The infrared light image and the visible light image corresponding to the recognition result belong to the same group of face images. The visible light image and the infrared light image in the same group of face images are acquired at the same time.
Specifically, when the recognition result is that the face region of the visible light image is recognized, the face region of the visible light image can be directly mapped to a corresponding position in the infrared light image, so as to obtain a target detection region corresponding to the infrared light image.
In some embodiments, the pixels of the visible light image and the infrared light image may not be aligned, and then the alignment processing is required to be performed on the pixels of the visible light image and the infrared light image, and then the target detection area corresponding to the infrared light image is determined based on the face area of the visible light image.
Optionally, in this embodiment, the step of determining, according to the face area of the visible light image, a target detection area corresponding to the infrared light image in the face image may include:
based on the position information of the pixel points in the visible light image, performing position alignment processing on the pixel points of the infrared light image in the face image to obtain an aligned infrared light image;
and mapping the face region of the visible light image to a corresponding position in the aligned infrared light image to obtain a target detection region corresponding to the aligned infrared light image.
204. The terminal carries out average value operation on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; step 208 is entered.
205. The terminal selects a target detection pixel point from the infrared light image based on the brightness value of the pixel point in the infrared light image in the face image; constructing a target detection area corresponding to the infrared light image according to the target detection pixel points; step 206 is entered.
The infrared light image and the visible light image corresponding to the recognition result belong to the same group of face images. The visible light image and the infrared light image in the same group of face images are acquired at the same time.
The pixel point in the infrared light image whose brightness value meets a preset brightness condition can be determined as the target detection pixel point. The preset brightness condition may be set according to actual conditions, which is not limited in this embodiment. For example, the preset brightness condition may be that the brightness value of the pixel point is greater than a preset brightness value, where the preset brightness value may be set according to the actual situation.
Specifically, a point set formed by the target detection pixel points may be used as the target detection region corresponding to the infrared light image. For example, each selected target detection pixel point may be added to a preset blank set to obtain a point set including each target detection pixel point, and then a region corresponding to the point set in the infrared light image is used as a target detection region.
206. The terminal determines a target brightness interval according to the brightness value of the pixel point in the target detection area corresponding to the infrared light image, wherein the target brightness interval comprises at least one brightness subinterval; step 207 is entered.
The target brightness interval can be set according to actual conditions; for example, the target brightness interval may be a brightness interval corresponding to a minimum brightness value to a maximum brightness value of the pixel points in the target detection area, and the target brightness interval may also be a preset numerical range. The luminance sub-section may be obtained by dividing the target luminance section, and the dividing manner may be equal division or unequal division, which is not limited in this embodiment.
In the present embodiment, the target luminance section and each luminance sub-section may be continuous or discrete, and the present embodiment is not limited thereto. Discrete intervals mean that the values contained in the intervals are discrete and not continuous, e.g., integer intervals contain integer values, are discrete intervals, etc.
In one embodiment, the brightness values of the pixels in the target detection area corresponding to the infrared light image include 160, 173, 185, 196, 207, and 222, and the target brightness interval may be a brightness interval including 160, 173, 185, 196, 207, and 222, where each brightness sub-interval corresponds to one of the values, for example, the brightness value 160 is regarded as a brightness sub-interval, and the brightness value 173 is regarded as a brightness sub-interval. The pixel ratio belonging to each luminance subinterval in the target detection area is thus calculated, that is, the probability that the pixel in the target detection area appears at each luminance value is calculated.
207. The terminal calculates the pixel point duty ratio of the brightness subintervals in the target detection area according to the brightness values of the pixel points in the target detection area corresponding to the brightness subintervals and the infrared light image aiming at each brightness subinterval; according to the pixel point duty ratio corresponding to each brightness subinterval, carrying out brightness analysis on the target detection area to obtain a target brightness value corresponding to the infrared light image; step 208 is entered.
Optionally, in this embodiment, the step of "calculating, for each brightness subinterval, the pixel point duty ratio of the brightness subinterval in the target detection area according to the brightness subinterval and the brightness values of the pixel points in the target detection area corresponding to the infrared light image" may include:
for each brightness subinterval, counting the pixel points in the target detection area corresponding to the infrared light image whose brightness values fall into the brightness subinterval, to obtain the number of target pixel points corresponding to the brightness subinterval;
and calculating the pixel point duty ratio belonging to the brightness subinterval in the target detection area according to the number of the target pixel points corresponding to the brightness subinterval.
Optionally, in this embodiment, the step of "performing brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval to obtain the target brightness value corresponding to the infrared light image" may include:
Determining a reference brightness value corresponding to each brightness subinterval;
and according to the pixel point duty ratio corresponding to each brightness subinterval, fusing the reference brightness values corresponding to each brightness subinterval to obtain the target brightness value corresponding to the infrared light image.
The reference brightness value of each brightness subinterval can be set according to actual conditions. For example, if a brightness subinterval contains only one discrete value, its reference brightness value may be that value; if a brightness subinterval is a continuous interval, its reference brightness value may be the mean of its two endpoint values.
The reference brightness values corresponding to the brightness subintervals may be fused in various ways, which is not limited in this embodiment. For example, weighted fusion may be used: the pixel point duty ratio corresponding to each brightness subinterval serves as a weight, a weighted summation is performed over the reference brightness values corresponding to the brightness subintervals, and the result of the weighted summation is taken as the target brightness value corresponding to the infrared light image.
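Assuming continuous subintervals whose reference value is the midpoint of the two endpoint values, the weighted fusion described above might look like the following sketch; the function name and the example intervals are illustrative, not taken from the patent:

```python
def fuse_brightness(subintervals, duty_ratios):
    # Weighted fusion: each sub-interval's reference brightness value
    # (here, the mean of its two endpoint values) is weighted by that
    # sub-interval's pixel point duty ratio and summed.
    target = 0.0
    for (low, high), weight in zip(subintervals, duty_ratios):
        reference = (low + high) / 2.0
        target += weight * reference
    return target

# Two illustrative sub-intervals: 25% of pixels in [0, 128), 75% in [128, 256).
value = fuse_brightness([(0, 128), (128, 256)], [0.25, 0.75])
# 0.25 * 64 + 0.75 * 192 = 160.0
```

Because the duty ratios sum to one, the result always lies within the target brightness interval, making it a plausible single brightness statistic for the image.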
208. The terminal selects a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images.
Specifically, the target brightness value corresponding to an infrared light image indicates whether the image is too dark or overexposed, and an infrared light image that is too dark or overexposed is not conducive to living body detection. Therefore, infrared light images whose pictures are too dark or overexposed need to be screened out according to their target brightness values.
The infrared light images with target brightness values meeting preset conditions in the face images of each group can be selected as target infrared light images for living body detection. The preset conditions may be set according to actual situations, which is not limited in this embodiment; for example, the preset condition may be that the target brightness value of the infrared light image is within a preset brightness range, and the preset brightness range may be set according to the actual situation.
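A minimal sketch of this screening step, assuming a hypothetical preset brightness range [60, 200] and representing each candidate as an (id, target brightness) pair; the tie-breaking rule of preferring mid-range exposure is an illustrative choice, not specified by the patent:

```python
def select_target_infrared(images, low=60, high=200):
    # Keep only infrared images whose target brightness value lies inside
    # the preset range [low, high]; the thresholds here are placeholders.
    candidates = [(img_id, b) for img_id, b in images if low <= b <= high]
    if not candidates:
        return None  # every frame was too dark or overexposed
    # Among acceptable frames, prefer the one closest to mid-range exposure.
    mid = (low + high) / 2.0
    return min(candidates, key=lambda pair: abs(pair[1] - mid))[0]

chosen = select_target_infrared([("frame1", 30), ("frame2", 140), ("frame3", 250)])
# Only "frame2" falls inside [60, 200].
```

Returning None when no frame passes would correspond to re-collecting face images rather than running living body detection on an unusable frame.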
209. The terminal performs living body detection on the object to be detected based on the target infrared light image.
As can be seen from the above, in this embodiment, the terminal may collect at least one group of face images of an object to be detected, where each group of face images includes an infrared light image and a visible light image; and, for each group of face images, perform face region recognition on the visible light image in the face images to obtain a recognition result. When the recognition result is that a face region of the visible light image is recognized, the terminal determines a target detection area corresponding to the infrared light image in the face images according to the face region of the visible light image, and performs a mean value operation on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image. When the recognition result is that no face region of the visible light image is recognized, the terminal selects target detection pixel points from the infrared light image based on the brightness values of the pixel points in the infrared light image in the face images; constructs a target detection area corresponding to the infrared light image from the target detection pixel points; determines a target brightness interval according to the brightness values of the pixel points in the target detection area corresponding to the infrared light image, where the target brightness interval includes at least one brightness subinterval; for each brightness subinterval, calculates the pixel point duty ratio of the brightness subinterval in the target detection area according to the brightness subinterval and the brightness values of the pixel points in the target detection area corresponding to the infrared light image; and performs brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval to obtain a target brightness value corresponding to the infrared light image. After obtaining the target brightness value corresponding to the infrared light image, the terminal selects a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images, and performs living body detection on the object to be detected based on the target infrared light image.
In this way, the face region recognition result of the visible light image can be used to determine the target detection area of the infrared light image; based on the brightness analysis result of the target detection area, infrared light images whose pictures are too dark or overexposed are screened out, and a target infrared light image more conducive to living body detection is selected, thereby improving the accuracy and efficiency of living body detection.
In order to better implement the above method, the embodiment of the present application further provides a living body detection device, as shown in fig. 3, which may include an acquisition unit 301, an identification unit 302, a determination unit 303, an analysis unit 304, a selection unit 305, and a living body detection unit 306, as follows:
(1) An acquisition unit 301;
the acquisition unit 301 is configured to acquire at least one set of face images for an object to be detected, where each set of face images includes an infrared light image and a visible light image.
(2) An identification unit 302;
the recognition unit 302 is configured to recognize, for each group of face images, a face region of a visible light image in the face images, and obtain a recognition result.
(3) A determination unit 303;
a determining unit 303, configured to determine a target detection area corresponding to the infrared light image in the face image based on the identification result.
Optionally, in some embodiments of the present application, the determining unit may be specifically configured to determine, when the recognition result is that a face area of the visible light image is recognized, a target detection area corresponding to an infrared light image in the face image according to the face area of the visible light image.
Optionally, in some embodiments of the present application, the determining unit may include an alignment subunit and a mapping subunit as follows:
the alignment subunit is configured to perform position alignment processing on the pixel points of the infrared light image in the face image based on the position information of the pixel points in the visible light image, so as to obtain an aligned infrared light image;
and the mapping subunit is used for mapping the face area of the visible light image to a corresponding position in the aligned infrared light image to obtain a target detection area corresponding to the aligned infrared light image.
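Since the alignment step puts the visible-light and infrared images into one coordinate system, the mapping subunit's task reduces to transferring the face box and clamping it to the infrared frame. A sketch under that assumption (the box format and the clamping rule are illustrative; the patent does not fix the exact alignment algorithm):

```python
def map_face_region(face_box, ir_width, ir_height):
    # After alignment, the visible-light and infrared images share one
    # coordinate system, so the face box carries over directly; it is
    # only clamped to the infrared image bounds.
    x0, y0, x1, y1 = face_box
    clamp = lambda v, hi: max(0, min(v, hi))
    return (clamp(x0, ir_width), clamp(y0, ir_height),
            clamp(x1, ir_width), clamp(y1, ir_height))

# A face box detected in the visible-light image, mapped into a 640x480
# infrared frame: the right and bottom edges get clamped.
region = map_face_region((50, 40, 700, 500), ir_width=640, ir_height=480)
```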
Optionally, in some embodiments of the present application, the determining unit may include a selecting subunit and a constructing subunit as follows:
the selecting subunit is configured to select, when the recognition result is that the face area of the visible light image is not recognized, a target detection pixel point from the infrared light image based on a brightness value of a pixel point in the infrared light image in the face image;
And the construction subunit is used for constructing a target detection area corresponding to the infrared light image according to the target detection pixel point.
(4) An analysis unit 304;
and the analysis unit 304 is configured to perform statistical analysis on the brightness value of the pixel point in the target detection area corresponding to the infrared light image, so as to obtain a target brightness value corresponding to the infrared light image.
Optionally, in some embodiments of the present application, the analysis unit may be specifically configured to perform a mean value operation on a luminance value of a pixel point in a target detection area corresponding to the infrared light image, so as to obtain a target luminance value corresponding to the infrared light image.
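The mean-value variant can be sketched in a few lines; `region_pixels` is an assumed row-major list of brightness rows for the target detection area, and the helper is illustrative rather than the patent's exact implementation:

```python
def mean_brightness(region_pixels):
    # Mean-value statistic: the target brightness value is the average
    # brightness over all pixels in the target detection region.
    flat = [p for row in region_pixels for p in row]
    return sum(flat) / len(flat)

value = mean_brightness([[100, 150], [200, 250]])
# (100 + 150 + 200 + 250) / 4 = 175.0
```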
Optionally, in some embodiments of the present application, the analysis unit may include an interval determination subunit, a calculation subunit, and an analysis subunit, as follows:
the interval determining subunit is configured to determine a target brightness interval according to the brightness values of the pixel points in the target detection area corresponding to the infrared light image, where the target brightness interval includes at least one brightness subinterval;
the calculating subunit is configured to calculate, for each brightness subinterval, the pixel point duty ratio of the brightness subinterval in the target detection area according to the brightness subinterval and the brightness values of the pixel points in the target detection area corresponding to the infrared light image;
and the analysis subunit is configured to perform brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval, to obtain a target brightness value corresponding to the infrared light image.
Optionally, in some embodiments of the present application, the calculating subunit may be specifically configured to: for each brightness subinterval, count the pixel points in the target detection area corresponding to the infrared light image whose brightness values fall into the brightness subinterval, to obtain the number of target pixel points corresponding to the brightness subinterval; and calculate the pixel point duty ratio of the brightness subinterval in the target detection area according to the number of target pixel points corresponding to the brightness subinterval.
Optionally, in some embodiments of the present application, the analysis subunit may be specifically configured to determine a reference brightness value corresponding to each brightness subinterval; and fuse the reference brightness values corresponding to the brightness subintervals according to the pixel point duty ratio corresponding to each brightness subinterval, to obtain the target brightness value corresponding to the infrared light image.
(5) A selecting unit 305;
the selecting unit 305 is configured to select a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images.
(6) A living body detection unit 306;
and a living body detection unit 306, configured to perform living body detection on the object to be detected based on the target infrared light image.
As can be seen from the above, in this embodiment, the collecting unit 301 may collect at least one group of face images of an object to be detected, where each group of face images includes an infrared light image and a visible light image; the recognition unit 302 performs face region recognition on the visible light image in the face images for each group of face images to obtain a recognition result; the determining unit 303 determines a target detection area corresponding to the infrared light image in the face images based on the recognition result; the analysis unit 304 performs statistical analysis on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; the selecting unit 305 selects a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images; and the living body detection unit 306 performs living body detection on the object to be detected based on the target infrared light image. In this way, the face region recognition result of the visible light image can be used to determine the target detection area of the infrared light image; based on the brightness analysis result of the target detection area, infrared light images whose pictures are too dark or overexposed are screened out, and a target infrared light image more conducive to living body detection is selected, thereby improving the accuracy and efficiency of living body detection.
The embodiment of the application further provides an electronic device, as shown in fig. 4, which shows a schematic structural diagram of the electronic device according to the embodiment of the application, where the electronic device may be a terminal or a server, specifically:
the electronic device may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input unit 404, and other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402, and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, etc., and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, etc. In addition, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components, preferably the power supply 403 may be logically connected to the processor 401 by a power management system, so that functions of managing charging, discharging, and power consumption are performed by the power management system. The power supply 403 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may further comprise an input unit 404, which input unit 404 may be used for receiving input digital or character information and generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 401 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 executes the application programs stored in the memory 402, so as to implement various functions as follows:
collecting at least one group of face images of an object to be detected, where each group of face images includes an infrared light image and a visible light image; for each group of face images, performing face region recognition on the visible light image in the face images to obtain a recognition result; determining a target detection area corresponding to the infrared light image in the face images based on the recognition result; performing statistical analysis on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; selecting a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images; and performing living body detection on the object to be detected based on the target infrared light image.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
As can be seen from the above, this embodiment may collect at least one group of face images of an object to be detected, where each group of face images includes an infrared light image and a visible light image; for each group of face images, perform face region recognition on the visible light image in the face images to obtain a recognition result; determine a target detection area corresponding to the infrared light image in the face images based on the recognition result; perform statistical analysis on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; select a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images; and perform living body detection on the object to be detected based on the target infrared light image.
In this way, the face region recognition result of the visible light image can be used to determine the target detection area of the infrared light image; based on the brightness analysis result of the target detection area, infrared light images whose pictures are too dark or overexposed are screened out, and a target infrared light image more conducive to living body detection is selected, thereby improving the accuracy and efficiency of living body detection.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps in any of the living body detection methods provided by the embodiments of the present application. For example, the instructions may perform the following steps:
collecting at least one group of face images of an object to be detected, where each group of face images includes an infrared light image and a visible light image; for each group of face images, performing face region recognition on the visible light image in the face images to obtain a recognition result; determining a target detection area corresponding to the infrared light image in the face images based on the recognition result; performing statistical analysis on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image; selecting a target infrared light image for living body detection from each group of face images according to the target brightness value corresponding to the infrared light image in each group of face images; and performing living body detection on the object to be detected based on the target infrared light image.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Because the instructions stored in the computer readable storage medium may perform the steps in any of the living body detection methods provided in the embodiments of the present application, the beneficial effects that any of the living body detection methods provided in the embodiments of the present application can be achieved, which are detailed in the previous embodiments and are not described herein.
According to one aspect of the present application, there is provided a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various optional implementations of the living body detection aspects described above.
The foregoing has described in detail a living body detection method and related equipment provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to aid understanding of the method and its core concepts. Meanwhile, those skilled in the art may make variations to the specific embodiments and application scope in light of the ideas of the present application. In view of the above, the contents of this description should not be construed as limiting the present application.

Claims (12)

1. A living body detecting method, characterized by comprising:
collecting at least one group of face images aiming at an object to be detected, wherein each group of face images comprises an infrared light image and a visible light image;
aiming at each group of face images, carrying out face area recognition on visible light images in the face images to obtain a recognition result;
determining a target detection area corresponding to an infrared light image in the face image based on the identification result;
carrying out statistical analysis on brightness values of pixel points in a target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image;
selecting a target infrared light image for living body detection from each group of face images according to a target brightness value corresponding to the infrared light image in each group of face images;
and performing living body detection on the object to be detected based on the target infrared light image.
2. The method according to claim 1, wherein determining a target detection area corresponding to an infrared light image in the face image based on the recognition result includes:
and when the identification result is that the face area of the visible light image is identified, determining a target detection area corresponding to the infrared light image in the face image according to the face area of the visible light image.
3. The method according to claim 2, wherein the determining the target detection area corresponding to the infrared light image in the face image according to the face area of the visible light image includes:
based on the position information of the pixel points in the visible light image, performing position alignment processing on the pixel points of the infrared light image in the face image to obtain an aligned infrared light image;
and mapping the face region of the visible light image to a corresponding position in the aligned infrared light image to obtain a target detection region corresponding to the aligned infrared light image.
4. The method according to claim 2, wherein the performing statistical analysis on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image includes:
and carrying out average value operation on the brightness values of the pixel points in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image.
5. The method according to claim 1, wherein determining a target detection area corresponding to an infrared light image in the face image based on the recognition result includes:
When the recognition result is that the face area of the visible light image is not recognized, selecting a target detection pixel point from the infrared light image based on the brightness value of the pixel point in the infrared light image in the face image;
and constructing a target detection area corresponding to the infrared light image according to the target detection pixel points.
6. The method according to claim 5, wherein the performing statistical analysis on the brightness values of the pixels in the target detection area corresponding to the infrared light image to obtain the target brightness value corresponding to the infrared light image includes:
determining a target brightness interval according to the brightness value of the pixel point in the target detection area corresponding to the infrared light image, wherein the target brightness interval comprises at least one brightness subinterval;
for each brightness subinterval, calculating the pixel point duty ratio of the brightness subinterval in the target detection area according to the brightness values of the brightness subinterval and the pixel points in the target detection area corresponding to the infrared light image;
and carrying out brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval to obtain a target brightness value corresponding to the infrared light image.
7. The method according to claim 6, wherein for each brightness subinterval, calculating a pixel point duty ratio of the target detection area belonging to the brightness subinterval according to the brightness values of the brightness subinterval and the pixel points in the target detection area corresponding to the infrared light image includes:
counting the pixel points of which the brightness values fall into the brightness subintervals in the target detection areas corresponding to the infrared light images aiming at each brightness subinterval to obtain the number of the target pixel points corresponding to the brightness subintervals;
and calculating the pixel point duty ratio belonging to the brightness subinterval in the target detection area according to the number of the target pixel points corresponding to the brightness subinterval.
8. The method according to claim 6, wherein the performing brightness analysis on the target detection area according to the pixel point duty ratio corresponding to each brightness subinterval to obtain the target brightness value corresponding to the infrared light image includes:
determining a reference brightness value corresponding to each brightness subinterval;
and according to the pixel point duty ratio corresponding to each brightness subinterval, fusing the reference brightness values corresponding to each brightness subinterval to obtain the target brightness value corresponding to the infrared light image.
9. A living body detecting device, characterized by comprising:
the device comprises an acquisition unit, a detection unit and a detection unit, wherein the acquisition unit is used for acquiring at least one group of face images aiming at an object to be detected, and each group of face images comprises an infrared light image and a visible light image;
the recognition unit is used for recognizing the face area of the visible light image in the face image aiming at each group of face images to obtain a recognition result;
the determining unit is used for determining a target detection area corresponding to the infrared light image in the face image based on the identification result;
the analysis unit is used for carrying out statistical analysis on the brightness value of the pixel point in the target detection area corresponding to the infrared light image to obtain a target brightness value corresponding to the infrared light image;
the selecting unit is used for selecting a target infrared light image for living body detection from the face images according to the target brightness value corresponding to the infrared light image in the face images;
and the living body detection unit is used for carrying out living body detection on the object to be detected based on the target infrared light image.
10. An electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform the operations in the living body detection method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the living body detection method according to any one of claims 1 to 8.
12. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps in the living body detection method according to any one of claims 1 to 8.
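The selection unit in claim 9 chooses, among the groups of face images, the infrared light image whose target brightness value is most suitable for living body detection. A minimal sketch of one plausible selection rule follows; the dictionary input, the function name, and the ideal brightness value of 128 are assumptions for illustration, as the claims do not fix a particular selection criterion.

```python
def select_target_image(brightness_by_image, ideal=128.0):
    """Pick the infrared image whose fused target brightness is nearest an ideal value.

    brightness_by_image: mapping from image identifier to its target brightness value.
    """
    return min(brightness_by_image,
               key=lambda name: abs(brightness_by_image[name] - ideal))
```

Under this rule, given target brightness values of 100, 130, and 200 for three candidate infrared images, the image with value 130 is selected as the target infrared light image, since it lies closest to the assumed ideal of 128.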
CN202111361322.5A 2021-11-17 2021-11-17 Living body detection method and related equipment Pending CN116152932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111361322.5A CN116152932A (en) 2021-11-17 2021-11-17 Living body detection method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111361322.5A CN116152932A (en) 2021-11-17 2021-11-17 Living body detection method and related equipment

Publications (1)

Publication Number Publication Date
CN116152932A true CN116152932A (en) 2023-05-23

Family

ID=86358639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111361322.5A Pending CN116152932A (en) 2021-11-17 2021-11-17 Living body detection method and related equipment

Country Status (1)

Country Link
CN (1) CN116152932A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117373110A (en) * 2023-08-30 2024-01-09 武汉星巡智能科技有限公司 Visible light-thermal infrared imaging infant behavior recognition method, device and equipment


Similar Documents

Publication Publication Date Title
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN111768336B (en) Face image processing method and device, computer equipment and storage medium
WO2018166525A1 (en) Human face anti-counterfeit detection method and system, electronic device, program and medium
CN108108711B (en) Face control method, electronic device and storage medium
WO2022222575A1 (en) Method and system for target recognition
CN112215043A (en) Human face living body detection method
CN107316029A (en) A kind of live body verification method and equipment
CN113902641B (en) Data center hot zone judging method and system based on infrared image
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN112802081B (en) Depth detection method and device, electronic equipment and storage medium
CN111767879A (en) Living body detection method
CN111222447A (en) Living body detection method based on neural network and multichannel fusion LBP (local binary pattern) characteristics
CN112784741A (en) Pet identity recognition method and device and nonvolatile storage medium
CN114973349A (en) Face image processing method and training method of face image processing model
CN115147936A (en) Living body detection method, electronic device, storage medium, and program product
CN116152932A (en) Living body detection method and related equipment
CN112308093B (en) Air quality perception method based on image recognition, model training method and system
CN113705469A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN113111810A (en) Target identification method and system
CN111832464A (en) Living body detection method and device based on near-infrared camera
CN117218398A (en) Data processing method and related device
CN113723310B (en) Image recognition method and related device based on neural network
CN113255456B (en) Inactive living body detection method, inactive living body detection device, electronic equipment and storage medium
CN112149598A (en) Side face evaluation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination