CN115273245A - Living body detection method, living body detection device and computer-readable storage medium - Google Patents

Living body detection method, living body detection device and computer-readable storage medium

Info

Publication number
CN115273245A
Authority
CN
China
Prior art keywords
image
target object
living body
body detection
thermal imaging
Prior art date
Legal status
Pending
Application number
CN202210702013.8A
Other languages
Chinese (zh)
Inventor
李永凯
王宁波
朱树磊
郝敬松
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202210702013.8A
Publication of CN115273245A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 5/00 Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J 5/48 Thermography; Techniques using wholly visual means
    • G01J 5/485 Temperature profile

Abstract

The application discloses a living body detection method, a living body detection device and a computer-readable storage medium, wherein the living body detection method comprises the following steps: acquiring a thermal imaging image of a target object, and dividing the thermal imaging image into a plurality of regions; respectively determining the region temperature of each region in the thermal imaging image; performing feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution feature of the target object; and performing living body detection based on the temperature distribution feature of the target object to determine whether the target object is a living body. The method provided by the application can reduce the amount of calculation in the living body detection process and improve the speed of living body detection.

Description

Living body detection method, living body detection device and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a living body, and a computer-readable storage medium.
Background
The development of digital technology has brought human society into the era of artificial intelligence, and various biometric technologies are widely used in daily life. Face-based biometric technology in particular is widely applied to user identity authentication in systems such as gate passage, mobile phone unlocking, attendance check and check-in. However, these face authentication systems face a serious security problem: when a user's face information is stolen, it can be displayed on a mobile phone screen, printed as a photo, or made into a mask to disguise as the user's face and attack the identity authentication system. If the face authentication system does not accurately detect such disguised face information, immeasurable loss may be brought to the user, so living body detection of the face to be authenticated is necessary.
Face living body detection is a technology that analyzes face image information to determine whether the face captured by a camera is a disguised, attacking face or the face of a real living body. However, existing face living body detection technologies suffer from problems such as low detection speed and low detection efficiency.
Disclosure of Invention
The application provides a living body detection method, a living body detection device and a computer-readable storage medium, which can improve the speed of living body detection.
In a first aspect, an embodiment of the present application provides a living body detection method, where the method includes: acquiring a thermal imaging image of a target object, and dividing the thermal imaging image into a plurality of regions; respectively determining the region temperature of each region in the thermal imaging image; performing feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution feature of the target object; and performing living body detection based on the temperature distribution feature of the target object to determine whether the target object is a living body.
A second aspect of embodiments of the present application provides a living body detection apparatus, including: an acquisition module, configured to acquire a thermal imaging image of a target object and divide the thermal imaging image into a plurality of regions; a determining module, connected with the acquisition module and configured to respectively determine the region temperature of each region in the thermal imaging image; an extraction module, connected with the determining module and configured to perform feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution feature of the target object; and a detection module, connected with the extraction module and configured to perform living body detection based on the temperature distribution feature of the target object to determine whether the target object is a living body.
A third aspect of the embodiments of the present application provides a living body detecting apparatus, which includes a processor, a memory, and a communication circuit, where the processor is respectively coupled to the memory and the communication circuit, the memory stores program data, and the processor implements the steps in the method by executing the program data in the memory.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that can be executed by a processor to implement the steps in the above method.
The beneficial effects are as follows: after the thermal imaging image is divided into a plurality of regions, the temperature distribution feature of the target object is extracted based on the region temperatures of those regions. Dividing the thermal imaging image into regions down-samples the image data, so the subsequent amount of calculation can be reduced and the speed of living body detection can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort, wherein:
FIG. 1 is a schematic flow chart of an embodiment of the living body detection method according to the present application;
FIG. 2 is a schematic flow chart of step S120 in FIG. 1;
FIG. 3 is a schematic flow chart of step S140 in FIG. 1;
FIG. 4 is a block diagram of the living body detection method of FIG. 1 in an application scenario;
FIG. 5 is a schematic structural diagram of an embodiment of the living body detection device according to the present application;
FIG. 6 is a schematic structural diagram of another embodiment of the living body detection device according to the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of the method for detecting a living body according to the present application, the method including:
s110: a thermographic image of a target object is acquired and divided into a plurality of regions.
The thermal imaging image contains the target object, and the purpose of living body detection is to determine whether the target object in the thermal imaging image is a living body; that is, the target object is the object to be subjected to living body authentication. The thermal imaging image of the target object is obtained by shooting the target object with a thermal imaging infrared camera. After the camera shoots the target object, the image it outputs can be used directly as the thermal imaging image of the target object; alternatively, a detection frame framing the target object can be determined in that image, and the image inside the detection frame used as the thermal imaging image of the target object. The process of determining the detection frame framing the target object is described below.
In the present embodiment, in order to reduce the amount of calculation and increase the speed of living body detection, the thermal imaging image is divided into a plurality of regions, which down-samples the image data of the thermal imaging image.
The plurality of regions may be of the same size or of different sizes; in the present embodiment, the regions are set to the same size, that is, when dividing the regions in step S110, the thermal imaging image is equally divided into a plurality of regions of the same size. The length and width of each region may be the same or different, which is not limited herein.
In an application scenario, the thermographic image is divided into i rows and i columns of regions, for example, the thermographic image is divided into 9 regions (i.e., 3 rows and 3 columns of regions), 16 regions (i.e., 4 rows and 4 columns of regions), or 25 regions (i.e., 5 rows and 5 columns of regions).
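As a minimal sketch of this equal-size division (the function name is ours, and it assumes the image height and width divide evenly by the grid size):

```python
import numpy as np

def divide_into_regions(image: np.ndarray, rows: int, cols: int) -> list:
    """Split a 2-D thermal image into rows x cols equally sized regions,
    as in step S110. Assumes the image dimensions are divisible by the
    grid size (an assumption of this sketch, not a requirement stated
    by the method)."""
    h, w = image.shape
    rh, rw = h // rows, w // cols
    return [image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]

# Example: a 6x6 temperature map divided into 3 rows and 3 columns of regions
temps = np.arange(36, dtype=float).reshape(6, 6)
regions = divide_into_regions(temps, 3, 3)  # 9 regions, each 2x2
```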
S120: the zone temperature of each zone in the thermographic image is determined separately.
The thermographic image carries temperature information, so after dividing it into regions, the region temperature of each region can be determined.
Referring to fig. 2, in the present embodiment, the step of determining the zone temperature of each zone in step S120 includes:
s121: and respectively determining the average temperature value of the pixel points of each region.
Specifically, in the thermal imaging image, each pixel point has a temperature value. Therefore, after the regions are divided, the average temperature value T of the pixel points in each region is calculated using the following formula:

T = (1/N) · Σ_{i=1}^{N} t_i

where N is the number of pixel points in the region and t_i represents the temperature value of the i-th pixel point in the region.
S122: and respectively determining the average temperature value of each area as the area temperature of each area.
Specifically, after step S121, each area has a corresponding average temperature value, so that the average temperature value of each area is determined as the respective area temperature of each area.
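Steps S121 and S122 can be sketched with NumPy as follows (the reshape trick assumes regions of equal size; helper and variable names are ours):

```python
import numpy as np

def region_temperatures(image: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Return the rows x cols map of per-region average temperatures.
    Each entry is T = (1/N) * sum of the N pixel temperatures in that
    region (steps S121 and S122)."""
    h, w = image.shape
    rh, rw = h // rows, w // cols
    # Group each region's pixels into one block, then average each block
    blocks = image.reshape(rows, rh, cols, rw)
    return blocks.mean(axis=(1, 3))

temps = np.array([[30.0, 30.0, 20.0, 20.0],
                  [30.0, 30.0, 20.0, 20.0],
                  [36.0, 36.0, 25.0, 25.0],
                  [36.0, 36.0, 25.0, 25.0]])
t_map = region_temperatures(temps, 2, 2)  # [[30., 20.], [36., 25.]]
```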
That is, in the present embodiment, the average temperature value of the pixel points of a region is determined as the region temperature of that region. However, the present application is not limited thereto; in other embodiments, the maximum temperature value, the minimum temperature value, or the median temperature value of the pixel points of the region may be determined as the region temperature instead. In summary, the present application does not limit the specific process of determining the region temperature of a region.
S130: and performing feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution feature of the target object.
Specifically, after the above steps, each region has a region temperature, and feature extraction is performed based on the region temperatures of all the regions to obtain the temperature distribution feature of the target object.
In this embodiment, step S130 specifically includes: and performing convolution operation on the thermal imaging image based on the region temperatures of the plurality of regions to obtain the temperature distribution characteristic of the target object.
The thermal imaging image can be input into a convolutional neural network, and the convolutional neural network performs convolution operations based on the region temperatures of all regions, so that a high-dimensional temperature feature of the thermal imaging image, i.e., the temperature distribution feature of the target object, can be extracted.
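The patent does not disclose the convolutional network's architecture or weights, so the following is only an illustrative stand-in: a hand-rolled "valid" 2-D convolution over the region-temperature map, with an arbitrary contrast kernel.

```python
import numpy as np

def conv2d_valid(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Minimal 'valid' 2-D convolution; stands in for one layer of the
    convolutional network mentioned above. The real network and its
    weights are unspecified, so this is illustrative only."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

region_temps = np.array([[30.0, 20.0, 21.0],
                         [36.0, 25.0, 22.0],
                         [35.0, 24.0, 23.0]])
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude horizontal-contrast filter
feature = conv2d_valid(region_temps, edge_kernel)
```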
S140: live body detection is performed based on the temperature distribution characteristic of the target object to determine whether the target object is a live body.
In an application scenario, when the temperature distribution characteristic of the target object conforms to the temperature distribution characteristic of the living body, it is determined that the target object is a living body, otherwise it is determined that the target object is a non-living body.
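A minimal sketch of such a conformity check (the skin-temperature band and the fraction threshold are assumed illustrative values, not taken from the text):

```python
def conforms_to_live_range(region_temps, low=30.0, high=38.0, min_frac=0.6):
    """Crude 'conforms to the temperature distribution of a living body'
    test: require a minimum fraction of regions to fall inside a plausible
    skin-temperature band. All thresholds are illustrative assumptions."""
    flags = [low <= t <= high for row in region_temps for t in row]
    return sum(flags) / len(flags) >= min_frac

live_map = [[36.0, 35.0], [34.0, 20.0]]   # mostly skin-temperature regions
photo_map = [[22.0, 22.0], [23.0, 21.0]]  # a printed photo at room temperature
```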
In another application scenario, as shown in fig. 3, step S140 specifically includes:
s141: respectively carrying out living body detection characteristic extraction on at least one target image to obtain the living body detection characteristics of the target object, wherein the at least one target image comprises at least one of a color image, a near infrared image and a thermal imaging image of the target object, and the living body detection characteristics of the target object are different from the temperature distribution characteristics of the target object.
Specifically, the living body detection feature of the target object being different from its temperature distribution feature means that the living body detection feature is not obtained by extracting temperature features from the target object. For example, when the living body detection feature is extracted from a near-infrared image, the near-infrared image does not reveal the temperature information of the target object, so the extracted living body detection feature naturally cannot contain temperature information and therefore differs from the temperature distribution feature.
Alternatively, the living body detection feature of the target object is obtained by extracting the temperature feature of the target object, but the process of extracting the living body detection feature of the target object at this time is different from the above-described process of obtaining the temperature distribution feature. For example, when the live body detection feature extraction is performed on the thermal imaging image, the process of performing the live body detection feature extraction on the thermal imaging image is set to be different from the above-described process of obtaining the temperature distribution feature, thereby ensuring that the live body detection feature is different from the temperature distribution feature.
The living body detection feature of the target object can characterize the confidence that the target object is a living body. In the related art, living body detection may be performed directly based on this feature to determine whether the target object is a living body. In the present embodiment, however, to improve the accuracy of living body detection, the living body detection feature is not used on its own; it is subsequently fused with the temperature distribution feature obtained above, as described below.
The color image of the target object is obtained by shooting the target object through the white light camera, and the near-infrared image is obtained by shooting the target object through the near-infrared camera.
In an application scenario, the step of determining a color image, a near-infrared image, and a thermal imaging image of a target object includes:
(a) The method comprises the steps of respectively obtaining a first image, a second image and a third image which are obtained by shooting a target object through a white light camera, a near-infrared camera and a thermal imaging infrared camera.
(b) And carrying out target recognition on the first image to obtain a first position of the target object in the first image, and extracting a color image of the target object from the first image according to the first position.
Specifically, the first image is obtained by shooting with a white light camera, and the definition of the first image is high, so that the accuracy of target recognition is high.
The first position of the target object in the first image may be a position of the detection frame of the target object in the first image, for example, the first position includes coordinates of an upper left vertex and a lower right vertex of the detection frame of the target object in the first image.
And the size of the finally obtained color image of the target object is smaller than that of the first image, and the occupation ratio of the target object in the final color image is larger than that of the target object in the first image.
(c) A near infrared image and a thermal imaging image of the target object are respectively calibrated in the second image and the third image according to the first position.
Specifically, according to a first position of the target object in the first image and a relative installation position between the white light camera and the near-infrared camera, a second position of the target object in the second image can be determined, and then according to the second position, the near-infrared image of the target object can be extracted from the second image. It is understood that, at this time, in the finally obtained near-infrared image, the proportion of the target object is larger than that of the target object in the second image. Wherein, the position of the target object in the second image may be the position of the detection frame of the framing target object in the second image.
Similarly, a third position of the target object in the third image can be determined according to the first position of the target object in the first image and the relative installation positions of the white light camera and the thermal imaging infrared camera, and then the thermal imaging image of the target object can be extracted from the third image according to the third position. It will be appreciated that at this time, in the resulting thermographic image, the proportion of the target object is greater than the proportion of the target object in the third image. Wherein, the position of the target object in the third image may be the position of the detection frame of the framing target object in the third image.
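Steps (b) and (c) amount to transferring a detection box from the first image into the other cameras' images. A real deployment would use a calibrated mapping between the cameras; the plain pixel translation and the offsets below are simplifying assumptions for rigidly mounted, roughly parallel cameras.

```python
def map_box(box, dx, dy):
    """Shift a detection box (x1, y1, x2, y2) from the white-light image
    into another camera's image by a fixed pixel offset (dx, dy) derived
    from the relative installation positions. A homography would be used
    in practice; this translation is an illustrative simplification."""
    x1, y1, x2, y2 = box
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

first_pos = (40, 60, 140, 180)          # hypothetical box in the first image
second_pos = map_box(first_pos, -8, 3)  # hypothetical offset to the NIR camera
third_pos = map_box(first_pos, 5, -2)   # hypothetical offset to the thermal camera
```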
In the above scheme, on the one hand, target recognition on the first image captured by the white light camera is highly accurate, so extracting the near-infrared image and the thermal imaging image of the target object from the second and third images based on the target's position in the first image is also highly accurate. On the other hand, because the near-infrared image and the thermal imaging image are extracted from the second and third images directly according to the first position, target recognition does not need to be performed separately on the second and third images, which reduces the amount of calculation of the algorithm and further improves the speed of the whole detection method.
In other embodiments, the near-infrared image of the target object may be extracted from the second image by directly performing target recognition on the second image to determine the position of the target object, and then based on the position of the target object.
Alternatively, in another embodiment, the target recognition may be performed only on the second image or the third image, for example, the target recognition may be performed only on the second image, the second position of the target object in the second image is determined, and then the near-infrared image of the target object is extracted from the second image according to the second position, and simultaneously the color image and the thermal imaging image of the target object are respectively calibrated in the first image and the third image based on the second position.
Alternatively, in another embodiment, the first image may be a color image directly representing the target object, the second image may be a near-infrared image directly representing the target object, or the third image may be a thermal imaging image directly representing the target object.
In summary, the present application is not limited to the specific process of determining color, near infrared, and thermographic images of a target object.
Continuing to refer to fig. 3, in step S141, the living body detection feature of the target object may be obtained directly by performing living body detection feature extraction on one of the color image, the near-infrared image, and the thermal imaging image, for example, on the near-infrared image.
Alternatively, in step S141, a plurality (for example, two or three) of the color image, the near-infrared image, and the thermal imaging image may be subjected to the living body detection feature extraction, and then the living body detection feature of the target object may be obtained. For example, the living body detection feature extraction is performed on the color image and the near-infrared image, respectively, to obtain the living body detection feature of the target object.
In an application scene, after a plurality of live body detection characteristics in a color image, a near-infrared image and a thermal imaging image are extracted, the respective live body detection characteristics of each image are obtained, then the live body detection characteristics corresponding to all the images are subjected to fusion processing, and finally the live body detection characteristics of the target object are obtained. In the method, any fusion technology in the prior art can be adopted to perform fusion processing on the in-vivo detection features corresponding to all the images, and the specific process of the fusion processing is not described in detail here.
That is, step S141 includes: respectively performing living body detection feature extraction on the at least one target image to obtain the living body detection feature corresponding to each target image; and performing fusion processing on the living body detection features corresponding to all the target images to obtain the living body detection feature of the target object. It can be understood that when living body detection feature extraction is performed on only one target image, its living body detection feature is directly taken as the living body detection feature of the target object; when it is performed on a plurality of target images, the living body detection feature of each target image is obtained first, and the features corresponding to the plurality of target images are then fused to obtain the living body detection feature of the target object.
The process of extracting the living body detection features of the target image comprises the following steps: and performing feature extraction on the target image by using a pre-trained living body detection network to obtain the living body detection features of the target image.
Specifically, when the target image is subjected to feature extraction by using the living body detection network, different types of images use different living body detection networks, for example, the living body detection network used for performing the living body detection feature extraction on the color image, the living body detection network used for performing the living body detection feature extraction on the near-infrared image, and the living body detection network used for performing the living body detection feature extraction on the thermal imaging image are three different kinds of living body detection networks, which are respectively defined as a first living body detection network, a second living body detection network, and a third living body detection network. In the training phase, when the first living body detection network is trained, the used sample image is an image shot by a white light camera, when the second living body detection network is trained, the used sample image is an image shot by a near infrared camera, and when the third living body detection network is trained, the used sample image is an image shot by a thermal imaging camera.
S142: and fusing the temperature distribution characteristic and the living body detection characteristic to obtain a fused characteristic.
In an application scenario, the temperature distribution feature and the living body detection feature are spliced (concatenated) to obtain the fusion feature. For example, if the size of the temperature distribution feature is 1 × 10 × 9 (1 is the dimension, 10 the number of channels) and the size of the living body detection feature is 1 × 12 × 9 (1 is the dimension, 12 the number of channels), then after splicing along the channel dimension, a fusion feature of size 1 × 22 × 9 is obtained.
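Splicing along the channel axis adds the channel counts (10 + 12 = 22), which a short NumPy sketch makes concrete (axis 1 is taken as the channel axis):

```python
import numpy as np

# Temperature distribution feature: 1 (dimension) x 10 (channels) x 9
temp_feat = np.zeros((1, 10, 9))
# Living body detection feature: 1 (dimension) x 12 (channels) x 9
live_feat = np.ones((1, 12, 9))

# Concatenating along the channel axis yields a 1 x 22 x 9 fusion feature
fused = np.concatenate([temp_feat, live_feat], axis=1)
```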
Wherein, any fusion means in the prior art can be adopted to perform fusion processing on the temperature distribution characteristic and the living body detection characteristic to obtain a fusion characteristic. The detailed process of the fusion process will not be described in detail here.
S143: live body detection is performed based on the fusion features to determine whether the target object is a live body.
Specifically, since the fusion feature includes both the temperature distribution feature of the target object and the living body detection feature of the target object, the present embodiment can improve the accuracy of the living body detection compared to performing the living body detection based only on the temperature distribution feature of the target object or performing the living body detection based only on the living body detection feature of the target object.
In an application scenario, in order to further improve the accuracy and speed of detection, a pre-trained neural network is used for living body detection: specifically, the fusion feature is input into the neural network, which determines whether the target object is a living body.
In the related art, the temperature distribution feature of a thermal imaging image is generally extracted directly from the temperature values of all pixel points in the image. In the present embodiment, by contrast, the thermal imaging image is first divided into a plurality of regions and the temperature distribution feature of the target object is extracted from the region temperatures. This region division down-samples the image data, reducing the subsequent amount of calculation and increasing the speed of living body detection.
For better understanding of the solution of the present application, the following describes the solution of the present application in detail with reference to the application scenario of fig. 4:
the method comprises the steps of firstly, respectively obtaining a first image, a second image and a third image which are obtained by shooting a target object through a white light camera, a near-infrared camera and a thermal imaging infrared camera, then carrying out target identification on the first image to obtain a first position of the target object in the first image, and then respectively calibrating the near-infrared image and the thermal imaging image of the target object in the second image and the third image according to the first position.
Next, the near-infrared image (in fig. 4, the near-infrared image of the target object is denoted by reference numeral 101) is input to a pre-trained biometric network, and the biometric characteristic of the target object (in fig. 4, the biometric characteristic of the target object is denoted by reference numeral 201) is obtained.
And dividing the thermal imaging image (in fig. 4, the thermal imaging image is denoted by reference numeral 102) into a plurality of regions, respectively determining an average temperature value of a pixel point of each region, and finally determining the average temperature value of the pixel point of each region as the respective region temperature of each region.
The thermographic image is convolved based on the region temperatures of the plurality of regions to obtain the temperature distribution characteristic of the target object (in fig. 4, the temperature distribution characteristic of the target object is denoted by reference numeral 202).
Next, the living body detection feature and the temperature distribution feature of the target object are subjected to fusion processing to obtain a fusion feature (in fig. 4, the fusion feature is denoted by reference numeral 20).
Finally, living body detection is performed based on the fusion feature to determine whether the target object is a living body. For example, the fusion feature is fed into a fully connected layer of a pre-trained neural network for calculation to determine whether the target object is a living body.
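The final fully connected layer can be caricatured as a single linear map plus a sigmoid over the flattened fusion feature; the weights, bias, and decision threshold below are invented for illustration, since the pre-trained network is not disclosed.

```python
import numpy as np

def classify(fused: np.ndarray, weights: np.ndarray, bias: float,
             threshold: float = 0.5):
    """Toy stand-in for the fully connected layer: flatten the fusion
    feature, apply one linear layer and a sigmoid, then threshold the
    liveness score. All parameters are illustrative assumptions."""
    logit = float(fused.ravel() @ weights + bias)
    score = 1.0 / (1.0 + np.exp(-logit))
    return score >= threshold, score

fused = np.ones((1, 22, 9))  # a 1 x 22 x 9 fusion feature
w = np.full(22 * 9, 0.05)    # hypothetical weights
is_live, score = classify(fused, w, bias=-5.0)
```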
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an embodiment of the living body detection device of the present application. The living body detection device 200 includes a processor 210, a memory 220, and a communication circuit 230, where the processor 210 is coupled to the memory 220 and the communication circuit 230, respectively, and the memory 220 stores program data. The processor 210 implements the steps of the method in any of the above embodiments by executing the program data in the memory 220; for the detailed steps, reference may be made to the above embodiments, which are not repeated here.
The living body detection device 200 may be any device with image processing capability, such as a computer or a mobile phone, and is not limited herein.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of another embodiment of the living body detection device of the present application. The living body detection device 300 includes an acquisition module 310, a determination module 320, an extraction module 330, and a detection module 340.
The acquiring module 310 is configured to acquire a thermal imaging image of a target object and divide the thermal imaging image into a plurality of regions.
The determining module 320 is connected to the acquiring module 310 for determining the region temperature of each region in the thermal imaging image respectively.
The extracting module 330 is connected to the determining module 320, and configured to perform feature extraction based on the area temperatures of the multiple areas to obtain a temperature distribution feature of the target object.
The detection module 340 is connected to the extraction module 330, and is configured to perform living body detection based on the temperature distribution characteristic of the target object to determine whether the target object is a living body.
For the steps of the living body detection method performed by the living body detection device 300 during operation, reference may be made to any of the above embodiments; the detailed steps are not repeated here.
The living body detection device 300 may be any device with image processing capability, such as a computer or a mobile phone, and is not limited herein.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 400 stores a computer program 410, the computer program 410 being executable by a processor to implement the steps of any of the methods described above.
The computer-readable storage medium 400 may be any device capable of storing the computer program 410, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It may also be a server that stores the computer program 410; the server may send the stored computer program 410 to another device for execution, or may execute the stored computer program 410 itself.
The above description is only an embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structure or equivalent process derived from the contents of the present specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (10)

1. A living body detection method, characterized in that the method comprises:
acquiring a thermal imaging image of a target object, and dividing the thermal imaging image into a plurality of areas;
separately determining a region temperature for each of the regions in the thermographic image;
performing feature extraction based on the region temperatures of the plurality of regions to obtain a temperature distribution feature of the target object;
performing living body detection based on the temperature distribution characteristic of the target object to determine whether the target object is a living body.
2. The method of claim 1, wherein said step of separately determining a region temperature for each of said regions in said thermographic image comprises:
determining, for each of the regions, the average temperature value of the pixel points in the region;
taking the average temperature value of each region as the region temperature of that region.
3. The method of claim 1, wherein the plurality of regions are the same size.
4. The method according to claim 1, wherein the step of performing feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution feature of the target object comprises:
performing a convolution operation on the thermal imaging image based on the region temperatures of the plurality of regions to obtain the temperature distribution characteristic of the target object.
5. The method according to claim 1, wherein the step of performing living body detection based on the temperature distribution characteristic of the target object to determine whether the target object is a living body comprises:
performing living body detection feature extraction on at least one target image to obtain a living body detection feature of the target object, wherein the at least one target image comprises at least one of a color image, a near-infrared image and the thermal imaging image of the target object, and the living body detection feature of the target object is different from the temperature distribution feature of the target object;
fusing the temperature distribution characteristic and the living body detection characteristic to obtain a fused characteristic;
performing a live body detection based on the fusion feature to determine whether the target object is a live body.
6. The method of claim 5, wherein when the at least one target image comprises the near-infrared image of the target object, prior to said acquiring a thermographic image of the target object, the method further comprises:
respectively acquiring a first image, a second image and a third image which are obtained by shooting the target object by a white light camera, a near-infrared camera and a thermal imaging infrared camera;
performing target identification on the first image to obtain a first position of the target object in the first image;
calibrating the near-infrared image and the thermal imaging image of the target object in the second image and the third image, respectively, according to the first position.
7. The method according to claim 5, wherein the step of performing live body detection feature extraction on at least one target image respectively to obtain the live body detection feature of the target object comprises:
respectively carrying out living body detection feature extraction on at least one target image to obtain a living body detection feature corresponding to each target image;
and performing fusion processing on the living body detection characteristics corresponding to all the target images to obtain the living body detection characteristics of the target object.
8. A living body detecting device, characterized in that the living body detecting device comprises:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a thermal imaging image of a target object and dividing the thermal imaging image into a plurality of areas;
the determining module is connected with the acquiring module and is used for respectively determining the region temperature of each region in the thermal imaging image;
the extraction module is connected with the determination module and is used for performing feature extraction based on the region temperatures of the plurality of regions to obtain the temperature distribution features of the target object;
the detection module is connected with the extraction module and is used for performing living body detection based on the temperature distribution characteristics of the target object so as to determine whether the target object is a living body.
9. A living body detecting device, comprising a processor, a memory and a communication circuit, wherein the processor is respectively coupled to the memory and the communication circuit, the memory stores program data, and the processor executes the program data in the memory to realize the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executable by a processor to implement the steps in the method according to any of claims 1-7.
CN202210702013.8A 2022-06-20 2022-06-20 Living body detection method, living body detection device and computer-readable storage medium Pending CN115273245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210702013.8A CN115273245A (en) 2022-06-20 2022-06-20 Living body detection method, living body detection device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210702013.8A CN115273245A (en) 2022-06-20 2022-06-20 Living body detection method, living body detection device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115273245A true CN115273245A (en) 2022-11-01

Family

ID=83760488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210702013.8A Pending CN115273245A (en) 2022-06-20 2022-06-20 Living body detection method, living body detection device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115273245A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409488A (en) * 2023-10-20 2024-01-16 湖南远图网络科技有限公司 User identity recognition method, system, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN107766786B (en) Activity test method and activity test computing device
US10176377B2 (en) Iris liveness detection for mobile devices
WO2019134536A1 (en) Neural network model-based human face living body detection
US20230274577A1 (en) Device and method with image matching
CN112801057B (en) Image processing method, image processing device, computer equipment and storage medium
CN111444744A (en) Living body detection method, living body detection device, and storage medium
KR20180065889A (en) Method and apparatus for detecting target
KR20180109664A (en) Liveness test method and apparatus
CN110532746B (en) Face checking method, device, server and readable storage medium
CN110008943B (en) Image processing method and device, computing equipment and storage medium
CN111104833A (en) Method and apparatus for in vivo examination, storage medium, and electronic device
CN111582155B (en) Living body detection method, living body detection device, computer equipment and storage medium
CN112101195A (en) Crowd density estimation method and device, computer equipment and storage medium
Nguyen et al. Face presentation attack detection based on a statistical model of image noise
KR102038576B1 (en) Method of detecting fraud of an iris recognition system
CN115273245A (en) Living body detection method, living body detection device and computer-readable storage medium
CN111767879A (en) Living body detection method
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
CN113159229A (en) Image fusion method, electronic equipment and related product
You et al. Tampering detection and localization base on sample guidance and individual camera device convolutional neural network features
JP2021131737A (en) Data registration device, biometric authentication device, and data registration program
CN116311400A (en) Palm print image processing method, electronic device and storage medium
CN115830720A (en) Living body detection method, living body detection device, computer equipment and storage medium
CN113920556A (en) Face anti-counterfeiting method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination