CN112149570B - Multi-person living body detection method, device, electronic equipment and storage medium - Google Patents

Multi-person living body detection method, device, electronic equipment and storage medium

Info

Publication number
CN112149570B
CN112149570B (application CN202011012408.2A)
Authority
CN
China
Prior art keywords
image
living body
face
images
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011012408.2A
Other languages
Chinese (zh)
Other versions
CN112149570A (en)
Inventor
袁宏进
刘杰
庄伯金
王少军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011012408.2A priority Critical patent/CN112149570B/en
Publication of CN112149570A publication Critical patent/CN112149570A/en
Application granted granted Critical
Publication of CN112149570B publication Critical patent/CN112149570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to data processing and discloses a multi-person living body detection method comprising the following steps: recognizing all face images in an image to be detected, and setting the largest of these face images as a foreground image; cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image; obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images; and judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value. By having the living body classification model exploit the illumination conditions and surrounding background of each face in a multi-face image, the application further increases the difference information between living body images and non-living body images and improves multi-person living body detection and classification.

Description

Multi-person living body detection method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular to a multi-person living body detection method and apparatus, an electronic device, and a storage medium.
Background
In the field of financial remote auditing, for example in long network video call scenes, remote identity auditing scenes and remote face comparison scenes, living body detection technology is mainly used to judge whether a detected face is a living body during system face auditing. In a normal approval process, a second living face is not allowed to appear in the picture: if one appears, an alarm must be raised, and if a detected face is not a living face, it must be filtered out.
Traditional silent living body recognition methods are suited to high-definition face images, stable illumination and fixed acquisition equipment. Long network video call scenes, by contrast, suffer from picture transmission loss, large illumination changes and a wide variety of acquisition equipment, and a background face that strays into the picture may be small, turned sideways, occluded or blurred. Using the texture information of the detected face alone therefore does not provide a good feature for distinguishing living body categories, and living body detection cannot be performed on multi-face images.
Disclosure of Invention
In view of the above, it is necessary to provide a multi-person living body detection method that performs multi-person living body detection on face images in a generalizable and stable manner.
The multi-person living body detection method provided by the present application comprises the following steps:
recognizing all face images in an image to be detected, and setting the largest of these face images as a foreground image;
cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image;
obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images;
and judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value.
Optionally, the cropping of all face area images, each containing the foreground image and one other face image in the image to be detected, comprises:
detecting the boundaries of the foreground image and of the other face images in the image to be detected;
cropping, in turn, the minimum rectangular image containing the foreground image and each other face image according to these boundaries;
and scaling all the cropped minimum rectangular images to a preset size to serve as the face area images.
Optionally, the training of the preset living body classification model comprises:
cropping a sample face area image containing the largest face image and one other face image;
taking the sample face area image as input data and the category information of the sample image as output data, the category information being living body or non-living body;
and training a convolutional neural network model on the input data and the output data to obtain the living body classification model.
Optionally, the classification logic of the living body classification model comprises:
performing illumination analysis on the largest face image in the face area image and the other face image, and judging whether the two faces are under the same illumination conditions; if not, outputting the category information as non-living body;
if so, analysing the image background information other than the faces in the face area image and judging whether the surrounding backgrounds of the two faces differ; if they differ, outputting the category information as non-living body, and otherwise outputting it as living body.
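The two-stage check described above can be sketched as follows, under the simplifying assumption that each face crop and its surrounding background are given as flat lists of grey-level values; the helper names and the tolerances `light_tol` and `bg_tol` are hypothetical and merely stand in for what the trained model learns:

```python
# Illustrative sketch of the classification logic (not the trained CNN itself):
# compare mean brightness of the two face crops, then the mean intensity of
# their surrounding backgrounds. Thresholds are hypothetical.

def mean_intensity(pixels):
    """Average grey-level of a crop given as a flat list of 0-255 values."""
    return sum(pixels) / len(pixels)

def classify_pair(face_a, face_b, bg_a, bg_b, light_tol=30.0, bg_tol=30.0):
    """Return 'living' when illumination and background agree, else 'non-living'."""
    # Step 1: illumination analysis - reject when lighting differs too much.
    if abs(mean_intensity(face_a) - mean_intensity(face_b)) > light_tol:
        return "non-living"
    # Step 2: background analysis - reject when the surroundings differ.
    if abs(mean_intensity(bg_a) - mean_intensity(bg_b)) > bg_tol:
        return "non-living"
    return "living"
```

In the actual model these cues are learned implicitly by the convolutional network rather than computed by explicit rules.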
Optionally, the calculating of the living body probability value from the category information of all the face area images comprises:
calculating the ratio of the number of living body labels among the category information of all the face area images to the total number of labels, to obtain the living body probability value.
Optionally, the judging of whether the image to be detected is a living body image according to the living body probability value and the preset threshold value comprises:
judging that the image to be detected is a living body image if the living body probability value is greater than or equal to the preset threshold value;
and judging that the image to be detected is a non-living body image if the living body probability value is smaller than the preset threshold value.
In addition, to achieve the above object, the present application also provides an electronic device comprising a memory and a processor, the memory storing a multi-person living body detection program that can run on the processor, the program, when executed by the processor, implementing the following steps of the multi-person living body detection method:
recognizing all face images in an image to be detected, and setting the largest of these face images as a foreground image;
cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image;
obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images;
and judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value.
Optionally, the classification logic of the living body classification model comprises:
performing illumination analysis on the largest face image in the face area image and the other face image, and judging whether the two faces are under the same illumination conditions; if not, outputting the category information as non-living body;
if so, analysing the image background information other than the faces in the face area image and judging whether the surrounding backgrounds of the two faces differ; if they differ, outputting the category information as non-living body, and otherwise outputting it as living body.
In addition, in order to achieve the above object, the present application also provides a computer-readable storage medium on which a multi-person living body detection program is stored, the program being executable by one or more processors to implement the steps of the multi-person living body detection method described above.
In addition, in order to achieve the above object, the present application also provides a multi-person living body detection apparatus comprising:
a recognition module for recognizing all face images in an image to be detected and setting the largest of these face images as a foreground image;
a cropping module for cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image;
a living body classification module for obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images;
and a judging module for judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value.
Compared with the prior art, the present application recognizes all face images in an image to be detected and sets the largest of these face images as a foreground image; crops all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image; obtains category information for all face area images in the image to be detected using a preset living body classification model and calculates a living body probability value from the category information of all the face area images; and judges whether the image to be detected is a living body image according to the living body probability value and a preset threshold value. By having the living body classification model exploit the illumination conditions and surrounding background of each face in a multi-face image, the application further increases the difference information between living body images and non-living body images and improves multi-person living body detection and classification.
Drawings
Fig. 1 is a schematic diagram of an internal structure of an electronic device for implementing a multi-person living body detection method according to an embodiment of the present application;
fig. 2 is a schematic functional block diagram of a multi-person living body detection apparatus according to an embodiment of the present application;
fig. 3 is a flowchart of a method for multi-person living body detection according to an embodiment of the present application.
The objects, functional features and advantages of the present application are further described below with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments, in order to make its objects, technical solutions and advantages more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by those skilled in the art from the embodiments of the application without inventive effort fall within the scope of the application.
It should be noted that the descriptions "first", "second" and the like in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but only where such a combination can be realised by those skilled in the art; when technical solutions are contradictory or cannot be realised, the combination should be considered not to exist and falls outside the scope of protection claimed by the present application.
Fig. 3 is a schematic flow chart of a multi-person living body detection method according to an embodiment of the application, which includes steps S1-S4.
S1, recognizing all face images in the image to be detected, and setting the largest of these face images as a foreground image.
Specifically, the present application recognizes face images in the image to be detected by a preset face detection method (for example, RetinaFace or MTCNN). An example is as follows:
when MTCNN is used to recognize the face images, the image to be detected is first scaled to different sizes, and the images at the different scales are then fed in sequence through three sub-networks (P-Net, R-Net and O-Net) to obtain the face images and the corresponding facial key point positions. P-Net (Proposal Network) judges through 3 convolutional layers whether a face is present in the input image to be detected and outputs candidate boxes for the face regions together with face key points; R-Net (Refine Network), which has one more fully connected layer than P-Net, screens out erroneous candidate boxes among those output by P-Net via bounding box regression and non-maximum suppression (NMS); O-Net (Output Network), which has one more convolutional layer than R-Net, refines the candidate boxes output by R-Net to obtain the coordinate information of the face region candidate boxes and the positions of five facial key points; finally, the face images are recognized according to the coordinate information of the face region candidate boxes.
S2, cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image.
Specifically, the boundaries of the foreground image and of the other face images in the image to be detected are detected, the minimum rectangular images containing the foreground image and each other face image are cropped in turn according to these boundaries, and all the cropped minimum rectangular images are scaled to a preset size to serve as face area images. For example, if four face images are recognized in an image to be detected, the largest face image is set as the foreground image and the other face images are image 1, image 2 and image 3; rectangular image 1 containing the foreground image and image 1, rectangular image 2 containing the foreground image and image 2, and rectangular image 3 containing the foreground image and image 3 are then cropped, and rectangular images 1, 2 and 3 are scaled to a preset size to serve as the face area images of the image to be detected.
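Under the assumption that each detected face is described by an axis-aligned box `(x1, y1, x2, y2)`, the minimum rectangle of step S2 that contains the foreground face and one other face is simply the union of the two boxes. A sketch (helper names are illustrative):

```python
# Step S2 sketch: one union rectangle per non-foreground face.

def union_box(fg, other):
    """Smallest axis-aligned rectangle covering both boxes (x1, y1, x2, y2)."""
    return (min(fg[0], other[0]), min(fg[1], other[1]),
            max(fg[2], other[2]), max(fg[3], other[3]))

def face_area_boxes(fg, others):
    """Union rectangles pairing the foreground box with each other face box."""
    return [union_box(fg, o) for o in others]
```

The resulting rectangles would then be cropped from the image and scaled to the preset input size of the classification model, e.g. with an image-resizing routine such as OpenCV's `cv2.resize`.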
S3, obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images.
Specifically, the training of the preset living body classification model includes:
cropping a sample face area image containing the largest face image and one other face image; taking the sample face area image as input data and the category information of the sample image (living body or non-living body) as output data; and training a general convolutional neural network classification model on the input data and output data to obtain the living body classification model.
In one embodiment, the classification logic of the living body classification model comprises:
performing illumination analysis on the largest face image in the face area image and the other face image, and judging whether the two faces are under the same illumination conditions; if not, outputting the category information as non-living body;
if so, analysing the image background information other than the faces in the face area image and judging whether the surrounding backgrounds of the two faces differ; if they differ, outputting the category information as non-living body, and otherwise outputting it as living body.
In one embodiment, all face area images of the image to be detected are input into the preset living body classification model to obtain the category information of all the face area images output by the model, and the ratio of the number of living body labels among this category information to the total number of labels is calculated to obtain the living body probability value.
An example is as follows:
rectangular image 1, rectangular image 2 and rectangular image 3 are cropped from the image to be detected, and the living body classification model yields their category information as living body, living body and non-living body, respectively; the ratio of the number of living body labels to the total number is 2/3, i.e. the living body probability value is 2/3.
S4, judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value.
Specifically, if the living body probability value is greater than or equal to the preset threshold value, the image to be detected is judged to be a living body image; if the living body probability value is smaller than the preset threshold value, the image to be detected is judged to be a non-living body image.
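Steps S3 and S4 can be condensed into two small helpers; a sketch in which the category labels and the default threshold of 0.5 are illustrative assumptions:

```python
# Step S3: living body probability = fraction of face area images labelled
# living. Step S4: compare the probability with a preset threshold.

def living_probability(categories):
    """Ratio of 'living' labels to the total number of labels."""
    return categories.count("living") / len(categories)

def is_living_image(categories, threshold=0.5):
    """True when the living probability reaches the preset threshold."""
    return living_probability(categories) >= threshold
```

With the labels living, living and non-living from the example above, the probability is 2/3, so the image would be judged a living body image at a threshold of 0.5 but not at a threshold of 0.8.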
In living body detection on a multi-face image, the main object to be detected is usually the largest face image, i.e. the foreground image. The method uses the preset living body classification model to compare the foreground image with every other face image in the image to be detected one by one, and judges whether the image to be detected is a living body image according to the difference information between the other face images and the foreground image.
As can be seen from the above embodiments, the multi-person living body detection method provided by the present application recognizes all face images in an image to be detected and sets the largest of these face images as a foreground image; crops all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image; obtains category information for all face area images in the image to be detected using a preset living body classification model and calculates a living body probability value from the category information of all the face area images; and judges whether the image to be detected is a living body image according to the living body probability value and a preset threshold value. By having the living body classification model exploit the illumination conditions and surrounding background of each face in a multi-face image, the method further increases the difference information between living body images and non-living body images and improves multi-person living body detection and classification.
As shown in fig. 1, the internal structure of an electronic device 1 for implementing a multi-person living body detection method according to an embodiment of the present application is illustrated. The electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. It may be a computer, a single network server, a server group formed by a plurality of network servers, or a cloud formed by a large number of hosts or network servers based on cloud computing, cloud computing being a form of distributed computing in which a super virtual computer is composed of a group of loosely coupled computers.
In the present embodiment, the electronic device 1 includes, but is not limited to, a memory 11, a processor 12 and a network interface 13 that are communicably connected to each other via a system bus, the memory 11 storing a multi-person living body detection program 10 executable by the processor 12. Fig. 1 shows only an electronic device 1 having the components 11-13 and the multi-person living body detection program 10; those skilled in the art will appreciate that the structure shown in fig. 1 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
The memory 11 comprises a memory and at least one type of readable storage medium. The memory provides a buffer for the operation of the electronic device 1; the readable storage medium may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk or optical disk. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk; in other embodiments, the non-volatile storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card provided on the electronic device 1. In this embodiment, the readable storage medium of the memory 11 mainly comprises a program storage area and a data storage area: the program storage area stores the operating system and the application software installed in the electronic device 1, for example the code of the multi-person living body detection program 10 in one embodiment of the present application, while the data storage area may store data created according to the use of blockchain nodes and the like, such as various types of data that have been output or are to be output.
The processor 12 may in some embodiments be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data processing chip. The processor 12 is typically used to control the overall operation of the electronic device 1, such as performing control and processing related to data interaction or communication with other devices. In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to execute the multi-person living body detection program 10.
The network interface 13 may comprise a wireless network interface or a wired network interface, the network interface 13 being used for establishing a communication connection between the electronic device 1 and a client (not shown).
Optionally, the electronic device 1 may further comprise a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally standard wired and wireless interfaces. In some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display or the like. The display may also be referred to as a display screen or display unit and is used to display the information processed in the electronic device 1 and a visual user interface.
In one embodiment of the present application, the multi-person living body detection program 10, when executed by the processor 12, implements the following steps S1-S4.
S1, recognizing all face images in the image to be detected, and setting the largest of these face images as a foreground image.
Specifically, the present application recognizes face images in the image to be detected by a preset face detection method (for example, RetinaFace or MTCNN). An example is as follows:
when MTCNN is used to recognize the face images, the image to be detected is first scaled to different sizes, and the images at the different scales are then fed in sequence through three sub-networks (P-Net, R-Net and O-Net) to obtain the face images and the corresponding facial key point positions. P-Net (Proposal Network) judges through 3 convolutional layers whether a face is present in the input image to be detected and outputs candidate boxes for the face regions together with face key points; R-Net (Refine Network), which has one more fully connected layer than P-Net, screens out erroneous candidate boxes among those output by P-Net via bounding box regression and non-maximum suppression (NMS); O-Net (Output Network), which has one more convolutional layer than R-Net, refines the candidate boxes output by R-Net to obtain the coordinate information of the face region candidate boxes and the positions of five facial key points; finally, the face images are recognized according to the coordinate information of the face region candidate boxes.
S2, cropping all face area images, each containing the foreground image and one face image in the image to be detected other than the foreground image.
Specifically, the boundaries of the foreground image and of the other face images in the image to be detected are detected, the minimum rectangular images containing the foreground image and each other face image are cropped in turn according to these boundaries, and all the cropped minimum rectangular images are scaled to a preset size to serve as face area images. For example, if four face images are recognized in an image to be detected, the largest face image is set as the foreground image and the other face images are image 1, image 2 and image 3; rectangular image 1 containing the foreground image and image 1, rectangular image 2 containing the foreground image and image 2, and rectangular image 3 containing the foreground image and image 3 are then cropped, and rectangular images 1, 2 and 3 are scaled to a preset size to serve as the face area images of the image to be detected.
S3, obtaining category information for all face area images in the image to be detected using a preset living body classification model, and calculating a living body probability value from the category information of all the face area images.
Specifically, the training of the preset living body classification model includes:
a sample face area image containing the face image with the largest size and one other face image is intercepted; the sample face area image serves as input data and the category information of the sample image serves as output data, where the category information is living or non-living; and a general convolutional neural network classification model is trained on the input data and output data to obtain the living body classification model.
In one embodiment, the classification logic of the living subject classification model comprises:
illumination analysis is performed on the face image with the largest size in the face area image and the other face image to judge whether the two faces are under the same illumination condition; if not, the category information is output as non-living;
if so, the image background information other than the faces in the face area image is analyzed to judge whether the surroundings of the two faces differ; if they differ, the category information is output as non-living, and otherwise as living.
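The two-stage decision logic above can be illustrated with a simple heuristic stand-in for the trained model. The mean-luminance comparison and the tolerance values below are illustrative assumptions, not the method of the source, which relies on a trained convolutional classifier rather than hand-set thresholds:

```python
import numpy as np

def classify_pair(face_fg, face_other, bg_fg, bg_other,
                  light_tol=30.0, bg_tol=25.0):
    """Heuristic stand-in for the classification logic.

    face_fg / face_other: grayscale crops of the two faces;
    bg_fg / bg_other: crops of the background around each face.
    light_tol / bg_tol are hypothetical tolerances.
    """
    # Stage 1: are the two faces under the same illumination?
    if abs(face_fg.mean() - face_other.mean()) > light_tol:
        return "non-living"
    # Stage 2: do the surrounding backgrounds differ?
    if abs(bg_fg.mean() - bg_other.mean()) > bg_tol:
        return "non-living"
    return "living"
```

A replayed photo or screen held in front of the camera tends to fail one of the two checks, which is the intuition the trained model captures from data.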
In an embodiment, all face area images of the image to be detected are input into a preset living body classification model to obtain the category information of each face area image output by the model. The ratio of the number of living category labels among the category information of all face area images to the total number is then calculated to obtain the living body probability value.
Examples are as follows:
rectangular image 1, rectangular image 2 and rectangular image 3 are intercepted from an image to be detected, and the living body classification model outputs their category information as living, living and non-living respectively. The ratio of the number of living labels to the total number is 2/3, i.e., the living body probability value is 2/3.
And S4, judging whether the image to be detected is a living body image or not according to the living body probability value and a preset threshold value.
Specifically, if the living body probability value is greater than or equal to a preset threshold value, judging that the image to be detected is a living body image; and if the living body probability value is smaller than a preset threshold value, judging that the image to be detected is a non-living body image.
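Steps S3 and S4 — the voting ratio and the threshold comparison — can be sketched in a few lines. The default threshold of 0.5 is an assumption; the source leaves the preset threshold value unspecified:

```python
def liveness_probability(labels):
    """Ratio of 'living' votes among all foreground/other-face pairs."""
    return labels.count("living") / len(labels)

def is_live_image(labels, threshold=0.5):
    """Judge the whole image: live iff the vote ratio reaches the threshold."""
    return liveness_probability(labels) >= threshold
```

For the example above, `["living", "living", "non-living"]` gives a probability of 2/3, so the image is judged a living body image at any threshold up to 2/3.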
In the living body detection of an image containing multiple faces, the main object to be detected is typically the face image of the largest size, i.e., the foreground image. The method uses a preset living body classification model to compare the foreground image with each other face image in the image to be detected one by one, and judges whether the image to be detected is a living body image according to the difference information between the other face images and the foreground image.
As can be seen from the above embodiments, the electronic device 1 provided by the present application recognizes all face images in the image to be detected and sets the face image with the largest size as the foreground image; intercepts all face area images each containing the foreground image and one other face image in the image to be detected; acquires the category information of all face area images using a preset living body classification model and calculates a living body probability value from this category information; and judges whether the image to be detected is a living body image according to the living body probability value and a preset threshold value. By having the living body classification model take into account the illumination conditions and surrounding background of each face in the multi-face image, the difference information between living body images and non-living body images is further increased, improving the classification effect of multi-person living body detection.
In other embodiments, the multi-person living body detection program 10 may be further divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete the present application. A module here refers to a series of computer program instruction segments capable of performing a specific function, used to describe the execution of the multi-person living body detection program 10 in the electronic device 1.
Fig. 2 is a schematic functional block diagram of a multi-person living body detection apparatus 100 according to an embodiment of the present application.
In an embodiment of the present application, the multi-person living body detection apparatus 100 includes an identification module 110, an interception module 120, a living body classification module 130, and a judgment module 140, illustratively:
the identifying module 110 is configured to identify all face images from the images to be detected, and set a face image with a largest size in the face images as a foreground image;
the intercepting module 120 is configured to intercept all face area images including the foreground image and any face image in the images to be detected except the foreground image;
the living body classification module 130 is configured to obtain category information of all face area images in the image to be detected by using a preset living body classification model, and calculate a living body probability value according to the category information of all face area images;
the judging module 140 is configured to judge whether the image to be detected is a living body image according to the living body probability value and a preset threshold value.
The functions or operation steps performed by the above-mentioned identification module 110, interception module 120, living body classification module 130, and judgment module 140 are substantially the same as those of the above-mentioned multi-person living body detection method and specific embodiments of the electronic device 1, and are not described here again.
In addition, an embodiment of the present application also provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium includes a multi-person living body detection program 10, and the multi-person living body detection program 10, when executed by a processor, performs the following operations:
a1, recognizing all face images from the images to be detected, and setting the face image with the largest size in the face images as a foreground image.
A2, intercepting, from the image to be detected, all face area images that each contain the foreground image and one face image other than the foreground image.
A3, acquiring category information of all face area images in the image to be detected by using a preset living body classification model, and calculating to obtain a living body probability value according to the category information of all face area images.
And A4, judging whether the image to be detected is a living body image or not according to the living body probability value and a preset threshold value.
The embodiment of the computer readable storage medium of the present application is substantially the same as the embodiment of the multi-person living body detection method and the embodiment of the electronic device 1, and will not be described here again.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (8)

1. A multi-person living body detection method, comprising:
recognizing all face images from the images to be detected, and setting the face image with the largest size in the face images as a foreground image;
intercepting all face area images containing the foreground image and any face image in the images to be detected except the foreground image;
acquiring category information of all face area images in the image to be detected by using a preset living body classification model, and calculating to obtain a living body probability value according to the category information of all face area images;
judging whether the image to be detected is a living body image or not according to the living body probability value and a preset threshold value;
wherein the classification logic of the living organism classification model comprises: carrying out illumination analysis on a face image with the largest size in the face area image and another face image, judging whether the two faces are under the same illumination condition, and if not, outputting category information as a non-living body; if so, analyzing the image background information except the human face in the human face area image, judging whether the surrounding backgrounds of the two human faces are different, if so, outputting the category information as a non-living body, and if not, outputting the category information as a living body.
2. The multi-person living body detecting method according to claim 1, wherein the capturing all face area images including the foreground image and any face image of the images to be detected other than the foreground image includes:
detecting boundaries of the foreground image and other face images in the image to be detected;
sequentially intercepting a minimum rectangular image containing the foreground image and any other face image according to the boundary;
and scaling all the intercepted minimum rectangular images to a preset size to serve as face area images.
3. The multi-person living body detection method according to claim 1, wherein the training of the preset living body classification model includes:
a sample face area image containing a face image with the largest size and another face image is intercepted;
taking the sample face area image as input data, and taking class information of the sample image as output data, wherein the class information is living or non-living;
and training the convolutional neural network model according to the input data and the output data to obtain the living body classification model.
4. The multi-person living body detection method according to claim 1, wherein the calculating a living body probability value from category information of all face area images includes:
and calculating the ratio of the number of living body category information in the category information of all the face area images to the total number to obtain a living body probability value.
5. The multi-person living body detection method according to claim 1, wherein the judging whether the image to be detected is a living body image according to the living body probability value and a preset threshold value includes:
if the living body probability value is greater than or equal to a preset threshold value, judging that the image to be detected is a living body image;
and if the living body probability value is smaller than a preset threshold value, judging that the image to be detected is a non-living body image.
6. An electronic device, comprising: the multi-person living body detection device comprises a memory and a processor, wherein the memory stores a multi-person living body detection program capable of running on the processor, and the multi-person living body detection program realizes the following steps of the multi-person living body detection method when being executed by the processor:
recognizing all face images from the images to be detected, and setting the face image with the largest size in the face images as a foreground image;
intercepting all face area images containing the foreground image and any face image in the images to be detected except the foreground image;
acquiring category information of all face area images in the image to be detected by using a preset living body classification model, and calculating to obtain a living body probability value according to the category information of all face area images;
judging whether the image to be detected is a living body image or not according to the living body probability value and a preset threshold value;
wherein the classification logic of the living organism classification model comprises: carrying out illumination analysis on a face image with the largest size in the face area image and another face image, judging whether the two faces are under the same illumination condition, and if not, outputting category information as a non-living body; if so, analyzing the image background information except the human face in the human face area image, judging whether the surrounding backgrounds of the two human faces are different, if so, outputting the category information as a non-living body, and if not, outputting the category information as a living body.
7. A computer-readable storage medium, having stored thereon a multi-person living body detection program executable by one or more processors to implement the steps of the multi-person living body detection method according to any one of claims 1-5.
8. A multi-person living body detection apparatus, characterized in that the apparatus comprises:
the identification module is used for identifying all face images from the images to be detected and setting the face image with the largest size in the face images as a foreground image;
the intercepting module is used for intercepting all face area images containing the foreground image and any face image in the images to be detected except the foreground image;
the living body classification module is used for acquiring category information of all face area images in the image to be detected by using a preset living body classification model, and calculating to obtain a living body probability value according to the category information of all face area images;
the judging module is used for judging whether the image to be detected is a living body image or not according to the living body probability value and a preset threshold value;
wherein the classification logic of the living organism classification model comprises: carrying out illumination analysis on a face image with the largest size in the face area image and another face image, judging whether the two faces are under the same illumination condition, and if not, outputting category information as a non-living body; if so, analyzing the image background information except the human face in the human face area image, judging whether the surrounding backgrounds of the two human faces are different, if so, outputting the category information as a non-living body, and if not, outputting the category information as a living body.
CN202011012408.2A 2020-09-23 2020-09-23 Multi-person living body detection method, device, electronic equipment and storage medium Active CN112149570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011012408.2A CN112149570B (en) 2020-09-23 2020-09-23 Multi-person living body detection method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011012408.2A CN112149570B (en) 2020-09-23 2020-09-23 Multi-person living body detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112149570A CN112149570A (en) 2020-12-29
CN112149570B true CN112149570B (en) 2023-09-15

Family

ID=73896359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011012408.2A Active CN112149570B (en) 2020-09-23 2020-09-23 Multi-person living body detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112149570B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613470A (en) * 2020-12-30 2021-04-06 山东山大鸥玛软件股份有限公司 Face silence living body detection method, device, terminal and storage medium
CN113011385A (en) * 2021-04-13 2021-06-22 深圳市赛为智能股份有限公司 Face silence living body detection method and device, computer equipment and storage medium
CN114627534A (en) * 2022-03-15 2022-06-14 平安科技(深圳)有限公司 Living body discrimination method, electronic device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105718906A (en) * 2016-01-25 2016-06-29 宁波大学 Living body face detection method based on SVD-HMM
CN108038480A (en) * 2018-02-09 2018-05-15 宁波静芯号网络科技有限公司 A kind of face identification system device false proof based on face's vein
KR101919090B1 (en) * 2017-06-08 2018-11-20 (주)이더블유비엠 Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
CN111066023A (en) * 2017-04-21 2020-04-24 Sita高级旅行解决方案有限公司 Detection system, detection device and method thereof
CN111144365A (en) * 2019-12-31 2020-05-12 北京三快在线科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN105718906A (en) * 2016-01-25 2016-06-29 宁波大学 Living body face detection method based on SVD-HMM
CN111066023A (en) * 2017-04-21 2020-04-24 Sita高级旅行解决方案有限公司 Detection system, detection device and method thereof
KR101919090B1 (en) * 2017-06-08 2018-11-20 (주)이더블유비엠 Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
CN108038480A (en) * 2018-02-09 2018-05-15 宁波静芯号网络科技有限公司 A kind of face identification system device false proof based on face's vein
CN111144365A (en) * 2019-12-31 2020-05-12 北京三快在线科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112149570A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112149570B (en) Multi-person living body detection method, device, electronic equipment and storage medium
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
US11074445B2 (en) Remote sensing image recognition method and apparatus, storage medium and electronic device
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
CN111178183B (en) Face detection method and related device
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
WO2019033574A1 (en) Electronic device, dynamic video face recognition method and system, and storage medium
US20190340744A1 (en) Image processing method, terminal and storge medium
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
JP2020515983A (en) Target person search method and device, device, program product and medium
CN108021863B (en) Electronic device, age classification method based on image and storage medium
CN113255516A (en) Living body detection method and device and electronic equipment
CN111368632A (en) Signature identification method and device
CN111680546A (en) Attention detection method, attention detection device, electronic equipment and storage medium
CN113643260A (en) Method, apparatus, device, medium and product for detecting image quality
CN110222576B (en) Boxing action recognition method and device and electronic equipment
KR101961462B1 (en) Object recognition method and the device thereof
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN114297720A (en) Image desensitization method and device, electronic equipment and storage medium
CN114639056A (en) Live content identification method and device, computer equipment and storage medium
CN114730487A (en) Target detection method, device, equipment and computer readable storage medium
CN114463242A (en) Image detection method, device, storage medium and device
CN114078271A (en) Threshold determination method, target person identification method, device, equipment and medium
CN111523544A (en) License plate type detection method and system, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant