WO2023273050A1 - Living body detection method and apparatus, electronic device and storage medium - Google Patents

Living body detection method and apparatus, electronic device and storage medium

Info

Publication number
WO2023273050A1
WO2023273050A1 PCT/CN2021/126438 CN2021126438W
Authority
WO
WIPO (PCT)
Prior art keywords
image
detection
living body
body detection
target object
Prior art date
Application number
PCT/CN2021/126438
Other languages
English (en)
Chinese (zh)
Inventor
王柏润
张学森
刘建博
伊帅
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Publication of WO2023273050A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects
    • G06V20/647: Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to a living body detection method and device, electronic equipment, and a storage medium.
  • Liveness detection can play a key role in face recognition applications, and can be used to prevent attackers from forging faces with prostheses (for example, printed photos or masks). Therefore, how to improve the accuracy of liveness detection has become an urgent problem to be solved in the field of computer vision.
  • the present disclosure proposes a living body detection technical solution.
  • a living body detection method comprising:
  • the two-dimensional image may include an infrared image and/or a color image
  • the target living body detection method may include living body detection based on at least two of the depth map, the infrared image, and the color image
  • the depth image information may include size information and/or distance information of the depth image. Through the embodiments of the present disclosure, based on the depth image information contained in the depth map, the actual conditions of the living body detection are fully considered, and an appropriate target living body detection method is flexibly selected to realize the living body detection, which improves the accuracy and flexibility of living body detection.
  • the determining the target living body detection method based on the depth image information contained in the depth map includes: acquiring the target size of the target object in the depth map based on the depth image information contained in the depth map; in the case that the target size is less than a preset size threshold, the target living body detection method includes: living body detection based on the two-dimensional image; or, in the case that the target size is greater than or equal to the preset size threshold, the target living body detection method includes: living body detection based on the depth map and the two-dimensional image.
  • the living body detection based on two-dimensional images may include living body detection based on infrared images and/or color images
  • the living body detection based on the depth map and the two-dimensional image may include living body detection based on the depth map and the infrared image, living body detection based on the depth map and the color image, or living body detection based on the depth map, the infrared image, and the color image. Through the embodiments of the present disclosure, when the size of the target object in the depth map is small, the depth map can be omitted and a clearer two-dimensional image can be selected for living body detection. On the one hand, this reduces the impact of an unclear depth map on the accuracy of living body detection; on the other hand, it also reduces the constraint the depth map places on the detection distance, improving the recognition distance of living body detection.
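The size-threshold branch described above can be sketched in a few lines; the function name, the return labels, and the threshold value are illustrative assumptions, not anything specified by the patent:

```python
def select_detection_mode(target_size, size_threshold=32):
    """Choose which images feed the liveness check, given the target
    object's size (longer side, in pixels) in the depth map.
    The 32 px value is a hypothetical preset size threshold."""
    if target_size < size_threshold:
        # Depth patch too small/unclear: fall back to 2D images only.
        return ("2d",)
    # Depth patch large enough: combine depth map and 2D images.
    return ("depth", "2d")
```

A value exactly equal to the threshold falls into the depth-plus-2D branch, matching the "greater than or equal to" case of the claim.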
  • the acquiring the target size of the target object in the depth map based on the depth image information contained in the depth map includes: performing target object detection on the two-dimensional image to obtain a first detection image where the target object is located in the two-dimensional image; determining a first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map; according to the first image correspondence, combined with the first detection image, obtaining a second detection image where the target object is located in the depth map; and determining the target size of the target object in the depth map according to the size of the second detection image.
  • the first detection image may include an image of the location of the target object in the infrared image and/or an image of the location of the target object in the color image
  • the second detection image may include an image of the location of the target object in the depth map
  • the target size may include the length and/or width of the second detection image.
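The mapping from the first detection image (the face box in the 2D image) to the second detection image (the corresponding region of the depth map) can be sketched as follows. Reducing the first image correspondence to a simple scale and offset is a simplifying assumption that holds once the two images are registered; all names are illustrative:

```python
def map_box_to_depth(box, scale_x, scale_y, offset_x=0.0, offset_y=0.0):
    """Map a face box (x1, y1, x2, y2) from the 2D image into the
    depth map via an assumed scale/offset correspondence. In practice
    these parameters come from the registration of the two images."""
    x1, y1, x2, y2 = box
    return (x1 * scale_x + offset_x, y1 * scale_y + offset_y,
            x2 * scale_x + offset_x, y2 * scale_y + offset_y)


def target_size_from_box(depth_box):
    """Target size = longer side of the mapped box in the depth map."""
    x1, y1, x2, y2 = depth_box
    return max(x2 - x1, y2 - y1)
```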
  • the method further includes: performing image quality detection on the first detection image where the target object is located in the two-dimensional image to obtain an image quality detection result; if the image quality detection result is greater than a preset quality threshold, determining the first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map.
  • when the image quality detection result is greater than the preset quality threshold, the subsequent process is entered, such as determining the first image correspondence between the depth map and the two-dimensional image. In this way, image quality detection can be used to screen the images entering the subsequent living body detection process, thereby improving the image quality of the input images during living body detection and, in turn, the accuracy of living body detection.
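A minimal sketch of this quality gate, using a stand-in quality score (mean absolute contrast) since the patent does not name a specific metric; names and the threshold are assumptions:

```python
def quality_gate(first_detection_image, quality_threshold=0.3):
    """Screen the face crop before liveness detection.
    first_detection_image: 2D list of 8-bit grayscale pixel values.
    Returns True when the crop passes the (hypothetical) quality bar."""
    flat = [p for row in first_detection_image for p in row]
    mean = sum(flat) / len(flat)
    # Mean absolute deviation, normalized to [0, 1]: a crude contrast proxy.
    contrast = sum(abs(p - mean) for p in flat) / len(flat) / 255.0
    return contrast > quality_threshold
```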
  • the determining the target living body detection method based on the depth image information contained in the depth map includes: acquiring the living body detection distance of the target object based on the depth image information contained in the depth map; in the case where the living body detection distance is greater than a preset distance threshold, the target living body detection method includes: living body detection based on the two-dimensional image; in the case where the living body detection distance is less than or equal to the preset distance threshold, the target living body detection method includes: living body detection based on the depth map and the two-dimensional image.
  • the living body detection distance may include the distance between the target object and the living body detection device.
  • when the living body detection distance is large, the depth map may be omitted and a two-dimensional image, which is less affected by distance, may be selected for living body detection. This improves the accuracy of living body detection while reducing the influence of the depth map on the detection distance, improving the recognition distance of living body detection.
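The distance-based branch can be sketched like this; estimating the liveness-detection distance as the mean depth over the face region, and the threshold value itself, are illustrative assumptions:

```python
def select_mode_by_distance(depth_map_mm, face_region, distance_threshold_mm=1500):
    """Pick the detection mode from the liveness-detection distance.
    depth_map_mm: 2D list of per-pixel depths in millimetres;
    face_region: (x1, y1, x2, y2) of the target in the depth map."""
    x1, y1, x2, y2 = face_region
    pixels = [depth_map_mm[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    distance = sum(pixels) / len(pixels)   # mean depth ~ subject distance
    if distance > distance_threshold_mm:
        return ("2d",)                     # far away: depth unreliable
    return ("depth", "2d")                 # close enough: use both
```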
  • the two-dimensional image includes an infrared image and/or a color image
  • the target living body detection method includes: living body detection based on at least two images, and the at least two images include at least two of the depth map, the infrared image, and the color image; the performing living body detection on the target object through the target living body detection method based on the two-dimensional image to obtain the living body detection result of the target object includes: performing living body detection on the target object through the target living body detection method based on the at least two images to obtain at least two intermediate living body detection results, wherein the at least two intermediate living body detection results respectively correspond to the at least two images; obtaining weights respectively corresponding to the at least two intermediate living body detection results; and obtaining the living body detection result of the target object based on the weights and the at least two intermediate living body detection results.
  • the living body detection based on at least two images may include living body detection based on the depth map and the infrared image, based on the depth map and the color image, based on the infrared image and the color image, or based on the depth map, the infrared image, and the color image; the at least two intermediate living body detection results may include at least two of the depth map living body detection result, the infrared image living body detection result, and the color image living body detection result. Through the embodiments of the present disclosure, the weight of the living body detection result corresponding to the depth map can be reduced by adaptive weighting, so that even in the case of an attack with a 3D prosthesis the influence of the depth map result can be reduced, further improving the accuracy of living body detection.
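The weighted combination of intermediate results can be sketched as a normalized weighted mean; the 0.5 decision threshold and all names are illustrative assumptions:

```python
def fuse_liveness_scores(scores, weights):
    """Fuse intermediate liveness scores (one per image type).
    Each score is in [0, 1], higher meaning more likely live; weights
    are non-negative per-result weights (e.g. depth, IR, color).
    Returns (fused score, is_live decision)."""
    assert len(scores) == len(weights) and weights
    total = sum(weights)
    fused = sum(s * w for s, w in zip(scores, weights)) / total
    return fused, fused >= 0.5
```

Lowering the depth result's weight (say, under a suspected 3D-prosthesis attack) shifts the fused score toward the 2D results, which is the adaptive-weighting behavior the passage describes.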
  • the obtaining the weights respectively corresponding to the at least two intermediate living body detection results includes: determining the weights respectively corresponding to the at least two intermediate living body detection results based on the training result of a weight network layer in the living body detection network, wherein the living body detection network is used to perform living body detection on the target object through the target living body detection method; or, determining the weights respectively corresponding to the at least two intermediate living body detection results according to the depth image information and/or the two-dimensional image information contained in the two-dimensional image.
  • the weights corresponding to the at least two intermediate living body detection results may include the respective weights of two intermediate living body detection results when two such results are involved, or the respective weights of three intermediate living body detection results when three are involved; the weights determined according to the depth image information and/or the two-dimensional image information may include weights determined based on the distance in the depth map, or weights determined based on the brightness or size of the two-dimensional image, etc. Through the embodiments of the present disclosure, the weights of different intermediate living body detection results can be flexibly determined in two ways, improving the flexibility of the living body detection process: a neural network can adaptively determine the weights corresponding to different intermediate living body detection results, realizing end-to-end living body detection and improving the efficiency and accuracy of living body detection; alternatively, the weights can be adaptively determined according to the actual conditions of the living body detection images, so that the obtained living body detection result better matches the real situation, further improving the accuracy of living body detection.
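A sketch of the second option, deriving weights from image information. The specific heuristics (depth weight falling off with distance, IR weight scaling with brightness) are assumptions for illustration, not the patent's formulas:

```python
def weights_from_image_info(distance_mm, ir_brightness, max_distance_mm=2000.0):
    """Derive per-result weights from image information instead of a
    learned weight layer. distance_mm: subject distance from the depth
    map; ir_brightness: mean IR pixel value in [0, 255]."""
    # Depth result is trusted less the farther away the subject is.
    depth_w = max(0.0, 1.0 - distance_mm / max_distance_mm)
    # IR result is trusted less when the IR crop is dim.
    ir_w = min(1.0, max(0.0, ir_brightness / 255.0))
    return {"depth": depth_w, "ir": ir_w}
```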
  • the target living body detection method includes living body detection based on the depth map and the two-dimensional image; the performing living body detection on the target object through the target living body detection method based on the two-dimensional image includes: performing living body detection on the first detection image where the target object is located in the two-dimensional image through at least one first network branch in the living body detection network; and performing living body detection on the second detection image where the target object is located in the depth map through a second network branch in the living body detection network.
  • the first network branch may include an infrared image liveness detection branch and/or a color image liveness detection branch
  • the first detection image may include a human face frame intercepted from the infrared image and/or a human face frame intercepted from the color image
  • the second network branch can include the depth map liveness detection branch
  • the second detection image can include the face frame intercepted in the depth map.
  • when the target living body detection method includes living body detection based on both the depth map and the two-dimensional image, the first detection image, which has a smaller size, can be used to quickly obtain the intermediate living body detection result corresponding to the two-dimensional image, while the second detection image is used to obtain the intermediate living body detection result corresponding to the depth map. The multiple intermediate living body detection results then jointly yield a more accurate living body detection result for the target object, ensuring the accuracy of living body detection while improving its efficiency.
  • the acquiring the depth map and the two-dimensional image of the target object includes: acquiring the depth map and the original two-dimensional image of the target object; based on the depth map, performing Registration processing, obtaining the two-dimensional image registered with the depth map, wherein the registration processing includes cropping processing and/or scaling processing.
  • the original two-dimensional image may include the original infrared image and/or the original color image
  • the registration may include the registration of resolution and spatial position.
  • through the embodiments of the present disclosure, the two-dimensional image registered with the depth map can be obtained, which facilitates subsequently determining the target size of the target object in the depth map based on the two-dimensional image and acquiring the first detection image and the second detection image, etc., thereby improving the overall efficiency of living body detection.
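A pure-Python sketch of registration by cropping and nearest-neighbour scaling. Real systems would derive the crop and scale from calibrated camera parameters; the names here are illustrative:

```python
def register_to_depth(image, depth_shape, crop=None):
    """Register an original 2D image to the depth map's grid.
    image: 2D list of pixels; depth_shape: (height, width) of the
    depth map; crop: optional (x1, y1, x2, y2) region kept first."""
    if crop is not None:
        x1, y1, x2, y2 = crop
        image = [row[x1:x2] for row in image[y1:y2]]
    src_h, src_w = len(image), len(image[0])
    dst_h, dst_w = depth_shape
    # Nearest-neighbour resample onto the depth map's resolution.
    return [[image[y * src_h // dst_h][x * src_w // dst_w]
             for x in range(dst_w)] for y in range(dst_h)]
```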
  • the target living body detection method includes living body detection based on the two-dimensional image; the performing living body detection on the target object through the target living body detection method based on the two-dimensional image includes: acquiring the first detection image where the target object is located in the two-dimensional image; according to a second image correspondence between the two-dimensional image and the original two-dimensional image, combined with the first detection image, obtaining a third detection image where the target object is located in the original two-dimensional image; and performing living body detection on the third detection image through at least one first network branch in the living body detection network.
  • the third detection image may include the regressed face frame in the original infrared image and/or the regressed face frame in the original color image.
  • the method further includes: in a case where it is determined that the target object is a living body based on the living body detection result, identifying the target object according to the two-dimensional image.
  • the target object can be identified when it is determined that the target object is a living body, which saves the identification process when the target object is not a living body, and improves the efficiency and confidence of identification.
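Gating identification on the liveness result can be sketched in a few lines; `identify_fn` is a placeholder for whatever recognizer is in use, not an API from the patent:

```python
def detect_then_identify(is_live, identify_fn, two_d_image):
    """Run identification only when the target is judged to be live,
    skipping the (typically more expensive) identification otherwise."""
    if not is_live:
        return None              # not a living body: no identification
    return identify_fn(two_d_image)
```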
  • a living body detection device including:
  • the image acquisition module is used to acquire the depth map and two-dimensional image of the target object;
  • the detection method determination module is used to determine the target living body detection method based on the depth image information contained in the depth map;
  • the living body detection module is used to perform living body detection on the target object through the target living body detection method based on the two-dimensional image, to obtain the living body detection result of the target object.
  • the detection mode determination module is configured to: acquire the target size of the target object in the depth map based on the depth image information contained in the depth map; when the target size is less than a preset size threshold, the target living body detection method includes: living body detection based on the two-dimensional image; or, when the target size is greater than or equal to the preset size threshold, the target living body detection method includes: living body detection based on the depth map and the two-dimensional image.
  • the detection mode determination module is further configured to: perform target object detection on the two-dimensional image to obtain the first detection image where the target object is located in the two-dimensional image; determine the first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map; according to the first image correspondence, combined with the first detection image, obtain the second detection image where the target object is located in the depth map; and determine the target size of the target object in the depth map according to the size of the second detection image.
  • the detection mode determination module is further configured to: perform image quality detection on the first detection image to obtain an image quality detection result; when the image quality detection result is greater than a preset quality threshold, determine the first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map.
  • the detection mode determination module is configured to: acquire the living body detection distance of the target object based on the depth image information contained in the depth map; when the living body detection distance is greater than a preset distance threshold, the target living body detection method includes: living body detection based on the two-dimensional image; when the living body detection distance is less than or equal to the preset distance threshold, the target living body detection method includes: living body detection based on the depth map and the two-dimensional image.
  • the two-dimensional image includes an infrared image and/or a color image
  • the target living body detection method includes: living body detection based on at least two images, and the at least two images include at least two of the depth map, the infrared image, and the color image
  • the living body detection module is configured to: perform living body detection on the target object through the target living body detection method based on the at least two images to obtain at least two intermediate living body detection results, wherein the at least two intermediate living body detection results respectively correspond to the at least two images; obtain the weights respectively corresponding to the at least two intermediate living body detection results; and obtain the living body detection result of the target object based on the weights and the at least two intermediate living body detection results.
  • the living body detection module is further configured to: determine the weights respectively corresponding to the at least two intermediate living body detection results based on the training result of the weight network layer in the living body detection network, wherein the living body detection network is used to perform living body detection on the target object through the target living body detection method; or, determine the weights respectively corresponding to the at least two intermediate living body detection results according to the depth image information and/or the two-dimensional image information contained in the two-dimensional image.
  • the target living body detection method includes living body detection based on the depth map and the two-dimensional image; the living body detection module is configured to: perform living body detection on the first detection image where the target object is located in the two-dimensional image through at least one first network branch in the living body detection network; and perform living body detection on the second detection image where the target object is located in the depth map through a second network branch in the living body detection network.
  • the image acquisition module is configured to: acquire the depth map of the target object and the original two-dimensional image; perform registration processing on the original two-dimensional image based on the depth map to obtain the two-dimensional image registered with the depth map, wherein the registration processing includes cropping processing and/or scaling processing.
  • the target living body detection method includes living body detection based on the two-dimensional image; the living body detection module is configured to: acquire the first detection image where the target object is located in the two-dimensional image; according to the second image correspondence between the two-dimensional image and the original two-dimensional image, combined with the first detection image, obtain the third detection image where the target object is located in the original two-dimensional image; and perform living body detection on the third detection image through at least one first network branch in the living body detection network.
  • the device is further configured to: identify the target object according to the two-dimensional image when it is determined that the target object is a living body based on the living body detection result.
  • an electronic device including:
  • a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to execute the above-mentioned living body detection method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above-mentioned living body detection method is implemented.
  • a computer program product, including computer-readable code, or a volatile or non-volatile computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above-mentioned living body detection method.
  • Fig. 1 shows a flowchart of a living body detection method according to an embodiment of the present disclosure.
  • Fig. 2 shows a flowchart of a living body detection method according to an embodiment of the present disclosure.
  • Fig. 3 shows a flowchart of a living body detection method according to an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a network structure of a living body detection network according to an embodiment of the present disclosure.
  • Fig. 5 shows a block diagram of a living body detection device according to an embodiment of the present disclosure.
  • Fig. 6 shows a schematic diagram of an application example according to the present disclosure.
  • Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flow chart of a method for detecting a living body according to an embodiment of the present disclosure.
  • the method can be applied to a living body detecting device, and the living body detecting device can be a terminal device, a server, or other processing devices.
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the living body detection method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the living body detection method may include:
  • the target object may be any object to be detected, such as a person or an animal to be detected, and in some possible implementation manners, the target object may include a human face object and/or a human body object.
  • the number of target objects is not limited in the embodiments of the present disclosure, and may be a single object or multiple objects. In the case that the target object includes multiple objects, living body detection can be performed on the multiple objects simultaneously, or on each object separately; which manner to choose can be flexibly determined according to the actual situation.
  • the depth map may be an image in which the depth from the image capture device to at least one point in the captured scene is taken as a pixel value, and the depth map of the target object may reflect the geometric shape of the visible surface of the target object.
  • the manner of obtaining the depth map of the target object is not limited in the embodiments of the present disclosure, and can be flexibly determined according to actual conditions.
  • the depth map of the target object can be directly obtained from an image acquisition device, wherein the image acquisition device may be any device for image acquisition of the target object, such as a stereo camera or a Time of Flight (TOF) camera.
  • the two-dimensional image of the target object can be any image obtained by two-dimensional image acquisition of the target object. The two-dimensional image can be any image suitable for living body detection, such as an infrared (IR, Infrared Radiation) image and/or a color image, etc.
  • the infrared image may be an image formed based on the thermal infrared difference between the target object and the background; it is less disturbed by ambient light, and an infrared image of the target object can be obtained at any time of day.
  • a color image can be an image with multiple channels, such as an RGB image (R: Red; G: Green; B: Blue), a CMYK image (C: Cyan; M: Magenta; Y: Yellow; K: blacK), or a YUV image (Y: Luminance; U and V: Chrominance), etc.
  • the number of acquired depth maps and 2D images is not limited in the embodiments of the present disclosure.
  • One or more depth maps may be acquired, or one or more 2D images may be acquired, which can be flexibly selected according to actual conditions.
  • the method of acquiring the two-dimensional image of the target object is also not limited in the embodiments of the present disclosure, and can be flexibly determined according to actual conditions.
  • the two-dimensional image of the target object can be directly obtained from the image acquisition device.
  • the image acquisition device reference may be made to the foregoing disclosed embodiments, and details are not repeated here.
  • the depth map and the two-dimensional image of the target object can be acquired simultaneously or separately.
  • Depth image information can include image information of the depth map itself, such as the size or resolution of the depth map, and can also include relevant information extracted from the depth map, such as the distance of the target object in space determined based on the depth map.
  • the target living body detection method can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
  • the target living body detection method may include one or more of: living body detection based on the infrared image, based on the color image, based on the infrared image and the color image, based on the depth map and the infrared image, based on the depth map and the color image, or based on the depth map, the infrared image, and the color image.
  • the way the target living body detection method is determined can also vary flexibly; for example, one or more of the target living body detection methods in the above disclosed embodiments can be selected based on the size of the target object in the depth map, or based on the living body detection distance of the target object, etc. See the following disclosed embodiments for details, which will not be expanded here.
  • based on the two-dimensional image, living body detection can be performed on the target object through the target living body detection method. The process of living body detection can be flexibly determined according to the actual target living body detection method. Living body detection based on the two-dimensional image may rely on one of the two-dimensional images, such as the color image or the infrared image, or on multiple two-dimensional images together; this, too, can be flexibly determined according to the actual target living body detection method.
  • the living body detection result may include two cases: determining that the target object is a living body, and determining that the target object is not a living body. In some possible implementations, the living body detection result may also include the confidence that the target object is a living body, among other forms.
  • through the embodiments of the present disclosure, the actual conditions of living body detection can be fully considered, and an appropriate target living body detection method can be flexibly selected, which improves the accuracy and flexibility of living body detection.
  • S11 may include: acquiring a depth map of the target object and an original two-dimensional image; based on the depth map, performing registration processing on the original two-dimensional image to obtain a two-dimensional image registered with the depth map, Wherein, the registration processing includes cropping processing and/or scaling processing.
  • the original two-dimensional image may be a two-dimensional image obtained without any processing after image acquisition of the target object.
  • reference may be made to the implementation forms of the two-dimensional image in the above-mentioned disclosed embodiments, which will not be repeated here.
  • the depth map and the original two-dimensional image can be registered first, so that the resolution and spatial position of the depth map and the two-dimensional image are aligned with each other, thereby reducing the difficulty of subsequent image processing and living body detection.
  • the manner of registration is not limited in this embodiment of the present disclosure.
  • the registration of the two-dimensional image and the depth map can be realized through cropping processing and/or scaling processing.
  • the registration between the two-dimensional image and the depth map can also be realized through a registration network, wherein the registration network can be any neural network with an image registration function, and its implementation form is not limited in the embodiments of the present disclosure.
  • the original two-dimensional image may include the original infrared image and the original color image.
  • the acquired original infrared image may itself have been registered with the depth map. In this case, only the original color image needs to be registered to obtain a color image registered with the depth map.
  • a two-dimensional image registered with the depth map facilitates the subsequent determination of the target size of the target object in the depth map based on the two-dimensional image, and the acquisition of the first detection image and the second detection image, thereby improving the overall efficiency of liveness detection.
  • Fig. 2 shows a flow chart of a living body detection method according to an embodiment of the present disclosure.
  • S12 may include:
  • the target living body detection method includes: living body detection based on a two-dimensional image. or,
  • the target living body detection method includes: living body detection based on a depth map and a two-dimensional image.
  • the target size may be the size of the target object in the depth map; for example, it may include the side length of the long side and/or the short side of the target object in the depth map, or the resolution of the target object in the depth map.
  • the implementation of S121 can be flexibly selected according to the actual situation.
  • the target size of the target object can be jointly determined based on the size information of the depth map in the depth image information and the proportion of the target object in the depth map.
  • the target size can also be determined through the proportional relationship between the 2D image and the depth map, and the size of the target object in the 2D image.
  • the target size can be compared with a preset size threshold to determine a target liveness detection method.
  • the size of the preset size threshold can be flexibly set according to actual conditions, and is not limited in this embodiment of the present disclosure.
  • the target liveness detection method can be determined as liveness detection based on the two-dimensional image, so as to omit the liveness detection process based on the depth map; while maintaining the accuracy of liveness detection, this reduces the amount of calculation and improves the efficiency of liveness detection.
  • liveness detection based on the two-dimensional image may be realized based on all of the two-dimensional images, or based on one or more of the two-dimensional images, which can be flexibly selected according to the actual situation.
  • the living body detection based on the two-dimensional image may include the living body detection based on the infrared image and/or the color image.
  • when the target size is greater than or equal to the preset size threshold, it may indicate that the target object in the depth map is relatively clear; in this case, liveness detection of the target object may be performed based on the depth map. Therefore, in a possible implementation, when the target size is greater than or equal to the preset size threshold, the target liveness detection method can be determined as liveness detection based on the depth map and the two-dimensional image, so as to improve the accuracy of liveness detection.
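The branching logic above (small target size: 2D-only detection; otherwise: depth plus 2D) can be sketched as follows. The function name, return format, and threshold value are illustrative assumptions; the disclosure leaves the threshold to be set flexibly:

```python
def select_detection_method(target_size: float, size_threshold: float) -> tuple:
    """Return the set of images the selected liveness detection method uses."""
    if target_size < size_threshold:
        # Target too small in the depth map: the depth map is likely
        # unreliable, so fall back to 2D-only detection.
        return ("infrared", "color")
    # Target large enough: combine the depth map with the 2D images.
    return ("depth", "infrared", "color")
```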
  • liveness detection based on depth maps and two-dimensional images may include: liveness detection based on the depth map and the infrared image, liveness detection based on the depth map and the color image, and liveness detection based on the depth map, the infrared image and the color image.
  • when the size of the target object in the depth map is small, the depth map can be omitted and a clearer two-dimensional image can be selected for liveness detection, reducing the impact of the depth map on detection precision and improving the accuracy of liveness detection; it can also reduce the influence of the depth map on the liveness detection distance and extend the recognition distance of liveness detection.
  • Fig. 3 shows a flowchart of a living body detection method according to an embodiment of the present disclosure.
  • S121 may include:
  • the first detection image is an image extracted from a two-dimensional image where the target object is located, for example, may include a detection image extracted from an infrared image and/or a detection image extracted from a color image.
  • the first detection image may be an image of the whole and/or a part of the target object extracted from the two-dimensional image; for example, the first detection image may be a detection image determined based on the overall detection frame of the target object, the human body detection frame of the target object, or the face detection frame of the target object.
  • the method of detecting the target object on the two-dimensional image to obtain the first detection image is not limited in the embodiment of the present disclosure, and can be flexibly selected according to the actual situation.
  • the two-dimensional image can be input into a target object detection network to obtain the detection frame output by the network, and the two-dimensional image is cropped based on the detection frame to obtain the first detection image.
  • the implementation manner of the target object detection network is not limited in the embodiments of the present disclosure, and any neural network with an object detection function may be used as an implementation form of the target object detection network.
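The cropping step above (obtaining the first detection image from the detection frame output by a target object detection network) can be sketched as follows; here the box is supplied directly rather than produced by a network, and the (x1, y1, x2, y2) box format and function name are assumptions:

```python
import numpy as np

def crop_first_detection_image(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the first detection image given a detection frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    h, w = image.shape[:2]
    # Clamp the frame to the image bounds before cropping.
    x1, x2 = max(0, x1), min(w, x2)
    y1, y2 = max(0, y1), min(h, y2)
    return image[y1:y2, x1:x2]
```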
  • the first image correspondence between the depth map and the two-dimensional image may also be determined based on depth image information included in the depth map.
  • the first image corresponding relationship may be an image coordinate transformation relationship between the depth map and the two-dimensional image.
  • the method of determining the first image correspondence based on the depth image information can be flexibly determined according to the actual situation. In some possible implementations, information such as the size, resolution, and corner point positions of the depth map and the two-dimensional image can be combined to determine the position transformation relationship of pixels between the depth map and the two-dimensional image as the first image correspondence.
  • the aligned two-dimensional image is used as the transformed two-dimensional image, and based on the image correspondence between the transformed two-dimensional image and the two-dimensional image, the first image correspondence between the depth map and the two-dimensional image can be determined.
  • when the depth map and the two-dimensional image are already registered with each other, the first image correspondence between them can be directly determined as a one-to-one correspondence.
  • the implementation order of S1211 and S1212 can be flexibly determined according to the actual situation, and can be implemented simultaneously or sequentially in a certain order, etc., which are not limited in the embodiments of the present disclosure.
  • the position and size of the target object in the depth map can be determined, so as to obtain the second detection image where the target object is located in the depth map.
  • the second detection image may also be an image occupied by the whole and/or part of the target object in the depth map, and its implementation may refer to the first detection image, which will not be repeated here.
  • the specific process of obtaining the second detection image can also be flexibly selected according to the actual situation.
  • the position of the first detection image in the two-dimensional image can be transformed into the depth map through the first image correspondence, so as to determine the position of the target object in the depth map, and the depth map is cropped based on this position to obtain the second detection image.
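The transform-then-crop step above can be sketched as follows, assuming the first image correspondence takes a simple affine form (sx, sy, tx, ty); the disclosure leaves the exact form of the correspondence open, so both the parameterization and the function names are illustrative:

```python
import numpy as np

def map_box_to_depth(box: tuple, correspondence: tuple) -> tuple:
    """Map a detection box from 2D-image coordinates into depth-map coordinates."""
    sx, sy, tx, ty = correspondence
    x1, y1, x2, y2 = box
    return (int(x1 * sx + tx), int(y1 * sy + ty),
            int(x2 * sx + tx), int(y2 * sy + ty))

def crop_second_detection_image(depth: np.ndarray, box: tuple,
                                correspondence: tuple) -> np.ndarray:
    # Transform the first detection image's position into the depth map,
    # then crop the depth map at that position to get the second detection image.
    x1, y1, x2, y2 = map_box_to_depth(box, correspondence)
    return depth[y1:y2, x1:x2]
```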
  • the target size of the target object in the depth map can be determined according to the size of the second detection image.
  • the target size is determined based on which size of the second detection image is not limited in this embodiment of the present disclosure.
  • the target size may be a size determined based on the length and/or width of the second detection image; in some possible implementations, the target size may also be the maximum or minimum value among the length and width of the second detection image.
  • the size of the second detection image may be the overall size and/or a partial size of the target object.
  • when it is a partial size, the overall size of the target object can be deduced from the partial size; for example, the partial size can be converted to the overall size according to the typical size ratio between the face and the human body, to obtain the target size.
  • the partial size can also be directly used as the target size; in this case, the preset size threshold can be a threshold set according to the partial size of the target object, for example a size threshold set based on the face size of the target object.
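The size determination described above can be sketched as follows. The choice of the maximum side as the target size is one option the disclosure allows, and the 7.5 face-to-body ratio is an illustrative average proportion, not a value given by the disclosure:

```python
def target_size_from_detection(length: float, width: float) -> float:
    # One illustrative choice: use the larger of the second detection
    # image's length and width as the target size.
    return max(length, width)

def partial_to_overall(face_size: float, face_to_body_ratio: float = 7.5) -> float:
    """Convert a partial (face) size to an estimated overall target size.

    Alternatively, the partial size can be compared directly against a
    face-based preset size threshold, skipping this conversion.
    """
    return face_size * face_to_body_ratio
```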
  • a two-dimensional image can be used to locate the target object, and the first image correspondence between the two-dimensional image and the depth map can be used to conveniently determine the target size of the target object in the depth map.
  • the size determination method is more convenient and has higher accuracy, thereby improving the efficiency and accuracy of living body detection.
  • the method proposed in the embodiments of the present disclosure may further include: performing image quality detection on the first detection image to obtain an image quality detection result; and, in the case that the image quality detection result is greater than a preset quality threshold, determining the first image correspondence between the depth map and the two-dimensional image based on the depth image information included in the depth map.
  • the image quality detection result may include the image quality of the first detection image under one or more evaluation criteria, and the evaluation criteria can be flexibly set according to the actual situation, not limited to the following disclosed embodiments; for example, they may include one or more quality evaluation criteria such as clarity, completeness or brightness.
  • the manner of performing image quality detection on the first detection image is not limited in this embodiment of the present disclosure, and may be flexibly selected according to actual conditions.
  • the first detection image can be input into a quality detection network to obtain the image quality detection result output by the quality detection network, wherein the quality detection network can be any neural network with an image quality detection function, which is not limited in the embodiments of the present disclosure.
  • the output image quality detection result may be a comprehensive detection result under the above-mentioned multiple evaluation standards, or may be individual evaluation results under one or more evaluation standards.
  • image quality detection may also be performed on the first detection image through a relevant image quality detection algorithm; for example, the completeness of the first detection image can be determined through corner point detection, the sharpness quality of the first detection image can be determined through a sharpness recognition method, or the brightness quality of the first detection image can be determined based on the color values of the pixels in the first detection image.
  • respective preset quality thresholds may be set for multiple evaluation criteria, and the image quality detection results under at least one evaluation criterion are compared with their preset quality thresholds respectively; when the image quality detection results under the at least one evaluation criterion are all greater than their corresponding preset quality thresholds, the image quality detection result is considered to be greater than the preset quality threshold.
  • a comprehensive image quality detection result may also be obtained based on the image quality detection results under at least one evaluation criterion, and the comprehensive image quality detection result is compared with a set comprehensive preset quality threshold to obtain the comparison result.
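The quality checks above can be sketched as follows. The variance-of-Laplacian sharpness proxy is one common choice (the disclosure only refers to "a sharpness recognition method"), and the criterion names, scales, and per-criterion comparison format are assumptions:

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    # Variance of a 4-neighbour Laplacian as a simple sharpness proxy.
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def quality_passes(results: dict, thresholds: dict) -> bool:
    # Per-criterion comparison: every evaluation criterion must exceed
    # its own preset quality threshold.
    return all(results[name] > thresholds[name] for name in thresholds)
```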
  • when the image quality detection result is greater than the preset quality threshold, it can be considered that the quality of the first detection image meets the requirements of liveness detection, and in this case the process can proceed to S1212.
  • by entering subsequent processes such as determining the first image correspondence between the depth map and the two-dimensional image only when the image quality detection result is greater than the preset quality threshold, image quality detection can be used to screen the images entering the subsequent liveness detection process, thereby improving the image quality of the input images during liveness detection and, in turn, the accuracy of liveness detection.
  • when the image quality detection result is less than or equal to the preset quality threshold, it may be considered that the quality of the first detection image is low; correspondingly, the image quality of the two-dimensional image corresponding to the first detection image may also be low, in which case the liveness detection result based on the two-dimensional image is highly likely to be inaccurate. Therefore, in a possible implementation, when the image quality detection result is less than or equal to the preset quality threshold, liveness detection based on the acquired two-dimensional image may be stopped.
  • in some possible implementations, a new depth map and two-dimensional image can be acquired again, and liveness detection can be realized based on the newly acquired depth map and two-dimensional image through the liveness detection method proposed in the embodiments of the present disclosure; in some possible implementations, it is also possible to directly exit the liveness detection process.
  • S12 may include: acquiring the liveness detection distance of the target object based on the depth image information contained in the depth map; in the case that the liveness detection distance is greater than a preset distance threshold, the target liveness detection method includes: liveness detection based on the two-dimensional image; in the case that the liveness detection distance is less than or equal to the preset distance threshold, the target liveness detection method includes: liveness detection based on the depth map and the two-dimensional image.
  • the liveness detection distance can be the distance between the target object and the liveness detection device; how to obtain the liveness detection distance based on the depth image information contained in the depth map can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
  • the distance between the target object and the image acquisition device may be determined according to the depth distance represented by at least one pixel in the depth map, and the liveness detection distance between the target object and the liveness detection device may then be determined according to the positional relationship between the image acquisition device and the liveness detection device; in a possible implementation, when the liveness detection device itself includes the image acquisition device, the liveness detection distance between the target object and the liveness detection device can be directly determined based on the depth distance reflected by at least one pixel in the depth map.
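A sketch of deriving the liveness detection distance from the depth map follows. The disclosure only requires that the distance be derived from at least one pixel's depth; using the median of valid (non-zero) pixels is an illustrative robust choice, and it assumes the capture device and the liveness detection device are co-located:

```python
import numpy as np

def liveness_detection_distance(depth: np.ndarray) -> float:
    """Estimate the target-to-device distance from depth image information."""
    valid = depth[depth > 0]  # zero pixels typically mean "no depth reading"
    return float(np.median(valid)) if valid.size else float("inf")
```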
  • the living body detection distance may be compared with a preset distance threshold to determine a target living body detection method.
  • the distance of the preset distance threshold can also be flexibly set according to actual conditions, and is not limited in this embodiment of the present disclosure.
  • the target object in the depth map may have low definition.
  • the accuracy of liveness detection based on the depth map is then low, which may affect the overall accuracy of liveness detection. Therefore, in a possible implementation, when the liveness detection distance is greater than the preset distance threshold, the target liveness detection method can be determined as liveness detection based on the two-dimensional image, so as to omit the liveness detection process based on the depth map; while maintaining the accuracy of liveness detection, this reduces the amount of calculation and improves the efficiency of liveness detection.
  • the target object in the depth map may be relatively clear.
  • liveness detection of the target object may then be performed based on the depth map. Therefore, in a possible implementation, when the liveness detection distance is less than or equal to the preset distance threshold, the target liveness detection method can be determined as liveness detection based on the depth map and the two-dimensional image, so as to improve the accuracy of liveness detection.
  • the implementation forms of the living body detection based on the depth map and the two-dimensional image can also refer to the above-mentioned disclosed embodiments, which will not be repeated here.
  • when the liveness detection distance is large, the depth map can be omitted and a two-dimensional image that is less affected by distance can be selected for liveness detection, which improves the accuracy of liveness detection, reduces the influence of the depth map on the liveness detection distance, and extends the recognition distance of liveness detection.
  • the two-dimensional image may include an infrared image and/or a color image
  • the target living body detection method may include a detection method based on a two-dimensional image, or a detection method based on a depth map and a two-dimensional image
  • the target liveness detection method may include: liveness detection based on at least two images, wherein the at least two images may be a depth map and an infrared image, a depth map and a color image, an infrared image and a color image, or a depth map, an infrared image and a color image.
  • S13 may include: performing liveness detection on the target object through the target liveness detection method based on the at least two images to obtain at least two intermediate liveness detection results; acquiring the weights corresponding to the at least two intermediate liveness detection results; and obtaining the liveness detection result of the target object based on the weights and the at least two intermediate liveness detection results.
  • an intermediate liveness detection result may be a detection result obtained by performing liveness detection based on one of the images; since the target liveness detection method may include liveness detection based on at least two images, correspondingly, at least two intermediate liveness detection results corresponding to the at least two images can be obtained respectively.
  • in one example, liveness detection can be performed based on the depth map and the infrared image; in this case, an intermediate liveness detection result corresponding to the depth map and an intermediate liveness detection result corresponding to the infrared image can be obtained respectively.
  • the living body detection can be performed based on the infrared image and the color image.
  • an intermediate living body detection result corresponding to the infrared image and an intermediate living body detection result corresponding to the color image can be respectively obtained.
  • weights corresponding to different intermediate living body detection results can be further obtained, and weighted summation is performed according to the weight and at least two intermediate living body detection results to obtain the living body detection result of the target object.
  • the method of obtaining the weights corresponding to different intermediate living body detection results can be flexibly determined according to the actual situation.
  • in some possible implementations, respective weights can be preset for different intermediate liveness detection results, and the preset weights can be read directly.
  • adaptive weights adapted to the actual conditions of the at least two images may also be acquired in other manners. Wherein, the manner of acquiring the adaptive weight can be flexibly determined according to the actual conditions of at least two images, see the following disclosed embodiments for details, and will not be expanded here.
  • under different target liveness detection methods, the weights of the intermediate liveness detection results corresponding to the same type of image may be different.
  • for example, under one target liveness detection method, the weight of the intermediate liveness detection result corresponding to the infrared image may be A, while under another target liveness detection method it may be B; the values of A and B may be different or the same, which can be flexibly determined according to the actual situation and is not limited in the embodiments of the present disclosure.
  • the process of obtaining the liveness detection result of the target object based on the weights and the at least two intermediate liveness detection results is not limited in the embodiments of the present disclosure; in a possible implementation, the at least two intermediate liveness detection results can each be multiplied by their corresponding weights and then summed to obtain the liveness detection result of the target object.
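The weighted summation above can be sketched as follows. Treating each intermediate result as a confidence in [0, 1] and normalizing the weights are illustrative assumptions; the disclosure only specifies multiplying each intermediate result by its weight and summing:

```python
def fuse_liveness_results(scores: list, weights: list) -> float:
    """Weighted fusion of intermediate liveness detection results.

    Weights are normalized so the fused value stays a comparable confidence.
    """
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total
```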
  • when the depth map is subjected to a 3D prosthesis attack, the weight of the liveness detection result corresponding to the depth map can be reduced through adaptive weighting, thereby reducing the impact of the depth map's detection result and further improving the accuracy of liveness detection.
  • obtaining the weights corresponding to the at least two intermediate liveness detection results may include: determining the weights corresponding to the at least two intermediate liveness detection results based on the training result of a weighted network layer in the liveness detection network, wherein the liveness detection network is used to perform liveness detection on the target object through the target liveness detection method; alternatively, determining the weights corresponding to the at least two intermediate liveness detection results respectively according to the depth image information and/or the two-dimensional image information contained in the two-dimensional image.
  • the living body detection network may be a network used to detect the living body of the target object, and its implementation form is not limited in the embodiments of the present disclosure, and is not limited to the following disclosed embodiments.
  • the living body detection network may include at least a first network branch and a second network branch, wherein the first network branch may be used to realize living body detection based on two-dimensional images, and the second network branch may be used to It is used to realize liveness detection based on depth map.
  • the two-dimensional images may include infrared images and/or color images; therefore, in a possible implementation, the number of first network branches may be two, respectively used to implement liveness detection based on the infrared image and liveness detection based on the color image.
  • Different branches in the liveness detection network can be trained together or individually.
  • the embodiment of the present disclosure does not limit the training method of the liveness detection network, and any neural network training method can be used to train the liveness detection network.
  • in one example, the target liveness detection method includes liveness detection based on the depth map and the infrared image.
  • the infrared image can be input into the first network branch corresponding to the infrared image, and the depth map can be input into the second network branch, so as to realize the living body detection under the target living body detection mode;
  • the target living detection method includes living detection based on depth map, infrared image and color image
  • the infrared image can be input into the first network branch corresponding to the infrared image
  • the color image can be input into the first network branch corresponding to the color image, and the depth map can be input into the second network branch, so as to realize liveness detection under this target liveness detection method.
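The routing described above (infrared and color images to their respective first network branches, depth map to the second network branch) can be sketched as follows; the branch names and the dictionary-based interface are illustrative assumptions, not the disclosure's actual network API:

```python
def route_inputs(mode_images: tuple, inputs: dict) -> dict:
    """Map each image of the target liveness detection method to its branch."""
    branch_of = {
        "infrared": "first_branch_ir",   # first network branch (infrared)
        "color": "first_branch_rgb",     # first network branch (color)
        "depth": "second_branch",        # second network branch (depth map)
    }
    return {branch_of[name]: inputs[name] for name in mode_images}
```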
  • FIG. 4 shows a schematic diagram of a network structure of a living body detection network according to an embodiment of the present disclosure.
  • the liveness detection network may further include a weighted network layer, which may be respectively connected to the output ends of the first network branch and the second network branch, so as to perform weighted summation on the intermediate liveness detection results output by the first network branch and the second network branch, and obtain the liveness detection result of the target object.
  • in the process of training the liveness detection network, the weighted network layer can also be trained; based on the training result of the weighted network layer, the weights corresponding to the at least two intermediate liveness detection results may be determined.
  • the weight may also be determined according to the depth image information and/or the two-dimensional image information included in the two-dimensional image.
  • determining the weights based on the depth image information may be: determining the weight of the intermediate liveness detection result corresponding to the depth map based on the liveness detection distance in the depth image information.
  • when the liveness detection distance is large, the accuracy of the intermediate liveness detection result corresponding to the depth map may be low, so a smaller weight can be assigned to it; the specific correspondence between the weight and the liveness detection distance can be flexibly set according to the actual situation.
  • the two-dimensional image information may be related information included in the two-dimensional image, such as information such as the size and brightness of the two-dimensional image, which is not limited in this embodiment of the present disclosure.
  • the weights may also be determined based on the two-dimensional image information, and the implementation can likewise be flexibly determined. In some possible implementations, when the brightness is high, intermediate liveness detection based on the color image is relatively more accurate, so when the brightness of the color image exceeds a preset brightness threshold, a higher weight can be assigned to the intermediate liveness detection result corresponding to the color image; the value of the preset brightness threshold and the value of the assigned weight are not limited in the embodiments of the present disclosure and can be flexibly selected according to the actual situation. Similarly, when the size of the infrared image is large, intermediate liveness detection based on the infrared image is relatively more accurate, so a higher weight can be assigned to the intermediate liveness detection result corresponding to the infrared image.
  • the weights of different intermediate liveness detection results can thus be flexibly determined in two ways, improving the flexibility of the liveness detection process: adaptively determining the weights through a neural network realizes end-to-end liveness detection and improves its efficiency and accuracy, while adaptively determining the weights according to the actual situation of the liveness detection images makes the obtained liveness detection results more consistent with the real situation, further improving the accuracy of liveness detection.
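The image-condition-based weighting above can be sketched as follows. All threshold values and weight magnitudes here are illustrative assumptions (the disclosure leaves them to be set flexibly); the sketch only shows the direction of the adjustment: brighter color images and larger infrared images get higher weights:

```python
def adaptive_weights(color_brightness: float, infrared_size: int,
                     brightness_threshold: float = 128.0,
                     size_threshold: int = 112) -> dict:
    """Assign adaptive weights to intermediate results from image conditions."""
    weights = {"color": 1.0, "infrared": 1.0}
    if color_brightness > brightness_threshold:
        weights["color"] = 2.0      # bright color image: trust it more
    if infrared_size >= size_threshold:
        weights["infrared"] = 2.0   # large infrared image: trust it more
    return weights
```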
  • the target living body detection method may include living body detection based on a depth map and a two-dimensional image
  • S13 may include: performing liveness detection on the first detection image where the target object is located in the two-dimensional image through at least one first network branch in the liveness detection network; and performing liveness detection on the second detection image where the target object is located in the depth map through the second network branch in the liveness detection network.
  • the implementation forms of the living body detection network, the first network branch, the second network branch, the first detection image and the second detection image can refer to the above disclosed embodiments, and will not be repeated here.
  • since the first detection image can be the image where the target object is located, extracted from the two-dimensional image, inputting the first detection image into the first network branch for liveness detection can effectively reduce the amount of computation in the liveness detection process and improve the efficiency of liveness detection.
  • the first network branches may include a first network branch corresponding to the infrared image and a first network branch corresponding to the color image, in order to realize liveness detection on the first detection image where the target object is located in the two-dimensional image.
  • only the first detection image extracted from the infrared image can be input into the first network branch corresponding to the infrared image; or only the first detection image extracted from the color image can be input into the first network branch corresponding to the color image; or the first detection images extracted from the infrared image and the color image can each be input into their corresponding first network branches.
  • since the second detection image may be the image where the target object is located, extracted from the depth map, inputting the second detection image into the second network branch for liveness detection can also improve the efficiency of liveness detection.
  • when the target liveness detection method includes liveness detection based on both the depth map and the two-dimensional image, the intermediate liveness detection result corresponding to the two-dimensional image can be quickly obtained using the smaller first detection image, while the intermediate liveness detection result corresponding to the depth map is obtained using the second detection image; the liveness detection result of the target object can thus be obtained more accurately through multiple intermediate liveness detection results, improving the efficiency of liveness detection while ensuring its accuracy.
  • the target living body detection method may include living body detection based on a two-dimensional image
  • S13 may include: performing living body detection, through at least one first network branch in the living body detection network, on the first detection image where the target object is located in the two-dimensional image.
  • When the depth map is unclear or the detection distance is long, the depth map can be omitted and living body detection can be realized through the two-dimensional image alone, thereby improving the overall accuracy of living body detection.
  • The target living body detection method may include living body detection based on the two-dimensional image.
  • S13 may include: acquiring the first detection image where the target object is located in the two-dimensional image; obtaining, based on the second image correspondence combined with the first detection image, the third detection image where the target object is located in the original two-dimensional image; and performing living body detection on the third detection image through at least one first network branch in the living body detection network.
  • the second image corresponding relationship may be an image coordinate transformation relationship between the two-dimensional image and the original image.
  • The two-dimensional image may be an image obtained after registration processing is performed on the original two-dimensional image, so the second image correspondence can be determined during the registration process.
  • The first detection image is the image where the target object is located, extracted from the two-dimensional image; based on the second image correspondence, the third detection image where the target object is located can be extracted from the original two-dimensional image.
  • The coordinate position of the first detection image in the two-dimensional image can be transformed through the second image correspondence to obtain the coordinate position of the target object in the original two-dimensional image, and the original two-dimensional image can be cropped based on that coordinate position to obtain the third detection image.
  • the third detection image may also include the third detection image extracted from the original infrared image.
  • Registration processing such as cropping and/or scaling may be performed on the original two-dimensional image, so the resolution or definition of the two-dimensional image may be lower than that of the original two-dimensional image.
  • The third detection image obtained from the original two-dimensional image can therefore have higher image quality than the first detection image extracted from the two-dimensional image, and living body detection based on the third detection image can also be more accurate.
  • The higher-resolution third detection image cropped from the original infrared image and/or the original color image can be used for living body detection, effectively improving detection accuracy; a more accurate living body detection result can thus be obtained even when the depth map is omitted.
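The coordinate transformation described above can be sketched as follows, under the assumption that the registration reduces to a known crop offset and scale factor (the names and the exact form of the second image correspondence are illustrative, not from the disclosure):

```python
def map_box_to_original(box, crop_offset=(0, 0), scale=1.0):
    """Transform a detection box from the registered 2D image back to the
    original 2D image via the second image correspondence (scale + crop offset),
    so the third detection image can be cropped at full resolution."""
    x1, y1, x2, y2 = box
    ox, oy = crop_offset
    # Undo the scaling first, then undo the cropping.
    return (int(x1 / scale + ox), int(y1 / scale + oy),
            int(x2 / scale + ox), int(y2 / scale + oy))
```

The returned box is then used to crop the original infrared or color image, yielding a third detection image of higher quality than the first detection image.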
  • the living body detection method proposed by the embodiment of the present disclosure may further include: when the target object is determined to be a living body based on the living body detection result, identifying the target object according to the two-dimensional image.
  • Identifying the target object based on the two-dimensional image may include identification based on the color image, identification based on the infrared image, or identification based on both the color image and the infrared image.
  • the specific manner of identity recognition is not limited in the embodiment of the present disclosure, and any process that can realize identity recognition based on images can be used as the realization manner of identity recognition.
  • the identity recognition of the target object can be realized through any neural network with identity recognition function.
  • the process may end without identifying the target object.
  • the target object can be identified when it is determined that the target object is a living body, which saves the identification process when the target object is not a living body, and improves the efficiency and confidence of identification.
  • Fig. 5 shows a block diagram of a living body detection device 20 according to an embodiment of the present disclosure.
  • the device includes: an image acquisition module 21, configured to acquire a depth map and a two-dimensional image of a target object.
  • the detection mode determination module 22 is configured to determine a target living body detection mode based on the depth image information included in the depth map.
  • the living body detection module 23 is configured to detect the living body of the target object through the target living body detection method based on the two-dimensional image, and obtain the living body detection result of the target object.
  • The detection mode determination module is used to: obtain the target size of the target object in the depth map based on the depth image information contained in the depth map; when the target size is smaller than a preset size threshold, the target living body detection method includes living body detection based on the two-dimensional image; or, when the target size is greater than or equal to the preset size threshold, the target living body detection method includes living body detection based on the depth map and the two-dimensional image.
  • The detection mode determination module is further used to: perform target object detection on the two-dimensional image to obtain the first detection image where the target object is located in the two-dimensional image; determine the first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map; obtain, according to the first image correspondence combined with the first detection image, the second detection image where the target object is located in the depth map; and determine the target size of the target object in the depth map according to the size of the second detection image.
  • The detection mode determination module is further configured to: perform image quality detection on the first detection image to obtain an image quality detection result; and, when the image quality detection result is greater than a preset quality threshold, determine the first image correspondence between the depth map and the two-dimensional image based on the depth image information contained in the depth map.
  • The detection mode determination module is configured to: obtain the living body detection distance of the target object based on the depth image information contained in the depth map; when the living body detection distance is greater than a preset distance threshold, the target living body detection method includes living body detection based on the two-dimensional image; when the living body detection distance is less than or equal to the preset distance threshold, the target living body detection method includes living body detection based on the depth map and the two-dimensional image.
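The mode selection performed by the detection mode determination module can be sketched as follows (the threshold values, units, and names are illustrative assumptions, not values from the disclosure):

```python
def determine_detection_mode(target_size=None, detection_distance=None,
                             size_threshold=32, distance_threshold=1.5):
    """Choose the target living body detection method from depth image information.

    A small target size (or a large detection distance) suggests the depth map
    is unreliable at that range, so only the 2D image is used; otherwise the
    depth map and the 2D image are used together.
    """
    if target_size is not None:
        return "2d" if target_size < size_threshold else "depth+2d"
    if detection_distance is not None:
        return "2d" if detection_distance > distance_threshold else "depth+2d"
    raise ValueError("need target_size or detection_distance")
```

Either criterion (target size in pixels, or detection distance) can drive the decision, matching the two module variants described above.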
  • the two-dimensional image includes an infrared image and/or a color image
  • The target living body detection method includes living body detection based on at least two kinds of images, the at least two kinds of images including at least two of the depth map, the infrared image, and the color image; the living body detection module is used to: perform living body detection on the target object through the target living body detection method based on the at least two kinds of images, to obtain at least two intermediate living body detection results, where the at least two intermediate results correspond respectively to the at least two kinds of images; obtain weights corresponding to the at least two intermediate living body detection results; and obtain the living body detection result of the target object based on the weights and the at least two intermediate living body detection results.
  • The living body detection module is further configured to: determine the weights corresponding to the at least two intermediate living body detection results based on the training results of a weighted network layer in the living body detection network, where the living body detection network is used to perform living body detection on the target object through the target living body detection method; or, determine the weights respectively according to the depth image information and/or the two-dimensional image information contained in the two-dimensional image.
  • The target living body detection method includes living body detection based on the depth map and the two-dimensional image; the living body detection module is configured to: perform living body detection on the first detection image where the target object is located in the two-dimensional image through at least one first network branch in the living body detection network; and perform living body detection on the second detection image where the target object is located in the depth map through the second network branch in the living body detection network.
  • The image acquisition module is used to: acquire the depth map and the original two-dimensional image of the target object; and, based on the depth map, perform registration processing on the original two-dimensional image to obtain a two-dimensional image registered with the depth map, where the registration processing includes cropping and/or scaling.
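A minimal sketch of such a registration step, assuming it reduces to a crop followed by a nearest-neighbor rescale to the depth map resolution (a deliberate simplification; a real implementation would use the calibration between the depth and 2D sensors):

```python
def register_to_depth(original, crop_box, out_h, out_w):
    """Crop the original 2D image, then rescale it to the depth map
    resolution (out_h x out_w) so the two images are pixel-aligned."""
    x1, y1, x2, y2 = crop_box
    cropped = [row[x1:x2] for row in original[y1:y2]]
    h, w = len(cropped), len(cropped[0])
    # Nearest-neighbor resampling to the target resolution.
    return [[cropped[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]
```

Because the crop and scale parameters are known here, they also directly give the second image correspondence used later to map detections back to the original image.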
  • The target living body detection method includes living body detection based on the two-dimensional image; the living body detection module is used to: obtain the first detection image where the target object is located in the two-dimensional image; obtain, based on the second image correspondence between the two-dimensional image and the original two-dimensional image combined with the first detection image, the third detection image where the target object is located in the original two-dimensional image; and perform living body detection on the third detection image through at least one first network branch in the living body detection network.
  • the device is further configured to: identify the target object according to the two-dimensional image when it is determined that the target object is a living body based on the living body detection result.
  • The functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the description of those method embodiments, which, for brevity, is not repeated here.
  • Fig. 6 shows a schematic diagram of an application example according to the present disclosure.
  • an embodiment of the present disclosure proposes a living body detection method, and the living body detection process may include the following process:
  • S31: Collect images of the person object to be detected through the camera, and obtain from the camera an original depth map D, an original infrared image I, and an original RGB image V of the person object.
  • The adaptive weights can be obtained through the living body detection network, or can be obtained from the image information in the original depth map D, the original infrared image I, and the original RGB image V, where the image information can include the living body detection distance, the image brightness, the image size, and the like.
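Putting the pieces together, the overall flow of this application example might be sketched as follows (all names, thresholds, and the fusion rule are illustrative assumptions, not the disclosed implementation):

```python
def liveness_pipeline(target_size, scores_2d, score_depth, weights,
                      size_threshold=32, live_threshold=0.5):
    """End-to-end sketch: pick the detection mode from depth information,
    gather the per-modality intermediate scores, and fuse them adaptively.

    target_size -- size of the target in the depth map (pixels, illustrative)
    scores_2d   -- dict of intermediate scores from 2D branches, e.g. {"ir": ..., "color": ...}
    score_depth -- intermediate score from the depth (second) network branch
    weights     -- dict of adaptive weights per modality
    """
    if target_size < size_threshold:
        scores = dict(scores_2d)            # depth unreliable: 2D-only mode
    else:
        scores = dict(scores_2d)
        scores["depth"] = score_depth       # depth + 2D mode
    total = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused >= live_threshold
```

The same fusion code serves both modes, since omitting the depth modality simply drops its term (and its weight) from the weighted average.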
  • The living body detection method proposed in this application example of the disclosure can support a longer recognition distance and is easier to apply; it has higher spoofing defense and detection accuracy, as well as a higher pass rate for real persons.
  • In addition, the living body detection network proposed in this application example can be obtained by modifying a related living body detection network model, and is easy to implement.
  • The second network branch of the living body detection network, based on the depth map, can be trained separately, making the training method more flexible and easier to carry out.
  • The living body detection method proposed in this application example of the present disclosure can be used in scenarios such as face access control and face payment, facilitating face-swiping passage and payment at longer distances and making them more convenient; at the same time, the higher defense can also improve the security of access control and property.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • the computer readable storage medium may be a non-transitory computer readable storage medium.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • An embodiment of the present disclosure also provides a computer program product including computer-readable codes; when the computer-readable codes run on a device, a processor in the device executes instructions for implementing the living body detection method provided in any of the above embodiments.
  • the embodiments of the present disclosure also provide another computer program product, which is used for storing computer-readable instructions, and when the instructions are executed, the computer executes the operations of the living body detection method provided in any of the above-mentioned embodiments.
  • Embodiments of the present disclosure also provide another computer program product, including computer-readable codes, or a volatile or non-volatile computer-readable storage medium carrying the computer-readable codes; when the computer-readable codes run in a processor of an electronic device, the processor executes instructions for implementing the living body detection method provided in any of the above embodiments.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • FIG. 7 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • The sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components (such as the display and keypad of the electronic device 800); the sensor component 814 can also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods described above.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 8 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • How the modules contained in the living body detection device 20 correspond to the hardware modules contained in electronic equipment provided as a terminal, server, or other form of device is a flexible decision, and is not limited to the disclosed embodiments.
  • In one example, each module contained in the living body detection device 20 may correspond to the processing component 802 in an electronic device in the form of a terminal; in another example, each module may correspond to the processing component 1922 in an electronic device in the form of a server.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in a groove with instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, field-programmable gate array (FPGA), or programmable logic array (PLA), can be personalized with state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • In one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK) and the like.


Abstract

The present disclosure relates to a living body detection method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a depth map and a two-dimensional image of a target object; determining a target living body detection method on the basis of depth image information included in the depth map; and, on the basis of the two-dimensional image, performing living body detection on the target object by means of the target living body detection method, and obtaining a living body detection result for the target object.
PCT/CN2021/126438 2021-06-30 2021-10-26 Procédé et appareil de détection de corps vivant, dispositif électronique et support d'enregistrement WO2023273050A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110737711.7A CN113469036A (zh) 2021-06-30 2021-06-30 活体检测方法及装置、电子设备和存储介质
CN202110737711.7 2021-06-30

Publications (1)

Publication Number Publication Date
WO2023273050A1 true WO2023273050A1 (fr) 2023-01-05

Family

ID=77876653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126438 WO2023273050A1 (fr) 2021-06-30 2021-10-26 Procédé et appareil de détection de corps vivant, dispositif électronique et support d'enregistrement

Country Status (2)

Country Link
CN (1) CN113469036A (fr)
WO (1) WO2023273050A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469036A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 活体检测方法及装置、电子设备和存储介质

Citations (8)

Publication number Priority date Publication date Assignee Title
CN107832677A (zh) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face recognition method and system based on living body detection
US20190034702A1 (en) * 2017-07-26 2019-01-31 Baidu Online Network Technology (Beijing) Co., Ltd. Living body detecting method and apparatus, device and storage medium
US20190347823A1 (en) * 2018-05-10 2019-11-14 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, system, electronic device, and storage medium
CN110852134A (zh) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, apparatus and system, electronic device and storage medium
CN111079576A (zh) * 2019-11-30 2020-04-28 腾讯科技(深圳)有限公司 Living body detection method, apparatus, device and storage medium
CN111382639A (zh) * 2018-12-30 2020-07-07 深圳市光鉴科技有限公司 Living body face detection method and apparatus
CN111666901A (zh) * 2020-06-09 2020-09-15 创新奇智(北京)科技有限公司 Living body face detection method and apparatus, electronic device and storage medium
CN113469036A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805024B (zh) * 2018-04-28 2020-11-24 Oppo广东移动通信有限公司 Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109711243B (zh) * 2018-11-01 2021-02-09 长沙小钴科技有限公司 Static three-dimensional face liveness detection method based on deep learning
CN109871773A (zh) * 2019-01-21 2019-06-11 深圳市云眸科技有限公司 Living body detection method and apparatus, and access control machine
CN111582155B (zh) * 2020-05-07 2024-02-09 腾讯科技(深圳)有限公司 Living body detection method and apparatus, computer device, and storage medium
CN112232323B (zh) * 2020-12-15 2021-04-16 杭州宇泛智能科技有限公司 Face verification method and apparatus, computer device, and storage medium
CN113011385B (zh) * 2021-04-13 2024-07-05 深圳市赛为智能股份有限公司 Silent face liveness detection method and apparatus, computer device, and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190034702A1 (en) * 2017-07-26 2019-01-31 Baidu Online Network Technology (Beijing) Co., Ltd. Living body detecting method and apparatus, device and storage medium
CN107832677A (zh) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face recognition method and system based on living body detection
US20190347823A1 (en) * 2018-05-10 2019-11-14 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, system, electronic device, and storage medium
CN110852134A (zh) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, apparatus and system, electronic device and storage medium
CN111382639A (zh) * 2018-12-30 2020-07-07 深圳市光鉴科技有限公司 Living body face detection method and apparatus
CN111079576A (zh) * 2019-11-30 2020-04-28 腾讯科技(深圳)有限公司 Living body detection method, apparatus, device and storage medium
CN111666901A (zh) * 2020-06-09 2020-09-15 创新奇智(北京)科技有限公司 Living body face detection method and apparatus, electronic device and storage medium
CN113469036A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN113469036A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
US20210097715A1 (en) Image generation method and device, electronic device and storage medium
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
CN109889724B (zh) Image blurring method and apparatus, electronic device and readable storage medium
WO2021031609A1 (fr) Living body detection method and device, electronic apparatus, and storage medium
CN108154465B (zh) Image processing method and apparatus
JP2016531362A (ja) Skin color adjustment method, skin color adjustment apparatus, program, and recording medium
CN109859144B (zh) Image processing method and apparatus, electronic device and storage medium
CN110944230B (zh) Method and apparatus for adding video special effects, electronic device and storage medium
CN108154466B (zh) Image processing method and apparatus
CN110569822A (zh) Image processing method and apparatus, electronic device and storage medium
CN110555930B (zh) Door lock control method and apparatus, electronic device and storage medium
CN112219224B (zh) Image processing method and apparatus, electronic device and storage medium
WO2022077970A1 (fr) Special effect adding method and apparatus
CN105528765A (zh) Image processing method and apparatus
CN109726614A (zh) 3D stereoscopic imaging method and apparatus, readable storage medium, and electronic device
CN112184787A (zh) Image registration method and apparatus, electronic device and storage medium
WO2022151686A1 (fr) Scene image display method and apparatus, device, storage medium, program and product
WO2020233201A1 (fr) Icon position determination method and device
CN108040204A (zh) Multi-camera-based image capturing method and apparatus, and storage medium
EP3816927B1 (fr) Method and apparatus for training image processing models, and recording medium
CN106982327A (zh) Image processing method and device
CN115205172A (zh) Image processing method and apparatus, electronic device and storage medium
WO2023273050A1 (fr) Living body detection method and apparatus, electronic device and storage medium
US11989863B2 (en) Method and device for processing image, and storage medium
US11265529B2 (en) Method and apparatus for controlling image display

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21947977

Country of ref document: EP

Kind code of ref document: A1