CN113255587A - Face-brushing payment system based on depth camera - Google Patents


Info

Publication number
CN113255587A
CN113255587A (application CN202110701197.1A)
Authority
CN
China
Prior art keywords
image
face
module
living body
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110701197.1A
Other languages
Chinese (zh)
Other versions
CN113255587B (en)
Inventor
朱力
吕方璐
汪博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shanghai Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Guangjian Aoshen Technology Co Ltd and Shenzhen Guangjian Technology Co Ltd
Priority to CN202110701197.1A
Publication of CN113255587A
Application granted
Publication of CN113255587B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a face-brushing payment system based on a depth camera, comprising a depth camera module and a mobile phone module. The depth camera module comprises an image acquisition module, a face detection module, a depth reconstruction module and a living body detection module. The image acquisition module acquires an RGB image, an IR image and an infrared light spot image of a target face; the depth reconstruction module performs depth reconstruction on the target face according to the infrared light spot image and the RGB image to generate a depth face image; the living body detection module performs living body detection on any one or more of the infrared light spot image, the IR image and the depth face image and outputs a living body face detection result. The mobile phone module receives the living body face detection result and the face region, recognizes the face region when the living body face detection result passes, and determines and displays the payment account information corresponding to the face region. The invention facilitates human-computer interaction during face-brushing payment and improves face-brushing payment efficiency.

Description

Face-brushing payment system based on depth camera
Technical Field
The invention relates to the field of 3D imaging, in particular to a face brushing payment system based on a depth camera.
Background
As the core device of a face-brushing payment terminal, the face recognition camera module plays a key role. At present, relatively mature face recognition camera modules adopt either a structured light scheme or a ToF scheme.
ToF (Time of Flight) is a 3D imaging technique in which measurement light is emitted by a projector, reflected by the target face, and returned to a receiver; the spatial distance from the object to the sensor is then obtained from the propagation time of the measurement light along this path. Common ToF techniques include the single-point scanning projection method and the area light projection method.
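The time-to-distance relationship described above can be sketched numerically. A minimal illustration (not from the patent), assuming the round-trip propagation time is measured directly:

```python
# Minimal sketch of the ToF ranging principle: the sensor-to-object
# distance is half the round-trip path travelled at the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Spatial distance from object to sensor given the round-trip time."""
    return C * round_trip_time_s / 2.0

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 metre.
```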
The structured light scheme is based on the optical triangulation principle. An optical projector projects structured light of a known pattern onto the object surface, forming a light-stripe image modulated by the shape of the surface under measurement. This image is captured by a camera at another position, yielding a two-dimensional distorted image of the light stripes. The degree of distortion depends on the relative position between the optical projector and the camera, and on the object's surface profile (height). Intuitively, the displacement (or offset) along a stripe is proportional to the surface height, a kink indicates a change of plane, and a discontinuity indicates a physical gap in the surface. When the relative position of the projector and camera is fixed, the three-dimensional profile of the object surface can be reconstructed from the coordinates of the distorted two-dimensional stripe image.
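In the simplest pinhole model, the proportionality between stripe offset and height reduces to the classic triangulation relation depth = focal length × baseline / disparity. A hedged sketch; the parameter names are illustrative, not from the patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulated depth for one stripe point.

    focal_px     -- camera focal length in pixels
    baseline_m   -- projector-to-camera baseline in metres
    disparity_px -- observed stripe offset in pixels
    """
    return focal_px * baseline_m / disparity_px

# e.g. a 30 px offset with a 600 px focal length and a 5 cm baseline -> 1 m
```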
The depth camera module has broadened the scope of front-end perception: based on 3D face recognition, it effectively solves the spoofing (fake body) attacks encountered by 2D face recognition and the reduced recognition accuracy under extreme conditions. Its effectiveness has been recognized by the market, demand is strong, and it can be applied to scenarios such as door locks, access control and payment. However, when applied to payment scenarios, the prior art provides no corresponding solution for improving the capture efficiency of face images and achieving fast payment.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a face brushing payment system based on a depth camera.
The face brushing payment system based on the depth camera provided by the invention comprises a depth camera module and a mobile phone module, wherein:
the depth camera module comprises an image acquisition module, a face detection module, a depth reconstruction module and a living body detection module;
the image acquisition module is used for acquiring an RGB image, an IR image and an infrared light spot image of a target face;
the face detection module is used for carrying out face detection on the RGB image and/or the IR image to determine a face area;
the depth reconstruction module is used for performing depth reconstruction on the target face according to the infrared spot image and the RGB image to generate a depth face image;
the living body detection module is used for carrying out living body detection on any one or more of the infrared light spot image, the IR image and the depth face image and outputting a living body face detection result after the living body detection is passed;
and the mobile phone module is used for receiving the living body face detection result and the face area, identifying the face area when the living body face detection result passes, and determining and displaying corresponding payment account information of the face area.
Preferably, the depth camera module and the mobile phone module have any one of the following connection or positional relationships:
-the depth camera module and the handset module are connected via a bluetooth connection;
-the depth camera module and the handset module are electrically connected by a connection line;
-the depth camera module is arranged on the display screen side of the handset module;
-the depth camera module is arranged on the underside of the display screen of the handset module;
the depth camera module is arranged on the opposite side of the display screen of the handset module.
Preferably, the image acquisition module comprises the following parts:
a light projecting part for projecting dot matrix light to the target face;
the image acquisition part is used for alternately acquiring an IR image and an infrared light spot image of the target face, and the infrared light spot image is formed by dot matrix light reflected by the target face;
and the RGB image acquisition part is used for acquiring the RGB image of the target face through the RGB camera.
Preferably, the system further comprises an expression detection module;
the expression detection module is used for carrying out expression detection on the face area when the face area is detected in the RGB image and the IR image, and determining the expression type of the face area;
and the depth reconstruction module is used for performing depth reconstruction on the target face according to the infrared spot image and the RGB image to generate a depth face image when the expression type is any expression type in a preset expression type set.
Preferably, the face detection module comprises the following parts:
the human face detection part is used for carrying out human face detection on the RGB image and the IR image to generate a human face detection result;
and the detection result analysis part is used for triggering the expression detection module when the face areas are detected in the RGB images and the IR images, and triggering the image acquisition module when the face areas are not detected in the RGB images and the IR images.
Preferably, the system further comprises an image quality detection module;
the image quality detection module is used for carrying out quality detection on the RGB image and the IR image;
the living body detection module comprises a first living body detection module and a second living body detection module;
the first living body detection module is used for carrying out living body detection on the IR image when the RGB image and the IR image meet the preset quality standard;
and the second living body detection module is used for carrying out living body detection on the depth face image when the IR image passes the living body detection, and outputting a living body face detection result after the depth face image passes the living body detection.
Preferably, the expression detection module includes the following parts:
an expression type set storage part for storing a preset expression type set;
and the expression type judging part is used for acquiring the expression type set, judging whether the expression type belongs to the preset expression type set, triggering the depth reconstruction module when it does, and sending first prompt information and triggering the image acquisition module when it does not.
Preferably, the image quality detection module includes the following parts:
an image quality standard storage section for storing a preset image quality standard;
and the image quality standard judging module is used for acquiring the image quality standard, judging whether the RGB image and the IR image meet the preset image quality standard or not, triggering the living body detection module when the RGB image and the IR image meet the preset image quality standard, and sending second prompt information and triggering the image acquisition module when the RGB image and the IR image do not meet the preset image quality standard.
Preferably, the first living body detection module comprises the following parts:
an IR image living body detection part for performing living body detection on the IR image to generate an IR image living body detection result;
and the first living body detection result judging part is used for triggering the second living body detection module when the IR image living body detection result is judged to pass the living body detection, and sending third prompt information and triggering the image acquisition module when the IR image living body detection result is judged to not pass the living body detection.
Preferably, the second living body detection module comprises the following parts:
the depth image living body detection part is used for carrying out living body detection on the depth face image to generate a depth image living body detection result;
and the second living body detection result judging part is used for outputting a living body face detection result when judging that the living body detection result of the depth image passes the living body detection, and sending fourth prompt information and triggering the image acquisition module when judging that the living body detection result of the depth image does not pass the living body detection.
Preferably, the depth reconstruction module comprises the following parts:
the light spot extracting part is used for preprocessing the infrared light spot image during the process of face detection so as to extract a plurality of light spot areas in the infrared light spot image;
a face region acquisition unit for acquiring a face region from the RGB image filtered by the expression type;
and the depth face image generating part is used for determining a light spot region corresponding to the face region according to the face region and generating a depth face image of the face region according to the light spot region corresponding to the face region.
Compared with the prior art, the invention has the following beneficial effects:
the depth camera module is electrically connected with the mobile phone module through Bluetooth or a connecting wire, and can detect the face area of a target face, which is acquired by the depth camera module, from RGB (red, green and blue) images, IR (infrared) images and infrared light spot images, and send the detected face area to the mobile phone module as a living body face detection result, so that the mobile phone module can carry out payment verification, human-computer interaction during face brushing payment can be facilitated, and the face brushing payment efficiency is improved;
according to the invention, the face detection, expression priority, depth reconstruction, IR image living body detection and depth face image living body detection are sequentially carried out on the collected RGB image, IR image and infrared light spot image, so that the face image is rapidly captured, and the face brushing payment efficiency is improved;
according to the invention, the face detection and the expression optimization are sequentially carried out on the RGB image subjected to the depth reconstruction, the depth reconstruction is carried out only according to the face area and the infrared light spot image in the RGB image after the expression optimization, and the infrared light spot image is preprocessed before reconstruction, so that the depth reconstruction efficiency is improved;
according to the invention, the depth reconstruction is carried out on the RGB image and the IR image after the expression is optimized, failure caused by poor image quality in the depth reconstruction is avoided, and the depth face image can be generated after the living body detection is carried out on the IR image, so that the snapshot process can be executed compactly, the time of the whole snapshot process is shortened, and the face brushing payment efficiency is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. It is obvious that the drawings described below show only embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort. Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of the working principle of a face-brushing payment system based on a depth camera in the embodiment of the present invention;
FIG. 2 is a block diagram of a face-brushing payment system based on a depth camera according to an embodiment of the present invention;
FIG. 3 is a block diagram of an image capture module according to an embodiment of the present invention;
FIG. 4 is a block diagram of a depth reconstruction module according to an embodiment of the present invention; and
FIG. 5 is a block diagram of a living body detection module according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention; all such variations fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The invention provides a depth camera-based face-brushing payment system, and aims to solve the above problems in the prior art.
The technical solutions of the present invention, and how they solve the above technical problems, are described below through specific embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a working principle of a depth camera-based face-brushing payment system in an embodiment of the present invention, and as shown in fig. 1, the depth camera-based face-brushing payment system provided by the present invention includes the following modules:
the depth camera module comprises an image acquisition module, a face detection module, a depth reconstruction module and a living body detection module;
the image acquisition module is used for acquiring an RGB image, an IR image and an infrared light spot image of a target face;
the face detection module is used for carrying out face detection on the RGB image and/or the IR image to determine a face area;
the depth reconstruction module is used for performing depth reconstruction on the target face according to the infrared spot image and the RGB image to generate a depth face image;
the living body detection module is used for carrying out living body detection on the IR image and/or the depth face image and outputting a living body face detection result after the living body detection is passed;
and the mobile phone module is used for receiving the living body face detection result and the face area, identifying the face area when the living body face detection result passes, and determining and displaying corresponding payment account information of the face area.
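One plausible ordering of the stages these modules perform (the embodiments below describe slight variants) can be sketched as follows. All function names here are illustrative stand-ins, not APIs from the patent:

```python
# Hypothetical sketch of the capture-and-verify pipeline. Each stage that
# fails returns None, which corresponds to prompting the user and
# re-triggering the image acquisition module.
def run_pipeline(frames, detect_face, check_expression, check_quality,
                 liveness_ir, reconstruct_depth, liveness_depth):
    face = detect_face(frames["rgb"], frames["ir"])
    if face is None:
        return None                      # no face found: re-acquire
    if not check_expression(face):
        return None                      # first prompt, re-acquire
    if not check_quality(frames["rgb"], frames["ir"]):
        return None                      # second prompt, re-acquire
    if not liveness_ir(frames["ir"]):
        return None                      # third prompt, re-acquire
    depth = reconstruct_depth(frames["spot"], frames["rgb"], face)
    if not liveness_depth(depth):
        return None                      # fourth prompt, re-acquire
    # Result and face region are handed to the mobile phone module.
    return {"face_region": face, "result": "live"}
```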
In the embodiment of the invention, the face brushing payment system based on the depth camera is provided with a depth camera module and a mobile phone module connected via Bluetooth or a data line. The face region detected from the RGB image, IR image and infrared light spot image of the target face acquired by the depth camera module can be sent, together with the living body face detection result, to the mobile phone module for payment verification, which facilitates human-computer interaction during face-brushing payment and improves face-brushing payment efficiency.
In the embodiment of the present invention, the mobile phone module and the depth camera module may be two separate parts, or the depth camera module and the mobile phone module may be disposed as an integral structure, such as disposing the depth camera module on the display screen side of the mobile phone module, the lower side of the display screen, or the opposite side of the display screen.
In the embodiment of the present invention, the living body face detection result may be the depth face image that passed living body detection together with the IR image and the RGB image that passed quality detection, or it may simply be a flag indicating that living body detection succeeded.
Fig. 2 is a schematic block diagram of a depth camera-based face brushing payment system according to a modification of the present invention, and as shown in fig. 2, the depth camera-based face brushing payment system further includes an expression detection module and an image quality detection module;
the expression detection module is used for carrying out expression detection on the face area when the face area is detected in the RGB image and the IR image, and determining the expression type of the face area;
and the depth reconstruction module is used for performing depth reconstruction on the target face according to the infrared spot image and the RGB image to generate a depth face image when the expression type is any expression type in a preset expression type set.
In the embodiment of the present invention, face region detection may frame the face region in the RGB image or the IR image and determine its pixel range. Expression detection may be performed by a neural-network-based expression detection model.
In an embodiment of the present invention, the expression type set includes a neutral expression and a smile. The first prompt information may be "hold your head straight", "keep a neutral expression", "please look straight ahead", and the like.
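The gating logic on the expression type can be sketched as follows; the label strings and names are assumptions for illustration, not values from the patent:

```python
# Hypothetical expression gate: only a neutral face or a smile lets the
# pipeline proceed to depth reconstruction; any other expression prompts
# the user and re-triggers image acquisition.
ALLOWED_EXPRESSIONS = {"neutral", "smile"}  # assumed label names

def expression_gate(expression_type: str) -> str:
    if expression_type in ALLOWED_EXPRESSIONS:
        return "trigger_depth_reconstruction"
    return "send_first_prompt_and_reacquire"
```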
The image quality detection module is used for carrying out quality detection on the RGB image and the IR image;
the living body detection module comprises a first living body detection module and a second living body detection module;
the first living body detection module is used for carrying out living body detection on the IR image when the RGB image and the IR image meet the preset quality standard;
and the second living body detection module is used for carrying out living body detection on the depth face image when the IR image passes the living body detection, and outputting a living body face detection result after the depth face image passes the living body detection.
Fig. 3 is a schematic block diagram of an image capturing module according to an embodiment of the present invention, and as shown in fig. 3, the image capturing module includes the following components:
a light projecting part for projecting dot matrix light to the target face;
the image acquisition part is used for alternately acquiring an IR image and an infrared light spot image of the target face, and the infrared light spot image is formed by dot matrix light reflected by the target face;
the RGB image acquisition part is used for acquiring an RGB image of the target face through an RGB camera;
the image previewing part is used for acquiring an RGB image, an IR image and an infrared spot image of the target face and previewing the RGB image in real time;
the processor module is used for controlling the light projection part, the image acquisition part, the RGB image acquisition part and the image preview part, and can also control the operation of the face detection module, the depth reconstruction module, the living body detection module, the expression detection module and the image quality detection module.
In the embodiment of the invention, the infrared image acquisition part and the RGB image acquisition part are implemented by a depth camera, which captures the RGB image, the IR image and the infrared light spot image.
The depth camera comprises a discrete beam projector, a surface light source projector, an RGB camera and an infrared camera:
A discrete beam projector for projecting a plurality of discrete collimated beams toward a target face;
the surface light source projector is used for projecting floodlight to the target human face;
and the infrared camera is used for receiving the discrete collimated light beam reflected by the target face, acquiring an infrared light spot image of the surface of the target face according to the reflected discrete collimated light beam, receiving floodlight reflected by the target face and acquiring an IR image of the surface of the target face according to the reflected floodlight.
And the RGB camera is used for acquiring RGB images of the target face.
In the embodiment of the invention, the face detection module comprises the following parts:
the human face detection part is used for carrying out human face detection on the RGB image and the IR image to generate a human face detection result;
and the detection result analysis part is used for triggering the expression detection module when the face areas are detected in the RGB images and the IR images, and triggering the image acquisition module when the face areas are not detected in the RGB images and the IR images.
In the embodiment of the invention, the expression detection module comprises the following parts:
an expression type set storage part for storing a preset expression type set;
and the expression type judging part is used for acquiring the expression type set, judging whether the expression type belongs to the preset expression type set, triggering the depth reconstruction module when it does, and sending first prompt information and triggering the image acquisition module when it does not.
The image quality detection module comprises the following parts:
an image quality standard storage section for storing a preset image quality standard;
and the image quality standard judging module is used for acquiring the image quality standard, judging whether the RGB image and the IR image meet the preset image quality standard or not, triggering the living body detection module when the RGB image and the IR image meet the preset image quality standard, and sending second prompt information and triggering the image acquisition module when the RGB image and the IR image do not meet the preset image quality standard.
In an embodiment of the present invention, the image quality standard may be a contrast threshold, which may be set to 150:1; when the contrast of the RGB image and the IR image is greater than the contrast threshold, the RGB image and the IR image are determined to meet the preset image quality standard.
In the embodiment of the present invention, the image quality standard may also adopt a PSNR (Peak Signal to Noise Ratio) threshold; the PSNR threshold may be set to 30 dB, and when the PSNR of the RGB image and the IR image is greater than the PSNR threshold, the RGB image and the IR image are determined to meet the preset image quality standard.
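PSNR itself is a standard metric computed against a reference signal. A minimal sketch of checking an image against the 30 dB threshold mentioned above, using flat pixel lists for brevity (the function names are illustrative, not from the patent):

```python
import math

PSNR_THRESHOLD_DB = 30.0  # threshold value from the embodiment above

def psnr(reference, candidate, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, candidate)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def meets_quality_standard(reference, candidate):
    return psnr(reference, candidate) > PSNR_THRESHOLD_DB
```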
The second prompt information may be an instruction such as "increase the exposure time", "decrease the exposure time", or "perform backlight compensation".
Fig. 4 is a schematic block diagram of a depth reconstruction module according to an embodiment of the present invention, and as shown in fig. 4, the depth reconstruction module includes the following components:
the light spot extracting part is used for preprocessing the infrared light spot image during the process of face detection so as to extract a plurality of light spot areas in the infrared light spot image;
a face region acquisition unit for acquiring a face region from the RGB image filtered by the expression type;
and the depth face image generating part is used for determining a light spot region corresponding to the face region according to the face region and generating a depth face image of the face region according to the light spot region corresponding to the face region.
In an embodiment of the invention, the depth face image is generated using the structured light technique; specifically, the depth face image of the face region is obtained from the deformation or displacement of the light spot regions, yielding the surface depth information of the face region to be detected.
In an embodiment of the invention, the depth image of the face region may instead be obtained from the time delay or phase difference among a plurality of infrared light spot images, i.e., computed using the TOF (Time of Flight) technique.
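The two depth-recovery principles mentioned above can be illustrated with back-of-the-envelope formulas. This is a sketch under an assumed pinhole-camera model with illustrative parameter values; the patent gives no concrete equations:

```python
SPEED_OF_LIGHT_MM_PER_NS = 299.792458  # speed of light, mm per nanosecond

def depth_from_spot_shift(disparity_px, focal_px, baseline_mm):
    """Structured light: triangulate depth from the lateral displacement
    of a projected spot, Z = f * b / d (pinhole model).

    focal_px: focal length in pixels; baseline_mm: projector-camera
    baseline; disparity_px: observed spot shift."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def depth_from_time_of_flight(round_trip_ns):
    """TOF: depth is half the round-trip distance travelled by the pulse."""
    return SPEED_OF_LIGHT_MM_PER_NS * round_trip_ns / 2.0

# A spot shifted 10 px with a 500 px focal length and 40 mm baseline:
print(depth_from_spot_shift(10, 500, 40))  # 2000.0 mm
```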
FIG. 5 is a schematic block diagram of a living body detection module according to an embodiment of the invention; as shown in FIG. 5, the first living body detection module includes the following parts:
an IR image living body detection part for performing living body detection on the IR image to generate an IR image living body detection result;
and a first living body detection result judging part for triggering the second living body detection module when the IR image living body detection result passes the living body detection, and sending third prompt information and triggering the image acquisition module when it does not pass.
The third prompt information may be a text or voice message such as "living body detection failed".
In an embodiment of the invention, living body detection may be performed on the IR image by a neural-network-based living body detection model trained on living body IR images, and on the depth face image by another neural-network-based living body detection model trained on living body depth face images.
The second living body detecting module includes the following parts:
the depth image living body detection part is used for carrying out living body detection on the depth face image to generate a depth image living body detection result;
and a second living body detection result judging part for outputting a living body face detection result when the depth image living body detection result passes the living body detection, and sending fourth prompt information and triggering the image acquisition module when it does not pass.
The fourth prompt information may be a text or voice message such as "living body detection failed".
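The two-stage flow of the first and second living body detection modules reduces to the following control logic. The boolean inputs stand in for the neural-network detectors described above, and the return strings naming the prompts are illustrative:

```python
def liveness_pipeline(ir_passed, depth_passed):
    """Sequential liveness check: the depth stage runs only after the IR
    stage passes; each failure sends its own prompt and re-acquires."""
    if not ir_passed:
        # IR liveness failed: third prompt, back to image acquisition.
        return "send_third_prompt_and_trigger_image_acquisition"
    if not depth_passed:
        # Depth liveness failed: fourth prompt, back to image acquisition.
        return "send_fourth_prompt_and_trigger_image_acquisition"
    return "output_living_body_face_detection_result"
```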
In an embodiment of the invention, when living body detection is performed on the infrared light spot image, whether it is a living body face light spot image is judged according to the light spot definition of the infrared light spot image: when the light spot definition of a pixel region falls within a preset light spot definition threshold interval, the image is judged to be a living body face light spot image; the light spot definition threshold interval is 10 to 30. The light spot definition is computed as D(f) = (Σx Σy |G(x, y)|) / C, where C is the total number of pixels in the pixel region, D(f) is the light spot definition value, and G(x, y) is the value of the center pixel after convolution.
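A minimal sketch of the D(f) computation follows. The patent only says "after convolution" without naming a kernel, so the 3x3 Laplacian used here is an assumption (a common sharpness kernel), as is representing the region as a 2-D list:

```python
def spot_definition(region):
    """Compute D(f) = (sum over x, y of |G(x, y)|) / C for a pixel
    region given as a 2-D list of intensities.

    G(x, y) is the center-pixel response of a 3x3 convolution (assumed
    Laplacian kernel); C is the total pixel count of the region."""
    kernel = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
    h, w = len(region), len(region[0])
    c = h * w  # total number of pixels in the region
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = sum(kernel[j][i] * region[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            total += abs(g)
    return total / c

def is_living_spot_region(region, low=10.0, high=30.0):
    """Judge a living body face when D(f) lies in the interval [10, 30]."""
    return low <= spot_definition(region) <= high
```

A perfectly flat region has D(f) = 0 and is rejected; an overly sharp region (e.g., a printed photo of spots) would exceed the upper bound.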
In an embodiment of the invention, living body detection on the infrared light spot image may also be carried out by a neural-network-based living body detection model trained on living body infrared light spot images.
In embodiments of the invention, the depth-camera-based face-brushing payment system is provided with a depth camera module and a mobile phone module connected by Bluetooth or a data line; the face region detected in the RGB image, the IR image and the infrared light spot image of the target face collected by the depth camera module can be sent to the mobile phone module together with the living body face detection result, so that payment verification on the mobile phone module is realized, human-computer interaction during face-brushing payment is facilitated, and face-brushing payment efficiency is improved. The face image is rapidly captured by sequentially performing face detection, expression screening, depth reconstruction, IR image living body detection and depth face image living body detection on the collected RGB image, IR image and infrared light spot image, which further improves face-brushing payment efficiency. Face detection and expression screening are performed on the RGB image before depth reconstruction, depth reconstruction is performed only on the face region of the screened RGB image and the infrared light spot image, and the infrared light spot image is preprocessed before reconstruction, which improves depth reconstruction efficiency. Depth reconstruction is performed on the expression-screened RGB image and IR image after quality detection, avoiding failures caused by poor image quality during depth reconstruction, and the depth face image can be generated after living body detection is performed on the IR image, so that the snapshot process is executed compactly, the time of the whole snapshot process is shortened, and face-brushing payment efficiency is improved.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (8)

1. A face-brushing payment system based on a depth camera, characterized by comprising a depth camera module and a mobile phone module:
the depth camera module comprises an image acquisition module, an expression detection module, a face detection module, a depth reconstruction module and a living body detection module;
the image acquisition module is used for acquiring an RGB image, an IR image and an infrared light spot image of a target face;
the face detection module is used for carrying out face detection on the RGB image and/or the IR image to determine a face area;
the expression detection module is used for carrying out expression detection on the face area and determining the expression type of the face area;
the expression detection module comprises the following parts:
an expression type set storage part for storing a preset expression type set;
an expression type judging part for acquiring the expression type set and judging whether the expression type belongs to the preset expression type set, triggering the depth reconstruction module when it does, and sending first prompt information and triggering the image acquisition module when it does not;
the depth reconstruction module is used for performing depth reconstruction on the target face according to the infrared light spot image and the RGB image to generate a depth face image when the expression type is any expression type in a preset expression type set;
the living body detection module is used for carrying out living body detection on any one or more of the infrared light spot image, the IR image and the depth face image and outputting a living body face detection result after the living body detection is passed;
the mobile phone module is used for receiving the living body face detection result and the face area, identifying the face area when the living body face detection result passes, and determining and displaying corresponding payment account information of the face area.
2. The depth camera-based face payment system of claim 1, wherein the depth camera module and the mobile phone module have any one of the following connection or positional relationships:
- the depth camera module and the mobile phone module are connected via a Bluetooth connection;
- the depth camera module and the mobile phone module are electrically connected by a connection line;
- the depth camera module is arranged on the display screen side of the mobile phone module;
- the depth camera module is arranged on the underside of the display screen of the mobile phone module;
- the depth camera module is arranged on the side opposite the display screen of the mobile phone module.
3. The depth camera-based face payment system of claim 1, wherein the image capture module comprises:
a light projecting part for projecting dot matrix light to the target face;
the image acquisition part is used for alternately acquiring an IR image and an infrared light spot image of the target face, and the infrared light spot image is formed by reflecting the dot matrix light by the target face;
and the RGB image acquisition part is used for acquiring the RGB image of the target face through the RGB camera.
4. The depth camera-based face payment system of claim 1, wherein the face detection module comprises:
the human face detection part is used for carrying out human face detection on the RGB image and the IR image to generate a human face detection result;
and a detection result analysis part for triggering the expression detection module when a face region is detected in both the RGB image and the IR image, and triggering the image acquisition module when no face region is detected in the RGB image and the IR image.
5. The depth camera-based face payment system of claim 1, further comprising an image quality detection module;
the image quality detection module is used for performing quality detection on the RGB image and the IR image;
the living body detection module comprises a first living body detection module and a second living body detection module;
the first living body detection module is used for carrying out living body detection on the IR image when the RGB image and the IR image meet preset quality standards;
and the second living body detection module is used for carrying out living body detection on the depth face image when the IR image passes through the living body detection, and outputting a living body face detection result after the depth face image passes through the living body detection.
6. The depth camera-based face payment system of claim 5, wherein the image quality detection module comprises:
an image quality standard storage section for storing a preset image quality standard;
and an image quality standard judging part for acquiring the image quality standard and judging whether the RGB image and the IR image meet the preset image quality standard, triggering the living body detection module when they do, and sending second prompt information and triggering the image acquisition module when they do not.
7. The depth camera-based face payment system of claim 5, wherein the first living body detection module comprises:
an IR image living body detection part for performing living body detection on the IR image to generate an IR image living body detection result;
a first living body detection result judging part for triggering the second living body detection module when the IR image living body detection result passes the living body detection, and sending third prompt information and triggering the image acquisition module when it does not pass;
the second in-vivo detection module includes:
the depth image living body detection part is used for carrying out living body detection on the depth face image to generate a depth image living body detection result;
and a second living body detection result judging part for outputting a living body face detection result when the depth image living body detection result passes the living body detection, and sending fourth prompt information and triggering the image acquisition module when it does not pass.
8. The depth camera-based face-brushing payment system of claim 1, wherein the depth reconstruction module comprises:
the light spot extracting part is used for preprocessing the infrared light spot image during the process of face detection so as to extract a plurality of light spot areas in the infrared light spot image;
a face region acquisition unit for acquiring a face region from the RGB image filtered by the expression type;
and the depth face image generating part is used for determining the light spot region corresponding to the face region according to the face region and generating a depth face image of the face region according to the light spot region corresponding to the face region.
CN202110701197.1A 2021-06-24 2021-06-24 Face-brushing payment system based on depth camera Active CN113255587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110701197.1A CN113255587B (en) 2021-06-24 2021-06-24 Face-brushing payment system based on depth camera


Publications (2)

Publication Number Publication Date
CN113255587A true CN113255587A (en) 2021-08-13
CN113255587B CN113255587B (en) 2021-10-15

Family

ID=77189404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110701197.1A Active CN113255587B (en) 2021-06-24 2021-06-24 Face-brushing payment system based on depth camera

Country Status (1)

Country Link
CN (1) CN113255587B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010187208A (en) * 2009-02-12 2010-08-26 Nikon Corp Electronic still camera
US20170323299A1 (en) * 2016-05-03 2017-11-09 Facebook, Inc. Facial recognition identification for in-store payment transactions
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection
CN108197609A (en) * 2018-02-02 2018-06-22 梁纳星 A kind of accurate people face identifying system
CN108287738A (en) * 2017-12-21 2018-07-17 维沃移动通信有限公司 A kind of application control method and device
CN108647504A (en) * 2018-03-26 2018-10-12 深圳奥比中光科技有限公司 Realize the method and system that information security is shown
CN108764052A (en) * 2018-04-28 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
EP3422250A1 (en) * 2017-06-26 2019-01-02 Samsung Electronics Co., Ltd. Facial verification method and apparatus
CN109583304A (en) * 2018-10-23 2019-04-05 宁波盈芯信息科技有限公司 A kind of quick 3D face point cloud generation method and device based on structure optical mode group
CN109636401A (en) * 2018-11-30 2019-04-16 上海爱优威软件开发有限公司 A kind of method of payment and system based on the micro- expression of user
CN110598555A (en) * 2019-08-12 2019-12-20 阿里巴巴集团控股有限公司 Image processing method, device and equipment
CN111079576A (en) * 2019-11-30 2020-04-28 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
J. YANG et al., "Biometrics Verification Techniques Combing with Digital Signature for Multimodal Biometrics Payment System", 2010 International Conference on Management of e-Commerce and e-Government *
YANG Bo et al., "Smart Rail Transit Security Payment System Based on Face Recognition", Communication Technology *

Also Published As

Publication number Publication date
CN113255587B (en) 2021-10-15

Similar Documents

Publication Publication Date Title
EP3192008B1 (en) Systems and methods for liveness analysis
JP3867512B2 (en) Image processing apparatus, image processing method, and program
US10742962B2 (en) Method and system for capturing images for wound assessment with moisture detection
US20150287215A1 (en) Image processor and image processing method
CN110969077A (en) Living body detection method based on color change
CA3147418A1 (en) Living body detection method and system for human face by using two long-baseline cameras
JP4235018B2 (en) Moving object detection apparatus, moving object detection method, and moving object detection program
CN113255587B (en) Face-brushing payment system based on depth camera
JP3965894B2 (en) Image processing apparatus and image processing method
US20120026617A1 (en) Mirror and adjustment method therefor
Hanna et al. A System for Non-Intrusive Human Iris Acquisition and Identification.
CN110321782B (en) System for detecting human body characteristic signals
CN114120385A (en) Depth camera capture system with display screen
TW201205449A (en) Video camera and a controlling method thereof
CN111985424B (en) Image verification method under multi-person scene
EP3217257A1 (en) Displacement determination program, method, and information processing apparatus
CN114613069A (en) Intelligent self-service terminal and intelligent auxiliary method thereof
CN115968487A (en) Anti-spoofing system
CN115018894A (en) Structured light reconstruction module
CN113673285B (en) Depth reconstruction method, system, equipment and medium during capturing of depth camera
CN111967422A (en) Self-service face recognition service method
JP4664805B2 (en) Face edge detection device, face edge detection method, and program
CN113673284B (en) Depth camera snapshot method, system, equipment and medium
CN113673287B (en) Depth reconstruction method, system, equipment and medium based on target time node
KR101276158B1 (en) A method of real-time 3d image processing using face feature points tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant