WO2016197297A1 - Living body detection method, living body detection system, and computer program product - Google Patents


Info

Publication number
WO2016197297A1
Authority
WO
WIPO (PCT)
Prior art keywords
living body
matrix data
predetermined
detected
light source
Prior art date
Application number
PCT/CN2015/080963
Other languages
English (en)
French (fr)
Inventor
Fan Haoqiang (范浩强)
Original Assignee
北京旷视科技有限公司 (Beijing Kuangshi Technology Co., Ltd.)
北京小孔科技有限公司 (Beijing Xiaokong Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京旷视科技有限公司 (Beijing Kuangshi Technology Co., Ltd.) and 北京小孔科技有限公司 (Beijing Xiaokong Technology Co., Ltd.)
Priority to PCT/CN2015/080963 priority Critical patent/WO2016197297A1/zh
Priority to US15/580,210 priority patent/US10614291B2/en
Priority to CN201580000335.6A priority patent/CN105637532B/zh
Publication of WO2016197297A1 publication Critical patent/WO2016197297A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 - Sensing or illuminating at different wavelengths

Definitions

  • the present disclosure relates to the field of living body detection, and more particularly, to a living body detecting method, a living body detecting system, and a computer program product capable of realizing human body living body detection.
  • face recognition systems are increasingly used in security, finance, and other fields that require identity authentication, such as remote bank account opening, access control systems, and remote transaction verification.
  • first, the system needs to verify that the person being authenticated is a legitimate living person. That is to say, the face recognition system needs to be able to prevent an attacker from using a photo, a 3D face model, or a mask to attack.
  • the method of solving the above problem is usually called living body detection (liveness detection); its purpose is to judge whether the acquired biometric features come from a living, present, real person.
  • existing living body detection technologies either rely on special hardware devices (such as infrared cameras or depth cameras) or can only prevent simple static photo attacks.
  • moreover, existing living body detection systems are mostly cooperative, that is, the test subject needs to perform corresponding actions according to system instructions or stay still for a certain period of time, which degrades the user experience and the efficiency of living body detection.
  • the present disclosure has been made in view of the above problems.
  • the present disclosure provides a living body detecting method, a living body detecting system, and a computer program product based on the principle that human skin produces strong subsurface scattering of light and therefore forms a large spot when illuminated, whereas photos, screens, masks, and similar articles produce comparatively weak subsurface scattering and form only a small spot. This enables non-cooperative living body detection that effectively distinguishes normal users from photo, video, and mask attackers without requiring special cooperation from the user, improving the security and ease of use of the living body detection system.
  • a living body detecting method comprising: illuminating a face of an object to be detected using a laser light source; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating a spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
  • the living body detecting method, wherein calculating a spot area of the image of the face of the object to be detected includes: acquiring image matrix data of the image of the face of the object to be detected; performing a binarization conversion on the image matrix data based on a first predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value and pixel points having a gray value smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value; determining a maximum number of first-type pixel points in the first binarized image matrix data that are connected to each other; and calculating the area corresponding to the maximum number of mutually connected first-type pixel points as the spot area.
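The binarization conversion described in this claim can be sketched in a few lines. The following is an illustrative Python/NumPy version, not the patent's implementation; the function name and the concrete first/second gray values 255 and 0 are assumptions for illustration only.

```python
import numpy as np

def binarize(image: np.ndarray, t1: int,
             first_gray: int = 255, second_gray: int = 0) -> np.ndarray:
    """Binarize image matrix data against a grayscale threshold t1.

    Pixels with gray value >= t1 become first-type pixels (first_gray);
    all other pixels become second-type pixels (second_gray).
    """
    return np.where(image >= t1, first_gray, second_gray).astype(np.uint8)
```

The spot area is then the size of the largest connected region of first-type pixels in the returned matrix.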
  • the living body detecting method, wherein the laser light source is a light source that generates a single spot, and the position of the laser light source relative to the object to be detected is fixed.
  • the living body detecting method, wherein the laser light source is a light source that generates a plurality of spots and the position of the laser light source relative to the object to be detected changes, wherein capturing the image of the face of the object to be detected illuminated by the laser light source includes: capturing an image of the face of the object to be detected illuminated by the laser light source, and determining an area of that image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
  • the living body detecting method, wherein the laser light source is a laser light source whose light emission direction can be adjusted and the position of the laser light source relative to the object to be detected changes, wherein acquiring the image matrix data of the image of the face of the object to be detected includes: acquiring preliminary image matrix data of the face of the object to be detected illuminated by the laser light source; performing binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, converting pixel points of the preliminary image matrix data having a gray value greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the first predetermined threshold into the second-type pixel points having the second gray value, obtaining binarized preliminary image matrix data; and determining a maximum number of first-type pixel points that are connected to each other in the binarized preliminary image matrix data, and calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points.
  • the living body detecting method further includes: performing the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the second predetermined threshold into the second-type pixel points having the second gray value, obtaining second binarized image matrix data; and stopping the illumination if the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
  • the living body detecting method further includes: performing the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the third predetermined threshold into the second-type pixel points having the second gray value, obtaining third binarized image matrix data; calculating a third center of gravity position corresponding to the first-type pixel points in the third binarized image matrix data; and stopping the illumination if the third center of gravity position is outside a predetermined first region.
  • the living body detecting method further includes: determining a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and stopping the illumination if the first center of gravity position is within the predetermined pixel point region.
  • the living body detecting method further includes: comparing the spot area with a second predetermined area threshold, and stopping the illumination if the spot area is larger than the second predetermined area threshold.
  • the living body detecting method further includes: determining a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and calculating a distance between the first center of gravity position and the predetermined pixel point, stopping the illumination if the distance is less than a predetermined distance threshold.
  • the living body detecting method further includes: calculating a plurality of spot areas corresponding to the mutually connected first-type pixel points; and stopping the illumination if one of the plurality of spot areas is greater than a second predetermined area threshold, or if each of the plurality of spot areas is smaller than a third predetermined area threshold.
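The multi-spot safety check in the preceding claim can be sketched as a small decision function. This is a minimal illustration assuming the individual spot areas have already been computed; the function name and the treatment of an empty area list are assumptions, not taken from the patent.

```python
def illumination_safe(spot_areas, t2_area, t3_area):
    """Decide whether laser illumination may continue.

    Stop (return False) if any spot area exceeds the second predetermined
    area threshold t2_area, or if every spot area is smaller than the
    third predetermined area threshold t3_area.
    """
    if not spot_areas:
        # Added guard, not in the patent: no spots were detected at all.
        return False
    if any(a > t2_area for a in spot_areas):
        return False
    if all(a < t3_area for a in spot_areas):
        return False
    return True
```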
  • a living body detecting system including: a laser light source unit for emitting illumination light to illuminate a face of an object to be detected; an image capturing unit for capturing an image of the face of the object to be detected illuminated by the laser light source unit; and a living body detecting unit that determines whether the object to be detected is a living body, wherein the living body detecting unit calculates a spot area of the image of the face of the object to be detected and compares the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determines that the object to be detected is a living body.
  • the living body detecting system, wherein the living body detecting unit acquires image matrix data of the image of the face of the object to be detected; performs a binarization conversion on the image matrix data based on a first predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value and pixel points having a gray value smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value; determines a maximum number of first-type pixel points in the first binarized image matrix data that are connected to each other; and calculates the area corresponding to the maximum number of mutually connected first-type pixel points as the spot area.
  • the living body detecting system, wherein the laser light source unit is a light source unit that generates a single spot, and the position of the laser light source unit relative to the object to be detected is fixed.
  • the living body detecting system, wherein the laser light source unit is a light source unit that generates a plurality of spots and the position of the laser light source unit relative to the object to be detected changes; the image capturing unit captures an image of the face of the object to be detected illuminated by the laser light source unit, and the living body detecting unit determines an area of that image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
  • the living body detecting system, wherein the living body detecting unit acquires preliminary image matrix data of the face of the object to be detected illuminated by the laser light source unit; performs binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, converting pixel points of the preliminary image matrix data having a gray value greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the first predetermined threshold into the second-type pixel points having the second gray value, obtaining binarized preliminary image matrix data; and determines a maximum number of first-type pixel points that are connected to each other in the binarized preliminary image matrix data, and calculates a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: perform the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the second predetermined threshold into the second-type pixel points having the second gray value, obtaining second binarized image matrix data; and control the laser light source unit to stop illumination if the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: perform the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value and pixel points having a gray value smaller than the third predetermined threshold into the second-type pixel points having the second gray value, obtaining third binarized image matrix data; calculate a third center of gravity position corresponding to the first-type pixel points in the third binarized image matrix data; and control the laser light source unit to stop illumination if the third center of gravity position is outside a predetermined first region.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: determine a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculate a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and control the laser light source unit to stop illumination if the first center of gravity position is within the predetermined pixel point region.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: compare the spot area with a second predetermined area threshold, and control the laser light source unit to stop illumination if the spot area is larger than the second predetermined area threshold.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: determine a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculate a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and calculate a distance between the first center of gravity position and the predetermined pixel point, controlling the laser light source unit to stop illumination if the distance is less than a predetermined distance threshold.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: calculate a plurality of spot areas corresponding to the mutually connected first-type pixel points; and control the laser light source unit to stop illumination if one of the plurality of spot areas is greater than the second predetermined area threshold or each of the plurality of spot areas is smaller than the third predetermined area threshold.
  • a computer program product comprising a computer readable storage medium on which computer program instructions are stored, the computer program instructions, when executed by a computer, performing the steps of: acquiring an image of a face of an object to be detected illuminated by a laser light source; calculating a spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
  • FIG. 1 is a flow chart illustrating a living body detecting method according to an embodiment of the present invention.
  • FIG. 2 is a functional block diagram illustrating a living body detection system in accordance with an embodiment of the present invention.
  • FIG. 3 is a schematic diagram further illustrating a first example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow chart further illustrating a first example living body detection method in accordance with an embodiment of the present invention.
  • FIG. 5 is a schematic diagram further illustrating a second example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 6 is a flow chart further illustrating a second example living body detecting method according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram further illustrating a third example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 8 is a flow chart further illustrating a third example living body detecting method according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • FIG. 1 is a flow chart illustrating a living body detecting method according to an embodiment of the present invention. As shown in FIG. 1, a living body detecting method according to an embodiment of the present invention includes the following steps.
  • in step S101, the face of the object to be detected is illuminated using a laser light source.
  • the laser source may be a source that produces a single spot, or a source that produces a plurality of spots.
  • alternatively, the laser source is a laser source whose light emission direction can be adjusted. Thereafter, the processing proceeds to step S102.
  • in step S102, an image of the face of the object to be detected illuminated via the laser light source is captured.
  • the image of the face of the object to be detected will include a spot formed by subsurface scattering.
  • in step S103, the spot area S of the image of the face of the object to be detected is calculated.
  • the specific processing for calculating the spot area will be described in detail as follows.
  • the spot area may represent the area of a single spot, or the thickness of a stripe-shaped spot. Thereafter, the processing proceeds to step S104.
  • in step S104, it is determined whether the spot area S calculated in step S103 is greater than the first predetermined area threshold T1.
  • the first predetermined area threshold T1 is determined in advance using statistical learning methods such as deep learning or support vector machines, with a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
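The patent fixes T1 offline with statistical learning (deep learning, support vector machines) over positive and negative samples. As a much simpler stand-in for those methods, the sketch below sweeps candidate thresholds over labelled spot areas and keeps the value that best separates live faces from attacks; the function name and the one-dimensional accuracy sweep are illustrative assumptions, not the patent's training procedure.

```python
def choose_area_threshold(live_areas, attack_areas):
    """Pick the spot-area threshold with the best separation accuracy.

    live_areas: spot areas measured on genuine faces (positive samples).
    attack_areas: spot areas measured on photos, screens, masks, etc.
    """
    best_t, best_acc = 0, -1.0
    total = len(live_areas) + len(attack_areas)
    for t in sorted(set(live_areas) | set(attack_areas)):
        # Predict "living body" when the spot area exceeds the candidate t.
        correct = (sum(a > t for a in live_areas)
                   + sum(a <= t for a in attack_areas))
        acc = correct / total
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```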
  • if a positive result is obtained in step S104, that is, the spot area S is larger than the first predetermined area threshold T1, the processing proceeds to step S105.
  • in step S105, it is determined that the object to be detected is a living body.
  • if a negative result is obtained in step S104, that is, the spot area S is not larger than the first predetermined area threshold T1, the processing proceeds to step S106. In step S106, it is determined that the object to be detected is not a living body.
  • this utilizes the principle that human skin produces strong subsurface scattering of light, so that a large spot is generated where the light is received, whereas the subsurface scattering of articles such as photographs, screens, and masks is comparatively weak, forming only a smaller spot.
  • accordingly, the predetermined area threshold should be smaller than the spot formed by subsurface scattering in human skin, and larger than the spot formed by subsurface scattering in articles such as photos, screens, and masks.
  • the actual specific value of the predetermined area threshold may be set according to actual conditions, and is not limited herein.
  • by judging the magnitude relationship between the obtained spot area and the predetermined area threshold, an object to be detected whose spot area is larger than the predetermined area threshold is determined to be a living body.
  • in the living body detecting method, since a laser light source is used and the user's movements are not constrained, a safety control mechanism needs to be provided in order to avoid special cases such as the laser light source illuminating the eyes of the object to be detected, or the object to be detected leaving the detection area.
  • the living body detection is performed by detecting the size of the subsurface scattering spot after the object to be detected is irradiated by the laser light source, thereby effectively preventing photo, 3D face model, and mask attacks.
  • the living body detecting system 20 includes a laser light source unit 21, an image capturing unit 22, and a living body detecting unit 23.
  • the laser light source unit 21, the image capturing unit 22, and the living body detecting unit 23 may be configured by, for example, hardware, software, firmware, and any feasible combination thereof.
  • the laser light source unit 21 is configured to emit an illumination light to illuminate a face of an object to be detected.
  • the laser source may be a point laser having a power of 5 mW and an output wavelength of 850 nm.
  • the position and angle of the laser light source are arranged to ensure that it illuminates a suitable part of the face of the test subject, such as the lips, cheeks, or nose.
  • the laser light source unit 21 may be a light source that generates a single spot, or a light source that generates a plurality of spots.
  • the laser light source unit 21 is a laser light source that can adjust the direction in which light is emitted.
  • the image capturing unit 22 is for capturing an image of a face of an object to be detected illuminated by the laser light source unit 21.
  • the image capture unit 22 is configured corresponding to the laser light source unit 21.
  • the image capture unit 22 is a CCD imaging module configured with an 850 nm narrow-band filter; its exposure parameters are set so that it can capture the spot formed by subsurface scattering.
  • the image capturing unit 22 may be physically separated from the subsequent living body detecting unit 23, or physically located at the same position or even inside the same casing.
  • the image capturing unit 22 transmits the acquired image of the face of the object to be detected to the subsequent living body detecting unit 23 via a wired or wireless method. In the case where the image capturing unit 22 and the living body detecting unit 23 are physically located at the same position or even inside the same casing, the image of the face of the object to be detected is sent to the living body detecting unit 23 via an internal bus. Before the image data is transmitted via wired or wireless means or via the internal bus, it can be encoded and compressed in a predetermined format into data packets to reduce the traffic and bandwidth required for transmission.
  • the living body detecting unit 23 is configured to determine whether the object to be detected is a living body. Specifically, the living body detecting unit 23 calculates a spot area S of an image of a face of the object to be detected, and compares the spot area S with a first predetermined area threshold T1, if the spot area S is larger than the first A predetermined area threshold T1 determines that the object to be detected is a living body.
  • the living body detecting method and the living body detecting system according to an embodiment of the present invention have been summarized above with reference to FIGS. 1 and 2.
  • first to third exemplary living body detecting methods and living body detecting systems according to embodiments of the present invention will be further described with reference to FIGS. 3 through 8.
  • FIG. 3 is a schematic diagram further illustrating a first example living body detection system in accordance with an embodiment of the present invention.
  • the position of the living body detecting system 20 relative to the object 30 to be detected is fixed.
  • the living body detecting system 20 shown in FIG. 3 is, for example, a face-recognition attendance terminal with a relatively close working distance.
  • the laser light source unit 21 in the living body detecting system 20 shown in FIG. 3 is a light source unit that generates a single spot, and the position of the laser light source unit 21 relative to the object to be detected 30 is fixed.
  • the laser light source unit 21 emits illumination light to illuminate a part of the face of the object to be detected, for example, the lips, cheeks, or nose.
  • the illustrated image capturing unit 22 captures an image of the face of the object to be detected illuminated by the laser light source unit 21.
  • the living body detecting unit 23 shown determines whether the object to be detected is a living body.
  • FIG. 4 is a flow chart further illustrating a first example living body detection method in accordance with an embodiment of the present invention. As shown in FIG. 4, a first exemplary living body detecting method according to an embodiment of the present invention is applied to the first exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 3, which includes the following steps.
  • in step S401, the face of the object to be detected is illuminated using a laser light source.
  • the laser source may be a source that produces a single spot.
  • in step S402, an image of the face of the object to be detected illuminated via the laser light source is captured. Thereafter, the processing proceeds to step S403.
  • in step S403, image matrix data of the image of the face of the object to be detected is acquired.
  • the image matrix data of the image of the face of the object to be detected may be represented as I[x, y]. Thereafter, the processing proceeds to step S404.
  • in step S404, binarization conversion is performed on the image matrix data based on the first predetermined grayscale threshold, converting pixel points of the image matrix data having a gray value greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value, and pixel points having a gray value smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value.
  • the first binarized image matrix data can be expressed as: Ib[x, y] = 255 if I[x, y] ≥ t1, and Ib[x, y] = 0 if I[x, y] < t1, where t1 is the first predetermined grayscale threshold and 255 and 0 are taken as the first and second gray values. Thereafter, the processing proceeds to step S405.
  • in step S405, the maximum number of first-type pixel points that are connected to each other in the first binarized image matrix data is determined.
  • a breadth-first search (BFS) algorithm is applied to the first binarized image matrix data Ib to compute its connected components, and the largest connected component is selected. Thereafter, the processing proceeds to step S406.
  • in step S406, the area corresponding to the maximum number of mutually connected first-type pixel points is calculated as the spot area S. Thereafter, the processing proceeds to step S407.
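Steps S405 and S406 can be sketched with an explicit breadth-first search over the binarized matrix. This is an illustrative pure-Python version using 4-connectivity and a list-of-lists matrix; the function name and data layout are assumptions, not the patent's implementation.

```python
from collections import deque

def largest_component_area(binary, first_gray=255):
    """Pixel count of the largest 4-connected region of first-type pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    best = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] != first_gray or seen[sy][sx]:
                continue
            # Breadth-first flood fill from this unvisited bright pixel.
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            size = 0
            while queue:
                y, x = queue.popleft()
                size += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and binary[ny][nx] == first_gray):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            best = max(best, size)
    return best
```

The returned pixel count is the spot area S that step S407 compares against T1.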
  • in step S407, it is determined whether the spot area S calculated in step S406 is greater than the first predetermined area threshold T1.
  • the first predetermined area threshold T1 is determined in advance using statistical learning methods such as deep learning or support vector machines, with a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
  • If an affirmative result is obtained in step S407, that is, the spot area S is greater than the first predetermined area threshold T1, the processing proceeds to step S408. In step S408, it is determined that the object to be detected is a living body.
  • Conversely, if a negative result is obtained in step S407, that is, the spot area S is not greater than the first predetermined area threshold T1, the processing proceeds to step S409. In step S409, it is determined that the object to be detected is not a living body.
  • As described above, in order to avoid special situations such as the laser light source illuminating the eyes of the object to be detected or the object to be detected leaving the detection region, a safety control mechanism is provided in the first example living body detection system according to an embodiment of the present invention.
  • First safety control mechanism: after the image of the face of the object to be detected illuminated by the laser light source is captured, similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on a second predetermined grayscale threshold t2, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the second predetermined threshold t2 into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the second predetermined threshold t2 into the second-type pixel points having the second gray value, obtaining second binarized image matrix data. If the number of the first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold s1, that is, multiple bright spots abnormally appear, the illumination is stopped.
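A sketch of this first safety check (the threshold values `t2` and `s1` are illustrative placeholders):

```python
def too_many_bright_pixels(image, t2, s1):
    """Safety check: stop illumination when the count of pixels at or
    above grayscale threshold t2 exceeds the number threshold s1,
    i.e. multiple bright spots abnormally appear."""
    bright = sum(1 for row in image for pixel in row if pixel >= t2)
    return bright > s1

I = [[250, 250, 10],
     [250, 250, 10],
     [10, 10, 250]]
stop = too_many_bright_pixels(I, t2=200, s1=4)  # 5 bright pixels > 4 -> True
```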
  • Second safety control mechanism: after the image of the face of the object to be detected illuminated by the laser light source is captured, similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on a third predetermined grayscale threshold t3, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the third predetermined threshold t3 into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the third predetermined threshold t3 into the second-type pixel points having the second gray value, obtaining third binarized image matrix data. The third center-of-gravity position corresponding to the first-type pixel points in the third binarized image matrix data is then calculated; if it lies outside a predetermined first region threshold, that is, the bright spot falls outside the predetermined image acquisition region, the illumination is stopped.
  • Third safety control mechanism: a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected is determined; for example, a pre-trained face detector (such as Haar Cascade) is used to obtain the positions of the face and of the left and right eyes. Similarly to the processing of step S404 described above, the first center-of-gravity position corresponding to the largest connected set of first-type pixel points is calculated. If the first center-of-gravity position is within the predetermined pixel point region (i.e., the left and right eye regions), the illumination is stopped.
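The centroid test in this third mechanism might look like the following sketch (the rectangular eye boxes are hypothetical; a real system would take them from the face detector):

```python
def centroid(points):
    """Center of gravity of a set of (x, y) pixel coordinates."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def in_region(pos, region):
    """True if pos = (x, y) lies inside region = (x0, y0, x1, y1)."""
    x, y = pos
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

# Largest connected spot pixels and two (hypothetical) eye boxes.
spot = [(10, 10), (10, 11), (11, 10), (11, 11)]
left_eye, right_eye = (8, 8, 14, 14), (8, 30, 14, 36)
cg = centroid(spot)  # (10.5, 10.5)
stop = in_region(cg, left_eye) or in_region(cg, right_eye)  # True: spot over an eye
```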
  • The first example living body detection system according to an embodiment of the present invention is configured with a light source unit that generates a point-like light spot and is used in scenarios where the light source unit is fixed relative to the object to be detected. It performs living body detection by utilizing the difference in subsurface scattering properties between living skin and other materials, can effectively defend against photo, video, and mask attacks, and increases the safety and ease of use of the living body detection system without requiring special cooperation from the user.
  • FIG. 5 is a schematic diagram further illustrating a second example living body detection system in accordance with an embodiment of the present invention.
  • As shown in FIG. 5, the position of the living body detection system 50 relative to the object 30 to be detected is not fixed. For example, the living body detection system 50 shown in FIG. 5 is an access control system with a longer working distance than the face time-attendance machine of FIG. 3.
  • The laser light source 51 is a light source that generates a plurality of point-like light spots, and the relative position of the laser light source 51 and the object 30 to be detected varies. In one embodiment of the present invention, the laser light source 51 is configured by a laser with a power of 500 mW and a wavelength of 850 nm together with a grating. Through the grating, the laser projects a plurality of point-like light spots distributed evenly over the range in which the object 30 to be detected may be present, and the spot regions of the individual points do not overlap.
  • the illustrated image capturing unit 52 captures an image of the face of the object to be detected illuminated by the laser light source unit 51.
  • the living body detecting unit 53 shown determines whether the object to be detected is a living body.
  • FIG. 6 is a flow chart further illustrating a second example living body detecting method according to an embodiment of the present invention. As shown in FIG. 6, a second exemplary living body detecting method according to an embodiment of the present invention is applied to the second exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 5, which includes the following steps.
  • Steps S601 to S605 illustrated in FIG. 6 are respectively the same as steps S401 to S405 illustrated in FIG. 4 described above, and a repetitive description thereof will be omitted herein.
  • After the largest connected set of first-type pixel points in the first binarized image matrix data has been determined by binarizing the image of the face of the object to be detected, in step S606 it is determined whether that largest connected set of first-type pixel points is located in a predetermined region of the face of the object to be detected. Since the laser light source 51 in the second example living body detection system according to an embodiment of the present invention is a light source that generates a plurality of point-like light spots, an affirmative result in step S606 indicates that one of the plurality of point-like light spots generated by the laser light source 51 falls into a suitable region such as the lips, cheeks, or nose, and the processing proceeds to step S607.
  • In step S607, the area corresponding to the largest connected set of first-type pixel points is calculated as the spot area S. Thereafter, the processing proceeds to step S608.
  • In step S608, it is determined whether the spot area S calculated in step S607 is greater than the first predetermined area threshold T1.
  • The first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
  • If an affirmative result is obtained in step S608, that is, the spot area S is greater than the first predetermined area threshold T1, the processing proceeds to step S609. In step S609, it is determined that the object to be detected is a living body.
  • Conversely, if a negative result is obtained in step S608, that is, the spot area S is not greater than the first predetermined area threshold T1, the processing proceeds to step S610. In step S610, it is determined that the object to be detected is not a living body.
  • If a negative result is obtained in step S606, indicating that none of the plurality of point-like light spots generated by the laser light source 51 falls into a suitable region such as the lips, cheeks, or nose, the processing returns to step S602 to continue capturing an image of the face of the object to be detected illuminated by the laser light source.
  • A safety control mechanism is also provided in the second example living body detection system according to an embodiment of the present invention.
  • Similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on a second predetermined grayscale threshold t2, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the second predetermined threshold t2 into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the second predetermined threshold t2 into the second-type pixel points having the second gray value, obtaining second binarized image matrix data. If the number of the first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold s1, that is, multiple bright spots abnormally appear, the illumination is stopped.
  • After the area corresponding to the largest connected set of first-type pixel points has been calculated as the spot area S in step S607, the spot area S is compared with a second predetermined area threshold T2, the second predetermined area threshold T2 being greater than the first predetermined area threshold T1. If the spot area S is greater than the second predetermined area threshold T2, that is, there is a spot with an excessively large area, the illumination is stopped.
  • A predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected is determined; for example, a pre-trained face detector (such as Haar Cascade) is used to obtain the positions of the face and of the left and right eyes.
  • The position of the spot closest to the left and right eyes is determined, and the distance D from that nearest spot to the left and right eyes is calculated. If the distance D is smaller than a predetermined distance threshold d, that is, the spot is too close to an eye of the object to be detected, the illumination is stopped.
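This eye-distance check can be sketched as below (Euclidean distance between spot centroids and eye centers is an assumption; the patent only specifies a distance D and a threshold d):

```python
import math

def min_spot_eye_distance(spot_centroids, eye_centers):
    """Smallest Euclidean distance D between any spot centroid and any
    eye center; illumination should stop when D < d."""
    return min(math.dist(s, e) for s in spot_centroids for e in eye_centers)

spots = [(40.0, 40.0), (12.0, 12.0)]
eyes = [(10.0, 10.0), (10.0, 30.0)]
D = min_spot_eye_distance(spots, eyes)  # sqrt(8) ~ 2.83 for the second spot
stop = D < 5.0  # True: that spot is too close to an eye
```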
  • The second example living body detection system according to an embodiment of the present invention is configured with a light source unit that generates a plurality of point-like light spots and is used in scenarios where the relative position of the light source unit and the object to be detected is not fixed. It performs living body detection by utilizing the difference in subsurface scattering properties between living skin and other materials, can effectively defend against photo, video, and mask attacks, and increases the safety and ease of use of the living body detection system without requiring special cooperation from the user.
  • FIG. 7 is a schematic diagram further illustrating a third example living body detection system in accordance with an embodiment of the present invention.
  • As shown in FIG. 7, the position of the living body detection system 70 relative to the object 30 to be detected is not fixed, and the mutual position may vary greatly. For example, the living body detection system 70 shown in FIG. 7 is an access control system or a monitoring system with a working distance longer than those of the face time-attendance machine of FIG. 3 and the access control system of FIG. 5.
  • The laser light source unit 71 is a laser light source whose light exit direction is adjustable. In one embodiment of the present invention, the laser light source 71 is configured by a laser with a power of 20 mW and a wavelength of 850 nm together with an exit direction drive unit (not shown).
  • From a preliminary image of the face of the object to be detected illuminated by the laser light source unit 71 and captured by the image capturing unit 72, the living body detection unit 73 can obtain the positions of the face and of a suitable facial part using a pre-trained face detector (such as Haar Cascade), as well as the position of the current light spot. The living body detection unit 73 (or, alternatively, a separately configured spot position tracking unit) then causes the exit direction drive unit to adjust the light exit direction of the laser light source unit 71 so that the spot position falls on the suitable part.
  • FIG. 8 is a flowchart further illustrating a third example living body detecting method according to an embodiment of the present invention.
  • a third exemplary living body detecting method according to an embodiment of the present invention is applied to a third exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 7, which includes the following steps.
  • In step S801, the face of the object to be detected is illuminated using a laser light source. The laser light source is a laser light source whose light exit direction is adjustable. Thereafter, the processing proceeds to step S802.
  • In step S802, preliminary image matrix data of the face of the object to be detected illuminated via the laser light source is acquired. Thereafter, the processing proceeds to step S803.
  • In step S803, similarly to the processing in step S404, binarization conversion is performed on the preliminary image matrix data based on the first predetermined grayscale threshold, so as to convert pixel points of the preliminary image matrix data whose gray values are greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into the second-type pixel points having the second gray value, obtaining binarized preliminary image matrix data. Thereafter, the processing proceeds to step S804.
  • In step S804, the largest connected set of the first-type pixel points in the binarized preliminary image matrix data is determined, and the first center-of-gravity position corresponding to that largest connected set of first-type pixel points is calculated. The first center-of-gravity position of the largest connected set of first-type pixel points in the binarized preliminary image matrix data is the location of the current light spot.
  • In step S805, a second center-of-gravity position of a predetermined region of the preliminary image corresponding to the face of the object to be detected is determined. The second center-of-gravity position of the predetermined region of the face is the position of the suitable part of the face.
  • In step S806, the light exit direction of the laser light source is adjusted so that the first center-of-gravity position coincides with the second center-of-gravity position; that is, the exit direction of the light emitted by the laser light source is adjusted so that the spot position falls on the suitable part of the face. Thereafter, the processing proceeds to step S807.
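A control-loop sketch of steps S802 to S806 (the proportional step toward the target and the `capture`/`set_direction`/`locate_spot`/`locate_target` callables are hypothetical stand-ins for the image capturing unit, the exit direction drive unit, and the centroid computations; the patent does not prescribe a specific control law):

```python
def track_spot(capture, set_direction, locate_spot, locate_target,
               gain=0.5, tol=1.0, max_iters=20):
    """Steer the laser until the spot centroid (first center-of-gravity
    position) coincides, within tolerance tol, with the target facial
    part (second center-of-gravity position)."""
    for _ in range(max_iters):
        frame = capture()
        sx, sy = locate_spot(frame)        # current spot position
        tx, ty = locate_target(frame)      # suitable facial part
        ex, ey = tx - sx, ty - sy
        if (ex * ex + ey * ey) ** 0.5 <= tol:
            return True                    # spot has reached the target
        set_direction(gain * ex, gain * ey)  # nudge the exit direction
    return False

# Simulated usage: the "drive unit" simply offsets a stored spot position.
state = {"pos": [0.0, 0.0]}

def nudge(dx, dy):
    state["pos"][0] += dx
    state["pos"][1] += dy

reached = track_spot(
    capture=lambda: state,
    set_direction=nudge,
    locate_spot=lambda frame: tuple(frame["pos"]),
    locate_target=lambda frame: (10.0, 10.0),
)  # reached == True after a few proportional steps
```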
  • In step S807, image matrix data of the image of the face of the object to be detected illuminated by the laser light source with the adjusted light exit direction is acquired. The image matrix data of the image of the face of the object to be detected acquired in step S807 is the image matrix data finally used for the living body detection.
  • Thereafter, steps S404 to S409 shown in FIG. 4 described above are performed to carry out the living body detection based on the image matrix data of the image of the face of the object to be detected.
  • A safety control mechanism is also provided in this living body detection system.
  • A plurality of spot areas corresponding to mutually connected first-type pixel points are calculated. If one of the plurality of spot areas is greater than the second predetermined area threshold T2, or each of the plurality of spot areas is smaller than a third predetermined area threshold T3, that is, there is an excessively large or excessively small spot, the illumination is stopped.
  • The average pixel value of the image of the face of the object to be detected may also be determined; if the average pixel value is not within a preset range, that is, the entire image is too bright or too dark, the illumination is stopped.
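These two checks can be sketched together (the threshold values and the preset brightness range are illustrative assumptions):

```python
def spot_size_anomaly(spot_areas, T2, T3):
    """True if any spot is excessively large (> T2) or every spot is
    excessively small (< T3), in which case illumination should stop."""
    return any(a > T2 for a in spot_areas) or all(a < T3 for a in spot_areas)

def brightness_anomaly(image, lo, hi):
    """True if the average pixel value falls outside the preset range,
    i.e. the whole image is too dark or too bright."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return not (lo <= mean <= hi)

stop = spot_size_anomaly([12, 300, 15], T2=200, T3=10)   # True: one spot too large
dark = brightness_anomaly([[2, 3], [1, 2]], lo=30, hi=220)  # True: image too dark
```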
  • The third example living body detection system according to an embodiment of the present invention is configured with a light source unit whose light exit direction can be adjusted to track a suitable part of the object to be detected, and is used in scenarios where the relative position of the light source unit and the object to be detected is not fixed and the distance is long. It performs living body detection by utilizing the difference in subsurface scattering properties between living skin and other materials, can effectively defend against photo, video, and mask attacks, and increases the safety and ease of use of the living body detection system without requiring special cooperation from the user.
  • FIG. 9 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • a living body detecting system 9 according to an embodiment of the present invention includes a processor 91, a memory 92, and computer program instructions 93 stored in the memory 92.
  • The computer program instructions 93 may, when run by the processor 91, implement the functions of the respective functional modules of the living body detection system according to an embodiment of the present invention, and/or may perform the various steps of the living body detection method according to an embodiment of the present invention.
  • Specifically, when the computer program instructions 93 are run by the processor 91, the following steps are performed: acquiring video data collected via a video data collecting device; capturing an image of the face of the object to be detected illuminated via the laser light source; calculating the spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and determining that the object to be detected is a living body if the spot area is greater than the first predetermined area threshold.
  • When the computer program instructions 93 are run by the processor 91, the step of calculating the spot area of the image of the face of the object to be detected includes: acquiring image matrix data of the image of the face of the object to be detected; performing binarization conversion on the image matrix data based on a first predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value; and determining the largest connected set of the first-type pixel points in the first binarized image matrix data, and calculating the area corresponding to that largest connected set as the spot area.
  • When the computer program instructions 93 are run by the processor 91, the step of acquiring the image of the face of the object to be detected illuminated by the light source includes: capturing an image of the face of the object to be detected illuminated via the laser light source, and determining a region image of the image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
  • When the computer program instructions 93 are run by the processor 91, the step of acquiring the image matrix data of the image of the face of the object to be detected includes: acquiring preliminary image matrix data of the face of the object to be detected illuminated by the laser light source; performing binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, so as to convert pixel points of the preliminary image matrix data whose gray values are greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into the second-type pixel points having the second gray value, to obtain binarized preliminary image matrix data; determining the largest connected set of the first-type pixel points in the binarized preliminary image matrix data, and calculating the first center-of-gravity position corresponding to that largest connected set; determining a second center-of-gravity position of a predetermined region of the preliminary image corresponding to the face of the object to be detected; and adjusting the light exit direction of the laser light source so that the first center-of-gravity position coincides with the second center-of-gravity position, and acquiring image matrix data of the image of the face of the object to be detected illuminated by the laser light source with the adjusted light exit direction.
  • Each module in the living body detection system according to an embodiment of the present invention may be implemented by the processor in the living body detection system running computer program instructions stored in the memory, or may be implemented when computer instructions stored in the computer readable storage medium of the computer program product according to an embodiment of the present invention are run by a computer.
  • The computer readable storage medium can be any combination of one or more computer readable storage media; for example, one computer readable storage medium contains computer readable program code for randomly generating a sequence of action instructions, and another computer readable storage medium contains computer readable program code for performing face activity recognition.
  • The computer readable storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet, a hard disk of a personal computer, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM), a portable compact disk read only memory (CD-ROM), a USB memory, or any combination of the above storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a living body detection method, a living body detection system, and a computer program product capable of human living body detection. The living body detection method includes: illuminating the face of an object to be detected with a laser light source; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating the spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and determining that the object to be detected is a living body if the spot area is greater than the first predetermined area threshold.

Description

Living body detection method, living body detection system, and computer program product

Technical Field
The present disclosure relates to the field of living body detection, and more particularly to a living body detection method, a living body detection system, and a computer program product capable of human living body detection.

Background
At present, face recognition systems are increasingly applied to scenarios requiring identity verification in fields such as security and finance, for example remote bank account opening, access control systems, and remote transaction verification. In these high-security application fields, in addition to ensuring that the face similarity of the person being verified matches the reference data stored in the database, it is first required that the person being verified is a legitimate living organism. That is, the face recognition system needs to be able to guard against attackers using photos, 3D face models, masks, and the like.
The method for solving the above problem is generally called living body detection, and its purpose is to determine whether the acquired biometric feature comes from a living, on-site, real person. Among the technical products currently on the market there is no recognized mature liveness verification solution; existing living body detection techniques either rely on special hardware devices (such as infrared cameras or depth cameras) or can only guard against simple static photo attacks. In addition, most existing living body detection systems are cooperative, that is, the person under test is required to perform corresponding actions according to system instructions or to stay still for a period of time, which affects the user experience and the efficiency of living body detection.
Summary
The present disclosure is proposed in view of the above problems. The present disclosure provides a living body detection method, a living body detection system, and a computer program product, based on the principle that human skin produces subsurface scattering of light and therefore forms a relatively large light spot after receiving light, whereas the subsurface scattering of objects such as photos, screens, and masks is comparatively weak and forms only a smaller spot. This realizes a non-cooperative living body detection, thereby effectively distinguishing normal users from photo, video, and mask attackers without requiring special cooperation from the user, and increasing the safety and ease of use of the living body detection system.
According to an embodiment of the present disclosure, there is provided a living body detection method, including: illuminating the face of an object to be detected with a laser light source; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating the spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and determining that the object to be detected is a living body if the spot area is greater than the first predetermined area threshold.
In addition, in the living body detection method according to an embodiment of the present disclosure, calculating the spot area of the image of the face of the object to be detected includes: acquiring image matrix data of the image of the face of the object to be detected; performing binarization conversion on the image matrix data based on a first predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value, and to convert pixel points of the image matrix data whose gray values are smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value; and determining the largest connected set of the first-type pixel points in the first binarized image matrix data, and calculating the area corresponding to that largest connected set of first-type pixel points as the spot area.
In addition, in the living body detection method according to an embodiment of the present disclosure, the laser light source is a light source that generates a point-like light spot, and the positions of the laser light source and the object to be detected are relatively fixed.
In addition, in the living body detection method according to an embodiment of the present disclosure, the laser light source is a light source that generates a plurality of point-like light spots, and the relative position of the laser light source and the object to be detected varies; capturing the image of the face of the object to be detected illuminated by the light source includes: capturing an image of the face of the object to be detected illuminated by the laser light source, and determining a region image of the image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
In addition, in the living body detection method according to an embodiment of the present disclosure, the laser light source is a laser light source whose light exit direction is adjustable, and the relative position of the laser light source and the object to be detected varies; acquiring the image matrix data of the image of the face of the object to be detected includes: acquiring preliminary image matrix data of the face of the object to be detected illuminated by the laser light source; performing binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, so as to convert pixel points of the preliminary image matrix data whose gray values are greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into the second-type pixel points having the second gray value, to obtain binarized preliminary image matrix data; determining the largest connected set of the first-type pixel points in the binarized preliminary image matrix data, and calculating the first center-of-gravity position corresponding to that largest connected set; determining a second center-of-gravity position of a predetermined region of the preliminary image corresponding to the face of the object to be detected; and adjusting the light exit direction of the laser light source so that the first center-of-gravity position coincides with the second center-of-gravity position, and acquiring image matrix data of the image of the face of the object to be detected illuminated by the laser light source with the adjusted light exit direction.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: performing the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the second predetermined threshold into the second-type pixel points having the second gray value, obtaining second binarized image matrix data; and stopping the illumination if the number of the first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: performing the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the third predetermined threshold into the second-type pixel points having the second gray value, obtaining third binarized image matrix data; calculating a third center-of-gravity position corresponding to the first-type pixel points in the third binarized image matrix data; and stopping the illumination if the third center-of-gravity position is outside a predetermined first region threshold.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: determining a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculating the first center-of-gravity position corresponding to the largest connected set of the first-type pixel points; and stopping the illumination if the first center-of-gravity position is within the predetermined pixel point region.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: comparing the spot area with a second predetermined area threshold, and stopping the illumination if the spot area is greater than the second predetermined area threshold.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: determining a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculating the first center-of-gravity position corresponding to the largest connected set of the first-type pixel points; and calculating the distance between the first center-of-gravity position and the predetermined pixel point, and stopping the illumination if the distance is smaller than a predetermined distance threshold.
In addition, the living body detection method according to an embodiment of the present disclosure further includes: calculating a plurality of spot areas corresponding to mutually connected first-type pixel points; and stopping the illumination if one of the plurality of spot areas is greater than a second predetermined area threshold or each of the plurality of spot areas is smaller than a third predetermined area threshold.
According to another embodiment of the present disclosure, there is provided a living body detection system, including: a laser light source unit configured to emit illumination light to illuminate the face of an object to be detected; an image capturing unit configured to capture an image of the face of the object to be detected illuminated by the laser light source unit; and a living body detection unit configured to determine whether the object to be detected is a living body, wherein the living body detection unit calculates the spot area of the image of the face of the object to be detected, compares the spot area with a first predetermined area threshold, and determines that the object to be detected is a living body if the spot area is greater than the first predetermined area threshold.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit acquires image matrix data of the image of the face of the object to be detected; performs binarization conversion on the image matrix data based on a first predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value; and determines the largest connected set of the first-type pixel points in the first binarized image matrix data, and calculates the area corresponding to that largest connected set as the spot area.
In addition, in the living body detection system according to another embodiment of the present disclosure, the laser light source unit is a light source unit that generates a point-like light spot, and the positions of the laser light source unit and the object to be detected are relatively fixed.
In addition, in the living body detection system according to another embodiment of the present disclosure, the laser light source unit is a light source unit that generates a plurality of point-like light spots, and the relative position of the laser light source unit and the object to be detected varies; the image capturing unit captures an image of the face of the object to be detected illuminated by the laser light source unit, and the living body detection unit determines a region image of the image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
In addition, in the living body detection system according to another embodiment of the present disclosure, the laser light source unit is a laser light source whose light exit direction is adjustable, and the relative position of the laser light source and the object to be detected varies; the living body detection unit acquires preliminary image matrix data of the face of the object to be detected illuminated by the laser light source unit; performs binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, so as to convert pixel points of the preliminary image matrix data whose gray values are greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into the second-type pixel points having the second gray value, to obtain binarized preliminary image matrix data; determines the largest connected set of the first-type pixel points in the binarized preliminary image matrix data, and calculates the first center-of-gravity position corresponding to that largest connected set; and determines a second center-of-gravity position of a predetermined region of the preliminary image corresponding to the face of the object to be detected; the living body detection unit then controls the laser light source unit to adjust the light exit direction so that the first center-of-gravity position coincides with the second center-of-gravity position, and acquires image matrix data of the face of the object to be detected illuminated by the laser light source unit with the adjusted light exit direction.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: perform the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the second predetermined threshold into the second-type pixel points having the second gray value, obtaining second binarized image matrix data; and control the laser light source unit to stop the illumination if the number of the first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: perform the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value, and to convert pixel points whose gray values are smaller than the third predetermined threshold into the second-type pixel points having the second gray value, obtaining third binarized image matrix data; calculate a third center-of-gravity position corresponding to the first-type pixel points in the third binarized image matrix data; and control the laser light source unit to stop the illumination if the third center-of-gravity position is outside a predetermined first region threshold.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: determine a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculate the first center-of-gravity position corresponding to the largest connected set of the first-type pixel points; and control the laser light source unit to stop the illumination if the first center-of-gravity position is within the predetermined pixel point region.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: compare the spot area with a second predetermined area threshold, and control the laser light source unit to stop the illumination if the spot area is greater than the second predetermined area threshold.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: determine a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculate the first center-of-gravity position corresponding to the largest connected set of the first-type pixel points; and calculate the distance between the first center-of-gravity position and the predetermined pixel point, and control the multi-point light source unit to stop the illumination if the distance is smaller than a predetermined distance threshold.
In addition, in the living body detection system according to another embodiment of the present disclosure, the living body detection unit is further configured to: calculate a plurality of spot areas corresponding to mutually connected first-type pixel points; and control the laser light source unit to stop the illumination if one of the plurality of spot areas is greater than a second predetermined area threshold or each of the plurality of spot areas is smaller than a third predetermined area threshold.
According to yet another embodiment of the present disclosure, there is provided a computer program product, including a computer readable storage medium on which computer program instructions are stored, the computer program instructions performing the following steps when run by a computer: acquiring an image of the face of an object to be detected illuminated by a laser light source; calculating the spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and determining that the object to be detected is a living body if the spot area is greater than the first predetermined area threshold.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and are intended to provide further explanation of the claimed technology.

Brief Description of the Drawings
The above and other objects, features, and advantages of the present invention will become more apparent from a more detailed description of embodiments of the present invention in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention, constitute a part of the specification, serve together with the embodiments to explain the present invention, and do not constitute a limitation of the present invention. In the drawings, the same reference numerals generally denote the same components or steps.
FIG. 1 is a flowchart illustrating a living body detection method according to an embodiment of the present invention.
FIG. 2 is a functional block diagram illustrating a living body detection system according to an embodiment of the present invention.
FIG. 3 is a schematic diagram further illustrating a first example living body detection system according to an embodiment of the present invention.
FIG. 4 is a flowchart further illustrating a first example living body detection method according to an embodiment of the present invention.
FIG. 5 is a schematic diagram further illustrating a second example living body detection system according to an embodiment of the present invention.
FIG. 6 is a flowchart further illustrating a second example living body detection method according to an embodiment of the present invention.
FIG. 7 is a schematic diagram further illustrating a third example living body detection system according to an embodiment of the present invention.
FIG. 8 is a flowchart further illustrating a third example living body detection method according to an embodiment of the present invention.
FIG. 9 is a schematic block diagram illustrating a living body detection system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, example embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention described in the present disclosure without creative effort shall fall within the protection scope of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a living body detection method according to an embodiment of the present invention. As shown in FIG. 1, the living body detection method according to an embodiment of the present invention includes the following steps.
In step S101, the face of the object to be detected is illuminated with a laser light source. As will be described in detail below, in one embodiment of the present invention, the laser light source may be a light source that generates a point-like light spot, or a light source that generates a plurality of point-like light spots. Alternatively, the laser light source is a laser light source whose light exit direction is adjustable. Thereafter, the processing proceeds to step S102.
In step S102, an image of the face of the object to be detected illuminated by the laser light source is captured. As will be described in detail below, the image of the face of the object to be detected will include the light spot formed by subsurface scattering. Thereafter, the processing proceeds to step S103.
In step S103, the spot area S of the image of the face of the object to be detected is calculated. The specific processing of calculating the spot area will be described in detail below. Depending on the laser light source employed, the spot area may represent the area of a point-like spot or the thickness of a stripe-like spot. Thereafter, the processing proceeds to step S104.
In step S104, it is determined whether the spot area S calculated in step S103 is greater than a first predetermined area threshold T1. The first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
If an affirmative result is obtained in step S104, that is, the spot area S is greater than the first predetermined area threshold T1, the processing proceeds to step S105. In step S105, it is determined that the object to be detected is a living body.
Conversely, if a negative result is obtained in step S104, that is, the spot area S is not greater than the first predetermined area threshold T1, the processing proceeds to step S106. In step S106, it is determined that the object to be detected is not a living body.
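Steps S101 to S106 can be combined into one end-to-end decision sketch (pure Python; the grayscale threshold `t1` and area threshold `T1` would in practice come from the statistical training described above, so the values below are placeholders):

```python
from collections import deque

def is_live(image, t1, T1):
    """Liveness decision: binarize the face image, find the largest
    4-connected bright component (the laser spot), and compare its
    area S with the first predetermined area threshold T1."""
    h, w = len(image), len(image[0])
    Ib = [[1 if p >= t1 else 0 for p in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    S = 0
    for i in range(h):
        for j in range(w):
            if Ib[i][j] and not seen[i][j]:
                seen[i][j] = True
                q, size = deque([(i, j)]), 0
                while q:
                    x, y = q.popleft()
                    size += 1
                    for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                        if 0 <= nx < h and 0 <= ny < w and Ib[nx][ny] and not seen[nx][ny]:
                            seen[nx][ny] = True
                            q.append((nx, ny))
                S = max(S, size)
    return S > T1

# A broad spot (skin-like subsurface scattering) versus a tiny one (photo-like).
skin = [[0, 200, 200, 200], [0, 200, 200, 200], [0, 200, 200, 0]]
photo = [[0, 0, 0, 0], [0, 200, 0, 0], [0, 0, 0, 0]]
live, fake = is_live(skin, t1=128, T1=4), is_live(photo, t1=128, T1=4)
```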
The above living body detection method according to the embodiment of the present invention is based on the principle that human skin produces subsurface scattering of light and therefore forms a relatively large light spot after receiving light, whereas the subsurface scattering of objects such as photos, screens, and masks is comparatively weak and forms only a smaller spot. Under the same illumination conditions, the predetermined area threshold should be smaller than the spot formed by subsurface scattering of light by human skin, and larger than the spot formed by the subsurface scattering of objects such as photos, screens, and masks. The actual value of the predetermined area threshold may be set according to actual conditions and is not limited here. By judging the relationship between the obtained spot area and the predetermined area threshold, an object to be detected whose spot area is greater than the predetermined area threshold is determined to be a living body.
In addition, in the living body detection method according to the embodiment of the present invention, since a laser light source is used and the user is not required to cooperate with constrained actions, a safety control mechanism needs to be provided in order to avoid special situations such as the laser light source illuminating the eyes of the object to be detected or the object to be detected leaving the detection region.
The above living body detection method according to the embodiment of the present invention performs living body detection by detecting the size of the subsurface-scattering light spot after the object to be detected is illuminated by the laser light source, and thus can effectively guard against photo, 3D face model, and mask attacks.
Hereinafter, a living body detection system performing the above living body detection method will be further described with reference to FIG. 2.
FIG. 2 is a functional block diagram illustrating a living body detection system according to an embodiment of the present invention. As shown in FIG. 2, a living body detection system 20 according to an embodiment of the present invention includes a laser light source unit 21, an image capturing unit 22, and a living body detection unit 23. The laser light source unit 21, the image capturing unit 22, and the living body detection unit 23 may be configured by, for example, hardware, software, firmware, or any feasible combination thereof.
Specifically, the laser light source unit 21 is configured to emit illumination light to illuminate the face of the object to be detected. In one embodiment of the present invention, the laser light source may be a point laser with a power of 5 mW and an output wavelength of 850 nm. The position and angle of the laser light source are arranged so that it can illuminate a suitable part of the face of the object to be detected, such as the lips, cheeks, nose, and other places where the facial skin is relatively exposed and flat. As described above, the laser light source unit 21 may be a light source that generates a point-like light spot, or a light source that generates a plurality of point-like light spots. Alternatively, the laser light source unit 21 is a laser light source whose light exit direction is adjustable.
The image capturing unit 22 is configured to capture an image of the face of the object to be detected illuminated by the laser light source unit 21. In one embodiment of the present invention, the image capturing unit 22 is configured corresponding to the laser light source unit 21. For example, the image capturing unit 22 is a CCD imaging module equipped with an 850 nm narrow-band filter, and the exposure parameters of the image capturing unit 22 enable it to capture the light spot formed by subsurface scattering. The image capturing unit 22 may be physically separated from the subsequent living body detection unit 23, or physically located at the same position or even inside the same housing. In the case where the image capturing unit 22 is physically separated from the subsequent living body detection unit 23, the image capturing unit 22 further transmits the acquired image of the face of the object to be detected to the subsequent living body detection unit 23 via a wired or wireless manner. In the case where the image capturing unit 22 and the subsequent living body detection unit 23 are physically located at the same position or even inside the same housing, the image of the face of the object to be detected is transmitted to the living body detection unit 23 via an internal bus. Before the video data is transmitted via a wired or wireless manner or via the internal bus, it may be encoded and compressed into a video data packet in a predetermined format in order to reduce the traffic and bandwidth required for the transmission.
The living body detection unit 23 is configured to determine whether the object to be detected is a living body. Specifically, the living body detection unit 23 calculates the spot area S of the image of the face of the object to be detected and compares the spot area S with the first predetermined area threshold T1, and determines that the object to be detected is a living body if the spot area S is greater than the first predetermined area threshold T1.
The living body detection method and the living body detection system according to the embodiments of the present invention have been outlined above with reference to FIGS. 1 and 2. Hereinafter, first to third example living body detection methods and living body detection systems according to embodiments of the present invention will be further described with reference to FIGS. 3 to 8.
FIG. 3 is a schematic diagram further illustrating a first example living body detection system according to an embodiment of the present invention.
As shown in FIG. 3, the positions of the living body detection system 20 and the object 30 to be detected are relatively fixed. For example, the living body detection system 20 shown in FIG. 3 is a face time-attendance machine with a short working distance. Specifically, the laser light source unit 21 in the living body detection system 20 shown in FIG. 3 is a light source unit that generates a point-like light spot, and the positions of the laser light source unit 21 and the object 30 to be detected are relatively fixed. The laser light source unit 21 emits illumination light to illuminate the face of the object to be detected, for example the lips, cheeks, nose, and so on. The illustrated image capturing unit 22 captures an image of the face of the object to be detected illuminated by the laser light source unit 21. The illustrated living body detection unit 23 determines whether the object to be detected is a living body.
FIG. 4 is a flowchart further illustrating a first example living body detection method according to an embodiment of the present invention. As shown in FIG. 4, the first example living body detection method according to an embodiment of the present invention is applied to the first example living body detection system according to an embodiment of the present invention shown in FIG. 3, and includes the following steps.
In step S401, the face of the object to be detected is illuminated with a laser light source. In the first example of the embodiment of the present invention, the laser light source may be a light source that generates a point-like light spot. Thereafter, the processing proceeds to step S402.
In step S402, an image of the face of the object to be detected illuminated by the laser light source is captured. Thereafter, the processing proceeds to step S403.
In step S403, image matrix data of the image of the face of the object to be detected is acquired. In the first example of the embodiment of the present invention, the image matrix data of the image of the face of the object to be detected may be represented as I[x, y]. Thereafter, the processing proceeds to step S404.
In step S404, binarization conversion is performed on the image matrix data based on a first predetermined grayscale threshold, so as to convert pixel points of the image matrix data whose gray values are greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value, and to convert pixel points whose gray values are smaller than the first predetermined threshold into second-type pixel points having a second gray value, obtaining first binarized image matrix data, the first gray value being greater than the second gray value. The first binarized image matrix data can be expressed as:
Ib[x, y] = 1 if I[x, y] ≥ t1, and Ib[x, y] = 0 otherwise,
where t1 is the first predetermined grayscale threshold. Thereafter, the processing proceeds to step S405.
In step S405, the largest connected set of the first-type pixel points in the first binarized image matrix data is determined. In one embodiment of the present invention, a breadth-first search (BFS) algorithm is applied to the first binarized image matrix data Ib to compute connected components, and the largest connected component is selected. Thereafter, the processing proceeds to step S406.
In step S406, the area corresponding to the largest connected set of first-type pixel points is calculated as the spot area S. Thereafter, the processing proceeds to step S407.
In step S407, it is determined whether the spot area S calculated in step S406 is greater than the first predetermined area threshold T1. The first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
If an affirmative result is obtained in step S407, that is, the spot area S is greater than the first predetermined area threshold T1, the processing proceeds to step S408. In step S408, it is determined that the object to be detected is a living body.
Conversely, if a negative result is obtained in step S407, that is, the spot area S is not greater than the first predetermined area threshold T1, the processing proceeds to step S409. In step S409, it is determined that the object to be detected is not a living body.
如上所述,为了避免出现激光光源照射待检测对象的眼睛,或者待检测对象偏离检测区域等特殊情况,根据本发明实施例的第一示例活体检测系统中设置安全控制机制。
第一安全控制机制
在捕获经由所述激光光源照射的待检测对象的脸部的图像之后,类似于上述步骤S404的处理,基于第二预定灰度阈值t2,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第二预定阈值t2的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第二预定阈值t2的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第二二值化图像矩阵数据。如果所述第二二值化图像矩阵数据中所述第一类像素点的数目超过第一预定数目阈值s1,即不正常地出现多个亮点,则停止照射。
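该机制的判定逻辑可以草拟如下(纯Python示意;t2与s1的具体取值需按实际系统标定,此处仅为假设参数):

```python
def too_many_bright_pixels(image, t2, s1):
    """第一安全控制机制的草图:统计图像矩阵中灰度值大于等于
    第二预定灰度阈值t2的像素点数目,若超过第一预定数目阈值s1
    (即不正常地出现多个亮点),返回True以指示应停止照射。"""
    count = sum(1 for row in image for v in row if v >= t2)
    return count > s1
```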
第二安全控制机制
在捕获经由所述激光光源照射的待检测对象的脸部的图像之后,类似于上述步骤S404的处理,基于第三预定灰度阈值t3,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第三预定阈值t3的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第三预定阈值t3的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第三二值化图像矩阵数据。计算所述第三二值化图像矩阵数据中所述第一类像素点所对应的第三重心位置(mx,my)。如果所述第三重心位置(mx,my)在预定第一区域阈值外(即,mx≥mx1或mx≤mx0,或者my≥my1或my≤my0),即亮点落在预定的图像采集区域外,则停止照射。
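重心位置与区域阈值的比较可以草拟如下(bounds按上文的(mx0, mx1, my0, my1)理解,参数组织方式为假设):

```python
def centroid_out_of_region(binary, high, bounds):
    """第二安全控制机制的草图:计算二值化图像中第一类像素点的
    重心位置(mx, my),若其落在预定第一区域阈值
    bounds = (mx0, mx1, my0, my1)之外,返回True以指示应停止照射。"""
    pts = [(x, y) for x, row in enumerate(binary)
           for y, v in enumerate(row) if v == high]
    if not pts:
        return False  # 没有亮点,交由其他安全机制处理
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    mx0, mx1, my0, my1 = bounds
    inside = mx0 < mx < mx1 and my0 < my < my1
    return not inside
```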
第三安全控制机制
在捕获经由所述激光光源照射的待检测对象的脸部的图像之后,确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定区域的预定像素点区域。例如,使用预先训练好的人脸检测器(诸如Haar Cascade)来获取脸部以及左右眼的位置。类似于上述步骤S404和S405的处理,确定最大数目的相互连通的所述第一类像素点,并计算其所对应的第一重心位置。如果所述第一重心位置在所述预定像素点区域(即,左右眼区域)内,则停止照射。
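若已由人脸检测器得到左右眼的矩形区域,第一重心位置是否落入其中可以示意如下(regions为假设的检测器输出格式,每个区域表示为(x0, y0, x1, y1)矩形):

```python
def centroid_in_forbidden_region(centroid, regions):
    """第三安全控制机制的草图:若最大连通亮斑的第一重心位置落入
    由人脸检测器给出的左右眼等预定像素点区域内,
    返回True以指示应停止照射。"""
    cx, cy = centroid
    return any(x0 <= cx <= x1 and y0 <= cy <= y1
               for x0, y0, x1, y1 in regions)
```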
根据本发明实施例的第一示例活体检测系统,配置有产生点状光斑的光源单元,在光源单元与待检测对象相对位置固定的场景中使用,其利用活体皮肤和其他材质的亚表面散射性质的不同进行活体检测,能够有效地防御照片、视频、面具攻击,并且无需用户特殊配合,增加了活体检测系统的安全性和易用性。
图5是进一步图示根据本发明实施例的第二示例活体检测系统的示意图。
如图5所示,活体检测系统50与待检测对象30的位置相对不固定。例如,图5所示的活体检测系统50为工作距离比图3中的人脸打卡器更远的门禁系统。所述激光光源51为产生多个点状光斑的光源,并且所述激光光源51与所述待检测对象30的位置相对变化。在本发明的一个实施例中,所述激光光源51由一个功率500mW、波长850nm的激光器与光栅构成。通过光栅,激光器投射出多个点状光斑,均匀地散布在待检测对象30可能存在的范围内,并且各个点的光斑范围不重合。所述图像捕获单元52捕获经由所述激光光源单元51照射的待检测对象的脸部的图像。所述活体检测单元53确定所述待检测对象是否为活体。
图6是进一步图示根据本发明实施例的第二示例活体检测方法的流程图。如图6所示,根据本发明实施例的第二示例活体检测方法应用于图5所示的根据本发明实施例的第二示例活体检测系统,其包括以下步骤。
图6中图示的步骤S601到S605分别与上述图4中图示的步骤S401到S405相同,在此将省略其重复描述。
在通过二值化待检测对象的脸部的图像,确定第一二值化图像矩阵数据中最大数目的相互连通的第一类像素点之后,在步骤S606中,判断该最大数目的相互连通的第一类像素点是否位于所述待检测对象的脸部的预定区域中。由于根据本发明实施例的第二示例活体检测系统中所述激光光源51为产生多个点状光斑的光源,如果在步骤S606中获得肯定结果,则表明所述激光光源51产生的多个点状光斑中的一个落入诸如嘴唇、脸颊、鼻子的合适区域内,处理进到步骤S607。
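步骤S606的判断可以草拟如下(此处以最大连通分量的全部像素点均落入预定矩形区域作为判定条件,具体准则为假设):

```python
def component_in_region(component, region):
    """步骤S606的判定草图:判断最大数目的相互连通的第一类像素点
    (以坐标列表表示)是否位于脸部预定区域
    region = (x0, y0, x1, y1)之中。"""
    x0, y0, x1, y1 = region
    return all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in component)
```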
在步骤S607中,计算所述最大数目的相互连通的所述第一类像素点所对应的面积作为所述光斑面积S。此后,处理进到步骤S608。
在步骤S608中,判断在步骤S607中计算的光斑面积S是否大于第一预定面积阈值T1。所述第一预定面积阈值T1是预先以大量人脸图像作为正样本、以照片、视频回放、纸片面具以及3D模型图像作为负样本,使用深度学习、支持向量机等统计学习方法确定的。
如果在步骤S608中获得肯定结果,即光斑面积S大于第一预定面积阈值T1,则处理进到步骤S609。在步骤S609中,确定待检测对象为活体。
相反地,如果在步骤S608中获得否定结果,即光斑面积S不大于第一预定面积阈值T1,则处理进到步骤S610。在步骤S610中,确定待检测对象为非活体。
返回步骤S606,如果在步骤S606中获得否定结果,即表明所述激光光源51产生的多个点状光斑没有落入诸如嘴唇、脸颊、鼻子的合适区域内,则处理返回步骤S602,以便继续捕获经由激光光源照射的待检测对象的脸部的图像。
与图3和图4所示的第一示例相同,根据本发明实施例的第二示例活体检测系统中也设置安全控制机制。
第一安全控制机制
与上述第一示例的第一安全控制机制相同,在捕获经由所述激光光源照射的待检测对象的脸部的图像之后,类似于上述步骤S404的处理,基于第二预定灰度阈值t2,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第二预定阈值t2的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第二预定阈值t2的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第二二值化图像矩阵数据。如果所述第二二值化图像矩阵数据中所述第一类像素点的数目超过第一预定数目阈值s1,即不正常地出现多个亮点,则停止照射。
第二安全控制机制
在步骤S607中计算获得所述最大数目的相互连通的所述第一类像素点所对应的面积作为所述光斑面积S之后,将所述光斑面积S与第二预定面积阈值T2比较,第二预定面积阈值T2大于第一预定面积阈值T1。如果所述光斑面积S大于第二预定面积阈值T2,即表示存在过大面积的光斑,则停止照射。
第三安全控制机制
与上述第一示例的第三安全机制相类似,在捕获经由所述激光光源照射的待检测对象的脸部的图像之后,确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定区域的预定像素点区域。例如,使用预先训练好的人脸检测器(诸如Haar Cascade)来获取脸部以及左右眼的位置。进一步地,类似于上述步骤S605的处理,确定距离左右眼最近的光斑的位置,计算最近的光斑距离左右眼的距离D。如果距离D小于预定距离阈值d,即表示光斑距离待检测对象的眼部过近,则停止照射。
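光斑与眼部距离的安全判定可以示意如下(采用欧氏距离;眼部位置来自人脸检测器,距离阈值d为假设参数):

```python
import math

def spot_too_close_to_eyes(spot_center, eye_points, d):
    """第二示例第三安全控制机制的草图:计算距离左右眼最近的光斑
    重心与眼部预定像素点之间的最小欧氏距离D,
    若D小于预定距离阈值d则返回True以指示应停止照射。"""
    D = min(math.dist(spot_center, p) for p in eye_points)
    return D < d
```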
根据本发明实施例的第二示例活体检测系统,配置有产生多个点状光斑的光源单元,在光源单元与待检测对象相对位置不固定的场景中使用,其利用活体皮肤和其他材质的亚表面散射性质的不同进行活体检测,能够有效地防御照片、视频、面具攻击,并且无需用户特殊配合,增加了活体检测系统的安全性和易用性。
图7是进一步图示根据本发明实施例的第三示例活体检测系统的示意图。
如图7所示,活体检测系统70与待检测对象30的位置相对不固定,并且相互位置可以存在较大变化。例如,图7所示的活体检测系统70为工作距离比图3中的人脸打卡器和图5中的门禁系统更远的门禁系统或者监控系统。所述激光光源单元71为可调整光线出射方向的激光光源。在本发明的一个实施例中,所述激光光源71由一个功率20mW、波长850nm的激光器与出射方向驱动单元(未图示)构成。活体检测单元73可以根据图像捕获单元72捕获的经由所述激光光源单元71照射的待检测对象的脸部的初步图像,使用预先训练好的人脸检测器(诸如Haar Cascade)来获取脸部以及脸部的合适部位的位置,以及当前光斑的位置。通过活体检测单元73(可替代地,可以是单独配置的光斑位置跟踪单元)控制出射方向驱动单元调整所述激光光源单元71的光线出射方向,以使得光斑位置落在合适部位上。
图8是进一步图示根据本发明实施例的第三示例活体检测方法的流程图。如图8所示,根据本发明实施例的第三示例活体检测方法应用于图7所示的根据本发明实施例的第三示例活体检测系统,其包括以下步骤。
在步骤S801中,使用激光光源照射待检测对象的脸部。在本发明实施例的第三示例中,所述激光光源为可调整光线出射方向的激光光源。此后,处理进到步骤S802。
在步骤S802中,获取经由所述激光光源照射的待检测对象的脸部的初步图像矩阵数据。此后,处理进到步骤S803。
在步骤S803中,类似于步骤S404中的处理,基于第一预定灰度阈值,对所述初步图像矩阵数据执行二值化转换,以将所述初步图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述初步图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,以获得二值化初步图像矩阵数据。此后,处理进到步骤S804。
在步骤S804中,确定所述二值化初步图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置。在本发明的一个实施例中,所述二值化初步图像矩阵数据中最大数目的相互连通的所述第一类像素点的第一重心位置是当前光斑所在位置。此后,处理进到步骤S805。
在步骤S805中,确定所述初步图像中对应于所述待检测对象的脸部的预定区域的第二重心位置。在本发明的一个实施例中,脸部的预定区域的第二重心位置是脸部合适部位的位置。此后,处理进到步骤S806。
在步骤S806中,调整所述激光光源照射的光线出射方向,以使得所述第一重心位置与所述第二重心位置重合。即,调整所述激光光源照射的光线出射方向,使得光斑位置落入脸部合适部位上。此后,处理进到步骤S807。
在步骤S807中,获取经由调整所述光线出射方向的所述激光光源照射的待检测对象的脸部的图像的图像矩阵数据。在步骤S807中获取的待检测对象的脸部的图像的图像矩阵数据是最终用于活体检测的图像矩阵数据。
此后,处理进到如图4所示的步骤S404,并且执行如上参照图4所示的步骤S404到S409,以基于待检测对象的脸部的图像的图像矩阵数据执行活体检测。
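步骤S804到S806的闭环调整过程可以草拟如下(比例控制仅为一种假设的实现方式;实际系统中每次调整后应重新捕获图像,以测量当前光斑的第一重心位置):

```python
def align_spot(spot, target, gain=0.5, tol=1e-3, max_iter=100):
    """步骤S806的调整草图:反复调整激光光线出射方向,
    使光斑的第一重心位置spot逼近脸部预定区域的第二重心位置target。
    gain、tol、max_iter均为假设的控制参数。"""
    for _ in range(max_iter):
        dx, dy = target[0] - spot[0], target[1] - spot[1]
        if abs(dx) < tol and abs(dy) < tol:
            break  # 两个重心位置已重合(在容差范围内)
        # 按偏差的比例调整出射方向,等效于移动光斑位置
        spot = (spot[0] + gain * dx, spot[1] + gain * dy)
    return spot
```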
与图3到图6所示的第一和第二示例相同,根据本发明实施例的第三示例活体检测系统中也设置安全控制机制。
例如,在步骤S807之后,计算相互连通的所述第一类像素点所对应的多个光斑面积。如果所述多个光斑面积中的一个大于第二预定面积阈值T2或者所述多个光斑面积中的每一个小于第三预定面积阈值T3,即存在过大或者过小的光斑,则停止照射。
此外,可以确定待检测对象的脸部的图像的平均像素值,如果平均像素值不在预先设置的范围内,即存在图像整体过亮或者过暗的情况,则停止照射。
进一步地,还可以确定待检测对象的脸部的图像中脸部的大小,如果脸部的大小不在预先设置的范围内,即存在脸部的大小过大或者过小的情况,则停止照射。
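上述平均像素值与脸部大小两项安全检查可以合并示意如下(mean_range与size_range的数值均为假设,需按实际部署标定;face_box为假设的人脸检测器输出矩形):

```python
import numpy as np

def exposure_and_face_size_ok(image, face_box,
                              mean_range=(30, 220), size_range=(40, 400)):
    """第三示例安全控制机制的草图:若图像平均像素值或脸部大小
    不在预先设置的范围内(过亮/过暗、过大/过小),
    返回False以指示应停止照射。"""
    mean_ok = mean_range[0] <= float(np.mean(image)) <= mean_range[1]
    x0, y0, x1, y1 = face_box
    side = max(x1 - x0, y1 - y0)  # 以边长近似衡量脸部大小
    size_ok = size_range[0] <= side <= size_range[1]
    return bool(mean_ok and size_ok)
```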
根据本发明实施例的第三示例活体检测系统,配置有可调整光线出射方向的光源单元以追踪待检测对象的合适部位,在光源单元与待检测对象相对位置不固定并且距离较远的场景中使用,其利用活体皮肤和其他材质的亚表面散射性质的不同进行活体检测,能够有效地防御照片、视频、面具攻击,并且无需用户特殊配合,增加了活体检测系统的安全性和易用性。
图9是图示根据本发明实施例的活体检测系统的示意性框图。如图9所示,根据本发明实施例的活体检测系统9包括:处理器91、存储器92、以及在所述存储器92中存储的计算机程序指令93。
所述计算机程序指令93在所述处理器91运行时可以实现根据本发明实施例的活体检测系统的各个功能模块的功能,并且/或者可以执行根据本发明实施例的活体检测方法的各个步骤。
具体地,在所述计算机程序指令93被所述处理器91运行时执行以下步骤:使用激光光源照射待检测对象的脸部;捕获经由所述激光光源照射的待检测对象的脸部的图像;计算所述待检测对象的脸部的图像的光斑面积;以及比较所述光斑面积与第一预定面积阈值,如果所述光斑面积大于所述第一预定面积阈值,则确定所述待检测对象为活体。
此外,在所述计算机程序指令93被所述处理器91运行时执行所述计算所述待检测对象的脸部的图像的光斑面积的步骤包括:获取所述待检测对象的脸部的图像的图像矩阵数据;基于第一预定灰度阈值,对所述图像矩阵数据执行二值化转换,以将所述图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的第一类像素点,将所述图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的第二类像素点,获得第一二值化图像矩阵数据,所述第一灰度值大于所述第二灰度值;确定所述第一二值化图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的面积作为所述光斑面积。
此外,在所述计算机程序指令93被所述处理器91运行时执行获取经由所述光源照射的待检测对象的脸部的图像的步骤包括:获取经由所述激光光源照射的待检测对象的脸部的图像,确定所述图像中对应于所述待检测对象的预定区域的区域图像作为所述待检测对象的脸部的图像。
此外,在所述计算机程序指令93被所述处理器91运行时执行所述获取所述待检测对象的脸部的图像的图像矩阵数据的步骤包括:获取经由所述激光光源照射的待检测对象的脸部的初步图像矩阵数据;基于第一预定灰度阈值,对所述初步图像矩阵数据执行二值化转换,以将所述初步图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述初步图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,以获得二值化初步图像矩阵数据;确定所述二值化初步图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;确定所述初步图像中对应于所述待检测对象的脸部的预定区域的第二重心位置;调整所述激光光源照射的光线出射方向,以使得所述第一重心位置与所述第二重心位置重合,获取经由调整所述光线出射方向的所述激光光源照射的待检测对象的脸部的图像的图像矩阵数据。
根据本发明实施例的活体检测系统中的各模块可以通过根据本发明实施例的活体检测系统中的处理器运行在存储器中存储的计算机程序指令来实现,或者可以在根据本发明实施例的计算机程序产品的计算机可读存储介质中存储的计算机指令被计算机运行时实现。
所述计算机可读存储介质可以是一个或多个计算机可读存储介质的任意组合,例如一个计算机可读存储介质包含用于随机地生成动作指令序列的计算机可读的程序代码,另一个计算机可读存储介质包含用于进行人脸活动识别的计算机可读的程序代码。
所述计算机可读存储介质例如可以包括智能电话的存储卡、平板电脑的存储部件、个人计算机的硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器、或者上述存储介质的任意组合。
在上面详细描述的本发明的示例实施例仅仅是说明性的,而不是限制性的。本领域技术人员应该理解,在不脱离本发明的原理和精神的情况下,可对这些实施例进行各种修改,组合或子组合,并且这样的修改应落入本发明的范围内。

Claims (23)

  1. 一种活体检测方法,包括:
    使用激光光源照射待检测对象的脸部;
    捕获经由所述激光光源照射的待检测对象的脸部的图像;
    计算所述待检测对象的脸部的图像的光斑面积;以及
    比较所述光斑面积与第一预定面积阈值,如果所述光斑面积大于所述第一预定面积阈值,则确定所述待检测对象为活体。
  2. 如权利要求1所述的活体检测方法,其中所述计算所述待检测对象的脸部的图像的光斑面积包括:
    获取所述待检测对象的脸部的图像的图像矩阵数据;
    基于第一预定灰度阈值,对所述图像矩阵数据执行二值化转换,以将所述图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的第一类像素点,将所述图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的第二类像素点,获得第一二值化图像矩阵数据,所述第一灰度值大于所述第二灰度值;
    确定所述第一二值化图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的面积作为所述光斑面积。
  3. 如权利要求2所述的活体检测方法,其中所述激光光源为产生点状光斑的光源,并且所述激光光源与所述待检测对象的位置相对固定。
  4. 如权利要求2所述的活体检测方法,其中所述激光光源为产生多个点状光斑的光源,并且所述激光光源与所述待检测对象的位置相对变化,所述捕获经由所述光源照射的待检测对象的脸部的图像包括:
    捕获经由所述激光光源照射的待检测对象的脸部的图像,确定所述图像中对应于所述待检测对象的预定区域的区域图像作为所述待检测对象的脸部的图像。
  5. 如权利要求2所述的活体检测方法,其中所述激光光源为可调整光线出射方向的激光光源,并且所述激光光源与所述待检测对象的位置相对变化,所述获取所述待检测对象的脸部的图像的图像矩阵数据包括:
    获取经由所述激光光源照射的待检测对象的脸部的初步图像矩阵数据;
    基于第一预定灰度阈值,对所述初步图像矩阵数据执行二值化转换,以将所述初步图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述初步图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,以获得二值化初步图像矩阵数据;
    确定所述二值化初步图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;
    确定所述初步图像中对应于所述待检测对象的脸部的预定区域的第二重心位置;
    调整所述激光光源照射的光线出射方向,以使得所述第一重心位置与所述第二重心位置重合,获取经由调整所述光线出射方向的所述激光光源照射的待检测对象的脸部的图像的图像矩阵数据。
  6. 如权利要求3所述的活体检测方法,还包括:
    基于第二预定灰度阈值,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第二预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第二预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第二二值化图像矩阵数据;
    如果所述第二二值化图像矩阵数据中所述第一类像素点的数目超过第一预定数目阈值,则停止照射。
  7. 如权利要求3所述的活体检测方法,还包括:
    基于第三预定灰度阈值,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第三预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第三预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第三二值化图像矩阵数据;
    计算所述第三二值化图像矩阵数据中所述第一类像素点所对应的第三重心位置;
    如果所述第三重心位置在预定第一区域阈值外,则停止照射。
  8. 如权利要求3所述的活体检测方法,还包括:
    确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定区域的预定像素点区域;
    计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;
    如果所述第一重心位置在所述预定像素点区域内,则停止照射。
  9. 如权利要求4所述的活体检测方法,还包括:
    比较所述光斑面积与第二预定面积阈值,如果所述光斑面积大于所述第二预定面积阈值,则停止照射。
  10. 如权利要求4所述的活体检测方法,还包括:
    确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定点的预定像素点;
    计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;
    计算所述第一重心位置与所述预定像素点的距离,如果所述距离小于预定距离阈值,则停止照射。
  11. 如权利要求5所述的活体检测方法,还包括:
    计算相互连通的所述第一类像素点所对应的多个光斑面积;
    如果所述多个光斑面积中的一个大于第二预定面积阈值或者所述多个光斑面积中的每一个小于第三预定面积阈值,则停止照射。
  12. 一种活体检测系统,包括:
    激光光源单元,用于发射照射光线以照射待检测对象的脸部;
    图像捕获单元,用于捕获经由所述激光光源单元照射的待检测对象的脸部的图像;
    活体检测单元,用于确定所述待检测对象是否为活体,
    其中,所述活体检测单元计算所述待检测对象的脸部的图像的光斑面积,并且比较所述光斑面积与第一预定面积阈值,如果所述光斑面积大于所述第一预定面积阈值,则确定所述待检测对象为活体。
  13. 如权利要求12所述的活体检测系统,其中所述活体检测单元获取所述待检测对象的脸部的图像的图像矩阵数据;
    基于第一预定灰度阈值,对所述图像矩阵数据执行二值化转换,以将所述图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的第一类像素点,将所述图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的第二类像素点,获得第一二值化图像矩阵数据,所述第一灰度值大于所述第二灰度值;
    确定所述第一二值化图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的面积作为所述光斑面积。
  14. 如权利要求13所述的活体检测系统,其中所述激光光源单元为产生点状光斑的光源单元,并且所述激光光源单元与所述待检测对象的位置相对固定。
  15. 如权利要求13所述的活体检测系统,其中所述激光光源单元为产生多个点状光斑的光源单元,并且所述激光光源单元与所述待检测对象的位置相对变化,
    所述图像捕获单元捕获经由所述激光光源单元照射的待检测对象的脸部的图像,
    所述活体检测单元确定所述图像中对应于所述待检测对象的预定区域的区域图像作为所述待检测对象的脸部的图像。
  16. 如权利要求13所述的活体检测系统,
    其中所述激光光源单元为可调整光线出射方向的激光光源,并且所述激光光源与所述待检测对象的位置相对变化,
    所述活体检测单元获取经由所述激光光源单元照射的待检测对象的脸部的初步图像矩阵数据;基于第一预定灰度阈值,对所述初步图像矩阵数据执行二值化转换,以将所述初步图像矩阵数据中具有大于等于所述第一预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述初步图像矩阵数据中具有小于所述第一预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,以获得二值化初步图像矩阵数据;确定所述二值化初步图像矩阵数据中最大数目的相互连通的所述第一类像素点,计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;确定所述初步图像中对应于所述待检测对象的脸部的预定区域的第二重心位置;
    所述活体检测单元控制所述激光光源单元调整所述光线出射方向,以使得所述第一重心位置与所述第二重心位置重合,获取经由调整所述光线出射方向的所述激光光源单元照射的待检测对象的脸部的图像矩阵数据。
  17. 如权利要求14所述的活体检测系统,其中所述活体检测单元还用于:
    基于第二预定灰度阈值,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第二预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第二预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第二二值化图像矩阵数据;
    如果所述第二二值化图像矩阵数据中所述第一类像素点的数目超过第一预定数目阈值,则控制所述激光光源单元停止照射。
  18. 如权利要求14所述的活体检测系统,其中所述活体检测单元还用于:
    基于第三预定灰度阈值,对所述图像矩阵数据执行所述二值化转换,以将所述图像矩阵数据中具有大于等于所述第三预定阈值的灰度值的像素点转换为具有第一灰度值的所述第一类像素点,将所述图像矩阵数据中具有小于所述第三预定阈值的灰度值的像素点转换为具有第二灰度值的所述第二类像素点,获得第三二值化图像矩阵数据;
    计算所述第三二值化图像矩阵数据中所述第一类像素点所对应的第三重心位置;
    如果所述第三重心位置在预定第一区域阈值外,则控制所述激光光源单元停止照射。
  19. 如权利要求14所述的活体检测系统,其中所述活体检测单元还用于:
    确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定区域的预定像素点区域;
    计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;
    如果所述第一重心位置在所述预定像素点区域内,则控制所述激光光源单元停止照射。
  20. 如权利要求15所述的活体检测系统,其中所述活体检测单元还用于:
    比较所述光斑面积与第二预定面积阈值,如果所述光斑面积大于所述第二预定面积阈值,则控制所述激光光源单元停止照射。
  21. 如权利要求15所述的活体检测系统,其中所述活体检测单元还用于:
    确定所述图像矩阵数据中对应于所述待检测对象的脸部的预定点的预定像素点;
    计算所述最大数目的相互连通的所述第一类像素点所对应的第一重心位置;
    计算所述第一重心位置与所述预定像素点的距离,如果所述距离小于预定距离阈值,则控制所述激光光源单元停止照射。
  22. 如权利要求16所述的活体检测系统,其中所述活体检测单元还用于:
    计算相互连通的所述第一类像素点所对应的多个光斑面积;
    如果所述多个光斑面积中的一个大于第二预定面积阈值或者所述多个光斑面积中的每一个小于第三预定面积阈值,则控制所述激光光源单元停止照射。
  23. 一种计算机程序产品,包括计算机可读存储介质,在所述计算机可读存储介质上存储了计算机程序指令,所述计算机程序指令在被计算机运行时执行以下步骤:
    获取经由激光光源照射的待检测对象的脸部的图像;
    计算所述待检测对象的脸部的图像的光斑面积;以及
    比较所述光斑面积与第一预定面积阈值,如果所述光斑面积大于所述第一预定面积阈值,则确定所述待检测对象为活体。
PCT/CN2015/080963 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品 WO2016197297A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2015/080963 WO2016197297A1 (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品
US15/580,210 US10614291B2 (en) 2015-06-08 2015-06-08 Living body detection method, living body detection system and computer program product
CN201580000335.6A CN105637532B (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/080963 WO2016197297A1 (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品

Publications (1)

Publication Number Publication Date
WO2016197297A1 true WO2016197297A1 (zh) 2016-12-15

Family

ID=56050768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080963 WO2016197297A1 (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测系统以及计算机程序产品

Country Status (3)

Country Link
US (1) US10614291B2 (zh)
CN (1) CN105637532B (zh)
WO (1) WO2016197297A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664880A (zh) * 2017-03-27 2018-10-16 三星电子株式会社 活性测试方法和设备
CN111046703A (zh) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 人脸防伪检测方法、装置及多目相机
CN112084980A (zh) * 2020-09-14 2020-12-15 北京数衍科技有限公司 行人的脚步状态识别方法和装置

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912986B (zh) 2016-04-01 2019-06-07 北京旷视科技有限公司 一种活体检测方法和系统
CN107992794B (zh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 一种活体检测方法、装置和存储介质
CN108363939B (zh) * 2017-01-26 2022-03-04 阿里巴巴集团控股有限公司 特征图像的获取方法及获取装置、用户认证方法
CN107451556B (zh) * 2017-07-28 2021-02-02 Oppo广东移动通信有限公司 检测方法及相关产品
WO2019171827A1 (ja) * 2018-03-08 2019-09-12 ソニー株式会社 情報処理装置、情報処理方法、およびプログラム
US10867506B2 (en) * 2018-03-16 2020-12-15 Sean Michael Siembab Surrounding intelligent motion sensor with adaptive recognition
US10438477B1 (en) * 2018-03-16 2019-10-08 Sean Michael Siembab Surrounding intelligent motion sensor
JP7131118B2 (ja) * 2018-06-22 2022-09-06 富士通株式会社 認証装置、認証プログラム、認証方法
CN112639802A (zh) * 2018-09-18 2021-04-09 Oppo广东移动通信有限公司 用于生成伪结构光照射面部的方法、系统及存储介质
US10885363B2 (en) 2018-10-25 2021-01-05 Advanced New Technologies Co., Ltd. Spoof detection using structured light illumination
US10783388B2 (en) * 2018-10-26 2020-09-22 Alibaba Group Holding Limited Spoof detection using multiple image acquisition devices
CN111310528B (zh) * 2018-12-12 2022-08-12 马上消费金融股份有限公司 一种图像检测方法、身份验证方法、支付方法及装置
US11170242B2 (en) 2018-12-26 2021-11-09 Advanced New Technologies Co., Ltd. Spoof detection using dual-band fluorescence
US10970574B2 (en) 2019-02-06 2021-04-06 Advanced New Technologies Co., Ltd. Spoof detection using dual-band near-infrared (NIR) imaging
US11328043B2 (en) 2019-03-15 2022-05-10 Advanced New Technologies Co., Ltd. Spoof detection by comparing images captured using visible-range and infrared (IR) illuminations
CN110059638A (zh) * 2019-04-19 2019-07-26 中控智慧科技股份有限公司 一种身份识别方法及装置
WO2021046773A1 (zh) * 2019-09-11 2021-03-18 深圳市汇顶科技股份有限公司 人脸防伪检测方法、装置、芯片、电子设备和计算机可读介质
CN110781770B (zh) * 2019-10-08 2022-05-06 高新兴科技集团股份有限公司 基于人脸识别的活体检测方法、装置及设备
US11250282B2 (en) * 2019-11-14 2022-02-15 Nec Corporation Face spoofing detection using a physical-cue-guided multi-source multi-channel framework
CN113096059B (zh) * 2019-12-19 2023-10-31 合肥君正科技有限公司 一种车内监控相机排除夜晚光源干扰遮挡检测的方法
US11468712B2 (en) * 2020-01-09 2022-10-11 AuthenX Inc. Liveness detection apparatus, system and method
US20210334505A1 (en) * 2020-01-09 2021-10-28 AuthenX Inc. Image capturing system and method for capturing image
CN111401223B (zh) * 2020-03-13 2023-09-19 北京新氧科技有限公司 一种脸型对比方法、装置及设备
JP7383542B2 (ja) * 2020-03-24 2023-11-20 株式会社東芝 光検出器及び距離計測装置
CN113468920A (zh) * 2020-03-31 2021-10-01 深圳市光鉴科技有限公司 基于人脸光斑图像的活体检测方法、系统、设备及介质
CN111814659B (zh) * 2020-07-07 2024-03-29 杭州海康威视数字技术股份有限公司 一种活体检测方法、和系统
CN112633181B (zh) * 2020-12-25 2022-08-12 北京嘀嘀无限科技发展有限公司 数据处理方法、系统、装置、设备和介质
CN112766175B (zh) * 2021-01-21 2024-05-28 宠爱王国(北京)网络科技有限公司 活体检测方法、装置及非易失性存储介质
CN115205246B (zh) * 2022-07-14 2024-04-09 中国南方电网有限责任公司超高压输电公司广州局 换流阀电晕放电紫外图像特征提取方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5077803A (en) * 1988-09-16 1991-12-31 Fujitsu Limited Biological detecting system and fingerprint collating system employing same
WO2001001329A1 (en) * 1999-06-24 2001-01-04 British Telecommunications Public Limited Company Personal identification
CN1426760A (zh) * 2001-12-18 2003-07-02 中国科学院自动化研究所 基于活体虹膜的身份识别方法
CN102129558A (zh) * 2011-01-30 2011-07-20 哈尔滨工业大学 基于普尔钦斑分析的虹膜采集系统及虹膜采集方法

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10105707A (ja) * 1996-09-25 1998-04-24 Sony Corp 画像照合装置
JP2003075135A (ja) 2001-08-31 2003-03-12 Nec Corp 指紋画像入力装置および指紋画像による生体識別方法
CA2479664A1 (en) * 2004-09-24 2006-03-24 Edythe P. Lefeuvre Method and system for detecting image orientation
JP2007193729A (ja) * 2006-01-23 2007-08-02 Seiko Epson Corp 印刷装置、画像処理装置、印刷方法、および画像処理方法
JP4951291B2 (ja) * 2006-08-08 2012-06-13 株式会社日立メディアエレクトロニクス 生体認証装置
CN100573553C (zh) 2007-01-18 2009-12-23 中国科学院自动化研究所 基于薄板样条形变模型的活体指纹检测方法
WO2010082942A1 (en) * 2008-02-01 2010-07-22 Canfield Scientific, Incorporated Automatic mask design and registration and feature detection for computer-aided skin analysis
KR101307603B1 (ko) * 2009-04-13 2013-09-12 후지쯔 가부시끼가이샤 생체 정보 등록 장치, 생체 정보 등록 방법 및 생체 정보 등록용 컴퓨터 프로그램과 생체 인증 장치, 생체 인증 방법 및 생체 인증용 컴퓨터 프로그램
JP5365407B2 (ja) * 2009-08-17 2013-12-11 ソニー株式会社 画像取得装置及び画像取得方法
JP5507181B2 (ja) * 2009-09-29 2014-05-28 富士フイルム株式会社 放射線画像撮影装置及び放射線画像撮影装置の動作方法
CN103534664B (zh) * 2011-05-12 2016-08-31 苹果公司 存在感测
JP5831018B2 (ja) * 2011-07-29 2015-12-09 富士通株式会社 生体認証装置及び生体認証装置における利用者の手の位置の調整方法
FR2981769B1 (fr) * 2011-10-25 2013-12-27 Morpho Dispositif anti-fraude
JP5896792B2 (ja) * 2012-03-09 2016-03-30 キヤノン株式会社 非球面計測方法、非球面計測装置および光学素子加工装置
CN102860845A (zh) * 2012-08-30 2013-01-09 中国科学技术大学 活体动物体内的细胞的捕获、操控方法及相应的装置
JP5859934B2 (ja) * 2012-09-04 2016-02-16 富士フイルム株式会社 放射線撮影システム並びにその作動方法、および放射線画像検出装置並びにその作動プログラム
JP6091866B2 (ja) * 2012-11-30 2017-03-08 株式会社キーエンス 計測顕微鏡装置、画像生成方法及び計測顕微鏡装置操作プログラム並びにコンピュータで読み取り可能な記録媒体
JP6041669B2 (ja) * 2012-12-28 2016-12-14 キヤノン株式会社 撮像装置及び撮像システム
CN103393401B (zh) * 2013-08-06 2015-05-06 中国科学院光电技术研究所 一种双波前矫正器活体人眼视网膜高分辨力成像系统
JP6303332B2 (ja) * 2013-08-28 2018-04-04 富士通株式会社 画像処理装置、画像処理方法および画像処理プログラム
EP4250738A3 (en) * 2014-04-22 2023-10-11 Snap-Aid Patents Ltd. Method for controlling a camera based on processing an image captured by other camera
US20170119298A1 (en) * 2014-09-02 2017-05-04 Hong Kong Baptist University Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664880A (zh) * 2017-03-27 2018-10-16 三星电子株式会社 活性测试方法和设备
EP3382598A3 (en) * 2017-03-27 2018-12-19 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US11176392B2 (en) 2017-03-27 2021-11-16 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US11721131B2 (en) 2017-03-27 2023-08-08 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN108664880B (zh) * 2017-03-27 2023-09-05 三星电子株式会社 活性测试方法和设备
CN111046703A (zh) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 人脸防伪检测方法、装置及多目相机
CN111046703B (zh) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 人脸防伪检测方法、装置及多目相机
CN112084980A (zh) * 2020-09-14 2020-12-15 北京数衍科技有限公司 行人的脚步状态识别方法和装置
CN112084980B (zh) * 2020-09-14 2024-05-28 北京数衍科技有限公司 行人的脚步状态识别方法和装置

Also Published As

Publication number Publication date
CN105637532B (zh) 2020-08-14
CN105637532A (zh) 2016-06-01
US10614291B2 (en) 2020-04-07
US20180165512A1 (en) 2018-06-14

Similar Documents

Publication Publication Date Title
WO2016197297A1 (zh) 活体检测方法、活体检测系统以及计算机程序产品
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
EP2680191B1 (en) Facial recognition
EP2680192B1 (en) Facial recognition
Stein et al. Fingerphoto recognition with smartphone cameras
US20150161434A1 (en) Differentiating real faces from representations
KR102317180B1 (ko) 3차원 깊이정보 및 적외선정보에 기반하여 생체여부의 확인을 행하는 얼굴인식 장치 및 방법
JP2007280367A (ja) 顔照合装置
US11594076B2 (en) Remote biometric identification and lighting
KR101610525B1 (ko) 조도를 고려한 동공 검출 장치 및 그 방법
WO2019017080A1 (ja) 照合装置及び照合方法
KR101310040B1 (ko) 적응적 조명조절을 이용한 얼굴 인식장치 및 그 방법
Das et al. Face liveness detection based on frequency and micro-texture analysis
Garud et al. Face liveness detection
Ohki et al. Efficient spoofing attack detection against unknown sample using end-to-end anomaly detection
KR101704717B1 (ko) 홍채 인식 장치 및 그 동작 방법
KR20170076894A (ko) 디지털 이미지 판단시스템 및 그 방법, 이를 위한 애플리케이션 시스템
JP6896307B1 (ja) 画像判定方法および画像判定装置
KR102439216B1 (ko) 인공지능 딥러닝 모델을 이용한 마스크 착용 얼굴 인식 방법 및 서버
RU2791821C1 (ru) Биометрическая идентификационная система и способ биометрической идентификации
WO2023229498A1 (en) Biometric identification system and method for biometric identification
Hemalatha et al. A study of liveness detection in face biometric systems
CN116508055A (zh) 判定方法、判定程序以及信息处理装置
Devi et al. REAL TIME FACE LIVENESS DETECTION WITH IMAGE QUALITY AND TEXTURE PARAMETER.
El Nahal Mobile Multimodal Biometric System for Security

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15894567

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15580210

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15894567

Country of ref document: EP

Kind code of ref document: A1