CN108764121A - Method, computing device and readable storage medium for detecting a live subject - Google Patents

Method, computing device and readable storage medium for detecting a live subject Download PDF

Info

Publication number
CN108764121A
CN108764121A
Authority
CN
China
Prior art keywords
image
average gray
distance curve
gray
gray scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810510901.3A
Other languages
Chinese (zh)
Other versions
CN108764121B (en)
Inventor
Wang Xiaopeng (王晓鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EYESMART TECHNOLOGY Ltd
Original Assignee
Release Code Fusion (Shanghai) Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Release Code Fusion (Shanghai) Mdt Infotech Ltd
Priority to CN201810510901.3A
Publication of CN108764121A
Application granted
Publication of CN108764121B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting a live subject, suitable for execution in a computing device. The method comprises the steps of: receiving a face grayscale image of an object to be detected; extracting a detection region from the face grayscale image, the detection region including at least the eyes; quantizing the detection region to obtain a quantized image; extracting image features from the quantized image; and judging whether the object to be detected is a live subject based on the extracted image features. The invention also discloses a corresponding computing device and readable storage medium.

Description

Method, computing device and readable storage medium for detecting a live subject
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, computing device and readable storage medium for detecting a live subject.
Background technology
With the continuous growth of people's demand for and awareness of security, identity authentication technologies based on biometric recognition, such as fingerprint recognition, iris recognition and face recognition, have developed rapidly and are widely used. In practical applications, however, a variety of malicious attack means, such as photo spoofing, video spoofing and three-dimensional model spoofing, keep emerging and bring security risks to recognition systems. Among them, photo spoofing is low-cost and simple to carry out, and has become the most common attack. To defend against such attacks, judging whether a biometric feature comes from a living individual, i.e. liveness detection, is an indispensable step.
Current liveness detection technology mostly relies on human physiological characteristics. For example, liveness detection of a face may be based on information such as head rotation or the red-eye effect; such schemes either suffer from system complexity and high cost, or require the user's active cooperation, which degrades the user experience.
Therefore, a more advanced scheme for detecting live subjects is urgently needed.
Summary of the invention
To this end, the present invention provides a method, computing device and readable storage medium for detecting a live subject, in an effort to solve, or at least alleviate, at least one of the problems above.
According to an aspect of the invention, there is provided a method for detecting a live subject, suitable for execution in a computing device, the method comprising the steps of: receiving a face grayscale image of an object to be detected; extracting a detection region from the face grayscale image, the detection region including at least the eyes; quantizing the detection region to obtain a quantized image; extracting image features from the quantized image; and judging whether the object to be detected is a live subject based on the extracted image features.
Optionally, in the method according to the invention, the step of quantizing the detection region to obtain a quantized image includes: calculating the average gray value of the detection region; and, if the average gray value lies in a predetermined average gray interval, quantizing the pixels of the detection region whose gray values lie in the predetermined gray interval corresponding to that average gray interval, so that their gray values are quantized to a predetermined number of gray levels.
Optionally, in the method according to the invention, the predetermined average gray interval includes one or more of a first average gray interval, a second average gray interval, a third average gray interval and a fourth average gray interval, and the predetermined gray interval includes one or more of a first gray interval corresponding to the first average gray interval, a second gray interval corresponding to the second average gray interval, a third gray interval corresponding to the third average gray interval and a fourth gray interval corresponding to the fourth average gray interval.
Optionally, in the method according to the invention, the first average gray interval is [65, 100), the second average gray interval is [100, 145), the third average gray interval is [145, 180), and the fourth average gray interval is [180, 210); the first gray interval corresponding to the first average gray interval is [100, 200), the second gray interval corresponding to the second average gray interval is [130, 230), the third gray interval corresponding to the third average gray interval is [170, 255], and the fourth gray interval corresponding to the fourth average gray interval is [200, 255].
Optionally, in the method according to the invention, the image features include image contour features, and the step of extracting image contour features from the quantized image includes: extracting at least one pair of contour lines according to the boundaries between different gray levels in the quantized image, each pair of contour lines including a left contour line and a right contour line; and, for each pair of contour lines, calculating a distance curve describing how the distance between the left and right contour lines varies along the row direction of the quantized image.
Optionally, in the method according to the invention, the quantized image includes a left-side image and a right-side image, and the step of extracting at least one pair of contour lines according to the boundaries between different gray levels in the quantized image includes: for the boundary between each two gray levels, determining, in each row of pixels of the left-side image and of the right-side image respectively, the one or more pixels corresponding to that boundary, and extracting the pixel farthest from the perpendicular bisector between the eyes, so as to form respectively the left contour line and the right contour line of a pair of contour lines.
Optionally, in the method according to the invention, the image features include image gray features, and the step of extracting image gray features from the quantized image includes: calculating the gray-level histogram of the quantized image; and calculating the gray projection curve of the quantized image in the horizontal direction.
Optionally, in the method according to the invention, the step of extracting image gray features from the quantized image further includes: if the average gray value of the detection region lies in an average gray interval, calculating the average gray value of the nose or the eyes.
Optionally, in the method according to the invention, the average gray interval is [65, 130].
Optionally, in the method according to the invention, the step of judging whether the object to be detected is a live subject based on the extracted image features includes: based on the image contour features, evaluating at least one of the following decision conditions, where a distance curve takes the row direction of the quantized image as its horizontal axis and the distance between the left and right contour lines as its vertical axis:
- at the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to a lower gray-level boundary is less than the ordinate of the distance curve of the pair of contour lines corresponding to a higher gray-level boundary;
- for every distance curve, the curve has a first minimum on the first abscissa domain U1 = {x : |x − a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is the zeroth multiple of the eye spacing;
- for every distance curve, the curve has at least one local maximum;
- for every distance curve, the curve has a second maximum on the second abscissa domain U2 = {x : x > a}, and the difference between the abscissas corresponding to the second maximum and the first minimum is not greater than the first multiple of the eye spacing;
- for every distance curve, the first minimum is greater than the second multiple of the eye spacing and less than the third multiple of the eye spacing;
- for every distance curve, the second maximum is not greater than the fourth multiple of the eye spacing and not less than the fifth multiple of the eye spacing;
- for every distance curve, the difference between the second maximum and the first minimum is greater than the sixth multiple of the eye spacing;
- for every distance curve, on the third abscissa domain U3 = {x : b − δ3 < x < b}, the slope of the curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum and δ3 is the seventh multiple of the eye spacing;
- for every distance curve, on the fourth abscissa domain U4 = {x : b < x < b + δ4}, the slope of the curve varies monotonically within the predetermined slope range, where δ4 is the eighth multiple of the eye spacing;
- for every distance curve, on the fifth abscissa domain U5 = {x : |x − c| < δ5}, the slope of the curve lies within a second predetermined slope range, where c is the abscissa corresponding to the first minimum and δ5 is the ninth multiple of the eye spacing;
- for every two distance curves, on the sixth abscissa domain U6 = {x : c − δ6 < x < b}, the correlation between the two curves is greater than a predetermined correlation value, where δ6 is the tenth multiple of the eye spacing.
Optionally, in the method according to the invention, the step of judging whether the object to be detected is a live subject based on the extracted image features includes: based on the image gray features, evaluating at least one of the following decision conditions: the gray-level histogram of the quantized image is continuous; the correlation coefficient between the slope of the horizontal gray projection curve of the quantized image and the slope of the curve obtained by flipping that projection curve is greater than a predetermined coefficient.
Optionally, in the method according to the invention, if the average gray value of the detection region lies in the average gray interval, the following decision condition is evaluated based on the image gray features: the relationship between the average gray value of the nose or the eyes and the average gray value of the detection region conforms to a predetermined nose or eye gray-relation curve.
Optionally, in the method according to the invention, the step of judging whether the object to be detected is a live subject based on the extracted image features includes: for each decision condition, determining whether the image contour features or image gray features satisfy that decision condition, so as to obtain a score of the object to be detected for that condition; obtaining a total score of the object to be detected based on its scores for the individual decision conditions; and judging whether the object to be detected is a live subject based on the total score.
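A rough sketch of this score aggregation (equal weights and a 0.5 threshold are illustrative assumptions; the patent does not fix the weights or the threshold):

```python
def liveness_score(condition_results, weights=None, threshold=0.5):
    """Aggregate per-condition pass/fail results into a total score and a
    live/non-live decision. Each entry of condition_results is True when the
    extracted features satisfy the corresponding decision condition."""
    if weights is None:
        weights = [1.0] * len(condition_results)   # assumed equal weights
    total = sum(w for ok, w in zip(condition_results, weights) if ok)
    return total / sum(weights) >= threshold, total
```

With three conditions of which two hold, the total score is 2.0 and the object would be judged live under this assumed threshold.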
Optionally, in the method according to the invention, the face grayscale image is a near-infrared image.
According to another aspect of the present invention, a computing device is provided, including: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing any of the methods for detecting a live subject according to the present invention.
According to yet another aspect of the present invention, a readable storage medium storing a program is provided, the program including instructions that, when executed by a computing device, cause the computing device to execute any of the methods for detecting a live subject according to the present invention.
According to the scheme of the present invention for detecting a live subject, the detection region extracted from the face grayscale image is quantized to obtain a quantized image, image features are extracted from the quantized image, and liveness detection is performed based on those features. This reduces complexity while ensuring detection accuracy and improves detection speed, with the effects of being low-cost, economical, practical, efficient, reliable, and requiring no user cooperation.
Description of the drawings
To achieve the above and related purposes, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate the various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent by reading the following detailed description in conjunction with the accompanying drawings. Throughout the disclosure, the same reference numerals generally refer to the same components or elements.
Fig. 1 schematically illustrates a block diagram of a computing device 100;
Fig. 2 schematically illustrates a flowchart of a method 200 for detecting a live subject according to one embodiment of the present invention;
Fig. 3A and Fig. 3B respectively illustrate schematic diagrams of a face grayscale image and a detection region according to one embodiment of the present invention;
Fig. 4 schematically illustrates contour lines according to one embodiment of the present invention;
Fig. 5 illustrates the distance curves of the embodiment illustrated in Fig. 4;
Fig. 6 illustrates a nose gray-relation scatter plot according to one embodiment of the present invention; and
Fig. 7A and Fig. 7B respectively illustrate the quantized face grayscale images of a live subject and of a non-live object.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Fig. 1 schematically illustrates a block diagram of a computing device 100. The computing device 100 may be implemented as a server, for example a file server, database server, application server or web server, or as a personal computer such as a desktop or notebook computer. The computing device 100 may also be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, personal digital assistant (PDA), personal media player, wireless web-browsing device, personal head-mounted device, application-specific device, or a hybrid device including any of the above functions.
In a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to a microprocessor (μP), microcontroller (μC), digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level-1 cache 110 and a level-2 cache 112, a processor core 114, and registers 116. An exemplary processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An exemplary memory controller 118 may be used with the processor 104, or, in some implementations, the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to volatile memory (such as RAM) and non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 106 may include an operating system 120, one or more programs 122 and program data 124. In some embodiments, the programs 122 may be configured to be executed by the one or more processors 104 on the operating system using the program data 124.
The computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144 and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. Exemplary output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to communicate with various external devices such as a display or speakers via one or more A/V ports 152. Exemplary peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to communicate, via one or more I/O ports 158, with external devices such as input devices (for example, a keyboard, mouse, pen, voice input device or touch input device) or other peripherals (such as printers or scanners). An exemplary communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. A communication medium may typically be embodied as computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. A "modulated data signal" is a signal in which one or more of its characteristics are set or changed in such a way as to encode information in the signal. As non-limiting examples, communication media may include wired media such as a wired network or private-line network, and various wireless media such as acoustic, radio-frequency (RF), microwave or infrared (IR) media or other wireless media. The term computer-readable medium, as used herein, may include both storage media and communication media.
The one or more programs 122 of the computing device 100 include instructions for executing any of the methods for detecting a live subject according to the present invention. Fig. 2 illustrates a flowchart of a method 200 for detecting a live subject according to one embodiment of the present invention.
As shown in Fig. 2, the method 200 for detecting a live subject starts at step S210, in which a face grayscale image of the object to be detected is received. Typically, the face grayscale image is a near-infrared image captured by an image acquisition device, and usually has 256 gray levels (0~255). According to an embodiment of the present invention, the computing device 100 may communicate with the image acquisition device over the above network communication link via one or more communication ports 164, and obtain from it the face grayscale image it has captured. The image acquisition device usually includes a near-infrared light source, an optical lens and an image sensor. Of course, according to another embodiment, the computing device 100 may itself be implemented as the image acquisition device.
It should be understood that an image can be represented as a matrix: the rows of the matrix correspond to the image height (in pixels), the columns correspond to the image width (in pixels), each matrix element corresponds to a pixel, and the value of the element is the gray value of that pixel.
Then, in step S220, a detection region is extracted from the received face grayscale image, the detection region including at least the eyes. Specifically, an existing eye-locating method such as the Hough transform, integral projection, deformable template, principal component analysis or symmetry transform may first be used to locate the eyes in the face grayscale image, and the detection region is then extracted according to the eye positions. According to one embodiment, the extracted detection region may be the region from 1/4 of the eye spacing above the eyes to 1 eye spacing below the eyes, where the eye spacing is the horizontal distance between the two pupils and can be calculated once the eye positions have been located. Fig. 3A and Fig. 3B respectively illustrate a face grayscale image and a detection region according to one embodiment of the present invention.
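As a minimal sketch of this cropping step, assuming the pupil centers have already been located by one of the methods above (the function name and the (row, col) convention are illustrative):

```python
import numpy as np

def crop_detection_region(gray, left_eye, right_eye):
    """Crop a detection region around the eyes as described in the text:
    from 1/4 of the eye spacing above the eye line to 1 eye spacing below it.
    left_eye / right_eye are (row, col) pupil centers; gray is a 2-D array."""
    spacing = abs(right_eye[1] - left_eye[1])     # horizontal pupil distance
    eye_row = (left_eye[0] + right_eye[0]) // 2   # mean row of the eye line
    top = max(0, eye_row - spacing // 4)
    bottom = min(gray.shape[0], eye_row + spacing)
    return gray[top:bottom, :]
```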
After the detection region has been extracted, it is quantized in step S230 to obtain a quantized image. A general quantization process is as follows:

Choose n + 1 gray values x0, x1, ..., xn in the original gray interval to be quantized as quantization interval boundaries, with x0 < x1 < ... < xn, where n is a given positive integer. This yields n quantization intervals A0, A1, A2, ..., An−1, where Ai = [xi, xi+1) for i = 0, ..., n − 2, and An−1 = [xn−1, xn].

For any pixel in the original gray interval, if g ∈ Ai (i = 0, 1, ..., n − 1), then q = xi, where g is the gray value of the pixel before quantization and q is its gray value after quantization.
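The quantization rule above can be sketched as follows (a hypothetical helper; the patent does not prescribe an implementation):

```python
import numpy as np

def quantize(gray, boundaries):
    """Quantize pixels whose values fall in [boundaries[0], boundaries[-1]]:
    a pixel in A_i = [x_i, x_{i+1}) is mapped to the level x_i (the last
    interval A_{n-1} = [x_{n-1}, x_n] is closed); pixels outside the
    quantized gray interval are left unchanged."""
    out = gray.copy()
    for i in range(len(boundaries) - 1):
        lo, hi = boundaries[i], boundaries[i + 1]
        if i == len(boundaries) - 2:            # closed last interval
            mask = (gray >= lo) & (gray <= hi)
        else:                                   # half-open interval
            mask = (gray >= lo) & (gray < hi)
        out[mask] = lo
    return out
```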
Since the gray values of the pixels carrying the most information in an image differ depending on the image's average gray value, images with different average gray values need to be quantized in different ways. According to an embodiment of the present invention, the average gray value of the detection region may first be calculated, as the sum of the gray values of all pixels in the detection region divided by the number of pixels in the region. If the calculated average gray value lies in a predetermined average gray interval, the pixels of the detection region whose gray values lie in the predetermined gray interval corresponding to that average gray interval are quantized, their gray values being quantized to a predetermined number of gray levels, usually 2 or 3.
The predetermined average gray interval may usually include one or more of a first average gray interval, a second average gray interval, a third average gray interval and a fourth average gray interval. Correspondingly, the predetermined gray interval may include one or more of a first gray interval corresponding to the first average gray interval, a second gray interval corresponding to the second average gray interval, a third gray interval corresponding to the third average gray interval and a fourth gray interval corresponding to the fourth average gray interval.
The first average gray interval may be [65, 100), and the corresponding first gray interval may be [100, 200); that is, when the average gray value of the detection region lies in [65, 100), the gray values of the pixels in the detection region lying in [100, 200) may be quantized to 3 gray levels, so that the gray values of the whole detection region are quantized to 5 gray levels.

The second average gray interval may be [100, 145), and the corresponding second gray interval may be [130, 230); that is, when the average gray value of the detection region lies in [100, 145), the gray values of the pixels lying in [130, 230) may be quantized to 3 gray levels, so that the gray values of the whole detection region are quantized to 5 gray levels.

The third average gray interval may be [145, 180), and the corresponding third gray interval may be [170, 255]; that is, when the average gray value of the detection region lies in [145, 180), the gray values of the pixels lying in [170, 255] may be quantized to 3 gray levels, so that the gray values of the whole detection region are quantized to 4 gray levels.

The fourth average gray interval may be [180, 210), and the corresponding fourth gray interval may be [200, 255]; that is, when the average gray value of the detection region lies in [180, 210), the gray values of the pixels lying in [200, 255] may be quantized to 2 gray levels, so that the gray values of the whole detection region are quantized to 3 gray levels.
In addition, the predetermined average gray interval may also include a fifth average gray interval and/or a sixth average gray interval, where the fifth average gray interval may be [0, 65) and the sixth average gray interval may be [210, 255). If the calculated average gray value lies in the fifth or the sixth average gray interval, no liveness detection is performed on the detection region, and no quantization is needed.
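The mapping from average gray value to quantization parameters described above might be tabulated like this (the table entries follow the intervals stated in the text; the function and table names are assumptions):

```python
# (average-gray interval, gray interval to re-quantize, number of levels)
QUANT_TABLE = [
    ((65, 100),  (100, 200), 3),
    ((100, 145), (130, 230), 3),
    ((145, 180), (170, 255), 3),
    ((180, 210), (200, 255), 2),
]

def select_quantization(mean_gray):
    """Return the (gray interval, level count) for a detection region's mean
    gray value, or None when the mean falls in [0, 65) or [210, 255) and the
    region is rejected without liveness detection."""
    for (lo, hi), gray_iv, levels in QUANT_TABLE:
        if lo <= mean_gray < hi:
            return gray_iv, levels
    return None
```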
After the quantized image has been obtained, image features are extracted from it in step S240. The image features may include image contour features and/or image gray features.
According to an embodiment of the present invention, the image contour features may be at least one pair of contour lines in the quantized region, extracted as follows:

First, at least one pair of contour lines is extracted according to the boundaries between different gray levels in the quantized image, each pair including a left contour line and a right contour line. It should be understood that the quantized image includes a left-side image and a right-side image which, given the symmetry of the face, can be considered symmetric about the perpendicular bisector between the eyes.

Clearly, the left-side and right-side images have been quantized to multiple gray levels. In the left-side image, the junction of two different gray levels forms a left contour line. Correspondingly, in the right-side image, the junction of the same two gray levels forms the right contour line corresponding to that left contour line; a left contour line together with its corresponding right contour line constitutes a pair of contour lines.

Specifically, for the boundary between two gray levels, each row of pixels in the left-side image and in the right-side image contains one or more pixels corresponding to that boundary. According to an embodiment of the present invention, in the left-side image, the one or more pixels corresponding to the boundary may be determined in each row of pixels, and the pixel farthest from the perpendicular bisector between the eyes extracted, to form the left contour line. Correspondingly, in the right-side image, the one or more pixels corresponding to the boundary are determined in each row, and the pixel farthest from the perpendicular bisector between the eyes is extracted, to form the right contour line corresponding to the left contour line. The left contour line and its corresponding right contour line obtained in this way form a pair of contour lines.
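A rough sketch of this per-row contour extraction, under the simplifying assumption that the perpendicular bisector between the eyes coincides with the image's vertical midline:

```python
import numpy as np

def extract_contour_pair(quantized, level_a, level_b):
    """For the boundary between two gray levels, walk every row of the left
    and right image halves, collect the columns where the two levels meet,
    and keep the pixel farthest from the vertical midline on each side.
    Returns (left_contour, right_contour) as {row: column} dicts."""
    h, w = quantized.shape
    mid = w // 2
    left, right = {}, {}
    for row in range(h):
        line = quantized[row]
        # columns where adjacent pixels take exactly the two given levels
        cols = [c for c in range(w - 1)
                if {line[c], line[c + 1]} == {level_a, level_b}]
        lcols = [c for c in cols if c < mid]
        rcols = [c for c in cols if c >= mid]
        if lcols:
            left[row] = min(lcols)    # farthest to the left of the midline
        if rcols:
            right[row] = max(rcols)   # farthest to the right of the midline
    return left, right
```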
Fig. 4 schematically illustrates contour lines according to one embodiment of the present invention. As shown in Fig. 4, four pairs of contour lines are extracted, namely {l1, r1}, {l2, r2}, {l3, r3} and {l4, r4}, where l and r denote the left and right contour lines respectively.
After the at least one pair of contour lines is extracted, for each pair a distance curve is computed, describing how the distance between the left contour line and the right contour line varies along the row direction of the quantized image. In the distance curve, the horizontal axis is the row direction of the quantized image (row numbers increasing in the positive direction) and the vertical axis is the distance between the left contour line and the right contour line. The abscissa of the distance curve is therefore the row number of each pixel row of the quantized image.
Fig. 5 illustrates a schematic diagram of the distance curves of the embodiment of Fig. 4. As shown in Fig. 5, four distance curves are computed from the four extracted pairs of contour lines. The vertical axis of each distance curve is the normalized distance between the left contour line and the right contour line.
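An illustrative sketch of the distance-curve computation follows; the inputs are per-row column indices of the left and right contour lines, with `None` marking rows where the boundary is absent, and the function name is not from the patent:

```python
def distance_curve(left, right, normalize=True):
    """Distance between a left and a right contour line, per image row.

    left, right : per-row contour columns (None where the boundary is
                  absent in that row).
    Returns a list of distances (None where either contour is missing),
    optionally normalized by the peak distance as in Fig. 5.
    """
    dists = [r - l if l is not None and r is not None else None
             for l, r in zip(left, right)]
    if normalize:
        valid = [d for d in dists if d is not None]
        if valid:
            peak = max(valid)
            dists = [d / peak if d is not None else None for d in dists]
    return dists
```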
According to an embodiment of the present invention, the image gray feature may include the gray-level histogram of the quantized image and/or the gray-projection curve of the quantized image in the horizontal direction, and the step of extracting the image gray feature from the quantized image may accordingly include: computing the gray-level histogram of the quantized image, and/or computing the horizontal gray-projection curve of the quantized image. In addition, if the average gray value of the detection region lies within an average gray value interval, the image gray feature may further include the average gray value of the nose or of the eyes. The average gray value interval is typically [65, 130].
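The two gray features just mentioned — the gray-level histogram and the horizontal gray-projection curve — might be computed as in the following illustrative sketch (the function name is not from the patent):

```python
import numpy as np

def gray_features(quantized):
    """Gray-level histogram and horizontal gray-projection curve of a
    quantized image (2-D integer array), per the feature-extraction step.
    """
    # histogram over the full 8-bit range; quantized levels occupy a few bins
    hist = np.bincount(quantized.ravel(), minlength=256)
    # horizontal projection: mean gray value of each pixel row
    projection = quantized.mean(axis=1)
    return hist, projection
```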
After the above image features are extracted, in step S250 it is judged, based on the acquired image features, whether the object to be detected is a live subject. Specifically, the computing device may store at least one decision condition in advance. According to an embodiment of the present invention, at least one of the following decision conditions may be stored:
Decision condition 1: at the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to a lower gray-level boundary is smaller than the ordinate of the distance curve of the pair corresponding to a higher gray-level boundary;
Decision condition 2: for each distance curve, the curve has a first minimum on a first abscissa domain U1 = {x | |x − a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is a zeroth multiple of the inter-eye distance; the zeroth multiple may be 1/4;
Decision condition 3: for each distance curve, the curve has at least one local maximum;
Decision condition 4: for each distance curve, the curve has a second maximum on a second abscissa domain U2 = {x | x > a}, and the difference between the abscissas corresponding to the second maximum and the first minimum is not greater than a first multiple of the inter-eye distance; the first multiple may be 0.4;
Decision condition 5: for each distance curve, the first minimum is greater than a second multiple of the inter-eye distance and smaller than a third multiple of the inter-eye distance; the second multiple may be 0.3 and the third multiple may be 1;
Decision condition 6: for each distance curve, the second maximum is not greater than a fourth multiple of the inter-eye distance and not smaller than a fifth multiple of the inter-eye distance; the fourth multiple may be 2 and the fifth multiple may be 1;
Decision condition 7: for each distance curve, the difference between the second maximum and the first minimum is greater than a sixth multiple of the inter-eye distance; the sixth multiple may be 0.6;
Decision condition 8: for each distance curve, on a third abscissa domain U3 = {x | b − δ3 < x < b}, the slope of the curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum and δ3 is a seventh multiple of the inter-eye distance; the seventh multiple may be 1/4 and the predetermined slope range may be 0°~60°;
Decision condition 9: for each distance curve, on a fourth abscissa domain U4 = {x | b < x < b + δ4}, the slope of the curve varies monotonically within the predetermined slope range, where δ4 is an eighth multiple of the inter-eye distance; the eighth multiple may be 1/4 and the predetermined slope range may be 120°~180°;
Decision condition 10: for each distance curve, on a fifth abscissa domain U5 = {x | |x − c| < δ5}, the slope of the curve lies within a second predetermined slope range, where c is the abscissa corresponding to the first minimum and δ5 is a ninth multiple of the inter-eye distance; the ninth multiple may be 1/4 and the second predetermined slope range may be 70°~110°;
Decision condition 11: for each two distance curves, on a sixth abscissa domain U6 = {x | c − δ6 < x < b}, the correlation between the two distance curves is greater than a predetermined correlation value, where δ6 is a tenth multiple of the inter-eye distance; the tenth multiple may be 1/4 and the predetermined correlation value may be 0.4;
Decision condition 12: the gray-level histogram of the quantized image is continuous, meaning here that the occupied gray levels are consecutive, with no gaps;
Decision condition 13: the correlation coefficient between the slope of the horizontal gray-projection curve of the quantized image and the slope of the curve obtained by flipping that projection curve is greater than a predetermined coefficient; the predetermined coefficient may be 0.4;
Decision condition 14: if the average gray value of the detection region lies within the average gray value interval, e.g. [65, 130], the relationship between the average gray value of the nose or of the eyes and the average gray value of the detection region satisfies a predetermined nose or eye gray-relation curve. Here the nose or eye gray-relation curve comprises a first nose or eye gray-relation curve and a second nose or eye gray-relation curve, both obtained from a nose or eye gray-relation scatter plot, with the average gray value of the detection region as the horizontal axis and the average gray value of the nose or eyes as the vertical axis. The nose or eye gray-relation scatter plot can be obtained statistically from the face gray images of multiple live subjects. Fig. 6 illustrates a schematic diagram of a nose gray-relation scatter plot according to one embodiment of the present invention.
The first nose or eye gray-relation curve is chosen so that 95% or more of the data points in the nose or eye gray-relation scatter plot lie below it; the second nose or eye gray-relation curve is chosen so that 95% or more of the data points lie above it.
If the data point formed by the average gray value of the nose or eyes and the average gray value of the detection region lies above the first nose or eye gray-relation curve or below the second nose or eye gray-relation curve, the nose or eye gray-relation curve is determined not to be satisfied; otherwise it is determined to be satisfied.
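As an illustration of how such a condition might be checked, the following sketch evaluates decision condition 13 — the correlation between the slope of the horizontal gray-projection curve and the slope of the flipped curve — assuming the projection curve is a one-dimensional sequence; the function name and the default coefficient are illustrative:

```python
import numpy as np

def passes_condition_13(projection, coeff=0.4):
    """Decision condition 13: correlate the slope of the horizontal
    gray-projection curve with the slope of the flipped curve.
    A roughly symmetric (face-like) projection yields high correlation.
    """
    slope = np.diff(projection)
    flipped_slope = np.diff(projection[::-1])
    r = np.corrcoef(slope, flipped_slope)[0, 1]
    return r > coeff
```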
Each parameter in the above decision conditions (the zeroth through tenth multiples, the predetermined slope range, the second predetermined slope range, the predetermined correlation value, the predetermined coefficient, etc.) can be set according to the illumination conditions and device parameters under which the face gray image is acquired.
For each pre-stored decision condition, the judgment can be made on the basis of the image contour feature or the image gray feature; that is, the image contour feature or the image gray feature determines whether the condition is satisfied. For example, decision conditions 1–11 above can be judged from the image contour feature, and decision conditions 12–14 from the image gray feature.
Then, for each pre-stored decision condition, a score of the object to be detected corresponding to that decision condition may be obtained according to whether the condition is satisfied.
According to an embodiment of the present invention, scores of four grades may be given according to whether a decision condition is clearly not met, not met, met, or clearly met. Clearly not met gives score s1, not met gives score s2, met gives score s3, and clearly met gives score s4, where s1 < s2 < s3 < s4, or further s1 < s2 < 0 < s3 < s4. Take decision condition 7 as an example: for each distance curve, the difference between the second maximum and the first minimum must be greater than the sixth multiple of the inter-eye distance. Let the difference between the second maximum and the first minimum be S and the sixth multiple of the inter-eye distance be Thr; then for each distance curve, the score corresponding to this decision condition is:
score = s1 if S − Thr < level1; s2 if level1 ≤ S − Thr < 0; s3 if 0 ≤ S − Thr < level2; s4 if S − Thr ≥ level2,
where s1 < s2 < 0 < s3 < s4 and −∞ < level1 < 0 < level2 < ∞. Suppose Thr = 10, level1 = −5, level2 = 5, s1 = −2, s2 = −1, s3 = 1, s4 = 2. Then if the obtained difference S between the second maximum and the first minimum is 4, the score is −2; if S is 9, the score is −1; if S is 14, the score is 1; and if S is 16, the score is 2.
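The four-grade scoring just exemplified can be written as a small function; the piecewise form below is reconstructed from the worked example (Thr = 10, level1 = −5, level2 = 5, s1 = −2, s2 = −1, s3 = 1, s4 = 2) and is illustrative only:

```python
def condition_score(S, Thr=10, level1=-5, level2=5,
                    s1=-2, s2=-1, s3=1, s4=2):
    """Four-grade score for a threshold-type decision condition.

    The margin S - Thr is bucketed into 'clearly not met', 'not met',
    'met' and 'clearly met'; defaults are the example values from the
    text.
    """
    margin = S - Thr
    if margin < level1:
        return s1          # clearly not met
    if margin < 0:
        return s2          # not met
    if margin < level2:
        return s3          # met
    return s4              # clearly met
```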
Of course, for a decision condition, scores of only two grades (met and not met) may also be given; the present invention does not limit the way each decision condition is scored.
After the scores of the object to be detected for each decision condition are obtained, a total score of the object to be detected can be obtained from those scores, for example by summing or weighted-summing the scores corresponding to the individual decision conditions.
Finally, whether the object to be detected is a live subject is judged from its total score. Specifically, if the total score of the object to be detected exceeds a preset score threshold, the object is determined to be a live subject; otherwise it is determined not to be a live subject.
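An illustrative sketch of this total-score decision follows (plain or weighted sum compared against a preset threshold; the default threshold value is an assumption, not specified by the text):

```python
def is_live(condition_scores, weights=None, threshold=0.0):
    """Aggregate per-condition scores into a liveness decision.

    condition_scores : per-decision-condition scores for one image
    weights          : optional weights for a weighted sum (else plain sum)
    threshold        : preset score threshold (illustrative value)
    """
    if weights is None:
        total = sum(condition_scores)
    else:
        total = sum(w * s for w, s in zip(weights, condition_scores))
    return total > threshold
```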
In addition, according to an embodiment of the present invention, multiple frames of face gray images of the object to be detected may be received; a score is obtained for each face gray image, the total score of the object to be detected is obtained from the scores of the individual face gray images, and finally whether the object to be detected is a live subject is judged from that total score.
The total score of the object to be detected may be obtained by averaging the scores of the individual face gray images. Alternatively, the scores of the individual face gray images may be weighted and summed to obtain the total score, where the weight attached to the score of each face gray image may be the probability that an object with that score is a live subject.
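The multi-frame aggregation may likewise be sketched as follows; the `live_probability` callback stands in for the score-to-probability weighting mentioned above and is an assumption of this sketch:

```python
def multiframe_total(frame_scores, live_probability=None):
    """Total score over multiple face-gray-image frames.

    By default, the mean of the per-frame scores; optionally a weighted
    sum where each frame's weight is the (caller-supplied) probability
    that an object with that score is a live subject.
    """
    if live_probability is None:
        return sum(frame_scores) / len(frame_scores)
    weights = [live_probability(s) for s in frame_scores]
    return sum(w * s for w, s in zip(weights, frame_scores))
```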
It should be understood that the texture (i.e. the contours) of a face manifests itself through the gray distribution of the pixels of the face gray image and their spatial neighborhoods. With the background removed from the face gray image, the naked eye can hardly distinguish a real face from a photographed face, yet their imaging processes differ greatly. A real face is a complex three-dimensional object while a photographed face is a planar object; during imaging they produce different illumination reflections and shadows, leading to different surface properties, and this difference can be detected well using texture. After quantization, the face gray image of a live subject highlights this texture well, whereas the face gray image of a non-live object cannot.
Figs. 7A and 7B respectively illustrate schematic diagrams of the quantized face gray images of a live subject and of a non-live object. Clearly, the texture shown in Fig. 7A is quite distinct, while Fig. 7B is blurred.
Therefore, the present scheme for detecting a live subject, based on image features of the quantized image, can realize liveness detection effectively, reducing complexity while guaranteeing detection accuracy and improving detection speed; it is low-cost, economical and practical, efficient and reliable, and requires no user cooperation.
It should be appreciated that the various techniques described herein may be implemented in hardware, in software, or in a combination of the two. Thus the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e. instructions) embodied in a tangible medium such as a floppy disk, CD-ROM, hard disk drive or any other machine-readable storage medium, wherein, when the program is loaded into a machine such as a computer and executed by that machine, the machine becomes an apparatus for practicing the invention.
Where program code is executed on programmable computers, the computing device generally includes a processor, a processor-readable storage medium (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store program code; the processor is configured to execute the various methods of the present invention according to instructions in the program code stored in the memory.
By way of example and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Any combination of the above is also included within the scope of computer-readable media.
It should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the description of exemplary embodiments above, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or additionally divided into multiple sub-modules.
The present invention may also include: A8. The method of A7, wherein the step of extracting an image gray feature from the quantized image further comprises: if the average gray value of the detection region lies within the average gray value interval, computing the average gray value of the nose or of the eyes. A9. The method of A8, wherein the average gray value interval is [65, 130]. A11. The method of A7, wherein the step of judging whether the object to be detected is a live subject based on the acquired image features comprises: based on the image gray feature, performing judgment of at least one of the following decision conditions: the gray-level histogram of the quantized image is continuous; the correlation coefficient between the slope of the horizontal gray-projection curve of the quantized image and the slope of the curve obtained by flipping the gray-projection curve is greater than a predetermined coefficient. A12. The method of A8, wherein, if the average gray value of the detection region lies within the average gray value interval, judgment of the following decision condition is performed based on the image gray feature: the relationship between the average gray value of the nose or eyes and the average gray value of the detection region satisfies a predetermined nose or eye gray-relation curve. A13. The method of any one of A10–A12, wherein the step of judging whether the object to be detected is a live subject based on the acquired image features comprises: for each decision condition, obtaining a score of the object to be detected corresponding to that decision condition according to whether the image contour feature or the image gray feature satisfies the condition; obtaining a total score of the object to be detected from the scores corresponding to the individual decision conditions; and judging whether the object to be detected is a live subject from its total score. A14. The method of any one of A1–A13, wherein the face gray image is a near-infrared image.
Those skilled in the art will appreciate that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components of the embodiments may be combined into one module, unit or component, and in addition may be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other devices carrying out the function. A processor having the necessary instructions for implementing such a method or method element thus forms a device for implementing the method or method element. Moreover, an element of a device embodiment described herein is an example of a device for carrying out the function performed by the element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. As for the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.

Claims (10)

1. A method for detecting a live subject, adapted to be executed in a computing device, the method comprising the steps of:
receiving a face gray image of an object to be detected;
intercepting a detection region from the face gray image, the detection region including at least the eyes;
quantizing the detection region to obtain a quantized image;
extracting image features from the quantized image; and
judging whether the object to be detected is a live subject based on the acquired image features.
2. The method of claim 1, wherein the step of quantizing the detection region to obtain a quantized image comprises:
computing the average gray value of the detection region; and
if the average gray value lies within a predetermined average gray value interval, quantizing the pixels of the detection region whose gray values lie within the predetermined gray value interval corresponding to that average gray value interval, their gray values being quantized into a predetermined number of gray levels.
3. The method of claim 2, wherein the predetermined average gray value interval includes one or more of a first average gray value interval, a second average gray value interval, a third average gray value interval and a fourth average gray value interval, and the predetermined gray value interval includes one or more of a first gray value interval corresponding to the first average gray value interval, a second gray value interval corresponding to the second average gray value interval, a third gray value interval corresponding to the third average gray value interval and a fourth gray value interval corresponding to the fourth average gray value interval.
4. The method of claim 3, wherein the first average gray value interval is [65, 100), the second average gray value interval is [100, 145), the third average gray value interval is [145, 180), and the fourth average gray value interval is [180, 210);
and the first gray value interval corresponding to the first average gray value interval is [100, 200), the second gray value interval corresponding to the second average gray value interval is [130, 230), the third gray value interval corresponding to the third average gray value interval is [170, 255], and the fourth gray value interval corresponding to the fourth average gray value interval is [200, 255].
5. The method of any one of claims 1–4, wherein the image features include an image contour feature, and the step of extracting an image contour feature from the quantized image comprises:
extracting at least one pair of contour lines according to the boundaries between different gray levels in the quantized image, each pair of contour lines comprising a left contour line and a right contour line; and
for each pair of contour lines, computing a distance curve describing how the distance between the left contour line and the right contour line varies along the row direction of the quantized image.
6. The method of claim 5, wherein the quantized image includes a left-side image and a right-side image, and the step of extracting at least one pair of contour lines according to the boundaries between different gray levels in the quantized image comprises:
for the boundary between each two gray levels,
in the left-side image and the right-side image respectively, determining the one or more pixels in each pixel row that lie on the boundary, and extracting the pixel farthest from the perpendicular bisector between the eyes, so as to respectively form the left contour line and the right contour line of the pair of contour lines.
7. The method of any one of claims 1–6, wherein the image features include an image gray feature, and the step of extracting an image gray feature from the quantized image comprises:
computing the gray-level histogram of the quantized image; and
computing the gray-projection curve of the quantized image in the horizontal direction.
8. The method of claim 5, wherein the step of judging whether the object to be detected is a live subject based on the acquired image features comprises:
based on the image contour feature, performing judgment of at least one of the following decision conditions, wherein each distance curve has the row direction of the quantized image as its horizontal axis and the distance between the left contour line and the right contour line as its vertical axis:
at the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to a lower gray-level boundary is smaller than the ordinate of the distance curve of the pair corresponding to a higher gray-level boundary;
for each distance curve, the curve has a first minimum on a first abscissa domain U1 = {x | |x − a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is a zeroth multiple of the inter-eye distance;
for each distance curve, the curve has at least one local maximum;
for each distance curve, the curve has a second maximum on a second abscissa domain U2 = {x | x > a}, and the difference between the abscissas corresponding to the second maximum and the first minimum is not greater than a first multiple of the inter-eye distance;
for each distance curve, the first minimum is greater than a second multiple of the inter-eye distance and smaller than a third multiple of the inter-eye distance;
for each distance curve, the second maximum is not greater than a fourth multiple of the inter-eye distance and not smaller than a fifth multiple of the inter-eye distance;
for each distance curve, the difference between the second maximum and the first minimum is greater than a sixth multiple of the inter-eye distance;
for each distance curve, on a third abscissa domain U3 = {x | b − δ3 < x < b}, the slope of the curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum and δ3 is a seventh multiple of the inter-eye distance;
for each distance curve, on a fourth abscissa domain U4 = {x | b < x < b + δ4}, the slope of the curve varies monotonically within the predetermined slope range, where δ4 is an eighth multiple of the inter-eye distance;
for each distance curve, on a fifth abscissa domain U5 = {x | |x − c| < δ5}, the slope of the curve lies within a second predetermined slope range, where c is the abscissa corresponding to the first minimum and δ5 is a ninth multiple of the inter-eye distance;
for each two distance curves, on a sixth abscissa domain U6 = {x | c − δ6 < x < b}, the correlation between the two distance curves is greater than a predetermined correlation value, where δ6 is a tenth multiple of the inter-eye distance.
9. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing the method for detecting a live subject of any one of claims 1–8.
10. A readable storage medium storing a program, the program including instructions that, when executed by a computing device, cause the computing device to execute the method for detecting a live subject of any one of claims 1–8.
CN201810510901.3A 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium Active CN108764121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810510901.3A CN108764121B (en) 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium


Publications (2)

Publication Number Publication Date
CN108764121A true CN108764121A (en) 2018-11-06
CN108764121B CN108764121B (en) 2021-03-02

Family

ID=64005991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810510901.3A Active CN108764121B (en) 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium

Country Status (1)

Country Link
CN (1) CN108764121B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1687969A (en) * 2005-05-12 2005-10-26 北京航空航天大学 File image compressing method based on file image content analyzing and characteristic extracting
CN101334895A (en) * 2008-08-07 2008-12-31 清华大学 Image division method aiming at dynamically intensified mammary gland magnetic resonance image sequence
CN103605958A (en) * 2013-11-12 2014-02-26 北京工业大学 Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis
CN105224285A (en) * 2014-05-27 2016-01-06 北京三星通信技术研究有限公司 Eyes open and-shut mode pick-up unit and method
CN105912986A (en) * 2016-04-01 2016-08-31 北京旷视科技有限公司 In vivo detection method, in vivo detection system and computer program product
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
CN106355139A (en) * 2016-08-22 2017-01-25 厦门中控生物识别信息技术有限公司 Facial anti-fake method and device
US20170344793A1 (en) * 2014-10-22 2017-11-30 Veridium Ip Limited Systems and methods for performing iris identification and verification using mobile devices
US20180060680A1 (en) * 2016-08-30 2018-03-01 Qualcomm Incorporated Device to provide a spoofing or no spoofing indication


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160235A (en) * 2019-12-27 2020-05-15 联想(北京)有限公司 Living body detection method and device, and electronic device
CN112329720A (en) * 2020-11-26 2021-02-05 杭州海康威视数字技术股份有限公司 Face liveness detection method, device and equipment
WO2022111512A1 (en) * 2020-11-26 2022-06-02 杭州海康威视数字技术股份有限公司 Facial liveness detection method and apparatus, and device
CN115100714A (en) * 2022-06-27 2022-09-23 平安银行股份有限公司 Living body detection method and device based on face image and server

Also Published As

Publication number Publication date
CN108764121B (en) 2021-03-02

Similar Documents

Publication Publication Date Title
KR102442844B1 (en) Method for Distinguishing a Real Three-Dimensional Object from a Two-Dimensional Spoof of the Real Object
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
Aydin et al. Dynamic range independent image quality assessment
CN104866868B (en) Metal coin recognition method and device based on deep neural network
US10275677B2 (en) Image processing apparatus, image processing method and program
CN109829453A (en) Method, device and computing device for recognizing occluded text in a card
CN107808147A (en) Face confidence method based on real-time facial point tracking
CN112801846B (en) Watermark embedding and extracting method and device, computer equipment and storage medium
CN113179421B (en) Video cover selection method and device, computer equipment and storage medium
Yang et al. Underwater image enhancement using scene depth-based adaptive background light estimation and dark channel prior algorithms
CN108764121A (en) Method, computing device and readable storage medium for detecting a living subject
CN112348808A (en) Screen perspective detection method and device
Guo et al. Haze removal for single image: A comprehensive review
CN110909601B (en) Cosmetic contact lens identification method and system based on deep learning
CN110309715B (en) Deep learning-based indoor positioning method, device and system using lamp identification
CN108665459A (en) Image blur detection method, computing device and readable storage medium
CN112818774B (en) Living body detection method and device
Li et al. Blind multiply distorted image quality assessment using relevant perceptual features
CN112825120B (en) Face illumination evaluation method, device, computer readable storage medium and equipment
CN110619624A (en) Image decomposition method and device
Toet et al. Efficient contrast enhancement through log-power histogram modification
CN110738712A (en) geometric pattern reconstruction method, device, equipment and storage medium
CN107492078A (en) Method and computing device for removing black noise from an image
Ghosh et al. STN-Net: A Robust GAN-Generated Face Detector
Matsuhira et al. Pointedness of an image: Measuring how pointy an image is perceived

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211220

Address after: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.

Address before: 201207 2 / F, building 13, 27 Xinjinqiao Road, Pudong New Area pilot Free Trade Zone, Shanghai

Patentee before: SHIMA RONGHE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230809

Address after: C203, 205, 206, 2nd floor, Building 106 Lize Zhongyuan, Chaoyang District, Beijing, 100000

Patentee after: EYESMART TECHNOLOGY Ltd.

Address before: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.