Invention content
For this purpose, the present invention provides a kind of method, computing device and readable storage medium storing program for executing for detecting live subject, with power
Illustrate certainly or at least alleviate existing at least one problem above.
According to an aspect of the invention, there is provided a kind of method for detecting live subject, is suitable in computing device
Middle execution, method include step:Receive the face gray level image of object to be detected;Detection zone is intercepted from face gray level image
Domain, detection zone include at least eye;Detection zone is quantified to obtain quantized image;Image is extracted according to quantized image
Feature;Judge whether object to be detected is live subject based on acquired characteristics of image.
Optionally, in the method according to the invention, the step of being quantified detection zone to obtain quantized image is wrapped
It includes:Calculate the average gray value of detection zone;If average gray value is in predetermined average gray section, to grey in detection zone
The pixel that angle value is in predetermined gray scale interval corresponding with predetermined average gray section is quantified, and its gray value is quantified as
Predetermined number tonal gradation.
Optionally, in the method according to the invention, predetermined average gray section includes the first average gray section, second
One or more in average gray section, third average gray section and the 4th average gray scale interval, predetermined gray scale interval
Including and corresponding first gray scale interval in the first average gray section, the second gray area corresponding with the second average gray section
Between and the corresponding third gray scale interval in third average gray section and the 4th gray scale interval corresponding with the 4th average gray scale interval
In one or more.
Optionally, in the method according to the invention, the first average gray section be [65,100), the second average gray area
Between for [100,145), third average gray section be [145,180), the 4th average gray scale interval be [180,210);With first
Corresponding first gray scale interval in average gray section be [100,200), the second gray area corresponding with the second average gray section
Between for [130,230), third gray scale interval corresponding with third average gray section be [170,255], with the 4th average gray
Corresponding 4th gray scale interval in section is [200,255].
Optionally, in the method according to the invention, characteristics of image includes image outline feature, is extracted according to quantized image
The step of image outline feature includes:According to the boundary of different tonal gradations in quantized image, at least a pair of of contour line of extraction, often
Include revolver profile and right wheel profile to contour line;For each pair of contour line, calculate between revolver profile and right wheel profile
The distance Curve that distance changes with the line direction of quantized image.
Optionally, in the method according to the invention, quantized image includes left-side images and image right, is schemed according to quantization
The boundary of different tonal gradations as in, extraction at least a pair of of contour line the step of include:For the boundary of each two tonal gradation,
Respectively in left-side images and image right, determine that it often corresponds to one or more pixel of the boundary in row pixel, and
Extraction farthest pixel of perpendicular bisector wherein between eyes, it is wide with the revolver profile and right wheel that are respectively formed a pair of of contour line
Line.
Optionally, in the method according to the invention, characteristics of image includes gradation of image feature, is extracted according to quantized image
The step of gradation of image feature includes:Calculate the intensity profile histogram of quantized image;And quantized image is calculated in level side
To Gray scale projection curve.
Optionally, in the method according to the invention, further include according to the step of quantized image extraction gradation of image feature:
If the average gray value of detection zone is in average gray scale interval, the average gray value of nose or eye is calculated.
Optionally, in the method according to the invention, average gray scale interval is [65,130].
Optionally, in the method according to the invention, object to be detected is judged based on acquired characteristics of image whether
Include for the step of live subject:Based on image outline feature, judgement at least one in following decision condition, middle-range are carried out
From curve using the line direction of quantized image as horizontal axis, with the distance between revolver profile and right wheel profile for the longitudinal axis:Same horizontal seat
Under mark, the have a common boundary ordinate of distance Curve of corresponding a pair of of contour line of relatively low tonal gradation is less than higher tonal gradation and has a common boundary pair
The ordinate of the distance Curve for a pair of of the contour line answered;For every distance Curve, this distance Curve is in the first abscissa domain
U1It is upper that there are the first minimum, U1=x | | and x-a | < δ1, a is the corresponding abscissa of eye in quantized image, δ1Between two
Away from the 0th multiple;For every distance Curve, there are at least one local maximums for this distance Curve;For every distance
Curve, this distance Curve is in the second abscissa domain U2It is upper there are second maximum, and the second maximum and first minimum
The difference for being worth corresponding abscissa is not more than the first multiple of two spacing, U2=x | x > a };For every distance Curve, first
Minimum is more than the second multiple of two spacing and the third multiple less than two spacing;For every distance Curve, the second pole
Big quadruple of the value no more than two spacing and the 5th multiple not less than two spacing;For every distance Curve, second
The difference of maximum and the first minimum is more than the 6th multiple of two spacing;For every distance Curve, in third abscissa domain
U3On, the slope of this distance Curve is monotonically changed within the scope of predetermined slope, U3=x | b- δ3< x < b }, b is second very big
It is worth corresponding abscissa, δ3For the 7th multiple of two spacing;For every distance Curve, in the 4th abscissa domain U4On, this
The slope of distance Curve is monotonically changed within the scope of predetermined slope, U4=x | b < x < b+ δ4, δ4It is the eight times of two spacing
Number;For every distance Curve, in the 5th abscissa domain U5On, the slope of this distance Curve is located at the second predetermined slope model
It encloses, U5=x | | and x-c | < δ5, c is the corresponding abscissa of the first minimum, δ5For the 9th multiple of two spacing;For every two
Distance Curve, in the 6th abscissa domain U6On, the correlation of two distance Curves is more than predetermined correlation values, U6=x | c-
δ6< x < b }, δ5For the tenth multiple of two spacing.
Optionally, in the method according to the invention, object to be detected is judged based on acquired characteristics of image whether
Include for the step of live subject:Based on gradation of image feature, judgement at least one in following decision condition is carried out:Quantization figure
The intensity profile histogram of picture is continuous;The slope of quantized image Gray scale projection curve in the horizontal direction with by the Gray Projection song
Related coefficient between the slope of a curve obtained after line overturning is more than pre-determined factor.
Optionally, in the method according to the invention, if the average gray value of detection zone is in average gray area
Between, then it is based on gradation of image feature, carries out the judgement of following decision condition:The average gray value and detection zone of nose or eye
The relationship of average gray value meet scheduled nose or eye gray-scale relation curve.
Optionally, in the method according to the invention, object to be detected is judged based on acquired characteristics of image whether
Include for the step of live subject:For each decision condition, whether accorded with based on image outline feature or gradation of image feature
The decision condition is closed, to obtain the score that object to be detected corresponds to the decision condition;Corresponded to based on object to be detected each
The score of a decision condition, to obtain the gross score of object to be detected;Judged based on the gross score of object to be detected to be detected
Whether object is live subject.
Optionally, in the method according to the invention, face gray level image is near-infrared image.
According to another aspect of the present invention, a kind of computing device is provided, including:One or more processors;Storage
Device;And one or more programs, wherein one or more programs are stored in memory and are configured as by one or more
Processor executes, and one or more programs include for executing appointing in the method according to the present invention for detecting live subject
The instruction of one method.
According to the present invention there are one aspects, provide a kind of readable storage medium storing program for executing of storage program, described program packet
Instruction is included, described instruction is when executed by a computing apparatus so that computing device executes according to the present invention for detecting live body pair
Any one of the method for elephant.
Scheme according to the present invention for detecting live subject, to the detection zone being truncated to from face gray level image
Quantized image is obtained after being quantified, characteristics of image is extracted according to the quantized image, and In vivo detection is realized based on the characteristics of image,
Reduce complexity while ensureing Detection accuracy, improve In vivo detection speed, have it is of low cost, economic and practical,
High efficient and reliable, the effect coordinated without user.
Specific implementation mode
The exemplary embodiment of the disclosure is more fully described below with reference to accompanying drawings.Although showing the disclosure in attached drawing
Exemplary embodiment, it being understood, however, that may be realized in various forms the disclosure without should be by embodiments set forth here
It is limited.On the contrary, these embodiments are provided to facilitate a more thoroughly understanding of the present invention, and can be by the scope of the present disclosure
Completely it is communicated to those skilled in the art.
Fig. 1 is exemplarily illustrated the structure diagram of computing device 100.The computing device 100 can be implemented as server, example
Such as file server, database server, apps server and network server can also be embodied as including desktop meter
The personal computer of calculation machine and notebook computer configuration.In addition, computing device 100 be also implemented as small size it is portable (or
Person moves) part of electronic equipment, these electronic equipments can be such as cellular phone, personal digital assistant (PDA), personal
Media player device, wireless network browsing apparatus, personal helmet, application specific equipment or may include appointing above
The mixing apparatus of what function.
In basic configuration 102, computing device 100 typically comprise system storage 106 and one or more at
Manage device 104.Memory bus 108 can be used for the communication between processor 104 and system storage 106.
Depending on desired configuration, processor 104 can be any kind of processing, including but not limited to:Microprocessor
((μ P), microcontroller (μ C), digital information processor (DSP) or any combination of them.Processor 104 may include all
Cache, processor core such as one or more rank of on-chip cache 110 and second level cache 112 etc
114 and register 116.Exemplary processor core 114 may include arithmetic and logical unit (ALU), floating-point unit (FPU),
Digital signal processing core (DSP core) or any combination of them.Exemplary Memory Controller 118 can be with processor
104 are used together, or in some implementations, and Memory Controller 218 can be an interior section of processor 104.
Depending on desired configuration, system storage 106 can be any type of memory, including but not limited to:Easily
The property lost memory (RAM), nonvolatile memory (ROM, flash memory etc.) or any combination of them.System stores
Device 106 may include operating system 120, one or more program 122 and program data 124.In some embodiments,
Program 122 can be configured as to be referred to by one or more processor 104 using the execution of program data 124 on an operating system
It enables.
Computing device 100 can also include contributing to from various interface equipments (for example, output equipment 142, Peripheral Interface
144 and communication equipment 146) to basic configuration 102 via the communication of bus/interface controller 130 interface bus 140.Example
Output equipment 142 include graphics processing unit 148 and audio treatment unit 150.They can be configured as contribute to via
One or more port A/V 152 is communicated with the various external equipments of such as display or loud speaker etc.Outside example
If interface 144 may include serial interface controller 154 and parallel interface controller 156, they, which can be configured as, contributes to
Via one or more port I/O 158 and such as input equipment (for example, keyboard, mouse, pen, voice-input device, touch
Input equipment) or the external equipment of other peripheral hardwares (such as printer, scanner etc.) etc communicated.Exemplary communication is set
Standby 146 may include network controller 160, can be arranged to convenient for via one or more communication port 164 and one
The communication that other a or multiple computing devices 162 pass through network communication link.
Network communication link can be an example of communication media.Communication media can be usually presented as in such as carrier wave
Or the computer-readable instruction in the modulated data signal of other transmission mechanisms etc, data structure, program module, and can
To include any information delivery media." modulated data signal " can such signal, one in its data set or more
It is a or it change can the mode of coding information in the signal carry out.As unrestricted example, communication media can be with
Include the wire medium of such as cable network or private line network etc, and such as sound, radio frequency (RF), microwave, infrared
(IR) the various wireless mediums or including other wireless mediums.Term computer-readable medium used herein may include depositing
Both storage media and communication media.
One or more programs 122 of computing device 100 include according to the present invention for detecting live body pair for executing
The instruction of any one of the method for elephant.Fig. 2 illustrate according to one embodiment of the present invention for detecting live body pair
The flow chart of the method 200 of elephant.
As shown in Fig. 2, the method 200 for detecting live subject starts from step S210.In step S210, receive to be checked
Survey the face gray level image of object.Normally, face gray level image is near-infrared image, and is obtained at image capture device.
The face gray level image usually has 256 tonal gradations (0~255).According to embodiment of the present invention, computing device
100 can carry out via one or more communication port 164 and image capture device through the logical of above-mentioned network communication link
Letter obtains the face gray level image of its shooting from the image capture device.Image capture device usually may include near-infrared
Light source, optical lens and imaging sensor.Certainly, according to another embodiment, computing device 100 can also be embodied as image
Collecting device.
It is to be appreciated that image is illustrated as matrix, the height (unit is pixel) of the row correspondence image of matrix, matrix
Row correspondence image width (unit is pixel), the pixel of the element correspondence image of matrix, matrix element value correspondence be exactly picture
The gray value of element.
Then in step S220, detection zone is intercepted from the face gray level image received, the detection zone is at least
Including eye.Specifically, first may be used such as Hough transform method, integral projection method, deformable masterplate method, principle component analysis and
Symmetrical method of changing etc existing eye locating method orients eye in face gray level image, comes further according to eye locations
Intercept detection zone.According to one embodiment, the detection zone being truncated to can be above eye apart from 1/4 times of eyes of eye
Region at spacing to eye lower section between 1 times of eyes spacing of eye, wherein eyes spacing is between two eye pupil holes
Horizontal distance can be calculated after positioning eye locations.Fig. 3 A and Fig. 3 B are illustrated respectively according to one of the invention
The face gray level image of embodiment and the schematic diagram of detection zone.
After intercepting detection zone, in step S230, which is quantified, to obtain quantized image.One
As quantizing process can be as follows:
N+1 gray value x is chosen in the original gradation section to be quantified0,x1,...,xnAs quantized interval side
Boundary, x0< x1< ... < xn, n is given positive integer.It is possible to obtain n quantized interval A0,A1,A2,...,An-1, Ai
=[xi,xi+1), i=0 ..., n-2, An-1=[xn-1,xn]。
For any pixel in original gradation section, if g ∈ Ai, i=0,1 ..., n-1, then q=xi, wherein g is the picture
Original gray value of the element before quantization, q are the gray value of the pixel after quantization.
In view of the gray value comprising the pixel compared with abundant information in image is deposited according to the difference of image averaging gray value
In difference, it is therefore desirable to take the image of different average gray values different quantification manners.An implementation according to the present invention
Mode can first calculate the average gray value of detection zone, and the average gray value of detection zone is detection zone all pixels
The number of pixels of the sum of gray value divided by detection zone.If the average gray value being calculated is in predetermined average gray section,
The pixel that predetermined gray scale interval corresponding with predetermined average gray section is then in gray value in detection zone quantifies, will
Its gray value is quantified as predetermined number tonal gradation, and predetermined number is usually 2 or 3.
Predetermined average gray section usually may include that the first average gray section, the second average gray section, third are flat
One or more in equal gray scale interval and the 4th gray scale interval.Correspondingly, predetermined gray scale interval may include with it is first flat
Corresponding first gray scale interval of equal gray scale interval, the second gray scale interval corresponding with the second average gray section are averaged with third
One in the corresponding third gray scale interval of gray scale interval and the 4th gray scale interval corresponding with the 4th average gray scale interval or
It is multiple.
Wherein, the first average gray section can be [65,100), the first gray scale corresponding with the first average gray section
Section can be [100,200), that is, when the average gray value of detection zone is located at [65,100) when, can will be in detection zone
Gray value be located at [100,200) the gray value of pixel be quantified as 3 tonal gradations, the gray value quilt of entire detection zone pixel
It is quantified as 5 tonal gradations.
Second average gray section can be [100,145), the second gray scale interval corresponding with the second average gray section
For [130,230), that is, when detection zone average gray value be located at [100,145) when, can be by gray value position in detection zone
In [130,230) the gray value of pixel be quantified as 3 tonal gradations, the gray value of entire detection zone pixel is quantified as 5
A tonal gradation.
Third average gray section can be [145,180), third gray scale interval corresponding with third average gray section
Can be [170,255], that is, when detection zone average gray value be located at [145,180) when, can be by gray scale in detection zone
Value is quantified as 3 tonal gradations positioned at the gray value of the pixel of [170,255], and the gray value of entire detection zone pixel is quantized
For 4 tonal gradations.
4th average gray scale interval can be [180,210), the 4th gray scale interval corresponding with the 4th average gray scale interval
Can be [200,255], that is, when detection zone average gray value be located at [180,210) when, can be by gray scale in detection zone
Value is quantified as 2 tonal gradations positioned at the gray value of the pixel of [200,255], and the gray value of entire detection zone pixel is quantized
For 3 tonal gradations.
In addition, predetermined average gray section can also include the 5th average gray section and/or the 6th average gray section,
Wherein the 5th average gray section can be [0,65), the 6th average gray section can be [210,255).If being calculated
Average gray value is in the 5th average gray section or the 6th average gray section, then does not carry out live body inspection to the detection zone
It surveys, without quantization.
After obtaining quantized image, in step S240, characteristics of image is extracted according to the quantized image, characteristics of image can be with
Including image outline feature and/or gradation of image feature.
According to embodiment of the present invention, image outline feature can be at least a pair of of profile in quantization areas
Line, extraction process are as follows:
First according to the boundary of different tonal gradations in quantized image, extraction at least a pair of contour line, each pair of contour line wrap
Include revolver profile and right wheel profile.It is to be appreciated that quantized image includes left-side images and image right, and left-side images and the right side
Symmetry characteristic of the side image based on face can consider symmetrical based on perpendicular bisector between eyes.
It is apparent that left-side images and image right have been quantified as multiple tonal gradations.In left-side images, different two
The intersection of a tonal gradation can form revolver profile.Correspondingly, in image right, the intersection meeting of the two tonal gradations
Right wheel profile corresponding with the revolver profile is formed, revolver profile right wheel profile corresponding with its is a pair of of contour line.
Specifically, it is contemplated that for the boundary of two tonal gradations, in the same one-row pixels of left-side images and image right
There are the pixels that one or more corresponds to the boundary.It according to embodiment of the present invention, can be in left hand view
It as in, determines that it often corresponds to one or more pixel of the boundary in row pixel, and extracts vertical wherein between eyes
The farthest pixel of bisector, to form revolver profile.Correspondingly, in image right, determining that it often corresponds in row pixel should
One or more pixel having a common boundary, and the farthest pixel of perpendicular bisector wherein between eyes is extracted, with formation and revolver
The corresponding right wheel profile of profile.The revolver profile and right wheel profile corresponding with revolver profile obtained in this way partners profile
Line.
Fig. 4 illustrates the schematic diagram of contour line according to one embodiment of the present invention.As shown in figure 4, extracting 4
To contour line, respectively { l1,r1}、{l2,r2}、{l3,r3And { l4,r4, l and r indicate revolver profile and right wheel profile respectively.
At least a pair of of contour line of extraction and then for each pair of contour line, calculate its revolver profile and right wheel profile it
Between distance with quantized image line direction change distance Curve.Wherein, distance Curve is cross with the line direction of quantized image
Axis (line number is positive direction from small to large), with the distance between revolver profile and right wheel profile for the longitudinal axis.Therefore, the distance Curve
Abscissa be the every row pixel of quantized image line number.
Fig. 5 illustrates the schematic diagram of the distance Curve of Fig. 4 illustrated embodiments.As shown in figure 5, according to extracting
4 distance Curves are calculated in 4 pairs of contour lines.Wherein, the longitudinal axis of distance Curve normalizing between revolver profile and right wheel profile
Distance after change.
According to embodiment of the present invention, gradation of image feature may include the intensity profile histogram of quantized image
And/or quantized image Gray scale projection curve in the horizontal direction, the step of extracting gradation of image feature according to quantized image, can be with
Including:The intensity profile histogram of quantized image is calculated, and/or calculates quantized image Gray scale projection curve in the horizontal direction.
In addition, if the average gray value of detection zone is in average gray scale interval, gradation of image feature can also include nose
Or the average gray value of eye.Average gray scale interval typically [65,130].
After extracting above-mentioned characteristics of image, in step s 250, judged based on acquired characteristics of image to be detected
Whether object is live subject.Specifically, computing device can be previously stored at least one decision condition.It is according to the present invention
One embodiment, can store at least one of following decision condition:
Decision condition 1:Under same abscissa, relatively low tonal gradation have a common boundary corresponding a pair of of contour line distance Curve it is vertical
Coordinate be less than higher tonal gradation have a common boundary corresponding a pair of of contour line distance Curve ordinate;
Decision condition 2:For every distance Curve, this distance Curve is in the first abscissa domain U1It is upper minimum there are first
Value, U1=x | | and x-a | < δ1, a is the corresponding abscissa of eye in quantized image, δ1For the 0th multiple of two spacing, the 0th
Multiple can be 1/4;
Decision condition 3:For every distance Curve, there are at least one local maximums for this distance Curve;
Decision condition 4:For every distance Curve, this distance Curve is in the second abscissa domain U2It is upper that there are one second
Maximum, and the second maximum and the difference of the corresponding abscissa of the first minimum are not more than the first multiple of two spacing, U2=
X | and x > a }, the first multiple can be 0.4;
Decision condition 5:For every distance Curve, the first minimum is more than the second multiple of two spacing and is less than two
The third multiple of eye spacing, the second multiple can be 0.3, and third multiple can be 1;
Decision condition 6:For every distance Curve, the second maximum is no more than the quadruple of two spacing and not small
In the 5th multiple of two spacing, quadruple can be 2, and the 5th multiple can be 1;
Decision condition 7:For every distance Curve, the difference of the second maximum and the first minimum is more than the of two spacing
Six multiples, the 6th multiple can be 0.6;
Decision condition 8:For every distance Curve, in third abscissa domain U3On, the slope of this distance Curve is pre-
Determine to be monotonically changed in slope range, U3=x | b- δ3< x < b }, b is the corresponding abscissa of the second maximum, δ3For two spacing
The 7th multiple, the 7th multiple can be 1/4, predetermined slope may range from 0 °~60 °;
Decision condition 9:For every distance Curve, in the 4th abscissa domain U4On, the slope of this distance Curve is pre-
Determine to be monotonically changed in slope range, U4=x | b < x < b+ δ4, δ4For octuple number of two spacing, octuple number can be
1/4, predetermined slope may range from 120 °~180 °;
Decision condition 10:For every distance Curve, in the 5th abscissa domain U5On, the slope of this distance Curve is located at
Second predetermined slope range, U5=x | | and x-c | < δ5, c is the corresponding abscissa of the first minimum, δ5It is the of two spacing
Nine multiples, the 9th multiple can be 1/4, and the second predetermined slope may range from 70 °~110 °;
Decision condition 11:For every two distance Curves, in the 6th abscissa domain U6On, the correlation of two distance Curves
Property be more than predetermined correlation values, U6=x | c- δ6< x < b }, δ5For the tenth multiple of two spacing, the tenth multiple can be 1/
4, predetermined correlation values can be 0.4;
Decision condition 12:The intensity profile histogram of quantized image is continuous, continuously refers to that tonal gradation is continuous here, no
There can be missing;
Decision condition 13:The slope of quantized image Gray scale projection curve in the horizontal direction is turned over by the Gray scale projection curve
Related coefficient between the slope of a curve obtained after turning is more than pre-determined factor, and pre-determined factor can be 0.4;
Decision condition 14:If the average gray value of detection zone is in average gray scale interval, such as [65,130], nose
The relationship of the average gray value of the average gray value and detection zone of portion or eye meets scheduled nose or eye gray-scale relation
Curve.Here nose or eye gray-scale relation curve includes the first nose or eye gray-scale relation curve and the second nose or eye
Portion's gray-scale relation curve, the two is based on nose or eye gray-scale relation scatter plot obtains, and with the average ash of detection zone
Angle value is horizontal axis, using the average gray value of nose or eye as the longitudinal axis.Nose or eye gray-scale relation scatter plot can be by more
The face gray level image of a live subject counts to obtain.Fig. 6 illustrates the ash of nose according to one embodiment of the present invention
The schematic diagram of degree relationship scatter plot.
First nose or eye gray-scale relation curve make in nose or eye gray-scale relation scatter plot 95% or more number
Strong point all below first nose or eye gray-scale relation curve, the second nose or eye gray-scale relation curve make nose or
95% or more data point is all above second nose or eye gray-scale relation curve in eye gray-scale relation scatter plot.
If the average gray value of nose or eye data point corresponding with the average gray value of detection zone is located at first
Above nose or eye gray-scale relation curve or below the second nose or eye gray-scale relation curve, it is determined that discontented
Sufficient nose or eye gray-scale relation curve, otherwise determination meet nose or eye gray-scale relation curve.
Each parameter (multiple of the 0th multiple~the tenth, predetermined slope range, the second predetermined slope model in the above decision condition
Enclose, predetermined correlation values, pre-determined factor etc.) illumination condition and device parameter that can be based on acquisition face gray level image and set
It sets.
For each pre-stored decision condition, can be carried out based on image outline feature or gradation of image feature
The judgement of the decision condition, that is to say, that according to image outline feature or gradation of image feature to determine whether meeting the judgement
Condition.For example, above-mentioned decision condition 1-11 can be judged based on image outline feature, above-mentioned decision condition 12-14 can
To be judged based on gradation of image feature.
Then, it for each pre-stored decision condition, may be based on whether to meet the decision condition to be waited for
Detect the score that object corresponds to the decision condition.
It according to embodiment of the present invention, can be according to obviously meeting, meet, not being inconsistent for a decision condition
It closes, obviously meet the score for providing four grades.Wherein, hence it is evident that do not meet and provide score s1, do not meet and provide score s2, symbol
Conjunction provides score s3, hence it is evident that meet and provides score s4.Wherein, s1<s2<s3<s4, or further, s1<s2<0<s3<s4.Example
Such as, for decision condition 7:For every distance Curve, the difference of the second maximum and the first minimum is more than the of two spacing
Six multiples.Assuming that the difference of the second maximum and the first minimum is S, the 6th multiple of two spacing is Thr, then for every
Distance Curve, the score score corresponding to the decision condition are:
Wherein, s1<s2<0<s3<s4, ∞<level1<0<leve2<∞.Assuming that Thr=10, level1=-5, leve2=5,
s1=-2, s2=-1, s3=1, s4=2.So, if the difference S of obtained the second maximum and the first minimum is 4, divided
Number is -2;If the difference S of the second maximum and the first minimum is 9, it is -1 to obtain score;If the second maximum and the first pole
The difference S of small value is 14, and it is 1 to obtain score;If the difference S of the second maximum and the first minimum is 16, it is 2 to obtain score.
Certainly, for a decision condition, the score of two grades, this hair can also be provided according to meeting and not meeting
The bright marking mode to each decision condition is not limited.
After object to be detected is obtained corresponding to the score of each decision condition, it can be corresponded to based on object to be detected
The score of each decision condition obtains the gross score of object to be detected.For example, each decision condition can will be corresponded to
Score summation or weighted sum obtain the gross score of object to be detected.
Finally, judge whether object to be detected is live subject based on the gross score of object to be detected.Specifically, if waiting for
The gross score for detecting object is more than preset score threshold, then can determine that object to be detected is live subject, and otherwise determination waits for
It is not live subject to detect object.
In addition, according to embodiment of the present invention, the multiframe face gray level image of object to be detected can also be received,
The score for respectively obtaining each face gray level image obtains object to be detected based on the score of each face gray level image
Gross score, it is last to judge whether object to be detected is live subject according to based on the gross score of object to be detected.
Wherein it is possible to obtain the total score of object to be detected to the score averaged of each face gray level image
Number.The score of each face gray level image can also be weighted and be summed to obtain the gross score of object to be detected, it is each
The corresponding weight of score of face gray level image can be the probability that object to be detected is live subject under the score.
It is to be appreciated that the texture (i.e. profile) of face can pass through the pixel of face gray level image and its ash of spatial neighborhood
Degree is distributed to show.For removing the face gray level image of background, naked eyes are difficult to distinguish real human face or photo face,
But there is very big difference in its imaging process.Real human face is the three-dimension object of complexity, and photo face is then planar object, at
Different illumination reflections and shade are will produce as during, the difference of surface properties is caused, this can be preferably detected using texture
Species diversity.The face gray level image of live subject can preferably highlight this texture after quantization, non-living body object
Face gray level image can not then be accomplished.
Fig. 7 A and Fig. 7 B illustrate the face gray level image of live subject and non-living body object after quantization respectively
Schematic diagram.It is apparent that the texture shown in Fig. 7 A is quite clear, Fig. 7 B are then smudgy.
Therefore, the present invention can be effectively realized for detecting characteristics of image of the scheme based on quantized image of live subject
In vivo detection reduces complexity while ensureing Detection accuracy, improves In vivo detection speed, have it is of low cost,
Economic and practical, high efficient and reliable, the effect coordinated without user.
It should be appreciated that various technologies described herein are realized together in combination with hardware or software or combination thereof.From
And some aspects or part of the process and apparatus of the present invention or the process and apparatus of the present invention can take embedded tangible matchmaker
It is situated between, such as the program code in floppy disk, CD-ROM, hard disk drive or other arbitrary machine readable storage mediums (refers to
Enable) form, wherein when program is loaded into the machine of such as computer etc, and when being executed by the machine, which becomes real
The equipment for trampling the present invention.
In the case where program code executes on programmable computers, computing device generally comprises processor, processor
Readable storage medium (including volatile and non-volatile memory and or memory element), at least one input unit, and extremely
A few output device.Wherein, memory is configured for storage program code;Processor is configured for according to the memory
Instruction in the program code of middle storage executes the various methods of the present invention.
By way of example and not limitation, computer-readable medium includes computer storage media and communication media.It calculates
Machine readable medium includes computer storage media and communication media.Computer storage media storage such as computer-readable instruction,
The information such as data structure, program module or other data.Communication media is generally modulated with carrier wave or other transmission mechanisms etc.
Data-signal processed embodies computer-readable instruction, data structure, program module or other data, and includes that any information passes
Pass medium.Above any combination is also included within the scope of computer-readable medium.
It should be appreciated that in order to simplify the disclosure and help to understand one or more of each inventive aspect, it is right above
In the description of exemplary embodiment of the present invention, each feature of the invention be grouped together into sometimes single embodiment, figure or
In person's descriptions thereof.However, the method for the disclosure should be construed to reflect following intention:I.e. claimed hair
The bright feature more features required than being expressly recited in each claim.More precisely, as the following claims
As book reflects, inventive aspect is all features less than single embodiment disclosed above.Therefore, it then follows specific real
Thus the claims for applying mode are expressly incorporated in the specific implementation mode, wherein each claim itself is used as this hair
Bright separate embodiments.
Those skilled in the art should understand that the module of the equipment in example disclosed herein or unit or groups
Part can be arranged in equipment as depicted in this embodiment, or alternatively can be positioned at and the equipment in the example
In different one or more equipment.Module in aforementioned exemplary can be combined into a module or be segmented into addition multiple
Submodule.
The present invention can also include:A8, the method as described in A7, wherein described that gradation of image is extracted according to quantized image
The step of feature further includes:If the average gray value of the detection zone is in average gray scale interval, calculate nose or
The average gray value of eye.A9, the method as described in A8, wherein the average gray scale interval is [65,130].A11, such as
Method described in A7, wherein it is described based on acquired characteristics of image come judge object to be detected whether be live subject step
Suddenly include:Based on described image gray feature, judgement at least one in following decision condition is carried out:The ash of the quantized image
It is continuous to spend distribution histogram;The slope of the quantized image Gray scale projection curve in the horizontal direction with by the Gray Projection song
Related coefficient between the slope of a curve obtained after line overturning is more than pre-determined factor.A12, the method as described in A8, wherein if
The average gray value of the detection zone is in the average gray scale interval, then is based on described image gray feature, carries out
The judgement of following decision condition:The relationship of the average gray value of the nose or eye and the average gray value of the detection zone
Meet scheduled nose or eye gray-scale relation curve.A13, the method as described in any one of A10-12, wherein described to be based on
Acquired characteristics of image judges that the step of whether object to be detected is live subject includes:For each decision condition,
Whether meet the decision condition based on described image contour feature or described image gray feature, to obtain the object to be detected
Corresponding to the score of the decision condition;The score for corresponding to each decision condition based on the object to be detected, to obtain
State the gross score of object to be detected;Judge whether the object to be detected is live body based on the gross score of the object to be detected
Object.A14, the method as described in any one of A1-13, wherein the face gray level image is near-infrared image.
Those skilled in the art, which are appreciated that, to carry out adaptively the module in the equipment in embodiment
Change and they are arranged in the one or more equipment different from the embodiment.It can be the module or list in embodiment
Member or component be combined into a module or unit or component, and can be divided into addition multiple submodule or subelement or
Sub-component.Other than such feature and/or at least some of process or unit exclude each other, it may be used any
Combination is disclosed to all features disclosed in this specification (including adjoint claim, abstract and attached drawing) and so to appoint
Where all processes or unit of method or equipment are combined.Unless expressly stated otherwise, this specification (including adjoint power
Profit requires, abstract and attached drawing) disclosed in each feature can be by providing the alternative features of identical, equivalent or similar purpose come generation
It replaces.
In addition, it will be appreciated by those of skill in the art that although some embodiments described herein include other embodiments
In included certain features rather than other feature, but the combination of the feature of different embodiments means in of the invention
Within the scope of and form different embodiments.For example, in the following claims, embodiment claimed is appointed
One of meaning mode can use in any combination.
In addition, be described as herein can be by the processor of computer system or by executing for some in the embodiment
The combination of method or method element that other devices of the function are implemented.Therefore, have for implementing the method or method
The processor of the necessary instruction of element forms the device for implementing this method or method element.In addition, device embodiment
Element described in this is the example of following device:The device is used to implement performed by the element by the purpose in order to implement the invention
Function.
As used in this, unless specifically stated, come using ordinal number " first ", " second ", " third " etc.
Description plain objects are merely representative of the different instances for being related to similar object, and are not intended to imply that the object being described in this way must
Must have the time it is upper, spatially, in terms of sequence or given sequence in any other manner.
Although the embodiment according to limited quantity describes the present invention, above description, the art are benefited from
It is interior it is clear for the skilled person that in the scope of the present invention thus described, it can be envisaged that other embodiments.Additionally, it should be noted that
The language that is used in this specification primarily to readable and introduction purpose and select, rather than in order to explain or limit
Determine subject of the present invention and selects.Therefore, without departing from the scope and spirit of the appended claims, for this
Many modifications and changes will be apparent from for the those of ordinary skill of technical field.For the scope of the present invention, to this
The done disclosure of invention is illustrative and not restrictive, and it is intended that the scope of the present invention be defined by the claims appended hereto.