CN106951869B - Living body verification method and equipment - Google Patents

Living body verification method and equipment

Info

Publication number
CN106951869B
CN106951869B (application CN201710175495.5A)
Authority
CN
China
Prior art keywords
parameter
image data
image
texture features
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710175495.5A
Other languages
Chinese (zh)
Other versions
CN106951869A (en)
Inventor
熊鹏飞
王汉杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710175495.5A
Publication of CN106951869A
Application granted
Publication of CN106951869B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a living body verification method and equipment. The method includes: obtaining first image data and parsing the first image data to obtain texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a reflection feature of the first image data, and a border feature of the first image data; obtaining a first-class parameter corresponding to the texture features based on a classification model; obtaining, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter; when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, determining a fusion parameter based on the first-class parameter and the second-class parameter; and when the fusion parameter is greater than a third-class threshold, determining that living body verification has passed.

Description

Living body verification method and equipment
Technical field
The present invention relates to face recognition technology, and in particular to a living body verification method and equipment.
Background art
Existing passive living body verification methods fall broadly into three classes: motion-based methods, device-based methods and texture-based methods. Motion-based methods analyze the image background or the user's unconscious behavior to judge whether three-dimensional depth variation is present, and thereby distinguish a photo from a real person. Device-based methods detect the difference between a real face and a photo/video image from facial images acquired under different light sources or light intensities; such methods rely on the fact that a real face reflects a light source to a different degree than a photo or video does. Texture-based methods classify directly by analyzing a certain image feature.
The three classes of methods each have shortcomings. Motion-based methods still require the user to perform head-turning or side-face actions, so they are not fully passive, and they cannot distinguish video replays. Device-based methods can achieve good results but depend heavily on dedicated equipment and scale poorly. Texture-based methods that use a single image feature have difficulty describing the different attack samples: for example, frequency-domain analysis is ineffective against high-definition images, and reflectance analysis is ineffective for images shot under dim light with no specular reflection.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide a living body verification method and equipment.
To achieve the above objectives, the technical solutions of the embodiments of the present invention are realized as follows:
An embodiment of the invention provides a living body verification method, the method comprising:
obtaining first image data, parsing the first image data, and obtaining texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a reflection feature of the first image data, and a border feature of the first image data;
obtaining a first-class parameter corresponding to the texture features based on a classification model; and
obtaining, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter;
when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, determining a fusion parameter based on the first-class parameter and the second-class parameter;
when the fusion parameter is greater than a third-class threshold, determining that living body verification has passed.
In the above scheme, the method further comprises: when the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold, determining that living body verification has failed.
In the above scheme, obtaining the texture features of the first image data comprises:
obtaining a first texture feature, a second texture feature and a third texture feature of the first image data respectively, the first texture feature characterizing the blur of the first image data, the second texture feature characterizing the reflection of the first image data, and the third texture feature characterizing the border of the first image data;
obtaining the first-class parameter corresponding to the texture features based on preconfigured classification models comprises: obtaining a first parameter corresponding to the first texture feature based on a preconfigured first classification model, obtaining a second parameter corresponding to the second texture feature based on a preconfigured second classification model, and obtaining a third parameter corresponding to the third texture feature based on a preconfigured third classification model.
Counting the second-class parameter corresponding to the texture features in the first image data comprises:
counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
In the above scheme, determining the fusion parameter based on the first-class parameter and the second-class parameter when the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold comprises:
when the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter.
In the above scheme, determining that living body verification has failed when the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold, comprises:
when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third-class threshold, determining that living body verification has failed.
In the above scheme, obtaining the first texture feature of the first image data comprises:
converting the first image data into hue saturation value (HSV, Hue Saturation Value) model data; performing local binary patterns (LBP, Local Binary Patterns) processing on the HSV model data to obtain, respectively, first LBP feature data corresponding to the hue data, second LBP feature data corresponding to the saturation data, and third LBP feature data corresponding to the value data; and taking the first LBP feature data, the second LBP feature data and the third LBP feature data as the first texture feature.
In the above scheme, obtaining the second texture feature of the first image data comprises:
extracting a reflection feature of the first image data and extracting a color histogram feature of the first image data, and taking the reflection feature and the color histogram feature as the second texture feature;
wherein extracting the reflection feature of the first image data comprises: obtaining a reflectivity image of the first image data; obtaining a specular image based on the first image data and the reflectivity image; partitioning the specular image into blocks; and taking block gray-scale statistics as the reflection feature.
In the above scheme, obtaining the third texture feature of the first image data comprises:
filtering the first image data to obtain first edge image data of the first image data;
performing LBP processing on the first edge image data to obtain fourth LBP feature data characterizing the third texture feature.
In the above scheme, counting the fourth parameter corresponding to the first texture feature in the first image data comprises:
performing Gaussian filtering on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and taking gradient information of the difference image data as the fourth parameter.
In the above scheme, counting the fifth parameter corresponding to the second texture feature in the first image data comprises:
obtaining the specular image of the first image data; binarizing the specular image; partitioning the specular image into blocks based on the binarized image; counting, for each block, a first ratio of the region whose brightness meets a preset threshold to the corresponding block; and taking the sum of the first ratios over all blocks as the fifth parameter.
In the above scheme, counting the sixth parameter corresponding to the third texture feature in the first image data comprises:
identifying the face region in the first image data;
performing edge detection on the first image data to obtain second edge image data, and identifying first straight lines in the second edge image data whose length meets a first preset condition;
extracting, from the first straight lines, second straight lines located outside the face region whose slope meets a second preset condition, and counting the number of the second straight lines as the sixth parameter.
In the above scheme, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter comprises:
obtaining in advance, by a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter;
computing a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient;
adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
An embodiment of the invention also provides living body verification equipment, the equipment comprising: a parsing unit, a classification unit, a statistics unit and a fusion unit; wherein
the parsing unit is configured to obtain first image data and parse the first image data;
the classification unit is configured to obtain texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a reflection feature of the first image data, and a border feature of the first image data; and to obtain a first-class parameter corresponding to the texture features based on a classification model;
the statistics unit is configured to obtain, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter;
the fusion unit is configured to judge whether the first-class parameter is greater than a first-class threshold and whether the second-class parameter is greater than a second-class threshold; to determine, when the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold, a fusion parameter based on the first-class parameter and the second-class parameter; and to determine, when the fusion parameter is greater than a third-class threshold, that living body verification has passed.
In the above scheme, the fusion unit is further configured to determine that living body verification has failed when the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold.
In the above scheme, the classification unit is configured to obtain a first texture feature, a second texture feature and a third texture feature of the first image data respectively, the first texture feature characterizing the degree of blur of the first image data, the second texture feature characterizing the degree of reflection of the first image data, and the third texture feature characterizing whether the first image data includes a border; and is further configured to obtain a first parameter corresponding to the first texture feature based on a preconfigured first classification model, a second parameter corresponding to the second texture feature based on a preconfigured second classification model, and a third parameter corresponding to the third texture feature based on a preconfigured third classification model;
the statistics unit is configured to count a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
In the above scheme, the fusion unit is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
In the above scheme, the fusion unit is further configured to determine that living body verification has failed when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third-class threshold.
In the above scheme, the classification unit is configured to convert the first image data into HSV model data; to perform LBP processing on the HSV model data to obtain, respectively, first LBP feature data corresponding to the hue data, second LBP feature data corresponding to the saturation data, and third LBP feature data corresponding to the value data; and to take the first LBP feature data, the second LBP feature data and the third LBP feature data as the first texture feature.
In the above scheme, the classification unit is configured to extract the reflection feature of the first image data and the color histogram feature of the first image data, and to take the reflection feature and the color histogram feature as the second texture feature;
wherein the classification unit is configured to obtain the reflectivity image of the first image data, to obtain the specular image based on the first image data and the reflectivity image, to partition the specular image into blocks, and to take block gray-scale statistics as the reflection feature.
In the above scheme, the classification unit is configured to filter the first image data to obtain the first edge image data of the first image data, and to perform LBP processing on the first edge image data to obtain the fourth LBP feature data characterizing the third texture feature.
In the above scheme, the statistics unit is configured to perform Gaussian filtering on the first image data to obtain the Gaussian image data of the first image data, to obtain difference image data based on the first image data and the Gaussian image data, and to take gradient information of the difference image data as the fourth parameter.
In the above scheme, the statistics unit is configured to obtain the specular image of the first image data, to binarize the specular image, to partition the specular image into blocks based on the binarized image, to count for each block a first ratio of the region whose brightness meets a preset threshold to the corresponding block, and to take the sum of the first ratios over all blocks as the fifth parameter.
In the above scheme, the statistics unit is configured to identify the face region in the first image data; to perform edge detection on the first image data to obtain second edge image data; to identify first straight lines in the second edge image data whose length meets a first preset condition; to extract, from the first straight lines, second straight lines located outside the face region whose slope meets a second preset condition; and to count the number of the second straight lines as the sixth parameter.
In the above scheme, the fusion unit is configured to obtain in advance, by a machine learning algorithm, the first weight coefficient corresponding to the first parameter, the second weight coefficient corresponding to the second parameter, the third weight coefficient corresponding to the third parameter, the fourth weight coefficient corresponding to the fourth parameter, the fifth weight coefficient corresponding to the fifth parameter, and the sixth weight coefficient corresponding to the sixth parameter; to compute the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient; and to add the first to sixth products to obtain the fusion parameter.
The living body verification method and equipment provided by embodiments of the present invention obtain first image data and parse it; obtain texture features of the first image data, the texture features characterizing at least one of the following attributes: the degree of blur of the first image data, the degree of reflection of the first image data, and whether the first image data includes a border; obtain a first-class parameter corresponding to the texture features based on a preconfigured classification model; count a second-class parameter corresponding to the texture features in the first image data; when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, determine a fusion parameter based on the first-class parameter and the second-class parameter; and when the fusion parameter is greater than a third-class threshold, determine that living body verification has passed. The technical solution of the embodiments extracts multiple texture features; on the one hand it obtains first-class parameters through classification models and applies threshold decisions, and on the other hand it counts, through feature-distribution statistics, the second-class parameters corresponding to the texture features in the image data and applies threshold decisions; finally it realizes living body verification by fusing the first-class and second-class parameters. The solution works from the image data alone, without depending on user actions or dedicated devices; the multi-modal fusion substantially improves the pass rate for genuine users and effectively defends against different types of attacks such as printed photos and images shown on a display screen, greatly improving the accuracy of identity verification.
Brief description of the drawings
Fig. 1 is an overall flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 2 is a first flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 3a to Fig. 3d are diagrams of existing living body attack sources;
Fig. 4a to Fig. 4c are diagrams of the processing of the first texture feature in the living body verification method of an embodiment of the present invention;
Fig. 5a and Fig. 5b are diagrams of the first texture feature in the living body verification method of an embodiment of the present invention;
Fig. 6a and Fig. 6b are diagrams of the second texture feature in the living body verification method of an embodiment of the present invention;
Fig. 7 is a diagram of the third texture feature in the living body verification method of an embodiment of the present invention;
Fig. 8 is a second flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 9 is a performance-curve diagram of the living body verification method of an embodiment of the present invention;
Fig. 10 is a diagram of the composition of the living body verification equipment of an embodiment of the present invention;
Fig. 11 is a diagram of the hardware composition of the living body verification equipment of an embodiment of the present invention.
Detailed description of embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Before the living body verification method of the embodiments is described in detail, the overall realization of the living body verification scheme is explained. Fig. 1 is an overall flow diagram of the living body verification method of an embodiment of the present invention; as shown in Fig. 1, the method may include the following stages:
Stage 1: input of the video stream, i.e. the living body verification equipment obtains image data.
Stage 2: the living body verification equipment performs face detection.
Stage 3: liveness detection. When the detection result indicates a living body, the flow enters Stage 4: the image data is sent to the back end for face verification. When the detection result indicates a non-living body, the flow re-enters the liveness detection stage. The concrete realization of liveness detection is described in the living body verification method below.
An embodiment of the invention provides a living body verification method. Fig. 2 is a first flow diagram of the living body verification method of the embodiment; as shown in Fig. 2, the method comprises:
Step 101: obtaining first image data and parsing the first image data to obtain texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a reflection feature of the first image data, and a border feature of the first image data.
Step 102: obtaining a first-class parameter corresponding to the texture features based on a classification model.
Step 103: obtaining, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter.
Step 104: when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, determining a fusion parameter based on the first-class parameter and the second-class parameter.
Step 105: when the fusion parameter is greater than a third-class threshold, determining that living body verification has passed.
In one implementation, the method further comprises: when the first-class parameter is not greater than the first-class threshold or the second-class parameter is not greater than the second-class threshold, determining that living body verification has failed.
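As an illustration of the decision logic of steps 101 to 105, the following is a minimal sketch; the function and argument names, and the shape of the inputs, are illustrative assumptions rather than part of the patent text.

```python
# Minimal sketch of the two-stage threshold-and-fusion decision (steps 101-105).
# All names and the structure of the inputs are illustrative assumptions.

def verify_liveness(class_params, stat_params,
                    class_thresholds, stat_thresholds,
                    weights, fusion_threshold):
    """class_params: classifier outputs; stat_params: statistical parameters."""
    # Every individual parameter must exceed its own threshold (step 104 precondition).
    for p, t in zip(class_params, class_thresholds):
        if p <= t:
            return False  # living body verification fails
    for p, t in zip(stat_params, stat_thresholds):
        if p <= t:
            return False
    # Fusion parameter: weighted sum of all parameters (see the end of the description).
    params = list(class_params) + list(stat_params)
    fusion = sum(w * p for w, p in zip(weights, params))
    return fusion > fusion_threshold  # step 105
```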
The living body verification method of the embodiments is applied in living body verification equipment. The equipment may specifically be an electronic device with an image acquisition unit, which obtains image data through the image acquisition unit. The electronic device may be a mobile device such as a mobile phone or tablet computer, a personal computer, or an access control device configured with an access control system (a system that controls an entrance/exit); the image acquisition unit may specifically be a camera provided on the electronic device.
In this embodiment, after the living body verification equipment (hereinafter simply "the equipment") obtains image data through the image acquisition unit, it parses the image data to obtain the texture features of the first image data; the obtained image data comprises multiple frames.
In general, the sources that impersonate a living face in order to pass living body verification (i.e. attacks) mainly include printed photos, photos shown on a display/screen, and videos played on a display. Fig. 3a to Fig. 3d are diagrams of existing living body attack sources. Analysis of these image classes shows that the different types of images in Fig. 3a to Fig. 3d have different characteristics: a printed photo usually includes a border; an image shown on a screen or display usually has moire fringes and lower clarity than an image containing a real person; and images shown on a screen or display exhibit specular reflection. These characteristics are of course not limited to the listed attack sample sources. The present embodiment therefore obtains the texture features of the first image data based on the above attack characteristics.
In one implementation, obtaining the texture features of the first image data comprises: obtaining a first texture feature, a second texture feature and a third texture feature of the first image data respectively, the first texture feature characterizing the blur of the first image data, the second texture feature characterizing the reflection of the first image data, and the third texture feature characterizing the border of the first image data. Correspondingly, obtaining the first-class parameter corresponding to the texture features based on preconfigured classification models comprises: obtaining a first parameter corresponding to the first texture feature based on a preconfigured first classification model, obtaining a second parameter corresponding to the second texture feature based on a preconfigured second classification model, and obtaining a third parameter corresponding to the third texture feature based on a preconfigured third classification model. Correspondingly, counting the second-class parameter corresponding to the texture features in the first image data comprises: counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
Specifically, in this embodiment, obtaining the first texture feature of the first image data comprises: converting the first image data into HSV model data; performing LBP processing on the HSV model data to obtain, respectively, first LBP feature data corresponding to the hue data, second LBP feature data corresponding to the saturation data, and third LBP feature data corresponding to the value data; and taking the first, second and third LBP feature data as the first texture feature.
In this embodiment, the first texture feature characterizes the blur of the first image data. The blur feature represents the degree of blur of the first image data, i.e. the feature presented when the clarity of texture and boundaries in the first image data does not reach a preset requirement; in one implementation, the blur feature is represented by LBP features.
Specifically, the first image data may be RGB image data. The RGB data is converted into HSV model data, yielding H model data representing hue, S model data representing saturation, and V model data representing value. LBP processing is applied to the H, S and V model data respectively to obtain the image gradient information in each. Taking the H model data as an example: the H model data is converted to a gray-scale image, and for each feature point the relative gray-scale relationship with its eight neighboring feature points is determined. For a three-by-three matrix of feature points, as shown in Fig. 4a, the gray value of each feature point is expressed numerically, as shown in Fig. 4b. The gray value of each of the eight neighbors is then compared with that of the central feature point: if a neighbor's gray value is greater than the central point's, the neighbor's value is recorded as 1; conversely, if it is less than or equal to the central point's, it is recorded as 0, as shown in Fig. 4c. The values of the neighbors are then concatenated into an 8-bit binary string, which can be understood as a gray value distributed in (0, 255). In the concrete implementation shown in Fig. 4c, taking the top-left feature point as the starting point and proceeding clockwise, the obtained 8-bit string is 10001111. In this way, the binary string corresponding to each feature point (as central feature point) in the image is obtained. Further, to remove redundancy, only the binary strings with fewer than 2 transitions between 0 and 1 are counted. For example, the string 10001111 has two transitions (between the first and second bits, and between the fourth and fifth bits), so it does not satisfy the condition "fewer than 2 transitions between 0 and 1"; the string 00001111 has only one transition (between the fourth and fifth bits), which satisfies the condition. The counted binary strings are then mapped into the range (0, 58); the mapped data serve as the first LBP feature data corresponding to the hue data. This also greatly reduces the amount of data to process.
The above process may be expressed by the following formulas:

LBP = [code0, code1, ..., code7]    (1)

code(m, n) = Img(y+m, x+n) > Img(y, x) ? 1 : 0    (2)

In expression (1), LBP represents the relative relationship between the display parameter of a feature point in the first image data and the display parameters of its neighboring feature points; the feature point is any feature point in the first image data; code0, code1, ..., code7 are the comparison results for the eight neighbors of the feature point. In one implementation, the display parameter is the gray value, though other display parameters may be used. Expression (2) compares the gray value of feature point (y+m, x+n) with that of feature point (y, x): if the gray value of (y+m, x+n) is greater than that of (y, x), the binary digit code(m, n) is recorded as 1, otherwise as 0.
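As a concrete illustration of this step, here is a minimal NumPy/OpenCV sketch of the 8-neighbor LBP of formulas (1)-(2) with a 59-bin uniform-pattern mapping, applied per HSV channel. The helper names and the construction of the 59-bin lookup table (the standard "at most 2 circular transitions" rule, which yields exactly 59 bins) are assumptions consistent with the description, not code from the patent.

```python
import cv2
import numpy as np

# Offsets of the 8 neighbors, clockwise from the top-left (as in Fig. 4c).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(gray):
    """59-bin uniform-LBP histogram of one channel (formulas (1)-(2))."""
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[1:h-1, 1:w-1]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbor = g[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (neighbor > center).astype(np.int32) << bit  # formula (2)
    # Map the 256 codes to 59 bins: each uniform pattern (at most 2 transitions
    # around the circular bit string) gets its own bin; all others share bin 58.
    def transitions(c):
        bits = [(c >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    lut = np.zeros(256, dtype=np.int32)
    next_bin = 0
    for c in range(256):
        if transitions(c) <= 2:
            lut[c] = next_bin
            next_bin += 1
        else:
            lut[c] = 58
    hist = np.bincount(lut[code].ravel(), minlength=59)
    return hist / max(hist.sum(), 1)

def first_texture_feature(bgr):
    """Concatenate the LBP histograms of the H, S and V channels (3 x 59 dims)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return np.concatenate([lbp_histogram(hsv[:, :, i]) for i in range(3)])
```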
Similarly, the second LBP feature data and the third LBP feature data are obtained in the same way, which is not repeated here. The obtained first, second and third LBP feature data are concatenated as the first texture feature, which can be understood as three 59-dimensional LBP feature vectors concatenated in sequence. Fig. 5a and Fig. 5b are diagrams of the first texture feature in the living body verification method of an embodiment: Fig. 5a shows the first texture feature extracted from image data determined in advance to be a living face; Fig. 5b shows the first texture feature extracted from image data determined in advance to be a non-living face.
In this embodiment, obtaining the second texture feature of the first image data comprises: extracting a reflection feature of the first image data and extracting a color histogram feature of the first image data, and taking the reflection feature and the color histogram feature as the second texture feature; wherein extracting the reflection feature of the first image data comprises: obtaining a reflectivity image of the first image data; obtaining a specular image based on the first image data and the reflectivity image; partitioning the specular image into blocks; and taking block gray-scale statistics as the reflection feature.
In this embodiment, the second texture feature characterizes the reflection of the first image data; the reflection feature represents the distribution of highlight regions and the chroma distribution of the first image data. Specifically, the second texture feature characterizing reflection comprises two parts: one describes the distribution of the image's highlight regions, where a highlight region is a region whose luminance reaches a preset threshold; the other corresponds to the color cast caused by differing image reflectivity, i.e. the color histogram feature. Because the image in a secondary shot (printed photos and images shown on a display/screen, which are among the non-living-face attack modes, can be understood as secondary shots) is approximately planar and its material differs from a real face, color changes are easily introduced. Specifically, with the first image data as an RGB image in RGB color space, the reflectivity image of the first image data is obtained, and the specular image is obtained based on the first image data and the reflectivity image; specifically, the specular image is the difference between the first image data and its reflectivity image. The reflectivity image is obtained by the following formulas:
Spect(y, x) = (1 - max(max(r(y, x) * t, g(y, x) * t), b(y, x) * t)) * 255    (3)

t = 1.0 / (r(y, x) + g(y, x) + b(y, x))    (4)

where Spect(y, x) is the reflectivity value of feature point (y, x) in the first image data; r(y, x) is the red-channel value of feature point (y, x) in RGB color space; g(y, x) is the green-channel value of feature point (y, x); and b(y, x) is the blue-channel value of feature point (y, x).
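A minimal sketch of formulas (3)-(4) and the block statistics that follow; the vectorization, the epsilon guard against division by zero, and the 4x4 block grid are assumptions not stated in the patent.

```python
import numpy as np

def reflectivity_image(rgb):
    """Formulas (3)-(4): per-pixel reflectivity (the max is symmetric in r, g, b,
    so the channel order does not matter)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    t = 1.0 / (r + g + b + 1e-6)  # epsilon: assumed guard against black pixels
    return (1.0 - np.maximum(np.maximum(r * t, g * t), b * t)) * 255.0

def specular_image(gray, spect):
    """The specular image: difference between the image and its reflectivity image."""
    return np.clip(gray.astype(np.float64) - spect, 0.0, 255.0)

def reflection_feature(spec, blocks=4):
    """Per-block mean and variance of the specular image, concatenated."""
    h, w = spec.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            blk = spec[by*h//blocks:(by+1)*h//blocks,
                       bx*w//blocks:(bx+1)*w//blocks]
            feats += [blk.mean(), blk.var()]
    return np.array(feats)
```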
Further, the specular image is partitioned into blocks, and the mean and variance (delta) of each image block are taken as the reflection feature; since the specular image is a gray-scale image, the mean and variance of the blocks are computed over gray values. Fig. 6a and Fig. 6b are diagrams of the second texture feature in the living body verification method of an embodiment: the left part of Fig. 6a is image data collected from a non-living face, and the right part is the specular image obtained after processing that image data; the left part of Fig. 6b is image data collected from a living face, and the right part is the specular image obtained after processing that image data.
For the color histogram feature, the HSV model data of the first image data yields H model data representing hue, S model data representing saturation, and V model data representing value. The H, S and V model data are each projected onto 32 bins, giving a 32768-dimensional color histogram. The 100 dimensions with the highest histogram components are taken as the color histogram feature of the first image data.
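A minimal sketch of the color histogram feature; reading the 32768 dimensions as a joint 32x32x32 quantization of the three channels, and normalizing before taking the 100 largest components, are assumptions.

```python
import cv2
import numpy as np

def color_histogram_feature(bgr, bins=32, top_k=100):
    """32x32x32 = 32768-bin HSV histogram; keep the 100 largest components."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256]).ravel()
    hist /= max(hist.sum(), 1.0)          # normalization: assumed
    return np.sort(hist)[::-1][:top_k]    # highest 100 histogram components
```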
In this embodiment, obtaining the third texture feature of the first image data comprises: filtering the first image data to obtain first edge image data of the first image data; and performing LBP processing on the first edge image data to obtain fourth LBP feature data characterizing the third texture feature.
In this embodiment, the third texture feature characterizes the border of the first image data; the border feature indicates whether the first image data contains a frame, and is specifically the straight-line structure presented by the region of the first image data outside the face region.
Specifically, to obtain the border feature of the first image data, the first image data is first filtered to obtain the corresponding first edge image. In one implementation, the Sobel operator (which comprises two kernels, one for horizontal and one for vertical edge detection) is convolved with the pixel values of the first image data to obtain the corresponding first edge image. The first edge image is then converted to gray-scale, and for each feature point the relative gray-scale relationship with its eight neighbors is determined, e.g. over a three-by-three matrix of feature points: the gray value of each feature point is expressed numerically, each neighbor whose gray value is greater than that of the central feature point is recorded as 1, and each neighbor whose gray value is less than or equal to that of the central feature point is recorded as 0. The neighbor values are concatenated into an 8-bit binary string, understood as a gray value distributed in (0, 255); in the implementation of Fig. 4c, starting from the top-left feature point and proceeding clockwise, the obtained string is 10001111. The binary string corresponding to each feature point (as central feature point) in the image is obtained in this way. Further, to remove redundancy, only the binary strings with fewer than 2 transitions between 0 and 1 are counted, as in the examples above (10001111 has two transitions and fails the condition; 00001111 has one transition and meets it). The counted binary strings are then mapped into the range (0, 58); the mapped data serve as the fourth LBP feature data corresponding to the third texture feature, and the amount of data to process is greatly reduced. Because the other smooth regions have been filtered out, the fourth LBP feature data of the first edge image highlight the edge portions of the image and describe its border feature.
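A minimal sketch of the third texture feature, reusing the lbp_histogram helper from the earlier sketch; combining the two Sobel responses into a magnitude image is an assumption, since the text only says the two kernels are convolved with the image.

```python
import cv2
import numpy as np

def third_texture_feature(bgr):
    """Sobel edge image, then a 59-bin uniform-LBP histogram over the edges."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal edge kernel
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical edge kernel
    edges = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))  # assumed magnitude fusion
    return lbp_histogram(edges)  # helper from the first-texture-feature sketch
```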
The above technical solution extracts texture features from the first image data based on three kinds of characteristics. In this embodiment, a large amount of sample data is collected in advance; the sample data may include first texture features extracted as above with their corresponding type (blur type), and/or second texture features with their corresponding type (reflection type), and/or third texture features with their corresponding type (border type); a sample may include at least one of the three kinds of texture features and the corresponding type. Machine-learning training is performed for each type of texture feature to obtain the classification model corresponding to each type. Specifically, corresponding to the blur type, a corresponding first classification model is obtained. For example, as shown in Fig. 5b, the first texture features obtained from image data labeled in advance as attacks all show stripe characteristics, such as the oblique stripes in the first and third images and the roughly horizontal stripes in the second image; machine-learning training can then be performed on the common characteristics of the first texture features corresponding to the blur type (such as the stripe features) to obtain the first classification model for the first texture feature. Corresponding to the reflection type, a corresponding second classification model is obtained; corresponding to the border type, a corresponding third classification model is obtained.
In this embodiment, the obtained texture features (including at least one of the first texture feature, the second texture feature and the third texture feature) are input into the classification model of the corresponding type to obtain the corresponding first-class parameter. For example, the obtained first texture feature is input into the first classification model corresponding to the blur type to obtain the first parameter, which characterizes the degree of blur of the first image data; the obtained second texture feature is input into the second classification model corresponding to the reflection type to obtain the second parameter, which characterizes the degree of reflection of the first image data; the obtained third texture feature is input into the third classification model corresponding to the border type to obtain the third parameter, which characterizes whether the first image data includes a border. Further, a threshold is configured for each classification model: when an obtained parameter is not greater than the corresponding threshold, the person contained in the first image data is determined to be a non-living body, i.e. living body verification fails; correspondingly, when the obtained parameter is greater than the corresponding threshold, the subsequent fusion decision is made in combination with the statistical results of the following three characteristics. For example, when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, the person contained in the first image data is determined to be a non-living body, i.e. living body verification fails.
In this embodiment, counting the fourth parameter corresponding to the first texture feature in the first image data comprises: performing Gaussian filtering on the first image data to obtain the Gaussian image data of the first image data; obtaining difference image data based on the first image data and the Gaussian image data; and taking gradient information of the difference image data as the fourth parameter.
Specifically, Gaussian filtering is applied to the first image data to obtain the Gaussian image data, and the gradient information of the difference image between the first image data and the Gaussian image data is counted as the fourth parameter. This process may be realized by the following formulas:
Gx(y, x) = Img(y, x+1) - Img(y, x-1)    (5)

Bx(y, x) = Img(y, x+kernel) - Img(y, x-kernel)    (6)

Vx(y, x) = max(0, Gx(y, x) - Bx(y, x))    (7)

Gy(y, x) = Img(y+1, x) - Img(y-1, x)    (8)

By(y, x) = Img(y+kernel, x) - Img(y-kernel, x)    (9)

Vy(y, x) = max(0, Gy(y, x) - By(y, x))    (10)

Blur = max(Sum(Gx) - Sum(Vx), Sum(Gy) - Sum(Vy))    (11)

where Gx(y, x) is the gradient of feature point (y, x) along the x axis; Bx(y, x) is the difference between the two pixels at horizontal distance kernel to the left and right of feature point (y, x), kernel being a variable distance; Vx(y, x) is the difference between Gx(y, x) and Bx(y, x), clamped at 0 from below; Gy(y, x) is the gradient of feature point (y, x) along the y axis; By(y, x) is the difference between the two pixels at vertical distance kernel above and below feature point (y, x); Vy(y, x) is the difference between Gy(y, x) and By(y, x), clamped at 0 from below; and Blur is the fourth parameter, characterizing the degree of blur of the first image data. Sum(Gx) is the sum of the x-axis gradients of all feature points in the first image data; Sum(Gy) is the sum of the y-axis gradients of all feature points; Sum(Vx) is the sum of Vx over all feature points in the first image data; and Sum(Vy) is the sum of Vy over all feature points in the first image data.
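A minimal vectorized sketch of formulas (5)-(11); applying them to the gray channel, taking kernel = 2, and using absolute gradient values (the formulas leave the sign handling open) are assumptions.

```python
import numpy as np

def blur_parameter(gray, kernel=2):
    """Formulas (5)-(11): the fourth parameter, a blur statistic."""
    img = gray.astype(np.float64)
    k = kernel
    H, W = img.shape
    # Crop the borders so every shifted index stays inside the image.
    gx = np.abs(img[k:-k, k+1:W-k+1] - img[k:-k, k-1:W-k-1])  # (5)
    bx = np.abs(img[k:-k, 2*k:] - img[k:-k, :-2*k])           # (6)
    vx = np.maximum(0.0, gx - bx)                             # (7)
    gy = np.abs(img[k+1:H-k+1, k:-k] - img[k-1:H-k-1, k:-k])  # (8)
    by = np.abs(img[2*k:, k:-k] - img[:-2*k, k:-k])           # (9)
    vy = np.maximum(0.0, gy - by)                             # (10)
    return max(gx.sum() - vx.sum(), gy.sum() - vy.sum())      # (11)
```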
In this embodiment, counting the fifth parameter corresponding to the second texture feature in the first image data comprises: obtaining the specular image of the first image data; binarizing the specular image; partitioning the specular image into blocks based on the binarized image; counting, for each block, a first ratio of the region whose brightness meets a preset threshold to the corresponding block; and taking the sum of the first ratios over all blocks as the fifth parameter. This process may be realized by the following formula:

Spec = sum(count(Rect(y, x) == 1) / count(Rect))    (12)

where Spec is the fifth parameter, characterizing the degree of reflection of the first image data; Rect(y, x) is the pixel value at (y, x) in a block of the binarized specular image; count(Rect(y, x) == 1) is the number of bright pixels in the block; count(Rect) is the number of feature points in the block; and the sum runs over all blocks.
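A minimal sketch of formula (12); the binarization threshold of 200 and the 4x4 block grid are assumptions, the text only speaking of a preset brightness threshold and block partitioning.

```python
import numpy as np

def specular_parameter(spec, threshold=200, blocks=4):
    """Formula (12): sum over blocks of the bright-pixel ratio of each block.
    spec: the specular (gray-scale) image, e.g. from specular_image() above."""
    binary = (spec >= threshold).astype(np.uint8)  # assumed preset threshold
    h, w = binary.shape
    total = 0.0
    for by in range(blocks):
        for bx in range(blocks):
            blk = binary[by*h//blocks:(by+1)*h//blocks,
                         bx*w//blocks:(bx+1)*w//blocks]
            total += blk.sum() / blk.size  # ratio of bright pixels in the block
    return total
```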
In the present embodiment, the 6th parameter corresponding with the third texture feature in the first image data, packet are counted It includes: the face region in identification the first image data;Edge detection process is carried out to the first image data, is obtained Second edge image data is obtained, identifies that first of the first preset condition of length satisfaction in the second edge image data is straight Line;Position in the first straight line is extracted other than the face region and slope meets the second of the second preset condition Straight line counts the quantity of the second straight line as the 6th parameter.
Specifically, edge detection is performed on the first image data. As an implementation, the Canny edge detection algorithm can be used, which may include: first converting the first image data (which may specifically be RGB image data) into a grayscale image; applying Gaussian filtering to the grayscale image to remove image noise; computing image gradient information and, from it, the edge magnitude and direction; applying non-maximum suppression to the edge magnitude so that only points of maximal local variation are retained, producing thinned edges; and applying double-threshold edge detection and edge linking so that the extracted edge points are more robust, thereby generating the second edge image data. Further, a Hough transform is applied to the second edge image data to find the straight lines in it, and the first straight lines whose length meets the first preset condition are identified among all lines; as an implementation, this includes identifying, among all straight lines, those whose length exceeds half the width of the first image data. On the other hand, while parsing the first image data, the face in the first image data is detected to obtain the face region, whose boundary can be indicated by the output face frame. The first straight lines are then further screened: those lying outside the face region and whose slope meets the second preset condition are taken as second straight lines. Here, a second straight line whose slope meets the second preset condition is a first straight line that lies outside the face region and whose angle to the lines along the edges of the face region does not exceed a preset angle; as an example, the preset angle may be 30 degrees, although it is of course not limited to this example. The resulting second straight lines can be as illustrated in Fig. 7. The above acquisition of the second straight lines can be expressed by the following formula:
Line = sum(count(Canny(y, x)))    (13)
Here, Line denotes the number of second straight lines; sum denotes summation; Canny(y, x) denotes a straight line passing through an edge pixel (y, x) produced by the Canny edge detection algorithm; and count denotes counting the straight lines that pass through edge pixels (y, x).
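A minimal sketch of this statistic follows, built on OpenCV's Canny detector and probabilistic Hough transform; the Hough parameters, the midpoint test used to exclude the face region, and the angle test against the horizontal and vertical face-frame edges are assumptions of the sketch.

```python
import cv2
import numpy as np

def sixth_parameter(img_bgr: np.ndarray, face_rect,
                    max_angle: float = 30.0) -> int:
    # `face_rect` is (x, y, w, h) from a face detector.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = gray.shape
    # "First straight lines": longer than half the image width.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=w // 2, maxLineGap=10)
    if lines is None:
        return 0
    fx, fy, fw, fh = face_rect
    count = 0
    for x1, y1, x2, y2 in lines[:, 0]:
        # Discard lines whose midpoint falls inside the face region.
        if fx <= (x1 + x2) // 2 <= fx + fw and fy <= (y1 + y2) // 2 <= fy + fh:
            continue
        # Keep lines within max_angle of the horizontal/vertical
        # face-frame edges ("second straight lines").
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 90.0
        if min(angle, 90.0 - angle) <= max_angle:
            count += 1
    return count
```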
In this embodiment, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter includes: obtaining in advance, by a machine learning algorithm, a first weight coefficient for the first parameter, a second weight coefficient for the second parameter, a third weight coefficient for the third parameter, a fourth weight coefficient for the fourth parameter, a fifth weight coefficient for the fifth parameter and a sixth weight coefficient for the sixth parameter; computing the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec and the sixth parameter Line. A machine learning algorithm can then be used to fit the weight of each of these six components, and the resulting fusion parameter satisfies the following formula:
Live = a1*Blur_s + a2*Spec_s + a3*Line_s + a4*Blur + a5*Spec + a6*Line    (14)
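Since formula (14) is a single weighted sum, a minimal sketch is short; the function below assumes the six weights a1..a6 have already been fitted as described.

```python
def fusion_score(blur_s: float, spec_s: float, line_s: float,
                 blur: float, spec: float, line: float,
                 weights: tuple) -> float:
    # Formula (14): Live = a1*Blur_s + a2*Spec_s + a3*Line_s
    #                      + a4*Blur + a5*Spec + a6*Line
    a1, a2, a3, a4, a5, a6 = weights
    return (a1 * blur_s + a2 * spec_s + a3 * line_s
            + a4 * blur + a5 * spec + a6 * line)
```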
Further, the obtained fusion parameter is compared with a preset third-class threshold: when the fusion parameter is less than the third-class threshold, a non-living face is determined, i.e. the living body verification fails; correspondingly, when the fusion parameter is not less than the third-class threshold, a living face is determined, i.e. the living body verification passes.
Based on the foregoing description, stage 3 shown in Fig. 1, i.e. the living body verification process, can refer to Fig. 8 and includes the classification processing of the three textural features and the statistical processing of the three textural features. Of course, in other embodiments, the features are not limited to the blur, reflection and border features enumerated in this example; textural features involved in other attack scenarios also fall within the protection scope of the embodiments of the present invention. Specifically, after face detection on the image data is completed, the image data is processed as follows. The blur textural feature is extracted from the image data and input into the blur classifier to obtain the first parameter; the first parameter is compared with the first threshold, and when the first parameter is less than the first threshold, a non-living face is determined, while when the first parameter is not less than the first threshold, the first parameter is sent to the parameter fusion process. The reflective textural feature is extracted from the image data and input into the reflection classifier to obtain the second parameter; the second parameter is compared with the second threshold, and when the second parameter is less than the second threshold, a non-living face is determined, while when the second parameter is not less than the second threshold, the second parameter is sent to the parameter fusion process. The border textural feature is extracted from the image data and input into the border classifier to obtain the third parameter; the third parameter is compared with the third threshold, and when the third parameter is less than the third threshold, a non-living face is determined, while when the third parameter is not less than the third threshold, the third parameter is sent to the parameter fusion process. The blur statistic (i.e. the fourth parameter) of the image data is counted and compared with the fourth threshold; when the fourth parameter is less than the fourth threshold, a non-living face is determined, and when it is not less than the fourth threshold, the fourth parameter is sent to the parameter fusion process. The reflection statistic (i.e. the fifth parameter) of the image data is counted and compared with the fifth threshold; when the fifth parameter is less than the fifth threshold, a non-living face is determined, and when it is not less than the fifth threshold, the fifth parameter is sent to the parameter fusion process. The border statistic (i.e. the sixth parameter) of the image data is counted and compared with the sixth threshold; when the sixth parameter is less than the sixth threshold, a non-living face is determined, and when it is not less than the sixth threshold, the sixth parameter is sent to the parameter fusion process. Further, the first, second, third, fourth, fifth and sixth parameters are fused, and the fusion parameter is compared with the corresponding threshold: when the fusion parameter is less than that threshold, a non-living face is determined; when the fusion parameter is not less than that threshold, a living face is determined and the process proceeds to stage 4, where the image data is sent to the background for face verification.
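A compact sketch of this gate-then-fuse flow follows, assuming the six parameters, their six thresholds and the six weights are supplied in matching order.

```python
def verify_liveness(params, thresholds, weights, fusion_threshold):
    # Stage-3 gating: any parameter below its own threshold is an
    # immediate non-living decision.
    for p, t in zip(params, thresholds):
        if p < t:
            return False
    # Otherwise fuse the six parameters (formula (14)) and compare
    # with the fusion threshold; True means proceed to stage 4.
    live = sum(a * p for a, p in zip(weights, params))
    return live >= fusion_threshold
```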
The living body verification scheme of the embodiments of the present invention is not limited to passive judgment; it can also serve as a supplement to active living body judgment in a fused decision. Since this method does not conflict with active living body detection in any way, and passive detection adds no negative interference to the user experience, it can be combined with active living body judgment to achieve better verification. In a combined active-passive verification, the invention can be used as a pre-processing step for the action judgment, i.e. the subsequent action judgment is performed only after the passive step has first judged the subject to be a real person; alternatively, both can be processed simultaneously, so that even if the user's actions are correct, the subject may still be judged to be an attacker. This can more effectively prevent video attacks.
Fig. 9 is a schematic diagram of effect curves of the living body verification method of the embodiments of the present invention; it shows the receiver operating characteristic (ROC) curves obtained with different algorithms. In the ROC curves of Fig. 9, the horizontal axis indicates the false pass rate and the vertical axis indicates the accuracy. It can be seen that, for the curve labelled "combine", corresponding to the fused living body verification method of the embodiments of the present invention, the accuracy rises to about 0.8 even at a very low false pass rate. The technical solution provided by this embodiment can therefore guard well against different types of attack samples, while real faces can still complete verification and the user experience is not affected. The invention does not depend on any device or user interaction, has little effect on computational complexity, and is a completely interference-free scheme. By contrast, for living body verification methods using a single classification or statistical algorithm, as also shown in Fig. 9: the ROC curve obtained with the blur classification algorithm is the curve corresponding to Blur_s; with the reflection classification algorithm, the curve corresponding to Spec_s; with the border classification algorithm, the curve corresponding to Line_s; with the blur statistic algorithm, the curve corresponding to Blur; with the reflection statistic algorithm, the curve corresponding to Spec; and with the border statistic algorithm, the curve corresponding to Line. The accuracy of each of these six single methods is far lower than that of the fusion method.
An embodiment of the present invention further provides a living body verification device. Fig. 10 is a schematic diagram of the composition of the living body verification device of the embodiment of the present invention; as shown in Fig. 10, the device includes a parsing unit 31, a classification unit 32, a statistics unit 33 and a fusion unit 34, wherein:
the parsing unit 31 is configured to obtain first image data and parse the first image data;
the classification unit 32 is configured to obtain the textural features of the first image data, the textural features characterizing at least one of the following properties: the blur feature of the first image data, the reflective feature of the first image data, and the border feature of the first image data; and to obtain first-class parameters corresponding to the textural features based on classification models;
the statistics unit 33 is configured to obtain, by statistical processing, second-class parameters corresponding to the textural features in the first image data, the second-class parameters being different from the first-class parameters;
the fusion unit 34 is configured to judge whether the first-class parameters are greater than first-class thresholds and whether the second-class parameters are greater than second-class thresholds; when the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds, to determine a fusion parameter based on the first-class parameters and the second-class parameters; and, when the fusion parameter is greater than a third-class threshold, to determine that the living body verification passes.
As an implementation, the fusion unit 34 is further configured to determine that the living body verification fails when the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold.
Specifically, as an implementation, the classification unit 32 is configured to obtain the first textural feature, the second textural feature and the third textural feature of the first image data respectively, the first textural feature characterizing the blur feature of the first image data, the second textural feature characterizing the reflective feature of the first image data, and the third textural feature characterizing the border feature of the first image data; and is further configured to obtain the first parameter corresponding to the first textural feature based on a preconfigured first classification model, the second parameter corresponding to the second textural feature based on a preconfigured second classification model, and the third parameter corresponding to the third textural feature based on a preconfigured third classification model;
the statistics unit 33 is configured to count, in the first image data, the fourth parameter corresponding to the first textural feature, the fifth parameter corresponding to the second textural feature and the sixth parameter corresponding to the third textural feature.
Further, the fusion unit 34 is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when it determines that the first parameter is greater than the first threshold, the second parameter is greater than the second threshold, the third parameter is greater than the third threshold, the fourth parameter is greater than the fourth threshold, the fifth parameter is greater than the fifth threshold, and the sixth parameter is greater than the sixth threshold.
As an implementation, the fusion unit 34 is further configured to determine that the living body verification fails when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
Specifically, in this embodiment, the classification unit 32 is configured to convert the first image data into HSV model data; to perform LBP processing on the HSV model data to obtain a first LBP feature corresponding to the hue data, a second LBP feature corresponding to the saturation data and a third LBP feature corresponding to the value (lightness) data; and to take the first LBP feature, the second LBP feature and the third LBP feature as the first textural feature.
Specifically, the first image data may be RGB image data; converting the RGB data into HSV model data yields the H model data representing hue, the S model data representing saturation and the V model data representing value. LBP processing is performed on the H, S and V model data respectively to obtain the image gradient information in each. Taking the H model data as an example, grayscale processing is applied to it to obtain its grayscale image, and for each feature point the relative grayscale relationship with its eight neighbouring feature points is determined. For the 3x3 feature-point matrix of the grayscale image shown in Fig. 4a, the grayscale of each feature point is as illustrated; the grayscale value of each feature point is expressed numerically, as shown in Fig. 4b. The grayscale of each of the eight neighbouring feature points is then compared with that of the central feature point: if the neighbour's grayscale is greater than that of the central point, the neighbour's value is recorded as 1; conversely, if it is less than or equal to that of the central point, it is recorded as 0, as shown in Fig. 4c. The values of the neighbouring feature points are then concatenated into an 8-bit binary string, which can be understood as a grayscale value distributed in (0, 255). In a specific implementation, referring to Fig. 4c, if the top-left feature point is taken as the starting point and the neighbours are traversed clockwise, the resulting 8-bit string is 10001111. In this way the binary string corresponding to each feature point (as central feature point) of the image is obtained. Further, to remove redundancy, the binary strings in which the number of 0-1 transitions is less than 2 are counted. For example, in the string 10001111, there is one transition between the first and second bits and one between the fourth and fifth bits, i.e. two transitions in total, which does not satisfy the condition of "fewer than 2 transitions"; in the string 00001111, there is only one transition, between the fourth and fifth bits, which satisfies the condition. The counted binary strings are then mapped into the range (0, 58), and the mapped data can serve as the first LBP feature corresponding to the hue data; this also greatly reduces the amount of data to be processed.
Similarly, the above data processing can also be used to obtain the second LBP feature and the third LBP feature, which is not repeated here. Further, the obtained first LBP feature, second LBP feature and third LBP feature are concatenated to form the first textural feature, which can be understood as three 59-dimensional LBP features (the first, second and third LBP features) connected in series. Fig. 5a shows first textural features extracted from image data determined in advance to be living faces; Fig. 5b shows first textural features extracted from image data determined in advance to be non-living faces.
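A minimal sketch of this extraction follows. It assumes OpenCV's HSV conversion and a clockwise neighbour order starting at the top-left, and reads the transition condition as the standard uniform-pattern criterion (at most two circular 0-1 transitions), which is what yields the 59 bins mentioned above.

```python
import cv2
import numpy as np

def _uniform_lut() -> np.ndarray:
    # Bin table: each of the 58 uniform 8-bit patterns (at most two
    # circular 0-1 transitions) gets its own bin; all remaining
    # patterns share bin 58, giving 59 bins in total.
    lut = np.full(256, 58, dtype=np.int64)
    idx = 0
    for p in range(256):
        bits = [(p >> i) & 1 for i in range(8)]
        if sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2:
            lut[p] = idx
            idx += 1
    return lut

_LUT = _uniform_lut()

def uniform_lbp_histogram(channel: np.ndarray) -> np.ndarray:
    # Compare each pixel with its eight neighbours, clockwise from
    # the top-left corner; greater -> 1, less-or-equal -> 0.
    img = channel.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for i, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb > c).astype(np.int32) << i
    return np.bincount(_LUT[code.ravel()], minlength=59).astype(np.float64)

def first_texture_feature(img_bgr: np.ndarray) -> np.ndarray:
    # Three 59-dim histograms (H, S, V) connected in series.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return np.concatenate([uniform_lbp_histogram(hsv[:, :, k]) for k in range(3)])
```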
In this embodiment, the classification unit 32 is configured to extract the reflective feature of the first image data and to extract the color histogram feature of the first image data, taking the reflective feature and the color histogram feature together as the second textural feature. Here, the classification unit 32 is configured to obtain the albedo image of the first image data; to obtain the specular image based on the first image data and the albedo image; and to partition the albedo image into blocks, obtaining the block grayscale statistics of the image as the reflective feature.
Specifically, the second textural feature characterizing reflection comprises two classes: one describes the highlight regions of the image, i.e. the specular reflection feature; the other corresponds to the color shade changes caused by differing image reflectance, i.e. the color histogram feature. Since the image in secondary shooting (e.g. a printed photograph or an image shown on a display screen, as included in non-living face attack patterns, can be understood as secondary shooting) is approximately planar and its material differs from a real face, color changes are easily caused. Specifically, for the first image data as an RGB image, the albedo image of the first image data under the RGB color space is obtained, and the specular image is obtained based on the first image data and the albedo image; specifically, the specular image is the difference between the first image data and its albedo image. Further, the specular image is partitioned into blocks, and the mean and variance of the image blocks are chosen as the reflective feature; since the specular image is specifically a grayscale image, the block mean and variance are expressed by the mean and variance of the grayscale values. The left part of Fig. 6a is image data corresponding to a captured non-living face and the right part is the specular image obtained after processing it; the left part of Fig. 6b is image data corresponding to a captured living face and the right part is the specular image obtained after processing it.
For the color histogram feature, the HSV model data of the first image data yields the H model data representing hue, the S model data representing saturation and the V model data representing value. The H, S and V model data are each projected onto 32 dimensions, giving a 32768-dimensional color histogram; the 100 highest components of the color histogram are taken as the color histogram feature of the first image data.
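A minimal sketch follows; it assumes a joint 32x32x32 HSV histogram (32^3 = 32768 bins) and reads "100 highest components" as the 100 largest bin values.

```python
import cv2
import numpy as np

def color_histogram_feature(img_bgr: np.ndarray, bins: int = 32,
                            top_k: int = 100) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # Joint 3D histogram over H, S and V, flattened to 32768 dims.
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256]).ravel()
    # Keep the top_k largest histogram components as the feature.
    return np.sort(hist)[::-1][:top_k]
```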
In this embodiment, the classification unit 32 is configured to filter the first image data to obtain the first edge image data of the first image data, and to perform LBP processing on the first edge image data to obtain the fourth LBP feature characterizing the third textural feature.
Specifically, to obtain the border feature in the first image data, the first image data is first filtered to obtain its corresponding first edge image. As an implementation, a Sobel operator (which may specifically comprise two matrices, one for horizontal edge detection and one for vertical edge detection) can be convolved in the plane with the pixel values of the first image data to obtain the first edge image corresponding to the first image data. Further, grayscale processing is applied to the first edge image to obtain its grayscale image, and for each feature point in the grayscale image the relative grayscale relationship with its eight neighbouring feature points is determined; for example, for a 3x3 feature-point matrix of the grayscale image, the grayscale value of each feature point is expressed numerically, and the grayscale of each of the eight neighbours is compared with that of the central feature point: if the neighbour's grayscale is greater than that of the central point, its value is recorded as 1; conversely, if it is less than or equal, its value is recorded as 0. The values of the neighbouring feature points are concatenated into an 8-bit binary string, which can be understood as a grayscale value distributed in (0, 255). In a specific implementation, referring to Fig. 4c, if the top-left feature point is taken as the starting point and the neighbours are traversed clockwise, the resulting 8-bit string is 10001111; in this way the binary string corresponding to each feature point (as central feature point) of the image is obtained. Further, to remove redundancy, the binary strings with fewer than 2 transitions between 0 and 1 are counted: for example, the string 10001111 has two transitions in total (between the first and second bits and between the fourth and fifth bits), which does not satisfy the condition, while 00001111 has only one transition (between the fourth and fifth bits), which satisfies it. The counted binary strings are then mapped into the range (0, 58), and the mapped data can serve as the fourth LBP feature corresponding to the third textural feature; this also greatly reduces the amount of data to be processed. Since the other smooth regions have been filtered out, the fourth LBP feature corresponding to the first edge image can highlight the edge portions of the image and thus describes the border feature of the image.
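A minimal sketch of this border-feature extraction follows, reusing the uniform_lbp_histogram helper from the earlier sketch; the 3x3 Sobel kernels and the magnitude combination of the two directions are assumptions.

```python
import cv2
import numpy as np

def third_texture_feature(img_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Planar convolution with horizontal and vertical Sobel kernels.
    sx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    sy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # Combine the two directions and rescale to an 8-bit edge image.
    edge = cv2.convertScaleAbs(cv2.magnitude(sx, sy))
    # 59-bin uniform-LBP histogram of the edge image (fourth LBP feature).
    return uniform_lbp_histogram(edge)
```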
The above technical solution extracts textural features from the first image data based on three kinds of characteristics. In this embodiment, the classification unit 32 collects a large amount of sample data in advance; the sample data may specifically include first textural features extracted in the above manner together with their corresponding type (i.e. the blur type), and/or second textural features together with their corresponding type (i.e. the reflection type), and/or third textural features together with their corresponding type (i.e. the border type); that is, the sample data may include textural features of at least one of the above three kinds with the corresponding type. Machine learning training is carried out for each type of textural feature to obtain the classification model corresponding to that type. Specifically, a first classification model corresponding to the blur type is obtained. For example, as shown in Fig. 5b, the first textural features extracted in advance from image data labelled as attacks all exhibit stripe characteristics, such as the oblique stripes in the first and third images of Fig. 5b and the approximately horizontal stripes in the second image; machine learning training can then be based on the common characteristics (e.g. the stripe characteristics) of the first textural features corresponding to the blur type, yielding the first classification model corresponding to the first textural feature. Correspondingly, a second classification model corresponding to the reflection type and a third classification model corresponding to the border type are obtained.
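The text does not name the learner used for these per-type classifiers; as one plausible reading, the sketch below trains a support vector machine per feature type (the use of scikit-learn and of an SVM is an assumption), whose probability output can play the role of the first, second or third parameter.

```python
import numpy as np
from sklearn.svm import SVC  # learner choice is an assumption

def train_texture_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    # One classifier per textural feature type; labels: 1 = living
    # face sample, 0 = attack sample of that type.
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

# Usage sketch (names are hypothetical):
# blur_model = train_texture_classifier(blur_feats, blur_labels)
# first_param = blur_model.predict_proba(feat.reshape(1, -1))[0, 1]
```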
In this embodiment, the classification unit 32 then inputs the obtained textural features (including at least one of the first textural feature, the second textural feature and the third textural feature) into the classification model of the corresponding type to obtain the corresponding first-class parameter. For example, the obtained first textural feature is input into the first classification model corresponding to the blur type to obtain the first parameter corresponding to the first textural feature, the first parameter characterizing the degree of blur of the first image data; the obtained second textural feature is input into the second classification model corresponding to the reflection type to obtain the second parameter corresponding to the second textural feature, the second parameter characterizing the degree of reflection of the first image data; and the obtained third textural feature is input into the third classification model corresponding to the border type to obtain the third parameter corresponding to the third textural feature, the third parameter characterizing whether the first image data includes a border. Further, a threshold is configured for each classification model: when an obtained parameter is not greater than the corresponding threshold, the person included in the first image data is determined to be non-living, i.e. the living body verification fails; correspondingly, when the obtained parameter is greater than the corresponding threshold, the subsequent fusion decision is made in combination with the statistical results of the following three kinds of characteristics. For example, when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, the person included in the first image data is determined to be non-living, i.e. the living body verification fails.
In this embodiment, the statistics unit 33 is configured to perform Gaussian filtering on the first image data to obtain the Gaussian image data of the first image data; to obtain difference image data based on the first image data and the Gaussian image data; and to obtain the gradient information of the difference image data as the fourth parameter.
Specifically, Gaussian filtering is performed on the first image data to obtain the Gaussian image data, and the gradient information of the difference image between the first image data and the Gaussian image data is counted as the fourth parameter.
In this embodiment, the statistics unit 33 is configured to obtain the specular image of the first image data; to binarize the specular image; to partition the specular image into blocks based on the binarized image; to count, for each block image, the first proportion of the region whose brightness meets a preset threshold within the corresponding block image; and to calculate the sum of the first proportions of all block images as the fifth parameter.
In this embodiment, the statistics unit 33 is configured to identify the face region in the first image data; to perform edge detection on the first image data to obtain second edge image data; to identify, in the second edge image data, the first straight lines whose length meets the first preset condition; to extract, from the first straight lines, the second straight lines that lie outside the face region and whose slope meets the second preset condition; and to count the number of the second straight lines as the sixth parameter.
Specifically, the statistics unit 33 performs edge detection on the first image data. As an implementation, the Canny edge detection algorithm can be used, which may include: converting the first image data (which may specifically be RGB image data) into a grayscale image; applying Gaussian filtering to the grayscale image to remove image noise; computing image gradient information and, from it, the edge magnitude and direction; applying non-maximum suppression to the edge magnitude so that only points of maximal local variation are retained, producing thinned edges; and applying double-threshold edge detection and edge linking so that the extracted edge points are more robust, thereby generating the second edge image data. A Hough transform is then applied to the second edge image data to find the straight lines in it, and the first straight lines whose length meets the first preset condition are identified among all lines; as an implementation, this includes identifying, among all straight lines, those whose length exceeds half the width of the first image data. On the other hand, while parsing the first image data, the face in the first image data is detected to obtain the face region, whose boundary can be indicated by the output face frame. The first straight lines are then further screened, and those lying outside the face region and whose slope meets the second preset condition are taken as second straight lines; here, a second straight line is a first straight line that lies outside the face region and whose angle to the lines along the edges of the face region does not exceed a preset angle, for example 30 degrees, although it is of course not limited to this example.
In this embodiment, the fusion unit 34 is configured to obtain in advance, by a machine learning algorithm, the first weight coefficient corresponding to the first parameter, the second weight coefficient corresponding to the second parameter, the third weight coefficient corresponding to the third parameter, the fourth weight coefficient corresponding to the fourth parameter, the fifth weight coefficient corresponding to the fifth parameter and the sixth weight coefficient corresponding to the sixth parameter; to compute the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient; and to add the first, second, third, fourth, fifth and sixth products to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec and the sixth parameter Line. A machine learning algorithm can be used to fit the weight of each of these six components, and the resulting fusion parameter satisfies the aforementioned expression (14). Further, the obtained fusion parameter is compared with the preset third-class threshold: when the fusion parameter is less than the third-class threshold, a non-living face is determined, i.e. the living body verification fails; correspondingly, when the fusion parameter is not less than the third-class threshold, a living face is determined, i.e. the living body verification passes.
In the embodiments of the present invention, the parsing unit 31, classification unit 32, statistics unit 33 and fusion unit 34 in the living body verification device may, in practical applications, be implemented by a central processing unit (CPU), a digital signal processor (DSP), a micro-control unit (MCU) or a field-programmable gate array (FPGA) in the terminal.
An embodiment of the present invention further provides a living body verification device; an example of the device as a hardware entity is shown in Fig. 11. The device includes a processor 61, a storage medium 62, a camera 65 and at least one external communication interface 63; the processor 61, the storage medium 62, the camera 65 and the external communication interface 63 are connected through a bus 64.
The living body verification method of the embodiments of the present invention can be integrated into the living body verification device in the form of an algorithm library of any format; specifically, it can be integrated into a client that runs on the living body verification device. In practical applications, the algorithm can be packaged together with the client: when the user activates the client, i.e. turns on the living body authentication function, the client calls the algorithm library and starts the camera; the image data collected by the camera serves as source data, and the living body judgment is made from the collected source data.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a unit on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (22)

1. A living body verification method, characterized in that the method comprises:
obtaining first image data, parsing the first image data, and obtaining textural features of the first image data;
obtaining first-class parameters corresponding to the textural features based on classification models;
obtaining, by statistical processing, second-class parameters corresponding to the textural features in the first image data, the second-class parameters being different from the first-class parameters;
when the first-class parameters are greater than first-class thresholds and the second-class parameters are greater than second-class thresholds, determining a fusion parameter based on the first-class parameters and the second-class parameters;
when the fusion parameter is greater than a third-class threshold, determining that the living body verification passes;
wherein obtaining the textural features of the first image data comprises:
obtaining a first textural feature, a second textural feature and a third textural feature of the first image data respectively; the first textural feature characterizing a blur feature of the first image data; the second textural feature characterizing a reflective feature of the first image data; the third textural feature characterizing a border feature of the first image data; the border feature being the straight-line characteristic presented by the region of the first image data other than the face region;
wherein obtaining the first-class parameters corresponding to the textural features based on classification models comprises: obtaining a first parameter corresponding to the first textural feature based on a preconfigured first classification model, obtaining a second parameter corresponding to the second textural feature based on a preconfigured second classification model, and obtaining a third parameter corresponding to the third textural feature based on a preconfigured third classification model;
wherein obtaining, by statistical processing, the second-class parameters corresponding to the textural features in the first image data comprises: counting, in the first image data, a fourth parameter corresponding to the first textural feature, a fifth parameter corresponding to the second textural feature and a sixth parameter corresponding to the third textural feature.
2. The method according to claim 1, characterized in that the method further comprises: determining that the living body verification fails when the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold.
3. The method according to claim 1, characterized in that determining the fusion parameter based on the first-class parameters and the second-class parameters when the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds comprises:
determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold and the sixth parameter is greater than a sixth threshold.
4. The method according to claim 2, characterized in that determining that the living body verification fails when the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold comprises:
determining that the living body verification fails when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
5. The method according to claim 1, characterized in that obtaining the first textural feature of the first image data comprises:
converting the first image data into hue-saturation-value (HSV) model data; performing local binary pattern (LBP) processing on the HSV model data to obtain a first LBP feature corresponding to the hue data, a second LBP feature corresponding to the saturation data and a third LBP feature corresponding to the value data, respectively; and taking the first LBP feature, the second LBP feature and the third LBP feature as the first textural feature.
6. The method according to claim 1, characterized in that obtaining the second textural feature of the first image data comprises:
extracting the reflective feature of the first image data and extracting a color histogram feature of the first image data, and taking the reflective feature and the color histogram feature as the second textural feature;
wherein extracting the reflective feature of the first image data comprises: obtaining an albedo image of the first image data; obtaining a specular image based on the first image data and the albedo image; and partitioning the albedo image into blocks to obtain block grayscale statistics of the image as the reflective feature.
7. The method according to claim 1, characterized in that obtaining the third textural feature of the first image data comprises:
filtering the first image data to obtain first edge image data of the first image data;
performing LBP processing on the first edge image data to obtain a fourth LBP feature characterizing the third textural feature.
8. The method according to claim 1, characterized in that counting, in the first image data, the fourth parameter corresponding to the first textural feature comprises:
performing Gaussian filtering on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
9. The method according to claim 1, characterized in that counting, in the first image data, the fifth parameter corresponding to the second textural feature comprises:
obtaining a specular image of the first image data; binarizing the specular image; partitioning the specular image into blocks based on the binarized image; counting, for each block image, a first proportion of the region whose brightness meets a preset threshold within the corresponding block image; and calculating the sum of the first proportions of all block images as the fifth parameter.
10. The method according to claim 1, characterized in that counting, in the first image data, the sixth parameter corresponding to the third textural feature comprises:
identifying the face region in the first image data;
performing edge detection on the first image data to obtain second edge image data, and identifying, in the second edge image data, first straight lines whose length meets a first preset condition;
extracting, from the first straight lines, second straight lines that lie outside the face region and whose slope meets a second preset condition, and counting the number of the second straight lines as the sixth parameter.
11. The method according to claim 3, characterized in that determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter comprises:
obtaining in advance, by a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter and a sixth weight coefficient corresponding to the sixth parameter;
obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient;
adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
12. A living body verification device, characterized in that the device comprises a parsing unit, a classification unit, a statistics unit and a fusion unit; wherein,
the parsing unit is configured to obtain first image data and parse the first image data;
the classification unit is configured to obtain textural features of the first image data, and to obtain first-class parameters corresponding to the textural features based on classification models;
the statistics unit is configured to obtain, by statistical processing, second-class parameters corresponding to the textural features in the first image data, the second-class parameters being different from the first-class parameters;
the fusion unit is configured to judge whether the first-class parameters are greater than first-class thresholds and whether the second-class parameters are greater than second-class thresholds; to determine, when the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds, a fusion parameter based on the first-class parameters and the second-class parameters; and to determine, when the fusion parameter is greater than a third-class threshold, that the living body verification passes;
wherein obtaining the textural features of the first image data comprises:
obtaining a first textural feature, a second textural feature and a third textural feature of the first image data respectively; the first textural feature characterizing a blur feature of the first image data; the second textural feature characterizing a reflective feature of the first image data; the third textural feature characterizing a border feature of the first image data; the border feature being the straight-line characteristic presented by the region of the first image data other than the face region;
wherein obtaining the first-class parameters corresponding to the textural features based on classification models comprises: obtaining a first parameter corresponding to the first textural feature based on a preconfigured first classification model, obtaining a second parameter corresponding to the second textural feature based on a preconfigured second classification model, and obtaining a third parameter corresponding to the third textural feature based on a preconfigured third classification model;
wherein obtaining, by statistical processing, the second-class parameters corresponding to the textural features in the first image data comprises: counting, in the first image data, a fourth parameter corresponding to the first textural feature, a fifth parameter corresponding to the second textural feature and a sixth parameter corresponding to the third textural feature.
13. The device according to claim 12, characterized in that the fusion unit is further configured to determine that the living body verification fails when the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold.
14. The device according to claim 12, characterized in that the fusion unit is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when it determines that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold and the sixth parameter is greater than a sixth threshold.
15. The device according to claim 13, characterized in that the fusion unit is further configured to determine that the living body verification fails when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
16. The device according to claim 12, characterized in that the classification unit is configured to convert the first image data into HSV model data; to perform LBP processing on the HSV model data to obtain a first LBP feature corresponding to the hue data, a second LBP feature corresponding to the saturation data and a third LBP feature corresponding to the value data, respectively; and to take the first LBP feature, the second LBP feature and the third LBP feature as the first textural feature.
17. equipment according to claim 12, which is characterized in that the taxon, for extracting the first image The retroreflective feature of data, and the color histogram feature of the first image data is extracted, by the retroreflective feature and color Histogram feature is as second textural characteristics;
Wherein, the taxon is based on the first image number for obtaining the albedo image of the first image data Accordingly and the albedo image obtains iridescent image;Piecemeal processing is carried out to the albedo image, obtains image block ash Statistical parameter is spent as the retroreflective feature.
18. The equipment according to claim 12, wherein the classification unit is configured to perform filtering processing on the first image data to obtain first edge image data of the first image data, and perform LBP processing on the first edge image data to obtain fourth LBP feature data characterizing the third texture feature.
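A minimal sketch of claim 18; the claim only says "filtering", so the Sobel operator used to produce the edge image is an assumption:

    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    def third_texture_feature(bgr_image, points=8, radius=1):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # first edge image data
        lbp = local_binary_pattern(edges, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2,
                               range=(0, points + 2), density=True)
        return hist  # fourth LBP feature data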
19. The equipment according to claim 12, wherein the statistics unit is configured to perform Gaussian filtering on the first image data to obtain Gaussian image data of the first image data; obtain difference image data based on the first image data and the Gaussian image data; and obtain gradient information of the difference image data as the fourth parameter.
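A minimal sketch of claim 19; the Gaussian sigma and the reduction of the gradient field to a single scalar (mean magnitude) are assumptions:

    import cv2
    import numpy as np

    def fourth_parameter(bgr_image, sigma=2.0):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)  # Gaussian image data
        diff = gray - blurred                            # difference image data
        gx = cv2.Sobel(diff, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(diff, cv2.CV_32F, 0, 1)
        return float(np.mean(cv2.magnitude(gx, gy)))     # gradient information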
20. The equipment according to claim 12, wherein the statistics unit is configured to obtain a reflection image of the first image data; binarize the reflection image; partition the reflection image into blocks based on the binarized image; count, for each block image, a first proportion occupied within the corresponding sub-block image by the region whose brightness meets a preset threshold; and compute the sum of the first proportions of all block images as the fifth parameter.
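A minimal sketch of claim 20, reusing the reflection image from the claim 17 sketch above; the binarization threshold and block size are assumptions:

    import cv2
    import numpy as np

    def fifth_parameter(reflection, block=32, bright=200):
        ref8 = cv2.convertScaleAbs(reflection)
        _, binary = cv2.threshold(ref8, bright, 255, cv2.THRESH_BINARY)
        total = 0.0
        h, w = binary.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = binary[y:y + block, x:x + block]
                # First proportion: share of bright pixels in this sub-block.
                total += np.count_nonzero(patch) / patch.size
        return total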
21. The equipment according to claim 12, wherein the statistics unit is configured to identify the face region in the first image data; perform edge detection on the first image data to obtain second edge image data; identify first straight lines whose length meets a first preset condition in the second edge image data; extract, from the first straight lines, second straight lines that are located outside the face region and whose slope meets a second preset condition; and count the number of the second straight lines as the sixth parameter.
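A minimal sketch of claim 21; the Haar cascade, the Canny and Hough settings, and the near-axis-aligned slope test (aimed at photo or screen borders) are all assumptions:

    import cv2
    import numpy as np

    def sixth_parameter(bgr_image):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, 1.1, 5)  # face regions
        edges = cv2.Canny(gray, 50, 150)                # second edge image data
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=gray.shape[1] // 3, maxLineGap=5)
        if lines is None:
            return 0

        def inside_face(x, y):
            return any(fx <= x <= fx + fw and fy <= y <= fy + fh
                       for (fx, fy, fw, fh) in faces)

        count = 0
        for x1, y1, x2, y2 in lines[:, 0]:
            near_axis = abs(x2 - x1) < 5 or abs(y2 - y1) < 5  # slope condition
            outside = not (inside_face(x1, y1) or inside_face(x2, y2))
            if near_axis and outside:
                count += 1
        return count  # number of second straight lines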
22. The equipment according to claim 13, wherein the fusion unit is configured to obtain in advance, using a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter; obtain a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient; and add the first product, the second product, the third product, the fourth product, the fifth product, and the sixth product to obtain the fusion parameter.
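A minimal sketch of claim 22; the claim only requires "a machine learning algorithm", so learning the six weight coefficients with a logistic regression is an assumption:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def learn_weights(param_matrix, labels):
        # param_matrix: (n_samples, 6) array of the six parameters per sample;
        # labels: 1 for live, 0 for spoof.
        model = LogisticRegression().fit(param_matrix, labels)
        return model.coef_[0]  # first to sixth weight coefficients

    def fusion_parameter(params, weights):
        # Sum of the six parameter-weight products, per claim 22.
        return float(np.dot(params, weights))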
CN201710175495.5A 2017-03-22 2017-03-22 A kind of living body verification method and equipment Active CN106951869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710175495.5A CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Publications (2)

Publication Number Publication Date
CN106951869A CN106951869A (en) 2017-07-14
CN106951869B (en) 2019-03-15

Family

ID=59472685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710175495.5A Active CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Country Status (1)

Country Link
CN (1) CN106951869B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609463B (en) * 2017-07-20 2021-11-23 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of silent-mode human face liveness detection method and system
CN108304708A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Mobile terminal, face unlocking method and related product
CN109145716B (en) * 2018-07-03 2019-04-16 南京思想机器信息科技有限公司 Boarding gate verification station based on face recognition
CN109558794A (en) * 2018-10-17 2019-04-02 平安科技(深圳)有限公司 Image recognition method, device, equipment and storage medium based on moiré fringes
CN111178112B (en) * 2018-11-09 2023-06-16 株式会社理光 Face recognition device
CN109740572B (en) * 2019-01-23 2020-09-29 浙江理工大学 Human face living body detection method based on local color texture features
CN110263708B (en) * 2019-06-19 2020-03-13 郭玮强 Image source identification method, device and computer readable storage medium
CN113221842B (en) * 2021-06-04 2023-12-29 第六镜科技(北京)集团有限责任公司 Model training method, image recognition method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778457A (en) * 2015-04-18 2015-07-15 吉林大学 Video face identification algorithm based on multi-instance learning
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243376A (en) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 Living body detection method and device
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus

Also Published As

Publication number Publication date
CN106951869A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106951869B (en) A kind of living body verification method and equipment
CN108038456B (en) Anti-deception method in face recognition system
Jourabloo et al. Face de-spoofing: Anti-spoofing via noise modeling
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
CN108319953B Occlusion detection method and device for a target object, electronic equipment and storage medium
Narihira et al. Learning lightness from human judgement on relative reflectance
CN103413147B (en) A kind of licence plate recognition method and system
CN110472623A (en) Image detecting method, equipment and system
Chen et al. Image splicing detection via camera response function analysis
CN108596197A A kind of seal matching method and device
CN109858439A A kind of living body detection method and device based on face
CN110516616A A kind of dual-authentication face anti-counterfeiting method based on a large-scale RGB and near-infrared dataset
CN109902667A Human face liveness detection method based on optical flow guided feature blocks and convolutional GRU
CN106295645B (en) A kind of license plate character recognition method and device
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN108647634A (en) Framing mask lookup method, device, computer equipment and storage medium
WO2009078957A1 (en) Systems and methods for rule-based segmentation for objects with full or partial frontal view in color images
CN109657715B (en) Semantic segmentation method, device, equipment and medium
CN111860369A (en) Fraud identification method and device and storage medium
KR101048582B1 (en) Method and device for detecting faces in color images
CN103366390A (en) Terminal, image processing method and device thereof
CN105184771A (en) Adaptive moving target detection system and detection method
CN111179202A (en) Single image defogging enhancement method and system based on generation countermeasure network
CN109284759A A kind of magic cube color identification method based on support vector machines (SVM)
CN109977941A (en) Licence plate recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant