CN106951869A - Liveness verification method and device - Google Patents

Liveness verification method and device

Info

Publication number
CN106951869A
CN106951869A (application CN201710175495.5A)
Authority
CN
China
Prior art keywords
parameter
image data
image
threshold value
texture features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710175495.5A
Other languages
Chinese (zh)
Other versions
CN106951869B (en)
Inventor
熊鹏飞
王汉杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710175495.5A priority Critical patent/CN106951869B/en
Publication of CN106951869A publication Critical patent/CN106951869A/en
Application granted granted Critical
Publication of CN106951869B publication Critical patent/CN106951869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The embodiment of the invention discloses a liveness verification method and device. The method includes: acquiring first image data, parsing the first image data, and obtaining texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a specular-reflection feature of the first image data, and a border feature of the first image data; obtaining a first-class parameter corresponding to the texture features based on a classification model; obtaining, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter; when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, determining a fusion parameter based on the first-class parameter and the second-class parameter; and when the fusion parameter is greater than a third-class threshold, determining that liveness verification has passed.

Description

Liveness verification method and device
Technical field
The present invention relates to face recognition technology, and in particular to a liveness verification method and device.
Background technology
Existing passive liveness verification methods fall broadly into three classes: motion-based methods, device-based methods, and texture-based methods. Motion-based methods judge whether three-dimensional depth changes are present, mainly by analyzing the image background or the user's unconscious behavior, so as to distinguish a photo from a real person. Device-based methods detect the difference between a real face and a photo/video image from face images captured under different light sources or light intensities; these methods rely on the fact that a real face reflects a light source differently from a photo or video. Texture-based methods classify the image directly by analyzing a particular class of image features.
Each of these three classes has shortcomings. Motion-based methods still require the user to perform actions such as turning the head or showing a profile, so they are not fully passive, and they cannot distinguish video replays. Device-based methods can achieve good results but depend heavily on the hardware, so they scale poorly. Texture-based methods that use a single image feature struggle to describe the different attack samples; for example, frequency-domain analysis is ineffective on high-definition images, and reflection analysis is ineffective for images shot in dim light without specular highlights.
Summary of the invention
To solve the existing technical problems, embodiments of the present invention provide a liveness verification method and device.
To this end, the technical solutions of the embodiments of the present invention are realized as follows:
An embodiment of the invention provides a liveness verification method. The method includes:
acquiring first image data, parsing the first image data, and obtaining texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a specular-reflection feature of the first image data, and a border feature of the first image data;
obtaining a first-class parameter corresponding to the texture features based on a classification model; and
obtaining, by statistical processing, a second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter;
determining, when the first-class parameter is greater than a first-class threshold and the second-class parameter is greater than a second-class threshold, a fusion parameter based on the first-class parameter and the second-class parameter;
determining, when the fusion parameter is greater than a third-class threshold, that liveness verification has passed.
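For orientation, the following is a minimal sketch of this gating-plus-fusion decision (Python; the names, the weights, and the weighted-sum fusion are illustrative assumptions, and the per-feature details follow later in the description):

```python
def liveness_verified(first_class: float, second_class: float,
                      t1: float, t2: float, t3: float,
                      w1: float = 0.5, w2: float = 0.5) -> bool:
    """Pass only if both scores clear their thresholds AND the fusion does."""
    if first_class <= t1 or second_class <= t2:
        return False                          # early reject: verification fails
    fusion = w1 * first_class + w2 * second_class
    return fusion > t3
```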
In the above scheme, the method further includes: determining that liveness verification has failed when it is judged that the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold.
In the above scheme, obtaining the texture features of the first image data includes:
obtaining a first texture feature, a second texture feature, and a third texture feature of the first image data respectively, where the first texture feature characterizes the blur feature of the first image data, the second texture feature characterizes the specular-reflection feature of the first image data, and the third texture feature characterizes the border feature of the first image data.
Obtaining the first-class parameter corresponding to the texture features based on the pre-configured classification model includes: obtaining a first parameter corresponding to the first texture feature based on a pre-configured first classification model, obtaining a second parameter corresponding to the second texture feature based on a pre-configured second classification model, and obtaining a third parameter corresponding to the third texture feature based on a pre-configured third classification model.
Counting the second-class parameters corresponding to the texture features in the first image data includes:
counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
In the above scheme, determining the fusion parameter based on the first-class parameter and the second-class parameter when it is judged that the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold includes:
determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter when it is judged that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
In the above scheme, determining that liveness verification has failed when it is judged that the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold, includes:
determining that liveness verification has failed when it is judged that the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
In the above scheme, obtaining the first texture feature of the first image data includes:
converting the first image data into hue-saturation-value (HSV) model data; performing local binary pattern (LBP) processing on the HSV model data to obtain first LBP feature data corresponding to the hue data, second LBP feature data corresponding to the saturation data, and third LBP feature data corresponding to the value data; and taking the first LBP feature data, the second LBP feature data, and the third LBP feature data as the first texture feature.
In the above scheme, obtaining the second texture feature of the first image data includes:
extracting the specular-reflection feature of the first image data and extracting the color histogram feature of the first image data, and taking the specular-reflection feature and the color histogram feature as the second texture feature;
where extracting the specular-reflection feature of the first image data includes: obtaining a reflectance image of the first image data, and obtaining a specular image based on the first image data and the reflectance image; partitioning the specular image into blocks, and taking block-wise grayscale statistics as the specular-reflection feature.
In the above scheme, obtaining the third texture feature of the first image data includes:
filtering the first image data to obtain first edge image data of the first image data;
performing LBP processing on the first edge image data to obtain fourth LBP feature data characterizing the third texture feature.
In the above scheme, counting the fourth parameter corresponding to the first texture feature in the first image data includes:
performing Gaussian filtering on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and taking the gradient information of the difference image data as the fourth parameter.
In the above scheme, counting the fifth parameter corresponding to the second texture feature in the first image data includes:
obtaining the specular image of the first image data; binarizing the specular image; partitioning the specular image into blocks based on the binarized image; counting, for each block, a first ratio of the region whose brightness meets a preset threshold to the corresponding block; and taking the sum of the first ratios over all blocks as the fifth parameter.
In the above scheme, counting the sixth parameter corresponding to the third texture feature in the first image data includes:
recognizing the face region in the first image data;
performing edge detection on the first image data to obtain second edge image data, and identifying first lines in the second edge image data whose length meets a first preset condition;
extracting, from the first lines, second lines that lie outside the face region and whose slope meets a second preset condition, and taking the number of the second lines as the sixth parameter.
In the above scheme, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter includes:
obtaining in advance, using a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter;
computing the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient;
adding the first, second, third, fourth, fifth, and sixth products to obtain the fusion parameter.
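A minimal sketch of this weighted fusion, assuming the six scores and their learned weights are given as arrays (the names `scores` and `weights` are illustrative, not from the patent):

```python
import numpy as np

def fusion_parameter(scores: np.ndarray, weights: np.ndarray) -> float:
    """Weighted sum of the six per-feature parameters.

    scores  -- the six parameters (three classifier scores, three statistics)
    weights -- the six weight coefficients learned offline
    """
    assert scores.shape == weights.shape == (6,)
    return float(np.dot(scores, weights))   # sum of the six products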
An embodiment of the present invention further provides a liveness verification device. The device includes a parsing unit, a classification unit, a statistics unit, and a fusion unit, where:
the parsing unit is configured to acquire first image data and parse the first image data;
the classification unit is configured to obtain the texture features of the first image data, the texture features characterizing at least one of the following attributes: the blur feature of the first image data, the specular-reflection feature of the first image data, and the border feature of the first image data; and to obtain the first-class parameter corresponding to the texture features based on a classification model;
the statistics unit is configured to obtain, by statistical processing, the second-class parameter corresponding to the texture features in the first image data, the second-class parameter being different from the first-class parameter;
the fusion unit is configured to judge whether the first-class parameter is greater than the first-class threshold and whether the second-class parameter is greater than the second-class threshold; to determine the fusion parameter based on the first-class parameter and the second-class parameter when the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold; and to determine that liveness verification has passed when the fusion parameter is greater than the third-class threshold.
In the above scheme, the fusion unit is further configured to determine that liveness verification has failed when the first-class parameter is not greater than the first-class threshold, or the second-class parameter is not greater than the second-class threshold, or the fusion parameter is not greater than the third-class threshold.
In the above scheme, the classification unit is configured to obtain the first texture feature, the second texture feature, and the third texture feature of the first image data respectively, where the first texture feature characterizes the degree of blur of the first image data, the second texture feature characterizes the degree of specular reflection of the first image data, and the third texture feature characterizes whether the first image data includes a border; and is further configured to obtain the first parameter corresponding to the first texture feature based on the pre-configured first classification model, the second parameter corresponding to the second texture feature based on the pre-configured second classification model, and the third parameter corresponding to the third texture feature based on the pre-configured third classification model.
The statistics unit is configured to count the fourth parameter corresponding to the first texture feature, the fifth parameter corresponding to the second texture feature, and the sixth parameter corresponding to the third texture feature in the first image data.
In the above scheme, the fusion unit is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter when it is judged that the first parameter is greater than the first threshold, the second parameter is greater than the second threshold, the third parameter is greater than the third threshold, the fourth parameter is greater than the fourth threshold, the fifth parameter is greater than the fifth threshold, and the sixth parameter is greater than the sixth threshold.
In the above scheme, the fusion unit is further configured to determine that liveness verification has failed when it is judged that the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
In the above scheme, the classification unit is configured to convert the first image data into HSV model data; to perform LBP processing on the HSV model data to obtain the first LBP feature data corresponding to the hue data, the second LBP feature data corresponding to the saturation data, and the third LBP feature data corresponding to the value data; and to take the first, second, and third LBP feature data as the first texture feature.
In the above scheme, the classification unit is configured to extract the specular-reflection feature of the first image data and the color histogram feature of the first image data, and to take the specular-reflection feature and the color histogram feature as the second texture feature;
specifically, the classification unit obtains the reflectance image of the first image data, obtains the specular image based on the first image data and the reflectance image, partitions the specular image into blocks, and takes block-wise grayscale statistics as the specular-reflection feature.
In the above scheme, the classification unit is configured to filter the first image data to obtain the first edge image data of the first image data, and to perform LBP processing on the first edge image data to obtain the fourth LBP feature data characterizing the third texture feature.
In the above scheme, the statistics unit is configured to perform Gaussian filtering on the first image data to obtain the Gaussian image data of the first image data; to obtain difference image data based on the first image data and the Gaussian image data; and to take the gradient information of the difference image data as the fourth parameter.
In the above scheme, the statistics unit is configured to obtain the specular image of the first image data; to binarize the specular image; to partition the specular image into blocks based on the binarized image; to count, for each block, the first ratio of the region whose brightness meets the preset threshold to the corresponding block; and to take the sum of the first ratios over all blocks as the fifth parameter.
In the above scheme, the statistics unit is configured to recognize the face region in the first image data; to perform edge detection on the first image data to obtain the second edge image data; to identify the first lines in the second edge image data whose length meets the first preset condition; to extract from the first lines the second lines that lie outside the face region and whose slope meets the second preset condition; and to take the number of the second lines as the sixth parameter.
In the above scheme, the fusion unit is configured to obtain in advance, using a machine learning algorithm, the first weight coefficient corresponding to the first parameter, the second weight coefficient corresponding to the second parameter, the third weight coefficient corresponding to the third parameter, the fourth weight coefficient corresponding to the fourth parameter, the fifth weight coefficient corresponding to the fifth parameter, and the sixth weight coefficient corresponding to the sixth parameter; to compute the product of each parameter with its corresponding weight coefficient; and to add the six products to obtain the fusion parameter.
With the liveness verification method and device provided by the embodiments of the present invention, the method includes: acquiring first image data and parsing the first image data; obtaining texture features of the first image data, the texture features characterizing at least one of the following attributes: the degree of blur of the first image data, the degree of specular reflection of the first image data, and whether the first image data includes a border; obtaining the first-class parameter corresponding to the texture features based on a pre-configured classification model; counting the second-class parameter corresponding to the texture features in the first image data; determining the fusion parameter based on the first-class parameter and the second-class parameter when it is judged that the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold; and determining that liveness verification has passed when the fusion parameter is greater than the third-class threshold. The technical scheme of the embodiments extracts multiple texture features: on the one hand, it obtains the first-class parameter through classification models and applies threshold judgment; on the other hand, it counts, by way of feature-distribution statistics, the second-class parameter corresponding to the texture features in the image data and applies threshold judgment; and it finally realizes liveness verification by fusing the first-class parameter and the second-class parameter. The scheme is driven entirely by the image data, independent of the user and of special equipment; multi-model fusion substantially improves the pass rate of the algorithm, effectively defends against different types of attacks such as printed photographs and images shown on display screens, and greatly improves the accuracy of identity verification.
Brief description of the drawings
Fig. 1 is an overall flow diagram of the liveness verification method of an embodiment of the present invention;
Fig. 2 is the first flow diagram of the liveness verification method of an embodiment of the present invention;
Fig. 3a to Fig. 3d are schematic diagrams of existing liveness attack sources;
Fig. 4a to Fig. 4c are schematic diagrams of the processing of the first texture feature in the liveness verification method of an embodiment of the present invention;
Fig. 5a and Fig. 5b are schematic diagrams of the first texture feature in the liveness verification method of an embodiment of the present invention;
Fig. 6a and Fig. 6b are schematic diagrams of the second texture feature in the liveness verification method of an embodiment of the present invention;
Fig. 7 is a schematic diagram of the third texture feature in the liveness verification method of an embodiment of the present invention;
Fig. 8 is the second flow diagram of the liveness verification method of an embodiment of the present invention;
Fig. 9 is a performance-curve diagram of the liveness verification method of an embodiment of the present invention;
Fig. 10 is a schematic diagram of the composition of the liveness verification device of an embodiment of the present invention;
Fig. 11 is a schematic diagram of the hardware composition of the liveness verification device of an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Before the liveness verification method of the embodiments of the present invention is described in detail, the overall realization of the liveness verification scheme of the embodiments is first explained. Fig. 1 is an overall flow diagram of the liveness verification method of an embodiment of the present invention; as shown in Fig. 1, the liveness verification method of the embodiment may include the following stages:
Stage 1: input the video stream, i.e., the liveness verification device acquires image data.
Stage 2: the liveness verification device performs face detection.
Stage 3: liveness detection. When the detection result indicates a live body, proceed to Stage 4: send the image data to the back end for face verification. When the detection result indicates a non-live body, re-enter the liveness detection stage. The specific implementation of liveness detection can be found in the description of the liveness verification method provided by the embodiments below.
An embodiment of the invention provides a liveness verification method. Fig. 2 is the first flow diagram of the liveness verification method of the embodiment of the present invention; as shown in Fig. 2, the method includes:
Step 101: acquire first image data, parse the first image data, and obtain the texture features of the first image data, the texture features characterizing at least one of the following attributes: the blur feature of the first image data, the specular-reflection feature of the first image data, and the border feature of the first image data.
Step 102: obtain the first-class parameter corresponding to the texture features based on a classification model.
Step 103: obtain, by statistical processing, the second-class parameter corresponding to the texture features in the first image data; the second-class parameter is different from the first-class parameter.
Step 104: when the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold, determine the fusion parameter based on the first-class parameter and the second-class parameter.
Step 105: when the fusion parameter is greater than the third-class threshold, determine that liveness verification has passed.
As an embodiment, the method further includes: determining that liveness verification has failed when it is judged that the first-class parameter is not greater than the first-class threshold or the second-class parameter is not greater than the second-class threshold.
The liveness verification method of the embodiment of the present invention is applied in a liveness verification device. The liveness verification device may specifically be an electronic device with an image acquisition unit, acquiring image data through the image acquisition unit. The electronic device may specifically be a mobile device such as a mobile phone or a tablet computer, a personal computer, or an access-control device configured in a door access control system (a system that controls entrances and exits); the image acquisition unit may specifically be a camera provided on the electronic device.
In this embodiment (in the following embodiments of the invention, the liveness verification device is referred to simply as the device), after the device acquires image data through the image acquisition unit, it parses the image data to obtain the texture features of the first image data; the acquired image data includes multiple frames.
In general, the sources that impersonate a live face in verification (which may be called attacks) mainly include: printed photographs, photos shown on a display or screen, videos played on a display, and so on. Fig. 3a to Fig. 3d are schematic diagrams of existing liveness attack sources. Analyzing the image classes shown in Fig. 3a to Fig. 3d, these different types of images have different characteristics: a printed photograph usually includes a border; an image shown on a screen or display usually exhibits moire fringes and is less sharp than an image containing a real person, and such screen-displayed images also exhibit specular reflection. Of course, the above characteristics are not limited to these attack sample sources. Therefore, this embodiment obtains the texture features of the first image data based on these several attack characteristics.
As an embodiment, obtaining the texture features of the first image data includes: obtaining the first texture feature, the second texture feature, and the third texture feature of the first image data respectively, where the first texture feature characterizes the blur feature of the first image data, the second texture feature characterizes the specular-reflection feature of the first image data, and the third texture feature characterizes the border feature of the first image data. Correspondingly, obtaining the first-class parameter corresponding to the texture features based on the pre-configured classification model includes: obtaining the first parameter corresponding to the first texture feature based on the pre-configured first classification model, the second parameter corresponding to the second texture feature based on the pre-configured second classification model, and the third parameter corresponding to the third texture feature based on the pre-configured third classification model. Correspondingly, counting the second-class parameters corresponding to the texture features in the first image data includes: counting the fourth parameter corresponding to the first texture feature, the fifth parameter corresponding to the second texture feature, and the sixth parameter corresponding to the third texture feature in the first image data.
Specifically, in this embodiment, obtaining the first texture feature of the first image data includes: converting the first image data into HSV model data; performing LBP processing on the HSV model data to obtain the first LBP feature data corresponding to the hue data, the second LBP feature data corresponding to the saturation data, and the third LBP feature data corresponding to the value data; and taking the first, second, and third LBP feature data as the first texture feature.
In this embodiment, the first texture feature characterizes the blur feature of the first image data. The blur feature may specifically be a feature representing the degree of blur of the first image data, i.e., the feature exhibited when the sharpness of the textures and boundaries in the first image data does not reach a preset requirement; in one implementation, the blur feature is expressed through LBP features.
Specifically, the first image data may be red-green-blue (RGB) image data. The RGB data is converted into HSV model data, yielding H model data representing hue, S model data representing saturation, and V model data representing value. LBP processing is applied to the H, S, and V model data respectively, so as to obtain the image gradient information in each of them. Taking the H model data as an example: grayscale processing is applied to obtain the grayscale image of the H model data, and the relative grayscale relation between each feature point in the grayscale image and its eight neighboring feature points is then determined. Fig. 4a shows a 3x3 matrix of feature points of the grayscale image with the grayscale of each point; the gray value of each point is quantized as shown in Fig. 4b. Further, the grayscale of each of the eight neighbors is compared with that of the central feature point: if a neighbor's grayscale is greater than the central point's, the neighbor's value is recorded as 1; otherwise (less than or equal to the central point's), it is recorded as 0, as shown in Fig. 4c. Concatenating the neighbors' values produces an 8-bit binary string, which can be understood as a gray value distributed in (0, 255). In the example of Fig. 4c, taking the upper-left feature point as the starting point and proceeding clockwise, the 8-bit string is 10001111. A binary string is thus obtained for every (central) feature point in the image. Further, to remove redundancy, the binary strings of the feature points in which the number of transitions between 0 and 1 is less than 2 are counted. For example, in the string 10001111, the first and second bits change once and the fourth and fifth bits change once, two changes in total, so it does not satisfy the condition "fewer than 2 transitions between 0 and 1"; in the string 00001111, only the fourth and fifth bits change once, so it satisfies the condition. The counted binary strings are then mapped into the range (0, 58), and the mapped data serve as the first LBP feature data corresponding to the hue data; this also greatly reduces the amount of data processing.
The above processing can be realized by the following formulas:

LBP = [code0, code1, ..., code7]   (1)

code(m, n) = Img(y+m, x+n) > Img(y, x) ? 1 : 0   (2)

In expression (1), LBP represents the relation between the display parameter of a given feature point in the first image data and the display parameters of its neighboring feature points; the feature point is any feature point in the first image data, and code0, code1, ..., code7 represent the comparison results for its eight neighbors. As an embodiment, the display parameter may specifically be the gray value, although other display parameters may be used. Expression (2) compares the gray value of feature point (y+m, x+n) with that of feature point (y, x): if the gray value of (y+m, x+n) is greater than that of (y, x), the binary digit code(m, n) is recorded as 1; otherwise it is recorded as 0.
Similarly, the second LBP feature data and the third LBP feature data can be obtained in the same way, which is not repeated here. Further, the first, second, and third LBP feature data obtained are concatenated as the first texture feature; this can be understood as concatenating three 59-dimensional LBP feature vectors (the first, second, and third LBP feature data) in sequence. Fig. 5a and Fig. 5b are schematic diagrams of the first texture feature in the liveness verification method of the embodiment: Fig. 5a shows the first texture feature extracted from image data determined in advance to be a live face; Fig. 5b shows the first texture feature extracted from image data determined in advance to be a non-live face.
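As an illustration, a minimal sketch of the uniform-LBP extraction over the HSV channels follows (Python with NumPy/OpenCV; the exact uniform-pattern mapping and the function names are assumptions based on the description above, not the patent's own code):

```python
import cv2
import numpy as np

def uniform_lbp_hist(gray: np.ndarray) -> np.ndarray:
    """8-neighbour LBP with a 59-bin uniform-pattern histogram."""
    h, w = gray.shape
    c = gray[1:h-1, 1:w-1].astype(np.int32)
    # Clockwise neighbours starting at the top-left pixel, as in Fig. 4c.
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1+dy:h-1+dy, 1+dx:w-1+dx].astype(np.int32)
        codes |= ((n > c).astype(np.int32) << bit)
    # Map uniform patterns (few 0/1 transitions) to 58 bins, the rest to bin 58.
    lut = np.full(256, 58, dtype=np.int32)
    uid = 0
    for v in range(256):
        bits = [(v >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:
            lut[v] = uid
            uid += 1
    hist = np.bincount(lut[codes].ravel(), minlength=59)
    return hist.astype(np.float32) / max(codes.size, 1)

def first_texture_feature(bgr: np.ndarray) -> np.ndarray:
    """Concatenate the per-channel LBP histograms of the HSV image."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return np.concatenate([uniform_lbp_hist(hsv[:, :, i]) for i in range(3)])
```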
In this embodiment, obtaining the second texture feature of the first image data includes: extracting the specular-reflection feature of the first image data and extracting the color histogram feature of the first image data, and taking the specular-reflection feature and the color histogram feature as the second texture feature; where extracting the specular-reflection feature of the first image data includes: obtaining the reflectance image of the first image data, and obtaining a specular image based on the first image data and the reflectance image; partitioning the specular image into blocks, and taking block-wise grayscale statistics as the specular-reflection feature.
In this embodiment, the second texture feature characterizes the specular-reflection feature of the first image data. The specular-reflection feature may specifically be a feature representing the distribution of highlight regions and the distribution of image chroma in the first image data. Specifically, the second texture feature characterizing specular reflection comprises two classes: one class describes the distribution of the image's highlight regions, where a highlight region is a region whose luminance parameter reaches a preset threshold; the other class corresponds to the color differences caused by differing image reflectivities, i.e., the color histogram feature. Because the image in a non-live attack is captured twice (a printed photograph or an image shown on a display/screen can be understood as a secondary shot), it is approximately planar and its material differs from that of a real face, which easily causes color shifts. Specifically, for first image data that is an RGB image, the reflectance image of the first image data is obtained in the RGB color space, and the specular image is obtained based on the first image data and the reflectance image; specifically, the specular image is the difference between the first image data and its reflectance image. The reflectance image can be obtained by the following formulas:
Spect(y, x) = (1 - max(max(r(y, x) * t, g(y, x) * t), b(y, x) * t)) * 255   (3)

t = 1.0 / (r(y, x) + g(y, x) + b(y, x))   (4)
where Spect(y, x) represents the reflectance value of feature point (y, x) in the first image data, and r(y, x), g(y, x), and b(y, x) represent the data of feature point (y, x) in the red, green, and blue channels of the RGB color space respectively.
Further, the specular image is partitioned into blocks, and the mean and the variance (delta) of each image block are chosen as the specular-reflection feature; since the specular image is a grayscale image, the mean and variance of each block are computed over gray values. Fig. 6a and Fig. 6b are schematic diagrams of the second texture feature in the liveness verification method of the embodiment: the left part of Fig. 6a is collected image data corresponding to a non-live face and the right part is the specular image obtained after processing that image data; the left part of Fig. 6b is collected image data corresponding to a live face and the right part is the specular image obtained after processing that image data.
For the color histogram feature, the first image data is converted to HSV model data, yielding the H model data representing hue, the S model data representing saturation, and the V model data representing value. The H, S, and V model data are each projected into a 32-bin space, giving a 32768-dimensional (32x32x32) color histogram. The 100 dimensions with the largest histogram components are taken as the color histogram feature of the first image data.
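Under the formulas above, a minimal sketch of both halves of this second texture feature follows (Python; how the specular difference is formed, the 4x4 block grid, and the top-100 selection by sorted magnitude are assumptions where the text leaves details open):

```python
import cv2
import numpy as np

def specular_image(bgr: np.ndarray) -> np.ndarray:
    """Specular image = input minus its reflectance image, per formulas (3)-(4)."""
    b, g, r = [bgr[:, :, i].astype(np.float64) for i in range(3)]
    t = 1.0 / np.maximum(r + g + b, 1e-6)      # guard against division by zero
    spect = (1.0 - np.maximum(np.maximum(r * t, g * t), b * t)) * 255.0
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    return np.clip(gray - spect, 0, 255).astype(np.uint8)

def specular_feature(bgr: np.ndarray, blocks: int = 4) -> np.ndarray:
    """Block-wise mean and variance of the specular image."""
    spec = specular_image(bgr)
    stats = []
    for rows in np.array_split(spec, blocks, axis=0):
        for blk in np.array_split(rows, blocks, axis=1):
            stats += [blk.mean(), blk.var()]
    return np.asarray(stats)

def color_hist_feature(bgr: np.ndarray, top_k: int = 100) -> np.ndarray:
    """Top-100 components of a 32x32x32 HSV color histogram."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [32, 32, 32],
                        [0, 180, 0, 256, 0, 256]).ravel()
    return np.sort(hist)[::-1][:top_k]
```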
In this embodiment, obtaining the third texture feature of the first image data includes: filtering the first image data to obtain the first edge image data of the first image data; and performing LBP processing on the first edge image data to obtain the fourth LBP feature data characterizing the third texture feature.
In this embodiment, the third texture feature characterizes the border feature of the first image data. The border feature may specifically characterize whether the first image data contains a frame, and may specifically be the linear features presented by the region of the first image data outside the face region.
Specifically, to obtain the border feature of the first image data, the first image data is first filtered to obtain the corresponding first edge image. As an embodiment, the Sobel operator (which may specifically include two matrices, one for horizontal edge detection and one for vertical edge detection) can be convolved in the plane with the pixel values of the first image data to obtain the first edge image corresponding to the first image data. Further, grayscale processing is applied to the first edge image to obtain its grayscale image, and the relative grayscale relation between each feature point and its eight neighboring feature points is determined, e.g., over a 3x3 matrix of feature points. The gray value of each point is quantized, and the grayscale of each of the eight neighbors is compared with that of the central feature point: if a neighbor's grayscale is greater than the central point's, its value is recorded as 1; otherwise (less than or equal), it is recorded as 0. Concatenating the neighbors' values produces an 8-bit binary string, which can be understood as a gray value distributed in (0, 255); with reference to Fig. 4c, taking the upper-left feature point as the starting point and proceeding clockwise, the string is 10001111. A binary string is thus obtained for every central feature point in the image. Further, to remove redundancy, the binary strings with fewer than 2 transitions between 0 and 1 are counted (for example, 10001111 changes at the first-to-second bit and at the fourth-to-fifth bit, two changes in total, and does not satisfy the condition, while 00001111 changes only at the fourth-to-fifth bit and does satisfy it), and the counted strings are mapped into the range (0, 58); the mapped data serve as the fourth LBP feature data corresponding to the third texture feature, which also greatly reduces the amount of data processing. Because the other smooth content has been filtered out, the fourth LBP feature data of the first edge image highlights the edge portions of the image and describes its border feature.
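A minimal sketch of this third texture feature, reusing the `uniform_lbp_hist` helper sketched earlier (the Sobel kernel size and magnitude combination are assumptions):

```python
import cv2
import numpy as np

def third_texture_feature(bgr: np.ndarray) -> np.ndarray:
    """LBP histogram of the Sobel edge image, highlighting border/frame lines."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Horizontal and vertical Sobel responses, combined into one edge map.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    return uniform_lbp_hist(edges)   # 59-bin histogram as the 4th LBP feature
```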
The above technical scheme extracts texture features from the first image data based on three kinds of characteristics. In this embodiment, a large amount of sample data is collected in advance. The sample data may specifically include first texture features extracted in the above manner together with the corresponding type (the blur type), and/or second texture features with the corresponding type (the reflection type), and/or third texture features with the corresponding type (the border type); that is, the sample data includes at least one of the three kinds of texture features and its corresponding type. Machine-learning training is performed for each type of texture feature to obtain the classification model corresponding to that type. Specifically, for the blur type, a corresponding first classification model is obtained. For example, as shown in Fig. 5b, the first texture features extracted from pre-labeled image data all exhibit stripe characteristics, such as the slanted stripes in the first and third images of Fig. 5b and the approximately horizontal stripes in the second image; machine-learning training can then be carried out on the common characteristics of the first texture features of the blur type (for example the stripe characteristics) to obtain the first classification model corresponding to the first texture feature. Similarly, a corresponding second classification model is obtained for the reflection type, and a corresponding third classification model for the border type.
Then, in this embodiment, the obtained texture features (including at least one of the first texture feature, the second texture feature, and the third texture feature) are input into the classification model of the corresponding type to obtain the corresponding first-class parameter. For example, the obtained first texture feature is input into the first classification model for the blur type to obtain the first parameter corresponding to the first texture feature, which characterizes the degree of blur of the first image data; the obtained second texture feature is input into the second classification model for the reflection type to obtain the second parameter, which characterizes the degree of specular reflection of the first image data; and the obtained third texture feature is input into the third classification model for the border type to obtain the third parameter, which characterizes whether the first image data includes a border. Further, one threshold is configured for each classification model: when an obtained parameter is not greater than the corresponding threshold, the person contained in the first image data is determined to be a non-live body, i.e., liveness verification fails; correspondingly, when the obtained parameter is greater than the corresponding threshold, the subsequent fusion judgment is carried out in combination with the statistical results of the following three characteristics. For example, when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, the person contained in the first image data is determined to be a non-live body, i.e., liveness verification fails.
In this embodiment, counting the fourth parameter corresponding to the first texture feature in the first image data includes: performing Gaussian filtering on the first image data to obtain the Gaussian image data of the first image data; obtaining difference image data based on the first image data and the Gaussian image data; and taking the gradient information of the difference image data as the fourth parameter.
Specifically, Gaussian filtering is applied to the first image data to obtain Gaussian image data, and the gradient information of the difference image between the first image data and the Gaussian image data is counted as the fourth parameter. The process can be realized by the following formulas:
Gx(y, x) = Img(y, x+1) - Img(y, x-1)   (5)

Bx(y, x) = Img(y, x+kernel) - Img(y, x-kernel)   (6)

Vx(y, x) = max(0, Gx(y, x) - Bx(y, x))   (7)

Gy(y, x) = Img(y+1, x) - Img(y-1, x)   (8)

By(y, x) = Img(y+kernel, x) - Img(y-kernel, x)   (9)

Vy(y, x) = max(0, Gy(y, x) - By(y, x))   (10)

Blur = max(Sum(Gx) - Sum(Vx), Sum(Gy) - Sum(Vy))   (11)
where Gx(y, x) represents the gradient of feature point (y, x) along the x-axis; Bx(y, x) represents the difference between the two pixels at horizontal distance kernel to the left and right of feature point (y, x), kernel being a variable distance; Vx(y, x) represents the maximum of 0 and the difference between Gx(y, x) and Bx(y, x); Gy(y, x) represents the gradient of feature point (y, x) along the y-axis; By(y, x) represents the difference between the two pixels at vertical distance kernel above and below feature point (y, x); Vy(y, x) represents the maximum of 0 and the difference between Gy(y, x) and By(y, x); and Blur is the fourth parameter, representing the degree of blur of the first image data. Sum(Gx) and Sum(Gy) represent the sums, over all feature points in the first image data, of the gradients along the x-axis and y-axis respectively; Sum(Vx) and Sum(Vy) represent the sums of the corresponding Vx and Vy values over all feature points in the first image data.
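A minimal sketch of the blur statistic of formulas (5)-(11) (Python; the `kernel` default and the use of absolute differences are assumptions, since the formulas leave the sign convention open):

```python
import numpy as np

def blur_statistic(img: np.ndarray, kernel: int = 3) -> float:
    """Fourth parameter: compares local gradients with wider-baseline differences."""
    f = img.astype(np.float64)
    gx = np.abs(f[:, 2:] - f[:, :-2])                     # (5) local x-gradient
    bx = np.abs(f[:, 2*kernel:] - f[:, :-2*kernel])       # (6) wide x-difference
    vx = np.maximum(0.0, gx[:, kernel-1:gx.shape[1]-kernel+1] - bx)   # (7)
    gy = np.abs(f[2:, :] - f[:-2, :])                     # (8) local y-gradient
    by = np.abs(f[2*kernel:, :] - f[:-2*kernel, :])       # (9) wide y-difference
    vy = np.maximum(0.0, gy[kernel-1:gy.shape[0]-kernel+1, :] - by)   # (10)
    return float(max(gx.sum() - vx.sum(), gy.sum() - vy.sum()))       # (11)
```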
In this embodiment, counting the fifth parameter corresponding to the second texture feature in the first image data includes: obtaining the specular image of the first image data; binarizing the specular image; partitioning the specular image into blocks based on the binarized image; counting, for each block, the first ratio of the region whose brightness meets the preset threshold to the corresponding block; and taking the sum of the first ratios over all blocks as the fifth parameter. The process can be realized by the following formula:
Spec = sum(count(Rect(y, x) = 1) / count(Rect))  (12)
Here, Spec represents the fifth parameter characterizing the degree of reflection of the first image data; Rect(y, x) represents a pixel value in a block image of the binarized specular image; count(Rect(y, x) = 1) represents the number of pixels whose value is 1 in a block image; and count(Rect) represents the number of all characteristic points in that block image of the specular image.
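For illustration only, a minimal numpy sketch of expression (12); the grid size and brightness threshold are illustrative assumptions:

```python
import numpy as np

def specular_statistic(spec_img, grid=4, thr=200):
    """Expression (12): binarize the specular image, divide it into
    blocks, and sum each block's bright-pixel ratio
    count(Rect(y, x) = 1) / count(Rect) over all blocks."""
    binary = (np.asarray(spec_img, dtype=np.float64) >= thr)
    H, W = binary.shape
    bh, bw = H // grid, W // grid
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            block = binary[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += block.mean()   # ratio of pixels whose value is 1
    return total
```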
In the present embodiment, counting the sixth parameter corresponding to the third textural characteristics in the first image data includes: recognizing the face region in the first image data; performing edge detection processing on the first image data to obtain second edge image data; identifying, in the second edge image data, first straight lines whose length meets a first preset condition; extracting, from the first straight lines, second straight lines whose position is outside the face region and whose slope meets a second preset condition; and counting the number of the second straight lines as the sixth parameter.
Specifically, edge detection is performed on the first image data. As an embodiment, the Canny edge detection algorithm may be used, which may specifically include: first converting the first image data (which may specifically be RGB image data) into a grayscale image and performing Gaussian filtering processing on the grayscale image to remove image noise; further calculating image gradient information and calculating the edge amplitude and direction of the image according to the image gradient information; applying non-maximum suppression to the edge amplitude so that only the points with locally maximal amplitude variation are retained, generating a refined edge; and using dual-threshold edge detection and connecting the edges, so that the extracted edge points are more robust, thereby generating the second edge image data. Further, a Hough transform is performed on the second edge image data to find the straight lines in the second edge image data. Further, among all the straight lines, the first straight lines whose length meets the first preset condition are identified; as an embodiment, this includes identifying, among all the straight lines, the straight lines whose length exceeds half the width of the first image data as the first straight lines. On the other hand, in the process of parsing the first image data, the face in the first image data is detected to obtain the face region; the edge of the face region can be represented by the output face frame. The first straight lines are then further examined to obtain, as the second straight lines, those straight lines that are outside the face region and whose slope meets the second preset condition; here, a second straight line whose slope meets the second preset condition includes: a first straight line that is outside the face region and whose angle with a straight line along an edge of the face region is not greater than a predetermined angle. As an example, the predetermined angle may be 30 degrees; of course, it is not limited to the above-cited example. The second straight lines thus obtained can be as illustrated in Fig. 7. The acquisition process of the second straight lines can be realized by the following formula:
Line = sum(count(Canny(y, x)))  (13)
Here, Line represents the number of the second straight lines; sum represents a summation operation; Canny(y, x) represents a straight line passing through the edge pixel point (y, x) after processing by the Canny edge detection algorithm; and count represents counting the number of straight lines passing through the edge pixel point (y, x).
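For illustration only, a sketch of the second-straight-line count with OpenCV; the Canny and Hough parameters are illustrative assumptions, and the angle test approximates "angle with an edge of the face frame" by the deviation from the nearest image axis:

```python
import cv2
import numpy as np

def frame_line_count(gray, face_rect, max_angle_deg=30.0):
    """Canny edges, probabilistic Hough lines, then keep lines longer
    than half the image width that lie outside the face rectangle and
    are within max_angle_deg of the face-frame edges."""
    edges = cv2.Canny(gray, 50, 150)              # illustrative thresholds
    H, W = gray.shape
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=W // 2, maxLineGap=5)
    if lines is None:
        return 0
    fx, fy, fw, fh = face_rect                    # x, y, width, height
    count = 0
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 90.0
        near_frame_edge = min(angle, 90.0 - angle) <= max_angle_deg
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # line midpoint
        outside_face = not (fx <= mx <= fx + fw and fy <= my <= fy + fh)
        if near_frame_edge and outside_face:
            count += 1                             # a second straight line
    return count
```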
In the present embodiment, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter includes: obtaining in advance, respectively, by a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter; obtaining the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec, and the sixth parameter Line. Further, the weight values can be learned by a machine learning algorithm, fitting each of the above six components separately; the obtained fusion parameter satisfies the following formula:
Live = a1*Blur_s + a2*Spec_s + a3*Line_s + a4*Blur + a5*Spec + a6*Line  (14)
Further, the obtained fusion parameter is compared with the preset third-class threshold; when the fusion parameter is smaller than the third-class threshold, a non-living face is determined, that is, the live body verification fails. Accordingly, when the fusion parameter is not smaller than the third-class threshold, a living face is determined, that is, the live body verification succeeds.
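For illustration only, expression (14) and the third-class-threshold comparison as a minimal sketch; the six weights a1..a6 would come from the offline machine-learning fitting described above:

```python
def fused_liveness_score(params, weights):
    """Expression (14): Live = a1*Blur_s + a2*Spec_s + a3*Line_s
                              + a4*Blur + a5*Spec + a6*Line.
    params and weights are six-element sequences in that order."""
    return sum(a * p for a, p in zip(weights, params))

def verify_live_body(params, weights, third_class_threshold):
    live = fused_liveness_score(params, weights)
    return live >= third_class_threshold   # True: living face
```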
Based on the above description, stage 3 shown in Fig. 1, namely the live body verification process, can refer to Fig. 8, and includes the classification processing flows of the three kinds of textural characteristics and the statistical processing flows of the three kinds of textural characteristics. Of course, other embodiments are not limited to the blur characteristics, reflective characteristics and frame characteristics enumerated in this example; textural characteristics involved in other attack scenarios also fall within the protection scope of the embodiments of the present invention. Specifically, after the face detection in the image data is completed, the image data is processed separately as follows: the blur textural characteristics in the image data are extracted and input into the blur classifier to obtain the first parameter; the first parameter is compared with the first threshold, and when the first parameter is smaller than the first threshold, a non-living face is determined, while when the first parameter is not smaller than the first threshold, the first parameter is sent to the parameter fusion flow. The reflective textural characteristics in the image data are extracted and input into the reflective classifier to obtain the second parameter; the second parameter is compared with the second threshold, and when the second parameter is smaller than the second threshold, a non-living face is determined, while when the second parameter is not smaller than the second threshold, the second parameter is sent to the parameter fusion flow. The frame textural characteristics in the image data are extracted and input into the frame classifier to obtain the third parameter; the third parameter is compared with the third threshold, and when the third parameter is smaller than the third threshold, a non-living face is determined, while when the third parameter is not smaller than the third threshold, the third parameter is sent to the parameter fusion flow. The blur parameter (i.e. the fourth parameter) in the image data is counted and compared with the fourth threshold; when the fourth parameter is smaller than the fourth threshold, a non-living face is determined, while when the fourth parameter is not smaller than the fourth threshold, the fourth parameter is sent to the parameter fusion flow. The reflective parameter (i.e. the fifth parameter) in the image data is counted and compared with the fifth threshold; when the fifth parameter is smaller than the fifth threshold, a non-living face is determined, while when the fifth parameter is not smaller than the fifth threshold, the fifth parameter is sent to the parameter fusion flow. The frame parameter (i.e. the sixth parameter) in the image data is counted and compared with the sixth threshold; when the sixth parameter is smaller than the sixth threshold, a non-living face is determined, while when the sixth parameter is not smaller than the sixth threshold, the sixth parameter is sent to the parameter fusion flow. Further, parameter fusion is performed on the above first parameter, second parameter, third parameter, fourth parameter, fifth parameter and sixth parameter, and the fusion parameter is further compared with the corresponding threshold; when the fusion parameter is smaller than that threshold, a non-living face is determined, while when the fusion parameter is not smaller than that threshold, a living face is determined, and the process proceeds to stage 4, in which the image data is sent to the background for face verification.
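For illustration only, the gate-then-fuse flow of Fig. 8 as a control-flow sketch, assuming the six scores (Blur_s, Spec_s, Line_s, Blur, Spec, Line), their six thresholds and the fusion weights are given:

```python
def live_body_pipeline(scores, thresholds, weights, fusion_threshold):
    """Each of the six scores is gated by its own threshold; only if
    all pass are they fused and the fusion compared once more."""
    for s, t in zip(scores, thresholds):
        if s < t:                  # any score below its threshold
            return False           # -> non-living face
    live = sum(a * s for a, s in zip(weights, scores))
    return live >= fusion_threshold
```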
The live body verification scheme of the embodiments of the present invention is not limited to passive judgment, but can also be fused as a supplement to active live body judgment. Since this method does not conflict with active live body detection in any way, and passive live body detection causes no negative interference with the user experience, live body verification can be better realized together with active live body judgment. In active verification combined with passive live body detection, the invention can be used as preprocessing for the action judgment, that is, the subsequent action judgment is performed only on the premise that the passive judgment has first determined a real person; alternatively, both can be processed simultaneously, that is, even if the user's action is correct, the subject may still be judged to be an attacker. This can more effectively prevent video attacks.
Fig. 9 is a schematic diagram of the effect curves of the live body verification method of the embodiments of the present invention, giving the receiver operating characteristic (ROC, Receiver Operating Characteristic) curves obtained by different algorithms. In the ROC curves shown in Fig. 9, the horizontal axis represents the false pass rate, and the vertical axis represents the accuracy rate. It can be seen that for the ROC curve corresponding to the fused (combine) live body verification method of the embodiments of the present invention, the accuracy rate is improved to about 0.8 even at a very low false pass rate, so that the technical solution provided by the present embodiment can well guard against attacks by different types of attack samples, while real faces can still complete verification, without affecting the user experience. The present invention does not depend on any device or user interaction, has little to no effect on computational complexity, and is a completely interference-free scheme. As for the other live body verification methods that use a single classification algorithm or statistical algorithm, as shown in Fig. 9: the ROC curve obtained by the blur classification algorithm is the curve labeled Blur_s; the ROC curve obtained by the reflective classification algorithm is the curve labeled Spec_s; the ROC curve obtained by the frame classification algorithm is the curve labeled Line_s; the ROC curve obtained by the blur statistical algorithm is the curve labeled Blur; the ROC curve obtained by the reflective statistical algorithm is the curve labeled Spec; and the ROC curve obtained by the frame statistical algorithm is the curve labeled Line. The accuracy rates of the above six individual approaches are all far smaller than the accuracy rate of the fusion approach.
The embodiments of the present invention also provide a live body verification device. Fig. 10 is a schematic diagram of the composition structure of the live body verification device of the embodiments of the present invention; as shown in Fig. 10, the device includes: a parsing unit 31, a classification unit 32, a statistics unit 33 and a fusion unit 34; wherein,
the parsing unit 31 is configured to obtain first image data and parse the first image data;
the classification unit 32 is configured to obtain textural characteristics of the first image data, the textural characteristics characterizing at least one of the following attribute features: blur characteristics of the first image data, reflective characteristics of the first image data, and frame characteristics of the first image data; and to obtain first-class parameters corresponding to the textural characteristics based on classification models;
the statistics unit 33 is configured to obtain, based on a statistical processing mode, second-class parameters corresponding to the textural characteristics in the first image data; the second-class parameters being different from the first-class parameters;
the fusion unit 34 is configured to judge whether the first-class parameters are greater than first-class thresholds and whether the second-class parameters are greater than second-class thresholds; to determine, when the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds, a fusion parameter based on the first-class parameters and the second-class parameters; and to determine, when the fusion parameter is greater than a third-class threshold, that the live body verification succeeds.
As an embodiment, the fusion unit 34 is further configured to determine that the live body verification fails when judging that the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold.
Specifically, as an embodiment, the classification unit 32 is configured to obtain, respectively, first textural characteristics, second textural characteristics and third textural characteristics of the first image data; the first textural characteristics characterizing the blur characteristics of the first image data, the second textural characteristics characterizing the reflective characteristics of the first image data, and the third textural characteristics characterizing the frame characteristics of the first image data. It is further configured to obtain a first parameter corresponding to the first textural characteristics based on a pre-configured first classification model, obtain a second parameter corresponding to the second textural characteristics based on a pre-configured second classification model, and obtain a third parameter corresponding to the third textural characteristics based on a pre-configured third classification model;
the statistics unit 33 is configured to count a fourth parameter corresponding to the first textural characteristics, a fifth parameter corresponding to the second textural characteristics, and a sixth parameter corresponding to the third textural characteristics in the first image data.
Further, the fusion unit 34 is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when judging that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
As an embodiment, the fusion unit 34 is further configured to determine that the live body verification fails when judging that the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold.
Specifically, in the present embodiment, the classification unit 32 is configured to convert the first image data into HSV model data; to perform LBP processing on the HSV model data to obtain, respectively, first LBP characteristic data corresponding to the hue data, second LBP characteristic data corresponding to the saturation data, and third LBP characteristic data corresponding to the value (lightness) data; and to take the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first textural characteristics.
Specifically, the first image data may be RGB image data; the RGB data is converted into HSV model data, from which the H model data representing hue, the S model data representing saturation and the V model data representing value can be obtained, respectively. LBP processing is performed on the H model data, the S model data and the V model data separately, so as to obtain the image gradient information in each. Taking the H model data as an example, grayscale processing is performed on the H model data to obtain a grayscale image of the H model data, and the relative gray-level relationship between each characteristic point and its eight adjacent characteristic points is then determined. For the grayscale image of a 3-by-3 characteristic-point matrix as shown in Fig. 4a, the gray level of each characteristic point is, for example, as shown in Fig. 4a; the gray value of each characteristic point is quantized, as can be seen in Fig. 4b. Further, the gray levels of the eight adjacent characteristic points are compared with the gray level of the central characteristic point: if the gray level of an adjacent characteristic point is greater than that of the central characteristic point, the value of that adjacent characteristic point is recorded as 1; conversely, if the gray level of an adjacent characteristic point is less than or equal to that of the central characteristic point, the value of that adjacent characteristic point is recorded as 0, as shown in Fig. 4c. Further, the values of the adjacent characteristic points are concatenated into an 8-bit binary character string, which can be understood as a gray value distributed in (0, 255). In a specific implementation, referring to Fig. 4c, if the upper-left characteristic point is taken as the starting characteristic point and the points are arranged clockwise, the resulting 8-bit character string is 10001111. The binary character string corresponding to each characteristic point (as central characteristic point) in the image can thus be obtained. Further, in order to remove redundancy, among the binary character strings corresponding to the characteristic points, those with fewer than two 0-1 transitions are counted. For example, the character string 10001111 has one transition between the first and second bits and one transition between the fourth and fifth bits, two transitions in total, and does not satisfy the condition of "fewer than two 0-1 transitions". As another example, the character string 00001111 has only one transition, between the fourth and fifth bits, and satisfies the condition. The counted binary character strings are then mapped into the range (0, 58), and the mapped data can serve as the first LBP characteristic data corresponding to the hue data; this also greatly reduces the amount of data processing.
Similarly, the second LBP characteristic data and the third LBP characteristic data can be obtained in the above manner, which is not repeated here. Further, the obtained first LBP characteristic data, second LBP characteristic data and third LBP characteristic data are concatenated as the first textural characteristics, which can be understood as serially connecting three 59-dimensional LBP characteristic data (the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data). Fig. 5a shows the first textural characteristics extracted from image data determined in advance to be living faces; Fig. 5b shows the first textural characteristics extracted from image data determined in advance to be non-living faces.
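For illustration only, the first textural characteristics can be sketched with OpenCV and scikit-image; scikit-image's 'nri_uniform' LBP (58 uniform patterns plus one residual bin, 59 bins per channel) is used here as a stand-in for the transition-count mapping described above:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def first_texture_feature(bgr_img):
    """HSV conversion, a 59-bin uniform LBP histogram per channel,
    and concatenation into a 3 x 59 = 177-dimensional feature."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    feats = []
    for ch in cv2.split(hsv):                          # H, S, V planes
        lbp = local_binary_pattern(ch, P=8, R=1, method='nri_uniform')
        hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```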
In the present embodiment, the classification unit 32 is configured to extract the reflective characteristics of the first image data and extract the color histogram feature of the first image data, and to take the reflective characteristics and the color histogram feature as the second textural characteristics; wherein the classification unit 32 is configured to obtain the albedo image of the first image data, obtain a specular image based on the first image data and the albedo image, perform block division processing on the albedo image, and obtain image-block gray-level statistical parameters as the reflective characteristics.
Specifically, the second textural characteristics characterizing the reflective characteristics include two classes: one class describes the highlight regions of the image, i.e. the reflective characteristics; the other class corresponds to the color shade changes caused by the different reflectivity of the image, i.e. the color histogram feature. In the non-living-face attack patterns, the secondarily-captured image (a captured photograph print, or an image shown on a display/screen, can be understood as secondary capture) is approximately planar, and its material differs from a real face, so color changes are easily caused. Specifically, for first image data that is an RGB image, the albedo image of the first image data is obtained under the RGB color space, and the specular image is obtained based on the first image data and the albedo image; specifically, the specular image is the difference between the first image data and its albedo image. Further, block division processing is performed on the specular image, and the mean and variance of each image block are selected as the reflective characteristics; since the specular image is specifically a grayscale image, the image-block mean and variance are specifically represented by the mean and variance of the gray values. The left part of Fig. 6a is the image data corresponding to a collected non-living face, and the right part is the specular image obtained after processing that image data; the left part of Fig. 6b is the image data corresponding to a collected living face, and the right part is the specular image obtained after processing that image data.
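For illustration only, the block mean/variance part of the reflective characteristics, assuming the specular image has already been computed as a grayscale array and using an illustrative 4 x 4 grid:

```python
import numpy as np

def specular_block_feature(spec_img, grid=4):
    """Divide the grayscale specular image into grid x grid blocks and
    collect each block's mean and variance of gray value."""
    a = np.asarray(spec_img, dtype=np.float64)
    H, W = a.shape
    bh, bw = H // grid, W // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = a[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.extend([block.mean(), block.var()])
    return np.asarray(feats)
```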
For the color histogram feature, the first image data is converted into HSV model data, from which the H model data representing hue, the S model data representing saturation and the V model data representing value can be obtained, respectively. The H model data, S model data and V model data are each projected into a 32-dimensional space, giving a 32768-dimensional color histogram. The 100 dimensions with the highest color histogram components are taken as the color histogram feature of the first image data.
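For illustration only, a sketch of the color histogram feature with OpenCV; keeping the values of the 100 largest bins is one reading of "the 100 dimensions with the highest components" and is an assumption here:

```python
import cv2
import numpy as np

def color_histogram_feature(bgr_img, bins=32, keep=100):
    """Joint 32 x 32 x 32 = 32768-bin HSV histogram, normalized, with
    the 100 largest components kept as the feature."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256]).ravel()
    hist /= hist.sum() + 1e-12            # normalize
    return np.sort(hist)[::-1][:keep]     # 100 largest bins
```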
In the present embodiment, the classification unit 32 is configured to perform filtering processing on the first image data to obtain first edge image data of the first image data, and to perform LBP processing on the first edge image data to obtain fourth LBP characteristic data characterizing the third textural characteristics.
Specifically, in order to obtain the frame characteristics in the first image data, filtering processing is first performed on the first image data to obtain the first edge image corresponding to the first image data. As an embodiment, Sobel operators (which may specifically include two groups of matrices, one for lateral edge detection and one for longitudinal edge detection) may be convolved planarly with the pixel values in the first image data to obtain the first edge image corresponding to the first image data. Further, grayscale processing is performed on the first edge image to obtain the grayscale image corresponding to the first edge image, and the relative gray-level relationship between each characteristic point and its eight adjacent characteristic points in the grayscale image is determined, such as for the grayscale image of a 3-by-3 characteristic-point matrix; the gray value of each characteristic point is quantized, the gray levels of the eight adjacent characteristic points are compared with the gray level of the central characteristic point, and if the gray level of an adjacent characteristic point is greater than that of the central characteristic point, the value of that adjacent characteristic point is recorded as 1; conversely, if the gray level of an adjacent characteristic point is less than or equal to that of the central characteristic point, the value of that adjacent characteristic point is recorded as 0. Further, the values of the adjacent characteristic points are concatenated into an 8-bit binary character string, which can be understood as a gray value distributed in (0, 255). In a specific implementation, referring to Fig. 4c, if the upper-left characteristic point is taken as the starting characteristic point and the points are arranged clockwise, the resulting 8-bit character string is 10001111. The binary character string corresponding to each characteristic point (as central characteristic point) in the image can thus be obtained. Further, in order to remove redundancy, among the binary character strings corresponding to the characteristic points, those with fewer than two 0-1 transitions are counted; for example, the character string 10001111 has two transitions in total (between the first and second bits, and between the fourth and fifth bits) and does not satisfy the condition of "fewer than two 0-1 transitions", while the character string 00001111 has only one transition (between the fourth and fifth bits) and satisfies the condition. The counted binary character strings are then mapped into the range (0, 58), and the mapped data can serve as the fourth LBP characteristic data corresponding to the third textural characteristics; this also greatly reduces the amount of data processing. Since the other smooth regions have been filtered out, the fourth LBP characteristic data corresponding to the first edge image can highlight the edge portions in the image and describe the frame characteristics of the image.
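For illustration only, the third textural characteristics sketched with a Sobel edge image followed by the same 59-bin uniform LBP histogram; the Sobel kernel size and the LBP stand-in are assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def third_texture_feature(bgr_img):
    """Sobel-filter the grayscale image into a first edge image, then
    take a 59-bin uniform LBP histogram over it (the fourth LBP
    characteristic data)."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # lateral edges
    sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)    # longitudinal edges
    edge = cv2.convertScaleAbs(np.sqrt(sx ** 2 + sy ** 2))
    lbp = local_binary_pattern(edge, P=8, R=1, method='nri_uniform')
    hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
    return hist
```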
The above technical solution extracts textural characteristics from the first image data based on three kinds of characteristics. In the present embodiment, the classification unit 32 collects a large amount of sample data in advance; the sample data may specifically include first textural characteristics extracted in the above manner together with the corresponding type (i.e. the blur type), and/or second textural characteristics together with the corresponding type (i.e. the reflective type), and/or third textural characteristics together with the corresponding type (i.e. the frame type); that is, the sample data may include at least one of the above three kinds of textural characteristics together with its corresponding type. Machine learning training is performed for each type of textural characteristics to obtain the classification model corresponding to that type. Specifically, corresponding to the blur type, the corresponding first classification model is obtained. For example, as shown in Fig. 5b, the first textural characteristics extracted in advance from image data marked as non-living all exhibit stripe features, such as the oblique stripes in the first and third images in Fig. 5b and the approximately horizontal stripes in the second image; machine learning training can then be performed based on the common characteristics (such as the stripe features) of the first textural characteristics corresponding to the blur type, obtaining the first classification model corresponding to the first textural characteristics. Corresponding to the reflective type, the corresponding second classification model is obtained; corresponding to the frame type, the corresponding third classification model is obtained.
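For illustration only, the offline training step can be sketched as fitting one model per texture type; the SVC model and the dictionary layout are assumptions, not the patent's prescribed trainer:

```python
from sklearn.svm import SVC

def train_texture_classifiers(samples):
    """samples maps each texture type ('blur', 'reflective', 'frame')
    to (features, labels), labels being 1 for living faces and 0 for
    attack images; one classifier is fitted per type."""
    models = {}
    for kind, (X, y) in samples.items():
        clf = SVC(kernel='rbf')        # illustrative model choice
        clf.fit(X, y)
        models[kind] = clf
    return models
```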
Then in the present embodiment, the classification unit 32 inputs the acquired textural characteristics (including at least one of the first textural characteristics, the second textural characteristics and the third textural characteristics) into the classification models of the corresponding types to obtain the corresponding first-class parameters. For example, the acquired first textural characteristics are input into the first classification model corresponding to the blur type to obtain the first parameter corresponding to the first textural characteristics, the first parameter characterizing the degree of blur of the first image data; the acquired second textural characteristics are input into the second classification model corresponding to the reflective type to obtain the second parameter corresponding to the second textural characteristics, the second parameter characterizing the degree of reflection of the first image data; and the acquired third textural characteristics are input into the third classification model corresponding to the frame type to obtain the third parameter corresponding to the third textural characteristics, the third parameter characterizing whether the first image data contains a frame. Further, a threshold is configured for each classification model; when an acquired parameter is not greater than the corresponding threshold, the person contained in the first image data is determined to be a non-living body, that is, the live body verification fails. Accordingly, when each acquired parameter is greater than the corresponding threshold, the subsequent fusion judgment is performed in combination with the statistical classification results of the following three characteristics. For example, when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, the person contained in the first image data is determined to be a non-living body, that is, the live body verification fails.
In the present embodiment, the statistics unit 33 is configured to perform Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data, obtain difference image data based on the first image data and the Gaussian image data, and obtain gradient information of the difference image data as the fourth parameter.
Specifically, Gaussian filtering processing is performed on the first image data to obtain Gaussian image data; the gradient information of the difference image between the first image data and the Gaussian image data is counted as the fourth parameter.
In the present embodiment, the statistics unit 33 is configured to obtain the specular image of the first image data; to perform binarization processing on the specular image and divide the specular image into blocks based on the binarized image; to count, for each block image, the first proportional relationship of the region whose brightness meets a predetermined threshold within the corresponding block image; and to calculate the sum of the first proportional relationships of all block images as the fifth parameter.
In the present embodiment, the statistics unit 33 is configured to recognize the face region in the first image data; to perform edge detection processing on the first image data to obtain second edge image data and identify, in the second edge image data, the first straight lines whose length meets the first preset condition; and to extract, from the first straight lines, the second straight lines whose position is outside the face region and whose slope meets the second preset condition, counting the number of the second straight lines as the sixth parameter.
Specifically, the statistics unit 33 performs edge detection on the first image data. As an embodiment, the Canny edge detection algorithm may be used, which may specifically include: first converting the first image data (which may specifically be RGB image data) into a grayscale image and performing Gaussian filtering processing on the grayscale image to remove image noise; further calculating image gradient information and calculating the edge amplitude and direction of the image according to the image gradient information; applying non-maximum suppression to the edge amplitude so that only the points with locally maximal amplitude variation are retained, generating a refined edge; and using dual-threshold edge detection and connecting the edges, so that the extracted edge points are more robust, thereby generating the second edge image data. Further, a Hough transform is performed on the second edge image data to find the straight lines in the second edge image data. Further, among all the straight lines, the first straight lines whose length meets the first preset condition are identified; as an embodiment, this includes identifying, among all the straight lines, the straight lines whose length exceeds half the width of the first image data as the first straight lines. On the other hand, in the process of parsing the first image data, the face in the first image data is detected to obtain the face region; the edge of the face region can be represented by the output face frame. The first straight lines are then further examined to obtain, as the second straight lines, those straight lines that are outside the face region and whose slope meets the second preset condition; here, a second straight line whose slope meets the second preset condition includes: a first straight line that is outside the face region and whose angle with a straight line along an edge of the face region is not greater than a predetermined angle. As an example, the predetermined angle may be 30 degrees; of course, it is not limited to the above-cited example.
In the present embodiment, the fusion unit 34 is configured to obtain in advance, respectively, by a machine learning algorithm, the first weight coefficient corresponding to the first parameter, the second weight coefficient corresponding to the second parameter, the third weight coefficient corresponding to the third parameter, the fourth weight coefficient corresponding to the fourth parameter, the fifth weight coefficient corresponding to the fifth parameter, and the sixth weight coefficient corresponding to the sixth parameter; to obtain the first product of the first parameter and the first weight coefficient, the second product of the second parameter and the second weight coefficient, the third product of the third parameter and the third weight coefficient, the fourth product of the fourth parameter and the fourth weight coefficient, the fifth product of the fifth parameter and the fifth weight coefficient, and the sixth product of the sixth parameter and the sixth weight coefficient; and to add the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec, and the sixth parameter Line. Further, the weight values can be learned by a machine learning algorithm, fitting each of the above six components separately; the obtained fusion parameter satisfies the aforementioned expression (14). Further, the obtained fusion parameter is compared with the preset third-class threshold; when the fusion parameter is smaller than the third-class threshold, a non-living face is determined, that is, the live body verification fails. Accordingly, when the fusion parameter is not smaller than the third-class threshold, a living face is determined, that is, the live body verification succeeds.
In the embodiments of the present invention, the parsing unit 31, classification unit 32, statistics unit 33 and fusion unit 34 in the live body verification device can, in practical applications, be realized by a central processing unit (CPU, Central Processing Unit), a digital signal processor (DSP, Digital Signal Processor), a micro-control unit (MCU, Microcontroller Unit) or a field-programmable gate array (FPGA, Field-Programmable Gate Array) in the terminal.
The embodiments of the present invention also provide a live body verification device; an example of the device as a hardware entity is shown in Fig. 11. The device includes a processor 61, a storage medium 62, a camera 65 and at least one external communication interface 63; the processor 61, the storage medium 62, the camera 65 and the external communication interface 63 are connected by a bus 64.
The live body verification method of the embodiments of the present invention can be integrated into the live body verification device in the form of an algorithm library of any format; specifically, it can be integrated into a client that can run in the live body verification device. In practical applications, the algorithm can be packaged together with the client; when the user activates the client, that is, turns on the live body verification function, the client calls the algorithm library and starts the camera, the image data collected by the camera serves as source data, and the live body judgment is carried out according to the collected source data.
In the several embodiments provided in this application, it should be understood that the disclosed device and method can be realized in other ways. The device embodiments described above are only schematic; for example, the division of the units is only a division of logical functions, and other division modes are possible in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. In addition, the couplings, direct couplings or communication connections between the components shown or discussed can be indirect couplings or communication connections of devices or units through some interfaces, and can be electrical, mechanical or of other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they can be located in one place or distributed over multiple network units; some or all of the units can be selected according to actual needs to realize the purpose of the solution of the present embodiment.
In addition, the functional units in the embodiments of the present invention can all be integrated into one processing unit, or each unit can serve as one unit separately, or two or more units can be integrated into one unit; the integrated unit can be realized in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by program instructions and related hardware; the foregoing program can be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
Alternatively, if the above integrated unit of the present invention is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that can be readily conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be based on the protection scope of the claims.

Claims (24)

1. A live body verification method, characterized in that the method includes:
obtaining first image data, parsing the first image data, and obtaining textural characteristics of the first image data; the textural characteristics characterizing at least one of the following attribute features: blur characteristics of the first image data, reflective characteristics of the first image data, and frame characteristics of the first image data;
obtaining first-class parameters corresponding to the textural characteristics based on classification models;
obtaining, based on a statistical processing mode, second-class parameters corresponding to the textural characteristics in the first image data; the second-class parameters being different from the first-class parameters;
when the first-class parameters are greater than first-class thresholds and the second-class parameters are greater than second-class thresholds, determining a fusion parameter based on the first-class parameters and the second-class parameters;
when the fusion parameter is greater than a third-class threshold, determining that the live body verification succeeds.
2. The method according to claim 1, characterized in that the method further includes: when judging that the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold, determining that the live body verification fails.
3. The method according to claim 1 or 2, characterized in that the obtaining textural characteristics of the first image data includes:
obtaining, respectively, first textural characteristics, second textural characteristics and third textural characteristics of the first image data; the first textural characteristics characterizing the blur characteristics of the first image data; the second textural characteristics characterizing the reflective characteristics of the first image data; the third textural characteristics characterizing the frame characteristics of the first image data;
the obtaining first-class parameters corresponding to the textural characteristics based on pre-configured classification models includes: obtaining a first parameter corresponding to the first textural characteristics based on a pre-configured first classification model, obtaining a second parameter corresponding to the second textural characteristics based on a pre-configured second classification model, and obtaining a third parameter corresponding to the third textural characteristics based on a pre-configured third classification model;
the counting second-class parameters corresponding to the textural characteristics in the first image data includes:
counting a fourth parameter corresponding to the first textural characteristics, a fifth parameter corresponding to the second textural characteristics, and a sixth parameter corresponding to the third textural characteristics in the first image data.
4. The method according to claim 3, characterized in that the determining, when judging that the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds, a fusion parameter based on the first-class parameters and the second-class parameters includes:
when judging that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold, determining the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter.
5. The method according to claim 3, characterized in that the determining, when judging that the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold, that the live body verification fails includes:
when judging that the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or when the fusion parameter is not greater than the third-class threshold, determining that the live body verification fails.
6. The method according to claim 3, characterized in that the obtaining first textural characteristics of the first image data includes:
converting the first image data into hue-saturation-value HSV model data; performing local binary pattern LBP processing on the HSV model data to obtain, respectively, first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data, and third LBP characteristic data corresponding to value data; and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first textural characteristics.
7. The method according to claim 3, characterized in that obtaining second textural characteristics of the first image data includes:
extracting the reflective characteristics of the first image data and extracting a color histogram feature of the first image data, and taking the reflective characteristics and the color histogram feature as the second textural characteristics;
wherein the extracting the reflective characteristics of the first image data includes: obtaining an albedo image of the first image data, and obtaining a specular image based on the first image data and the albedo image; performing block division processing on the albedo image, and obtaining image-block gray-level statistical parameters as the reflective characteristics.
8. The method according to claim 3, characterized in that obtaining third textural characteristics of the first image data includes:
performing filtering processing on the first image data to obtain first edge image data of the first image data;
performing LBP processing on the first edge image data to obtain fourth LBP characteristic data characterizing the third textural characteristics.
9. The method according to claim 3, characterized in that the counting a fourth parameter corresponding to the first textural characteristics in the first image data includes:
performing Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
10. The method according to claim 3, characterized in that the counting a fifth parameter corresponding to the second textural characteristics in the first image data includes:
obtaining a specular image of the first image data; performing binarization processing on the specular image, and dividing the specular image into blocks based on the binarized image; counting, for each block image, a first proportional relationship of the region whose brightness meets a predetermined threshold within the corresponding block image; and calculating the sum of the first proportional relationships of all block images as the fifth parameter.
11. The method according to claim 3, characterized in that the counting a sixth parameter corresponding to the third textural characteristics in the first image data includes:
recognizing a face region in the first image data;
performing edge detection processing on the first image data to obtain second edge image data, and identifying, in the second edge image data, first straight lines whose length meets a first preset condition;
extracting, from the first straight lines, second straight lines whose position is outside the face region and whose slope meets a second preset condition, and counting the number of the second straight lines as the sixth parameter.
12. The method according to claim 4, characterized in that the determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter includes:
obtaining in advance, respectively, by a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter;
obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient;
adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
13. A live body verification device, characterized in that the device includes: a parsing unit, a classification unit, a statistics unit and a fusion unit; wherein,
the parsing unit is configured to obtain first image data and parse the first image data;
the classification unit is configured to obtain textural characteristics of the first image data, the textural characteristics characterizing at least one of the following attribute features: blur characteristics of the first image data, reflective characteristics of the first image data, and frame characteristics of the first image data; and to obtain first-class parameters corresponding to the textural characteristics based on classification models;
the statistics unit is configured to obtain, based on a statistical processing mode, second-class parameters corresponding to the textural characteristics in the first image data; the second-class parameters being different from the first-class parameters;
the fusion unit is configured to judge whether the first-class parameters are greater than first-class thresholds and whether the second-class parameters are greater than second-class thresholds; to determine, when the first-class parameters are greater than the first-class thresholds and the second-class parameters are greater than the second-class thresholds, a fusion parameter based on the first-class parameters and the second-class parameters; and to determine, when the fusion parameter is greater than a third-class threshold, that the live body verification succeeds.
14. The device according to claim 13, characterized in that the fusion unit is further configured to determine that the live body verification fails when judging that the first-class parameters are not greater than the first-class thresholds, or the second-class parameters are not greater than the second-class thresholds, or the fusion parameter is not greater than the third-class threshold.
15. The device according to claim 13 or 14, characterized in that the classification unit is configured to obtain, respectively, first textural characteristics, second textural characteristics and third textural characteristics of the first image data; the first textural characteristics characterizing the degree of blur of the first image data; the second textural characteristics characterizing the degree of reflection of the first image data; the third textural characteristics characterizing whether the first image data contains a frame; and is further configured to obtain a first parameter corresponding to the first textural characteristics based on a pre-configured first classification model, obtain a second parameter corresponding to the second textural characteristics based on a pre-configured second classification model, and obtain a third parameter corresponding to the third textural characteristics based on a pre-configured third classification model;
the statistics unit is configured to count a fourth parameter corresponding to the first textural characteristics, a fifth parameter corresponding to the second textural characteristics, and a sixth parameter corresponding to the third textural characteristics in the first image data.
16. The device according to claim 15, characterized in that the fusion unit is configured to determine the fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
17. The device according to claim 15, characterized in that the fusion unit is further configured to determine that the living body verification fails when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third-type threshold.
18. The device according to claim 15, characterized in that the classification unit is configured to convert the first image data into HSV model data; to perform LBP (local binary pattern) processing on the HSV model data to respectively obtain first LBP feature data corresponding to the hue data, second LBP feature data corresponding to the saturation data, and third LBP feature data corresponding to the value (brightness) data; and to take the first LBP feature data, the second LBP feature data and the third LBP feature data as the first texture feature.
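For illustration, a minimal Python sketch of the per-channel LBP extraction this claim describes, assuming OpenCV and scikit-image; the LBP settings (8 neighbours, radius 1, uniform patterns) and the histogram summary are illustrative choices not fixed by the claim:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def first_texture_feature(bgr_image, points=8, radius=1):
    """Per-channel LBP histograms over HSV as the blur texture feature."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    histograms = []
    for channel in cv2.split(hsv):  # hue, saturation, value (brightness)
        lbp = local_binary_pattern(channel, points, radius, method='uniform')
        hist, _ = np.histogram(lbp, bins=points + 2,
                               range=(0, points + 2), density=True)
        histograms.append(hist)
    # First, second and third LBP feature data, concatenated as input
    # to the first classification model.
    return np.concatenate(histograms)
```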
19. The device according to claim 15, characterized in that the classification unit is configured to extract the reflection feature of the first image data and the color histogram feature of the first image data, and to take the reflection feature and the color histogram feature as the second texture feature;
wherein the classification unit is configured to obtain an albedo image of the first image data, to obtain a highlight image based on the first image data and the albedo image, to perform block-wise processing on the albedo image, and to take grayscale statistics of the image blocks as the reflection feature.
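The claim does not fix how the albedo image is estimated. Below is a sketch under the assumption that a strongly smoothed copy of the image stands in for the albedo, with the residual serving as the highlight image:

```python
import cv2
import numpy as np

def reflection_feature(bgr_image, block=32):
    """Block-wise grayscale statistics of a stand-in albedo decomposition."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Stand-in albedo: a strongly smoothed copy of the image; the residual
    # then plays the role of the highlight (specular) image.
    albedo = cv2.GaussianBlur(gray, (0, 0), sigmaX=15)
    highlight = np.clip(gray - albedo, 0, None)
    stats = []
    h, w = albedo.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = albedo[y:y + block, x:x + block]
            stats.extend([patch.mean(), patch.std()])  # grayscale statistics
    return np.asarray(stats), highlight
```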
20. The device according to claim 15, characterized in that the classification unit is configured to filter the first image data to obtain first edge image data of the first image data, and to perform LBP processing on the first edge image data to obtain fourth LBP feature data characterizing the third texture feature.
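The claim leaves the edge filter unspecified; the sketch below assumes a Sobel gradient magnitude for the first edge image data:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def third_texture_feature(bgr_image, points=8, radius=1):
    """LBP histogram of an edge map as the frame texture feature."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # first edge image data
    lbp = local_binary_pattern(edges, points, radius, method='uniform')
    hist, _ = np.histogram(lbp, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist  # fourth LBP feature data
```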
21. The device according to claim 15, characterized in that the statistics unit is configured to perform Gaussian filtering on the first image data to obtain Gaussian image data of the first image data, to obtain difference image data based on the first image data and the Gaussian image data, and to take gradient information of the difference image data as the fourth parameter.
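A sketch of this statistic, assuming a 5×5 Gaussian kernel and the mean gradient magnitude as the summary of the gradient information:

```python
import cv2
import numpy as np

def fourth_parameter(bgr_image):
    """Gradient energy of the difference between an image and its blur."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)   # Gaussian image data
    diff = gray - gaussian                         # difference image data
    gx = cv2.Sobel(diff, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(diff, cv2.CV_32F, 0, 1, ksize=3)
    # A sharp live capture keeps more high-frequency energy in the
    # difference image than a blurred recapture does.
    return float(cv2.magnitude(gx, gy).mean())
```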
22. The device according to claim 15, characterized in that the statistics unit is configured to obtain the highlight image of the first image data; to binarize the highlight image and divide the highlight image into blocks based on the binarized image; to count, for each block image, a first proportional relation between the region whose brightness meets a predetermined threshold and the corresponding block image; and to take the sum of the first proportional relations of all block images as the fifth parameter.
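A sketch of the fifth parameter; the block size and brightness threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def fifth_parameter(highlight_image, block=32, brightness=200):
    """Sum over blocks of the bright-pixel fraction of the highlight image."""
    img = np.clip(highlight_image, 0, 255).astype(np.uint8)
    _, binary = cv2.threshold(img, brightness, 255, cv2.THRESH_BINARY)
    total = 0.0
    h, w = binary.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = binary[y:y + block, x:x + block]
            # First proportional relation: bright region vs block area.
            total += np.count_nonzero(patch) / patch.size
    return total
```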
23. The device according to claim 15, characterized in that the statistics unit is configured to recognize the face region in the first image data; to perform edge detection on the first image data to obtain second edge image data and identify, in the second edge image data, first straight lines whose length meets a first predetermined condition; to extract, from the first straight lines, second straight lines whose position lies outside the face region and whose slope meets a second predetermined condition; and to take the number of the second straight lines as the sixth parameter.
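A sketch of the sixth parameter; the face detector (a Haar cascade), the length threshold and the slope condition (near-horizontal or near-vertical, as the edges of a photo or screen frame would be) are all assumptions:

```python
import cv2
import numpy as np

def sixth_parameter(bgr_image, min_length=120, max_slope=0.1):
    """Count long, near-axis-aligned lines lying outside the face region."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, 1.1, 5)   # face region(s)
    edges = cv2.Canny(gray, 50, 150)                 # second edge image data
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_length, maxLineGap=5)
    count = 0
    for x1, y1, x2, y2 in ([] if lines is None else lines[:, 0]):
        in_face = any(fx <= x1 <= fx + fw and fy <= y1 <= fy + fh
                      for (fx, fy, fw, fh) in faces)
        slope = abs(y2 - y1) / (abs(x2 - x1) + 1e-6)
        # Assumed second predetermined condition: nearly horizontal or
        # nearly vertical, as a photo or screen border would appear.
        if not in_face and (slope < max_slope or slope > 1.0 / max_slope):
            count += 1
    return count  # sixth parameter
```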
24. The device according to claim 16, characterized in that the fusion unit is configured to obtain in advance, using a machine learning algorithm, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter and a sixth weight coefficient corresponding to the sixth parameter; to obtain a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient and a sixth product of the sixth parameter and the sixth weight coefficient; and to add the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
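The claim does not name the machine learning algorithm; one plausible reading, sketched below, fits a logistic regression offline on labelled live/spoof parameter vectors and uses its coefficients as the six weight coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_weight_coefficients(parameter_vectors, labels):
    """Fit weights offline on labelled live/spoof six-parameter vectors."""
    model = LogisticRegression().fit(np.asarray(parameter_vectors), labels)
    return model.coef_[0]          # one weight coefficient per parameter

def fusion_parameter(parameters, weights):
    """Add the six products of each parameter with its weight coefficient."""
    return float(np.dot(weights, parameters))
```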
CN201710175495.5A 2017-03-22 2017-03-22 A kind of living body verification method and equipment Active CN106951869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710175495.5A CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710175495.5A CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Publications (2)

Publication Number Publication Date
CN106951869A true CN106951869A (en) 2017-07-14
CN106951869B CN106951869B (en) 2019-03-15

Family

ID=59472685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710175495.5A Active CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Country Status (1)

Country Link
CN (1) CN106951869B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778457A (en) * 2015-04-18 2015-07-15 吉林大学 Video face identification algorithm based on multi-instance learning
CN105243376A (en) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 Living body detection method and device
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609463A (en) * 2017-07-20 2018-01-19 百度在线网络技术(北京)有限公司 Living body detection method, device, equipment and storage medium
CN107609463B (en) * 2017-07-20 2021-11-23 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of silent-mode human face living body detection method and system
CN108304708A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Mobile terminal, face unlocking method and related product
CN109145716A (en) * 2018-07-03 2019-01-04 袁艳荣 Boarding gate verification station based on face recognition
CN109145716B (en) * 2018-07-03 2019-04-16 南京思想机器信息科技有限公司 Boarding gate verification station based on face recognition
CN109558794A (en) * 2018-10-17 2019-04-02 平安科技(深圳)有限公司 Image-recognizing method, device, equipment and storage medium based on moire fringes
CN111178112A (en) * 2018-11-09 2020-05-19 株式会社理光 Real face recognition device
CN111178112B (en) * 2018-11-09 2023-06-16 株式会社理光 Face recognition device
CN109740572A (en) * 2019-01-23 2019-05-10 浙江理工大学 A kind of human face living body detection method based on local color texture features
CN110263708B (en) * 2019-06-19 2020-03-13 郭玮强 Image source identification method, device and computer readable storage medium
CN110263708A (en) * 2019-06-19 2019-09-20 郭玮强 Image source identification method, equipment and computer readable storage medium
CN113221842A (en) * 2021-06-04 2021-08-06 第六镜科技(北京)有限公司 Model training method, image recognition method, device, equipment and medium
CN113221842B (en) * 2021-06-04 2023-12-29 第六镜科技(北京)集团有限责任公司 Model training method, image recognition method, device, equipment and medium

Also Published As

Publication number Publication date
CN106951869B (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN106951869B (en) A kind of living body verification method and equipment
CN108319953B (en) Occlusion detection method and device for a target object, electronic equipment and storage medium
CN111488756B (en) Face recognition-based living body detection method, electronic device, and storage medium
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN103413147B (en) A kind of licence plate recognition method and system
CN107657249A (en) Multi-scale feature pedestrian re-identification method, apparatus, storage medium and processor
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN111275696B (en) Medical image processing method, image processing method and device
CN106529380A (en) Image identification method and device
CN108596197A (en) A kind of seal matching method and device
CN109255344A (en) A kind of digital display instrument positioning and reading recognition method based on machine vision
CN111860369A (en) Fraud identification method and device and storage medium
CN108647634A (en) Framing mask lookup method, device, computer equipment and storage medium
CN109740721B (en) Wheat ear counting method and device
CN105184771A (en) Adaptive moving target detection system and detection method
CN111709305B (en) Face age identification method based on local image block
CN108154496B (en) Electric equipment appearance change identification method suitable for electric power robot
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN109977941A (en) Licence plate recognition method and device
CN106709458A (en) Human face living body detection method and device
CN109284759A (en) A kind of magic cube color identification method based on support vector machine (SVM)
CN108765454A (en) A kind of video-based smoke detection method, device and device terminal
CN112819017B (en) High-precision color cast image identification method based on histogram
CN106920266A (en) Background generation method and device for verification codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant