CN107316029B - Living body verification method and device

Living body verification method and device

Info

Publication number
CN107316029B
CN107316029B (application CN201710533353.1A)
Authority
CN
China
Prior art keywords
characteristic point
parameter
point
image
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710533353.1A
Other languages
Chinese (zh)
Other versions
CN107316029A (en)
Inventor
熊鹏飞 (Xiong Pengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710533353.1A
Publication of CN107316029A
Application granted
Publication of CN107316029B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention discloses a living body verification method and device. The method includes: obtaining image data based on an action instruction, parsing the image data, and identifying a region characterizing the position of a face in the image data; tracking the region based on changes in the face position across the multiple frames included in the image data; extracting texture features from the region; calculating a parameter characterizing a posture based on the texture features; determining an action based on the parameter; and when the action matches the action corresponding to the action instruction, determining that living body verification has passed.

Description

Living body verification method and device
Technical field
The present invention relates to face recognition technology, and in particular to a living body verification method and device.
Background Art
At present, more and more identity authentication systems use face recognition technology to authenticate users. In particular, with the popularity of mobile clients, face verification systems are increasingly replacing traditional password verification as the mainstream. However, with the wide application of face recognition technology, various methods of impersonating a living face have appeared in order to pass identity authentication, such as deceiving the recognition with photos or videos. To guard against increasingly diverse and fraudulent attack methods, the living-body face recognition algorithms in authentication systems have become more and more complex, and the resulting application size grows accordingly. Yet the storage space of mobile clients is too limited to accommodate a large liveness verification algorithm, and the relatively weak processing capability of their processors makes data processing slow. This significantly limits the application of living-body face verification systems on mobile terminals.
Summary of the invention
To solve the existing technical problems, embodiments of the present invention provide a living body verification method and device.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a living body verification method, the method including:
obtaining image data based on an action instruction, parsing the image data, and identifying a region characterizing the position of a face in the image data;
tracking the region based on changes in the face position across the multiple frames included in the image data;
extracting texture features from the region; calculating a parameter characterizing a posture based on the texture features; determining an action based on the parameter;
and when the action matches the action corresponding to the action instruction, determining that living body verification has passed.
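For orientation, the following is a minimal sketch of how the claimed steps chain together. It is not the patented algorithm itself: the five callables (detect, track, texture, score, judge) are hypothetical placeholders for the steps elaborated below.

```python
def verify_liveness(frames, instructed_action, detect, track, texture, score, judge):
    """Skeleton of the claimed flow; all five callables are assumed helpers."""
    region = detect(frames[0])             # identify the face region on the first frame only
    per_frame_scores = []
    for prev, cur in zip(frames, frames[1:]):
        region = track(prev, cur, region)  # tracking replaces per-frame re-detection
        per_frame_scores.append(score(texture(cur, region)))  # texture -> posture parameter
    observed = judge(per_frame_scores)     # map the parameter sequence to an action
    return observed == instructed_action   # verification passes only on a match
```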
In the above solution, tracking the region based on changes in the face position across the multiple frames included in the image data includes:
extracting a first frame image and a second frame image from the multiple frames, identifying a first region characterizing the face position in the first frame image, and obtaining a first coordinate range corresponding to the first region;
obtaining an initial coordinate range in the second frame image corresponding to the first coordinate range;
calculating an offset parameter corresponding to the initial coordinate range;
obtaining a second coordinate range based on the initial coordinate range and the corresponding offset parameter, and recording the second coordinate range as the region characterizing the face position after tracking;
where the offset parameter characterizes the degree of offset of the second coordinate range relative to the first coordinate range.
In the above solution, obtaining the initial coordinate range in the second frame image corresponding to the first coordinate range includes:
selecting a first group of N feature points in the first region corresponding to the first coordinate range at a preset step size, and obtaining a first coordinate of a first feature point in the first group of N feature points, where N is a positive integer and the first feature point is any feature point in the first group;
obtaining a second group of N feature points in the second frame image, where the second coordinate of a second feature point in the second group is identical to the first coordinate of the corresponding first feature point in the first group;
determining the initial coordinate range based on the second coordinates of the feature points in the second group.
In the above solution, calculating the offset parameter corresponding to the initial coordinate range includes:
calculating, for each feature point in the second group of N feature points, a first offset parameter relative to the corresponding feature point in the first group, where the first offset parameter characterizes the degree of difference between the second feature point and the first feature point having identical coordinates;
determining multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group, and calculating a second offset parameter of each matching feature point, where the second offset parameter characterizes the degree of offset between the matching feature point and the corresponding third feature point in the second group;
calculating a third offset parameter of each matching feature point, where the third offset parameter characterizes the degree of offset between the matching feature point and the corresponding fourth feature point in the first group;
and determining the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
In the above solution, calculating the third offset parameter of each matching feature point among the multiple matching feature points includes:
extracting multiple groups of first feature point pairs from the multiple matching feature points included in the second frame image, each first feature point pair including a first matching feature point and a second matching feature point, and extracting multiple groups of second feature point pairs from the source feature points in the first frame image corresponding to the multiple matching feature points, each second feature point pair including a first source feature point and a second source feature point, where the first matching feature point and the second matching feature point are any two of the multiple matching feature points;
calculating a first distance between the first matching feature point and the second matching feature point, and calculating a second distance between the first source feature point and the second source feature point;
obtaining a relative parameter between the first distance and the second distance, and recording the relative parameter as the third offset parameter of the first matching feature point and the second matching feature point.
In the above solution, determining the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter includes:
processing the multiple second offset parameters by a first preset processing rule to obtain a specific offset parameter;
processing the multiple relative parameters by a second preset processing rule to obtain a specific relative parameter;
and taking the specific offset parameter and the specific relative parameter as the offset parameter corresponding to the initial coordinate range.
In the above solution, determining multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points includes:
determining, based on the first offset parameter corresponding to each feature point in the second group, a target feature point corresponding to that feature point, and obtaining a third coordinate of the target feature point;
determining an initial feature point in the first frame image corresponding to the target feature point, where the fourth coordinate of the initial feature point is identical to the third coordinate of the target feature point;
obtaining a fourth offset parameter between the initial feature point and the target feature point;
when the fourth offset parameter reaches a preset threshold, determining that the target feature point is a matching feature point;
when the fourth offset parameter does not reach the preset threshold, determining that the target feature point is not a matching feature point.
In the above solution, extracting the texture features from the region, calculating the parameter characterizing the posture based on the texture features, and determining the action based on the parameter include:
extracting a first texture feature and/or a second texture feature from the region;
calculating a first parameter characterizing a first posture based on the first texture feature, and/or calculating a second parameter characterizing a second posture based on the second texture feature;
determining a first action based on the first parameter, and/or determining a second action based on the second parameter.
In the above solution, extracting the first texture feature from the region includes:
processing the feature points in the region according to a third preset processing rule to obtain a first process parameter characterizing, for each feature point in the region, the degree of difference between that feature point and its adjacent feature points;
analyzing the first process parameter to obtain the first texture feature characterizing the texture edges of the face.
Correspondingly, extracting the second texture feature from the region includes:
extracting a first partial region from the region;
processing the feature points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing, for each feature point in the first partial region, the degree of difference between that feature point and its adjacent feature points;
analyzing the second process parameter to obtain the second texture feature characterizing the texture edges of the eyes.
In the above solution, calculating the first parameter characterizing the first posture based on the first texture feature includes:
inputting the first texture feature into a preconfigured first classification model to obtain the first parameter characterizing the first posture.
In the above solution, calculating the second parameter characterizing the second posture based on the second texture feature includes:
inputting the second texture feature into a preconfigured second classification model to obtain the second parameter characterizing the second posture.
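The structure of the two classification models is not specified beyond being preconfigured and small. A minimal sketch under that constraint, assuming a logistic scorer whose weight vector matches the texture feature length (the weights and bias would ship with the application):

```python
import numpy as np

def posture_score(texture_feature, weights, bias):
    """Assumed form of a 'preconfigured classification model': a logistic
    scorer mapping a texture feature vector to a posture score in (0, 1).
    Such a model is a few hundred numbers, consistent with the document's
    emphasis on a minimal-capacity computation model."""
    z = float(np.dot(weights, texture_feature)) + bias
    return 1.0 / (1.0 + np.exp(-z))
```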
In the above solution, determining the first action based on the first parameter includes:
based on the multiple first parameters corresponding to the multiple frames, judging whether the first parameters corresponding to a first part of the frames all fall within a first threshold range while the first parameters corresponding to a second part of the frames do not fall within the first threshold range;
when the first parameters corresponding to the first part of the frames all fall within the first threshold range and the first parameters corresponding to the second part of the frames do not, determining that the first parameter corresponds to the first action.
In the above solution, determining the second action based on the second parameter includes:
based on the multiple second parameters corresponding to the multiple frames, judging whether the second parameters corresponding to a third part of the frames all fall within a second threshold range while the second parameters corresponding to a fourth part of the frames do not fall within the second threshold range;
when the second parameters corresponding to the third part of the frames all fall within the second threshold range and the second parameters corresponding to the fourth part of the frames do not, determining that the second parameter corresponds to the second action.
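A sketch of this frame-wise judgment for one action follows. Reading the claim as "some frames score inside the threshold range and others do not" is an interpretation; the claim does not fix how the frame set is partitioned.

```python
def action_detected(per_frame_scores, lower, upper):
    """True when one part of the frames scores inside [lower, upper] and
    another part scores outside it, e.g. a head turned and then returned."""
    inside = [lower <= s <= upper for s in per_frame_scores]
    return any(inside) and not all(inside)
```

The same test with the second threshold range covers the second action, e.g. blinking.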
An embodiment of the present invention further provides a living body verification device, the device including: a detection unit, a tracking unit, a feature extraction unit, a computing unit, an action judging unit, and a verification unit, where:
the detection unit is configured to obtain image data based on an action instruction, parse the image data, and identify a region characterizing the position of a face in the image data;
the tracking unit is configured to track the region based on changes in the face position across the multiple frames included in the image data identified by the detection unit;
the feature extraction unit is configured to extract texture features from the region tracked by the tracking unit;
the computing unit is configured to calculate a parameter characterizing a posture based on the texture features extracted by the feature extraction unit;
the action judging unit is configured to determine an action based on the parameter obtained by the computing unit;
the verification unit is configured to determine that living body verification has passed when the action matches the action corresponding to the action instruction.
In the above solution, the tracking unit is configured to: extract a first frame image and a second frame image from the multiple frames, identify a first region characterizing the face position in the first frame image, and obtain a first coordinate range corresponding to the first region; obtain an initial coordinate range in the second frame image corresponding to the first coordinate range; calculate an offset parameter corresponding to the initial coordinate range; and obtain a second coordinate range based on the initial coordinate range and the corresponding offset parameter, recording the second coordinate range as the region characterizing the face position after tracking, where the offset parameter characterizes the degree of offset of the second coordinate range relative to the first coordinate range.
In the above solution, the tracking unit is configured to: select a first group of N feature points in the first region corresponding to the first coordinate range at a preset step size, and obtain a first coordinate of a first feature point in the first group of N feature points, where N is a positive integer and the first feature point is any feature point in the first group; obtain a second group of N feature points in the second frame image, where the second coordinate of a second feature point in the second group is identical to the first coordinate of the corresponding first feature point in the first group; and determine the initial coordinate range based on the second coordinates of the feature points in the second group.
In the above solution, the tracking unit is configured to: calculate, for each feature point in the second group of N feature points, a first offset parameter relative to the corresponding feature point in the first group, where the first offset parameter characterizes the degree of difference between the second feature point and the first feature point having identical coordinates; determine multiple matching feature points based on the first offset parameters, and calculate a second offset parameter of each matching feature point, where the second offset parameter characterizes the degree of offset between the matching feature point and the corresponding third feature point in the second group; calculate a third offset parameter of each matching feature point, where the third offset parameter characterizes the degree of offset between the matching feature point and the corresponding fourth feature point in the first group; and determine the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
In the above solution, the tracking unit is configured to: extract multiple groups of first feature point pairs from the multiple matching feature points included in the second frame image, each first feature point pair including a first matching feature point and a second matching feature point, and extract multiple groups of second feature point pairs from the source feature points in the first frame image corresponding to the multiple matching feature points, each second feature point pair including a first source feature point and a second source feature point, where the first matching feature point and the second matching feature point are any two of the multiple matching feature points; calculate a first distance between the first matching feature point and the second matching feature point, and a second distance between the first source feature point and the second source feature point; obtain a relative parameter between the first distance and the second distance; and record the relative parameter as the third offset parameter of the first matching feature point and the second matching feature point.
In the above solution, the tracking unit is configured to: process the multiple second offset parameters by a first preset processing rule to obtain a specific offset parameter; process the multiple relative parameters by a second preset processing rule to obtain a specific relative parameter; and take the specific offset parameter and the specific relative parameter as the offset parameter corresponding to the initial coordinate range.
In the above solution, the tracking unit is configured to: determine, based on the first offset parameter corresponding to each feature point in the second group of N feature points, a target feature point corresponding to that feature point, and obtain a third coordinate of the target feature point; determine an initial feature point in the first frame image corresponding to the target feature point, where the fourth coordinate of the initial feature point is identical to the third coordinate of the target feature point; obtain a fourth offset parameter between the initial feature point and the target feature point; when the fourth offset parameter reaches a preset threshold, determine that the target feature point is a matching feature point; and when the fourth offset parameter does not reach the preset threshold, determine that the target feature point is not a matching feature point.
In the above solution, the feature extraction unit is configured to extract a first texture feature and/or a second texture feature from the region;
the computing unit is configured to calculate a first parameter characterizing a first posture based on the first texture feature, and/or calculate a second parameter characterizing a second posture based on the second texture feature;
the action judging unit is configured to determine a first action based on the first parameter, and/or determine a second action based on the second parameter.
In the above solution, the feature extraction unit is configured to: process the feature points in the region according to a third preset processing rule to obtain a first process parameter characterizing, for each feature point in the region, the degree of difference between that feature point and its adjacent feature points; analyze the first process parameter to obtain the first texture feature characterizing the texture edges of the face; and further to extract a first partial region from the region, process the feature points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing, for each feature point in the first partial region, the degree of difference between that feature point and its adjacent feature points, and analyze the second process parameter to obtain the second texture feature characterizing the texture edges of the eyes.
In the above solution, the computing unit is configured to input the first texture feature into a preconfigured first classification model to obtain the first parameter characterizing the first posture, and/or to input the second texture feature into a preconfigured second classification model to obtain the second parameter characterizing the second posture.
In the above solution, the action judging unit is configured to: based on the multiple first parameters corresponding to the multiple frames, judge whether the first parameters corresponding to a first part of the frames all fall within a first threshold range while the first parameters corresponding to a second part of the frames do not; when they do, determine that the first parameter corresponds to the first action; and/or, based on the multiple second parameters corresponding to the multiple frames, judge whether the second parameters corresponding to a third part of the frames all fall within a second threshold range while the second parameters corresponding to a fourth part of the frames do not; when they do, determine that the second parameter corresponds to the second action.
The living body verification method and device provided by the embodiments of the present invention include: obtaining image data based on an action instruction, parsing the image data, and identifying a region characterizing the position of a face in the image data; tracking the region based on changes in the face position across the multiple frames included in the image data; extracting texture features from the region; calculating a parameter characterizing a posture based on the texture features; determining an action based on the parameter; and when the action matches the action corresponding to the action instruction, determining that living body verification has passed. The technical solution of the embodiments actively outputs an action instruction and determines, from the acquired image data, whether the corresponding action is contained in the image data, thereby determining whether the subject is a living body and whether living body verification passes. The solution replaces face recognition detection with region tracking, and calculating the posture parameter from texture features requires only a preconfigured computation model of minimal capacity. The algorithm file carried by the living body verification solution is therefore small, which well satisfies the storage space requirements of mobile terminals; the amount of data processing is also greatly reduced, matching the processing capability of mobile terminals.
Brief Description of the Drawings
Fig. 1 is an overall flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 2 is a detailed flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 3 is a flow diagram of the living body verification method of an embodiment of the present invention;
Fig. 4a to Fig. 4e are application display diagrams of the living body verification method of an embodiment of the present invention;
Fig. 5 is a schematic diagram of the composition of the living body verification device of an embodiment of the present invention;
Fig. 6 is a schematic diagram of the hardware composition of the living body verification device of an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Before the living body verification method of the embodiment of the present invention is described in detail, the overall implementation of the living body verification solution of the embodiment is first explained. Fig. 1 is an overall flow diagram of the living body verification method of the embodiment of the present invention; as shown in Fig. 1, the method may include the following stages:
Stage 1: input the video stream, i.e., the living body verification device obtains image data. Before the video stream is input, an action instruction must first be output, so that the recognition object (which can be understood as the user, or the object to be verified) performs the corresponding action according to the action instruction; image data is then collected based on the corresponding action performed by the recognition object.
Stage 2: the living body verification device performs face detection and tracking, determines the region where the face is located according to the detected face data, and tracks that region; a face frame marking the region where the face is located may be output on the image output unit side of the living body verification device. In this application, face recognition detection is performed only on the first frame image of the image data, and its effect is to obtain the region where the face is located, which can also be understood as obtaining the position of the "face frame"; afterwards, a tracking algorithm is used to track the face region. The specific implementation of face tracking can refer to the descriptions corresponding to step 101 and step 102 in the subsequent detailed embodiments.
Stage 3: living body detection. When the detection result indicates a living body, proceed to stage 4: the image data is sent to the background for further face verification; when the detection result indicates it is not a living body, the living body detection stage is re-entered. The specific implementation of living body detection can refer to the descriptions corresponding to steps 103 through 105 in the subsequent detailed embodiments.
Based on Fig. 1, Fig. 2 is a detailed flow diagram of the living body verification method of the embodiment of the present invention; Fig. 2 mainly refines the implementation of living body detection. As shown in Fig. 2, stage 3 can be divided into three processes: texture feature extraction, parameter calculation, and action judgment. The texture features may specifically include a first texture feature characterizing the facial texture and/or a second texture feature characterizing the eye texture; the calculated parameter can be understood as a score for the posture presented by the face and/or a score for the posture presented by the eyes; and the judged action can specifically be understood as a head-turning action and/or a blinking action; the concrete implementations can refer to the subsequent detailed embodiments. That is, the technical solution of the embodiment of the present invention determines, from the acquired images, whether the head (in other words, the face) contained in the images performs the corresponding action based on a specified action, thereby determining whether it is a living face. On this basis, after the action is judged in stage 3 to be a head-turning action and/or a blinking action, the head-turning action and/or blinking action is matched against the specified action corresponding to the initial action instruction; if they match, a living body is indicated, and stage 4 is executed: the image data is sent to the background for face verification. If face tracking fails, or the action judgment times out, or the head-turning action and/or blinking action does not match the specified action corresponding to the initial action instruction, a non-living body is indicated.
An embodiment of the present invention provides a living body verification method. Fig. 3 is a flow diagram of the living body verification method of the embodiment of the present invention; as shown in Fig. 3, the method includes:
Step 101: obtain image data based on an action instruction, parse the image data, and identify a region characterizing the position of a face in the image data.
Step 102: track the region based on changes in the face position across the multiple frames included in the image data.
Step 103: extract texture features from the region; calculate a parameter characterizing a posture based on the texture features.
Step 104: determine an action based on the parameter.
Step 105: when the action matches the action corresponding to the action instruction, determine that living body verification has passed.
The living body verification method of the embodiment of the present invention is applied in a living body verification device. The device may specifically be an electronic device with an image acquisition unit, so that image data is obtained through the image acquisition unit; the electronic device may specifically be a mobile device such as a mobile phone or tablet computer, or a personal computer, or an access control device configured with an access control system (an access control system being a system that controls an entrance and exit passage); the image acquisition unit may specifically be a camera provided on the electronic device. On the other hand, the living body verification device may also be an electronic device with an audio output unit, so that the action instruction is output through the audio output unit. The technical solution of the embodiment of the present invention actively outputs the action instruction, further collects image data containing the user's head, and determines based on the collected image data whether the user performed the corresponding action, thereby determining whether living body verification passes.
In this embodiment, after the living body verification device (hereinafter simply referred to as the device) obtains image data through the image acquisition unit, it parses the image data, where the obtained image data includes multiple frames. Identifying the region characterizing the face position in the image data is specifically: identifying the region characterizing the face position in the first frame image of the image data, and then, taking the region characterizing the face position in the first frame image as the reference, tracking the region based on the changes in face position across the multiple frames.
As an implementation of the face region tracking, tracking the region based on changes in the face position across the multiple frames included in the image data includes: extracting a first frame image and a second frame image from the multiple frames, identifying the first region characterizing the face position in the first frame image, and obtaining the first coordinate range corresponding to the first region; obtaining an initial coordinate range in the second frame image corresponding to the first coordinate range; calculating the offset parameter corresponding to the initial coordinate range; and obtaining a second coordinate range based on the initial coordinate range and the corresponding offset parameter, recording the second coordinate range as the region characterizing the face position after tracking, where the offset parameter characterizes the degree of offset of the second coordinate range relative to the first coordinate range.
In this embodiment, in order to reduce the face detection load while preventing spliced-video attacks, face tracking is used instead of per-frame face detection. Specifically, the device performs face detection on the first frame image of the image data; facial feature point detection may specifically be used to identify the face in the first frame image, and the first region characterizing the face position in the first frame image is then determined, where the first region can be represented by the first coordinate range. In a specific application, the first region is marked in the image display unit of the device by a displayed face frame, as shown in Fig. 4a; the coordinate range corresponding to the face frame matches the first coordinate range. Tracking then proceeds on the basis of the first region in the first frame image, as sketched below. The first frame image may be the first frame in the image data collected for a target person; this embodiment is illustrated with the processing of the first two frames; of course, in actual image data processing, multiple frames of data are processed, and the processing of the multiple frames can refer to the processing of the first two frames described in this embodiment.
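A sketch of the first-frame-only detection, using OpenCV's stock Haar cascade purely as a stand-in; the patent requires only that some facial feature point detection yields the first region, so the detector choice is an assumption.

```python
import cv2

# Illustrative detector only; the patent does not mandate a Haar cascade.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_first_region(first_frame_bgr):
    """Detect the face box (the 'face frame') on the first frame only;
    all later frames are handled by tracking, never by re-detection."""
    gray = cv2.cvtColor(first_frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])  # largest (x, y, w, h) box
```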
In this embodiment, obtaining the initial coordinate range in the second frame image corresponding to the first coordinate range includes: selecting a first group of N feature points in the first region corresponding to the first coordinate range at a preset step size, and obtaining the first coordinate of a first feature point in the first group of N feature points, where N is a positive integer and the first feature point is any feature point in the first group; obtaining a second group of N feature points in the second frame image, where the second coordinate of a second feature point in the second group is identical to the first coordinate of the corresponding first feature point in the first group; and determining the initial coordinate range based on the second coordinates of the feature points in the second group.
Specifically, a first group of N feature points is selected at a preset step size within the first region of the first frame image, and the coordinates of each feature point in the first group are obtained. For example, suppose the first region is m × n, where m is less than or equal to the length/width of the first frame image and n is less than or equal to the width/height of the first frame image; then 100 feature points can be selected uniformly in the first region at a step size of m × n/(10 × 10). The step size is not limited to this example; it may be configured at equal intervals or at unequal intervals, according to actual needs. Further, according to the coordinates of each feature point of the first group in the first frame image, a second group of N feature points is obtained in the second frame image such that the coordinate of each feature point in the second group is identical to that of the corresponding feature point in the first group. For example, if the feature points in the two groups are sorted and labeled in the same order, i.e., from top to bottom and from left to right, then the coordinate of the first feature point in the second group is identical to that of the first feature point in the first group, the coordinate of the second feature point in the second group is identical to that of the second feature point in the first group, and so on; thus the coordinate (the second coordinate) of any feature point (second feature point) in the second group is identical to the coordinate (the first coordinate) of the corresponding first feature point in the first group. Further, a coordinate range is determined based on the second coordinates of the feature points in the second group, and that coordinate range is taken as the initial coordinate range.
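A sketch of the uniform selection for the 10 × 10 example (N = 100); centering each point in its grid cell is an assumption, since the text fixes only the uniform step.

```python
import numpy as np

def grid_points(x, y, w, h, steps=10):
    """Select steps*steps feature points uniformly inside the face box
    (x, y, w, h); steps=10 yields the 100 points of the example."""
    xs = x + (np.arange(steps) + 0.5) * (w / steps)
    ys = y + (np.arange(steps) + 0.5) * (h / steps)
    return [(float(yy), float(xx)) for yy in ys for xx in xs]  # (y, x) pairs
```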
In this embodiment, calculating the offset parameter corresponding to the initial coordinate range includes: calculating, for each feature point in the second group of N feature points, a first offset parameter relative to the corresponding feature point in the first group, where the first offset parameter characterizes the degree of difference between the second feature point and the first feature point having identical coordinates; determining multiple matching feature points based on the first offset parameters, and calculating a second offset parameter of each matching feature point, where the second offset parameter characterizes the degree of offset between the matching feature point and the corresponding third feature point in the second group; calculating a third offset parameter of each matching feature point, where the third offset parameter characterizes the degree of offset between the matching feature point and the corresponding fourth feature point in the first group; and determining the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
Before the first offset parameter of each feature point in the second group of N feature points is calculated, the method further includes: processing the first frame image and the second frame image to obtain a first gradient image of the first frame image and a second gradient image of the second frame image. Specifically, a first pyramid image of the first frame image and a second pyramid image of the second frame image are established; the first gradient image is calculated from the first pyramid image, and the second gradient image from the second pyramid image. Establishing the pyramid image of an image (for example, the first pyramid image of the first frame image, or the second pyramid image of the second frame image) can be understood as scaling the image down proportionally, in order to obtain a more robust result. In specific implementation, the image may first be smoothed, and the smoothed image then subsampled, yielding a scaled-down pyramid image. In this embodiment, the gradient of each feature point in the first gradient image and the second gradient image can be expressed as:

Gx(y, x) = Img(y, x+1) - Img(y, x-1)    (1)

Gy(y, x) = Img(y+1, x) - Img(y-1, x)    (2)

where Gx(y, x) denotes the gradient of feature point (y, x) along the x-axis; Gy(y, x) denotes the gradient of feature point (y, x) along the y-axis; and Img(y, x) denotes the display parameter of feature point (y, x); correspondingly, Img(y, x+1), Img(y, x-1), Img(y+1, x), and Img(y-1, x) denote the display parameters of feature points (y, x+1), (y, x-1), (y+1, x), and (y-1, x) respectively. As an implementation, the display parameter may specifically be the gray value, although other display parameters are also possible.
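A sketch of the pyramid construction (smooth, then subsample) and of the central differences of expressions (1) and (2); cv2.pyrDown performs exactly the smooth-and-subsample step, and leaving the border rows and columns at zero is a simplification.

```python
import cv2
import numpy as np

def build_pyramid(gray, levels=3):
    """Each level is the previous one smoothed and subsampled, i.e. the
    proportionally scaled-down images described above."""
    pyramid = [gray.astype(np.float32)]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def gradient_images(img):
    """Gx(y, x) = Img(y, x+1) - Img(y, x-1) and
    Gy(y, x) = Img(y+1, x) - Img(y-1, x), per expressions (1) and (2)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return gx, gy
```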
Further, the first offset parameter of each feature point in the second group of N feature points relative to the corresponding feature point in the first group can be calculated from the gradients obtained by expressions (1) and (2). Here, "each feature point in the second group relative to the corresponding feature point in the first group" means feature point 2 in the second group relative to feature point 1 in the first group, where the coordinate of feature point 2 in the second frame image is identical to the coordinate of feature point 1 in the first frame image; that is, the first offset parameter characterizes the relative offset of feature point 2 with respect to feature point 1. Specifically, for each feature point in the second group, the neighborhood features of that feature point relative to the corresponding feature point in the first group are analyzed to obtain a difference parameter (diff) and gradient parameters (grad), from which the first offset parameter is calculated; the difference parameter characterizes the degree of difference of the feature point in the second group relative to the corresponding feature point in the first group. The difference parameter and the gradient parameters can be expressed as:

Diff = Img1(y1, x1) - Img2(y2, x2)    (3)

Gx = Gx(y1, x1) + Gx(y2, x2)    (4)

Gy = Gy(y1, x1) + Gy(y2, x2)    (5)

where Img1(y1, x1) denotes the display parameter of feature point (y1, x1) in the first group, and Img2(y2, x2) denotes the display parameter of feature point (y2, x2) in the second group; the display parameter may specifically be the gray value, although other display parameters are also possible. Gx denotes the gradient along the x-axis and Gy the gradient along the y-axis; Gx(y1, x1) denotes the x-axis gradient of feature point (y1, x1) in the first frame image, and Gx(y2, x2) denotes the x-axis gradient of the feature point (y2, x2) in the second frame image that corresponds to feature point (y1, x1), the coordinate of (y1, x1) in the first frame image being identical to the coordinate of (y2, x2) in the second frame image; correspondingly, Gy(y1, x1) denotes the y-axis gradient of feature point (y1, x1) in the first frame image, and Gy(y2, x2) denotes the y-axis gradient of the corresponding feature point (y2, x2) in the second frame image.
Further, the first offset parameter of the corresponding feature point is calculated from the difference parameter and the gradient parameters, and can be expressed as:

(dy, dx) = f(Gxx, Gxy, Gyy, Diff)    (6)

where the gradient parameters of feature point 1 in the first frame image are determined and denoted Gx1 and Gy1, and the gradient parameters of the corresponding feature point 2 in the second frame image are determined and denoted Gx2 and Gy2; a matrix operation is then performed on (Gx1, Gy1) and (Gx2, Gy2) to combine the gradients of feature point 1 and feature point 2. In expression (6), Gxx denotes the operation result of the x-axis gradient of feature point 1 and the x-axis gradient of feature point 2; Gyy denotes the operation result of the y-axis gradient of feature point 1 and the y-axis gradient of feature point 2; Gxy denotes the operation result of the x-axis gradient of feature point 1 and the y-axis gradient of feature point 2, or of the y-axis gradient of feature point 1 and the x-axis gradient of feature point 2; and f(·) denotes a certain operation rule, one possible form of which is sketched below.
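The quantities named here match the classical Lucas-Kanade normal equations, in which Gxx, Gxy, Gyy are sums of gradient products over the point's neighborhood window and f solves a 2 × 2 linear system. A sketch under that assumption:

```python
import numpy as np

def lk_offset(gx_win, gy_win, diff_win):
    """One solve of (dy, dx) = f(Gxx, Gxy, Gyy, Diff), assuming the classical
    Lucas-Kanade form. gx_win, gy_win: summed frame-1 and frame-2 gradients in
    the window (expressions (4)-(5)); diff_win: differences of expression (3)."""
    gxx = float(np.sum(gx_win * gx_win))
    gxy = float(np.sum(gx_win * gy_win))
    gyy = float(np.sum(gy_win * gy_win))
    bx = float(np.sum(gx_win * diff_win))
    by = float(np.sum(gy_win * diff_win))
    det = gxx * gyy - gxy * gxy
    if abs(det) < 1e-9:          # flat or edge-only texture: no reliable offset
        return 0.0, 0.0
    dx = (gyy * bx - gxy * by) / det
    dy = (gxx * by - gxy * bx) / det
    return dy, dx
```

Iterating this solve and resampling at the updated position yields the matching point of expression (7) discussed next.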
In specific implementation, through multiple iterations of expression (6), the matching feature point (y0, x0) on the second frame image can be obtained for each feature point (y1, x1) in the first frame image, the matching feature point satisfying:

(y0, x0) = (y2 + dy, x2 + dx)    (7)

However, since the position of the face in the image keeps changing, only some of the feature points in the first group of N feature points can find matching feature points in the second frame image.
As an implementation, determining multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points includes: determining, based on the first offset parameter corresponding to each feature point, a target feature point corresponding to that feature point, and obtaining a third coordinate of the target feature point; determining an initial feature point in the first frame image corresponding to the target feature point, where the fourth coordinate of the initial feature point is identical to the third coordinate of the target feature point; obtaining a fourth offset parameter between the initial feature point and the target feature point; when the fourth offset parameter reaches a preset threshold, determining that the target feature point is a matching feature point; and when the fourth offset parameter does not reach the preset threshold, determining that the target feature point is not a matching feature point.
Specifically, in this implementation, after the matching feature point (y0, x0) in the second frame image corresponding to feature point (y1, x1) in the first frame image is obtained by the above technical solution, the same technical solution is used to obtain the matching feature point (y3, x3) in the first frame image corresponding to feature point (y0, x0) in the second frame image, and the offset parameter between matching feature point (y3, x3) and feature point (y1, x1) is obtained; this offset parameter is the fourth offset parameter. When the fourth offset parameter reaches the preset threshold, the deviation of the bidirectionally matched feature points is small and the matching succeeds; correspondingly, when the fourth offset parameter does not reach the preset threshold, the deviation of the bidirectionally matched feature points is large and the matching fails. The matching feature points obtained by bidirectional matching in this implementation greatly enhance the robustness of the matching.
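A sketch of the forward-backward check, assuming a track_point helper that wraps the iterative solve above; the acceptance test is written here as the round-trip error staying within a tolerance, which is how this sketch reads the fourth offset parameter "reaching" the preset threshold.

```python
import numpy as np

def bidirectional_match(frame1, frame2, pt1, track_point, tol=1.0):
    """Track pt1 forward into frame2, track the result back into frame1,
    and accept only if the round trip lands near the start point.
    track_point is an assumed helper returning a (y, x) point or None."""
    pt0 = track_point(frame1, frame2, pt1)    # forward: (y1, x1) -> (y0, x0)
    if pt0 is None:
        return None
    pt3 = track_point(frame2, frame1, pt0)    # backward: (y0, x0) -> (y3, x3)
    if pt3 is None:
        return None
    err = np.hypot(pt3[0] - pt1[0], pt3[1] - pt1[1])  # the fourth offset parameter
    return pt0 if err <= tol else None        # small round-trip error = match
```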
In this embodiment, the device determines multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points, and calculates the second offset parameter of each matching feature point; the second offset parameter characterizes the displacement offset degree between the matching feature point and the corresponding third feature point in the second group; it can also be understood that the second offset parameter characterizes the displacement offset degree of each matching feature point relative to the corresponding feature point in the first group. For example, for feature point (y1, x1) in the first group and the feature point (y2, x2) in the second group with the same coordinate as (y1, x1), the matching feature point (y0, x0) is calculated; the second offset parameter then indicates the relative displacement offset between matching feature point (y0, x0) and feature point (y2, x2), which can also be understood as the relative displacement offset between matching feature point (y0, x0) and feature point (y1, x1).
As an implementation in this embodiment, calculating the third offset parameter of each matching feature point among the multiple matching feature points includes: extracting multiple groups of first feature point pairs from the multiple matching feature points included in the second frame image, each first feature point pair including a first matching feature point and a second matching feature point, and extracting multiple groups of second feature point pairs from the source feature points in the first frame image corresponding to the multiple matching feature points, each second feature point pair including a first source feature point and a second source feature point, where the first matching feature point and the second matching feature point are any two of the multiple matching feature points; calculating a first distance between the first matching feature point and the second matching feature point, and a second distance between the first source feature point and the second source feature point; obtaining a relative parameter between the first distance and the second distance; and recording the relative parameter as the third offset parameter of the first matching feature point and the second matching feature point.
Specifically, the device extracts, pairwise, the multiple matching feature points included in the second frame image and the corresponding source feature points in the first frame image; the two matching feature points extracted from the second frame image (the first matching feature point and the second matching feature point) correspond one-to-one with the two source feature points in the first frame image (the first source feature point and the second source feature point). The first distance between the first matching feature point and the second matching feature point, and the second distance between the first source feature point and the second source feature point, are then calculated separately; the first distance and the second distance may be Euclidean distances, or of course other distances that can characterize the relative positional relationship between two feature points. Further, the relative parameter between the first distance and the second distance is obtained and recorded as the third offset parameter of the first matching feature point and the second matching feature point; the third offset parameter characterizes the zoom degree of the face region. As an implementation, the relative parameter may specifically be the ratio of the first distance to the second distance; of course, the relative parameter may also be obtained by other processing that characterizes the relative relationship between the first distance and the second distance. A sketch follows.
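A sketch of the pairwise distance ratios (third offset parameter), assuming Euclidean distance and the ratio form named above as one implementation.

```python
import numpy as np

def scale_ratios(matched_pts, source_pts):
    """For each pair of matched points in frame 2, the ratio of their distance
    to the distance of the corresponding source points in frame 1; each ratio
    is one estimate of the zoom of the face region."""
    ratios = []
    n = len(matched_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d1 = np.hypot(matched_pts[i][0] - matched_pts[j][0],
                          matched_pts[i][1] - matched_pts[j][1])
            d2 = np.hypot(source_pts[i][0] - source_pts[j][0],
                          source_pts[i][1] - source_pts[j][1])
            if d2 > 1e-9:                      # skip coincident source points
                ratios.append(d1 / d2)
    return ratios
```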
As an implementation, in the present embodiment, determining the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter includes: processing the multiple second offset parameters by a first preset processing rule to obtain a particular offset parameter; processing the multiple relative parameters by a second preset processing rule to obtain a specific relative parameter; and taking the particular offset parameter and the specific relative parameter together as the offset parameter corresponding to the initial coordinate range.
In the present embodiment, a corresponding second offset parameter can be calculated for each matching characteristic point in the second frame image. Ideally, the second offset parameters of all matching characteristic points in the second frame image would be equal, but various factors (for example, the movement of the face and deviations introduced during data processing) cause errors, so in practice they may differ. To make the result more robust, the device sorts the second offset parameters of all matching characteristic points in the second frame image and selects the median of the N second offset parameters as the particular offset parameter; the particular offset parameter can be understood as characterizing the degree of offset of the face frame. Likewise, the relative parameters of any two matching characteristic points in the second frame image would ideally be equal, but the same error sources may make them unequal; the device therefore sorts the relative parameters of the matching characteristic point pairs in the second frame image and selects their median as the specific relative parameter, which can be understood as characterizing the zoom degree of the face frame.
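The aggregation step might look as follows; this is a sketch assuming the face frame is stored as (x, y, w, h) and that the box is shifted by the median displacement and rescaled by the median ratio. The layout and names are illustrative.

```python
import numpy as np

def update_face_box(box, offsets, ratios):
    """Sketch of the robust aggregation: the median of the per-point
    displacement offsets (second offset parameters) and the median of
    the pairwise scale ratios (third offset parameters) shift and
    rescale the initial coordinate range."""
    dy = float(np.median([o[0] for o in offsets]))   # median y displacement
    dx = float(np.median([o[1] for o in offsets]))   # median x displacement
    s = float(np.median(ratios))                     # median zoom degree
    x, y, w, h = box
    cx, cy = x + w / 2 + dx, y + h / 2 + dy          # move the box centre
    w, h = w * s, h * s                              # rescale around the centre
    return (cx - w / 2, cy - h / 2, w, h)            # second coordinate range
```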
Further, in this embodiment the device takes the particular offset parameter and the specific relative parameter as the offset parameter corresponding to the initial coordinate range, obtains the second coordinate range based on the initial coordinate range and the corresponding offset parameter, and records the second coordinate range as the region characterizing the face position after tracking. Reference may be made to Fig. 4b, in which the solid box is the tracking result obtained with the face tracking of the embodiment of the present invention and the dotted box is the result detected directly by a face detection scheme; compared with facial-feature-point detection, the living body face tracking scheme of the embodiment of the present invention describes the change of the face, specifically the change of the face position, more accurately. As an implementation, the face tracking scheme of the embodiment of the present invention may be realized with a KLT (Kanade-Lucas-Tomasi) tracker, though it is not limited to the KLT tracker; other fast iterative algorithms can also realize the living body face tracking of the embodiment of the present invention. The present embodiment directly selects characteristic points uniformly within the region characterizing the face and tracks the face frame, which greatly increases the face tracking speed and avoids tracking failures caused by too few characteristic points; moreover, the bidirectional matching also significantly improves the robustness of the face tracking.
In the embodiment of the present invention, extracting the textural features in the region, calculating the parameter characterizing posture based on the textural features, and determining the movement based on the parameter include: extracting a first textural feature and/or a second textural feature in the region; calculating a first parameter characterizing a first posture based on the first textural feature, and/or calculating a second parameter characterizing a second posture based on the second textural feature; and determining a first movement based on the first parameter, and/or a second movement based on the second parameter.
Regarding the acquisition of the first textural feature, as an implementation, extracting the first textural feature in the region includes: processing the characteristic points in the region according to a third preset processing rule to obtain a first process parameter characterizing the degree of difference between each characteristic point and its adjacent characteristic points; and analyzing the first process parameter to obtain the first textural feature characterizing the face texture edges. Correspondingly, regarding the acquisition of the second textural feature, extracting the second textural feature in the region includes: extracting a first partial region of the region; processing the characteristic points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing the degree of difference between each characteristic point in the first partial region and its adjacent characteristic points; and analyzing the second process parameter to obtain the second textural feature characterizing the eye texture edges.
In the present embodiment, the device extracts different textural features for different movements, denoted here as the first textural feature and the second textural feature. Classification in the present embodiment mainly targets two kinds of movements: considering that the head-turn movement and the blink movement produce comparatively large posture changes of the face, the present embodiment classifies the head-turn movement and the blink movement. It can be understood that the first textural feature characterizes the face texture edges, and the second textural feature characterizes the eye texture edges.
Specifically, for the first textural feature, the process includes: extracting from the second frame image the region characterizing the face position (which may be the second region after tracking) and processing the region according to the third preset processing rule. As an implementation, processing the region according to the third preset processing rule includes: shrinking the region, for example to a (64, 64) range, and processing the shrunken image by binarization or ternarization. Taking binarization as an example, the device may apply local binary patterns (LBP, Local Binary Patterns) processing to the second region in the second frame image. For example: first extract from the second frame image the process image matched with the second region, convert the process image to grayscale, and determine the relative gray relationship between each characteristic point in the gray image and its eight adjacent characteristic points. Fig. 4c shows the gray image of a three-by-three matrix of characteristic points with the gray value of each characteristic point; the gray value of each characteristic point is expressed numerically as shown in Fig. 4d. The gray value of each of the eight adjacent characteristic points is then compared with that of the central characteristic point: if the gray value of an adjacent characteristic point is greater than that of the central characteristic point, the value of that adjacent characteristic point is recorded as 1; conversely, if it is less than or equal to that of the central characteristic point, its value is recorded as 0, as shown in Fig. 4e. The values of the adjacent characteristic points are then concatenated into an 8-bit binary character string, which can be understood as a gray value distributed in (0, 255). In a specific implementation, referring to Fig. 4e, if the top-left characteristic point is taken as the starting point and the points are arranged clockwise, the resulting 8-bit string is 10001111. In this way the binary character string corresponding to each characteristic point (as central characteristic point) in the process image can be obtained. The above process can be realized by the following formulas:
LBP = [code0, code1, ..., code7] (8)
code(m, n) = Img(y+m, x+n) > Img(y, x) ? 1 : 0 (9)
Further, in order to remove redundancy, the binary character strings are counted according to the number of their 0/1 transitions. For example, in the string 10001111 the bits change between the first and second positions and between the fourth and fifth positions, two transitions in total, which does not satisfy the condition of "fewer than 2 transitions between 0 and 1"; in the string 00001111, by contrast, the bits change only once, between the fourth and fifth positions, which satisfies the condition. The counted binary character strings are then mapped into the (0, 58) range, and the mapped data can serve as the LBP data; this greatly reduces the amount of data to be processed.
In expression (8), LBP indicates the relative relationship between the display parameter of a first characteristic point in the region and the display parameters of its adjacent characteristic points, the first characteristic point being any characteristic point in the region; code0, code1, ..., code7 respectively indicate the comparison results for the eight adjacent characteristic points. Expression (9) compares the gray value of characteristic point (y+m, x+n) with the gray value of characteristic point (y, x): if the gray value of (y+m, x+n) is greater than that of (y, x), the binary character code(m, n) is recorded as 1, otherwise as 0.
Further, the LBP values distributed in (0, 58) over an image region are collected into a statistical histogram. Since the histogram has 59 dimensions, a 59-dimensional vector is obtained after the statistics and serves as the first textural feature of the region.
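A minimal sketch of the LBP histogram computation follows. It uses the standard uniform-pattern convention (at most two circular 0/1 transitions), which yields the 58 uniform codes plus one shared bin and hence the 59-dimensional histogram described above; the implementation details are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def lbp_histogram(gray):
    """Sketch of the uniform-LBP feature: 8-bit codes from a 3x3
    neighbourhood, mapped to 59 bins (58 uniform patterns + 1 bin for
    everything else), then histogrammed. `gray` is a 2-D uint8 array,
    e.g. the face region already shrunk to 64x64."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # central characteristic points
    # clockwise neighbours starting at the top-left, as in Fig. 4e
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb > c).astype(np.int32) << (7 - bit)  # 1 if neighbour brighter

    def transitions(v):                     # circular 0/1 changes in the code
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [v for v in range(256) if transitions(v) <= 2]  # 58 codes
    lut = np.full(256, 58, dtype=np.int32)  # non-uniform codes share bin 58
    for idx, v in enumerate(uniform):
        lut[v] = idx
    hist, _ = np.histogram(lut[code], bins=59, range=(0, 59))
    return hist / max(hist.sum(), 1)        # 59-dim first textural feature
```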
For the second textural feature: since the eye region is small and the blink movement is fast, when extracting the second textural feature the eye region is first extracted from the region, that is, a first partial region of the region is extracted. The first partial region may be the upper part of the region, for example the upper half or upper third of the region. The extracted first partial region may be processed in the way described for the first textural feature to obtain the second textural feature. As another embodiment, the device may apply local ternary pattern (LTP, Local Ternary Pattern) processing to the first partial region. The LTP process is similar to the LBP process; the difference is that where the LBP process marks the relative relationship between the gray value of a characteristic point and that of an adjacent characteristic point with 0 and 1, the LTP process of this embodiment marks it with 0, 1 and -1. The process can be realized by the following formulas:
code(m, n) = Img(y+m, x+n) > Img(y, x) + Fuzzy ? 1 : 0 (10)
code(m, n) = Img(y+m, x+n) < Img(y, x) - Fuzzy ? -1 : 0 (11)
Fuzzy = ratio × Img(y, x) (12)
Here Img(y, x) indicates the gray value of characteristic point (y, x), which may be the central characteristic point of a three-by-three matrix of characteristic points; Img(y+m, x+n) indicates the gray value of characteristic point (y+m, x+n), a characteristic point adjacent to (y, x); ratio indicates a scale parameter, which can be preconfigured; and Fuzzy is the product of the gray value of (y, x) and ratio, so the larger the gray value of (y, x), the larger the value of Fuzzy. Expression (10) compares the gray value of (y+m, x+n) with the sum of the gray value of (y, x) and Fuzzy: if the gray value of (y+m, x+n) is greater than that sum, the character code(m, n) is recorded as 1, otherwise as 0. Expression (11) compares the gray value of (y+m, x+n) with the difference between the gray value of (y, x) and Fuzzy: if the gray value of (y+m, x+n) is less than that difference, code(m, n) is recorded as -1, otherwise as 0.
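A sketch of the corresponding LTP marking step, under the same assumptions as the LBP sketch above. How the ternary marks are then histogrammed is an assumption here; a common practice is to split them into positive and negative binary maps and process each like LBP.

```python
import numpy as np

def ltp_codes(gray, ratio=0.1):
    """Sketch of the local ternary pattern step for the eye strip: each
    neighbour is marked 1 if brighter than centre + fuzzy, -1 if darker
    than centre - fuzzy, else 0, with fuzzy = ratio * centre grey value
    as in expressions (10)-(12). `ratio` is a preconfigurable scale."""
    g = gray.astype(np.float64)
    c = g[1:-1, 1:-1]                        # central characteristic points
    fuzzy = ratio * c                        # tolerance grows with brightness
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for dy, dx in offs:
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        t = np.where(nb > c + fuzzy, 1, np.where(nb < c - fuzzy, -1, 0))
        codes.append(t)
    return np.stack(codes, axis=-1)          # (H-2, W-2, 8) ternary marks
```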
In the present embodiment, calculating the first parameter characterizing the first posture based on the first textural feature includes: inputting the first textural feature into a preconfigured first classification model to obtain the first parameter characterizing the first posture.
Specifically, the device collects a large amount of sample data in advance (the sample data may specifically be first textural features obtained in the above-described way) together with the corresponding posture class labels, and performs machine learning training on the sample data and the corresponding posture labels to obtain the first classification model. After the first textural feature is obtained, it is input into the first classification model to obtain the first posture corresponding to the first textural feature; the first posture indicates, for example, frontal or profile, and can be expressed by the first parameter. In practical applications, the magnitude of the first parameter can show whether the first posture tends towards frontal or profile: for example, the larger the first parameter, the closer the first posture is to frontal; correspondingly, the smaller the first parameter, the closer the first posture is to profile.
In the present embodiment, calculating the second parameter characterizing the second posture based on the second textural feature includes: inputting the second textural feature into a preconfigured second classification model to obtain the second parameter characterizing the second posture.
Specifically, the device collects a large amount of sample data in advance (the sample data may specifically be second textural features obtained in the above-described way) together with the corresponding posture class labels, and performs machine learning training on the sample data and the corresponding posture labels to obtain the second classification model. After the second textural feature is obtained, it is input into the second classification model to obtain the second posture corresponding to the second textural feature; the second posture indicates, for example, eyes open or eyes closed, and can be expressed by the second parameter. In practical applications, the magnitude of the second parameter can show whether the second posture tends towards eyes open or eyes closed: for example, the larger the second parameter, the closer the second posture is to eyes open; correspondingly, the smaller the second parameter, the closer the second posture is to eyes closed.
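As an illustration of how such a classification model could be trained and queried, the sketch below uses a scikit-learn SVM as a stand-in for the preconfigured first and second classification models; the random training data exists only to make the sketch runnable, and using the probability output as the posture parameter is an assumption, not the patent's specification.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
train_features = rng.random((200, 59))    # stand-in for sample LBP histograms
train_labels = rng.integers(0, 2, 200)    # stand-in posture class labels

pose_model = SVC(probability=True)        # stand-in classification model
pose_model.fit(train_features, train_labels)   # offline training on samples

def pose_parameter(model, feature):
    """Return a scalar posture parameter: larger values lean towards the
    positive posture class (frontal face / open eyes in this sketch)."""
    return float(model.predict_proba(np.asarray(feature).reshape(1, -1))[0, 1])
```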
In the present embodiment, determining the first movement based on the first parameter includes: based on the multiple first parameters respectively corresponding to the multiple frames of images, judging whether the first parameters corresponding to a first part of the images all satisfy a first threshold range and whether the first parameters corresponding to a second part of the images all fail to satisfy the first threshold range; when the first parameters corresponding to the first part of the images all satisfy the first threshold range and the first parameters corresponding to the second part of the images all fail to satisfy it, determining that the first parameters correspond to the first movement.
Specifically, although the first parameter characterizing the first posture was described above as obtained by processing two frames of images, in a specific application the first parameter is obtained by processing the multiple frames of images contained in the acquired image data; that is, a first parameter can be obtained for each frame of image, so the multiple frames correspond to multiple first parameters, from which a parameter sequence can be obtained. Analyzing the parameter sequence: if the values change from low to high, it can be determined that the face changed from profile to frontal; correspondingly, if the values change from high to low, it can be determined that the face changed from frontal to profile. Of course, conversely, if another data processing method makes a larger first parameter indicate a face closer to profile and a smaller one a face closer to frontal, then when analyzing the parameter sequence a change from low to high determines that the face changed from frontal to profile, and a change from high to low that it changed from profile to frontal. On this basis, the corresponding first movement can be determined from the change of the values in the parameter sequence.
In a specific implementation, for the acquired multiple frames of images, the X frames before the current frame are chosen and uniformly cut into Y segments, where X and Y are positive integers and Y is less than X. Since each segment contains at least two frames of images, at least two first parameters can be obtained for each segment; the median of these first parameters is chosen as the first parameter of that segment, so the Y segments yield Y corresponding first parameters. The Y first parameters are concatenated into a parameter sequence, and it is judged whether the parameter change in the sequence satisfies a preset rule, for example whether the parameters change from high to low or from low to high; the corresponding first movement is then determined from the parameter change, as in the sketch below. In another embodiment, for the multiple first parameters corresponding to the multiple frames of images, if it is determined that the face contained in the front part of the images is frontal and the face contained in the rear part is in profile, or that the face in the front part is in profile and the face in the rear part is frontal, the first movement corresponding to the first parameters can be determined. For example, if it is judged that in the X frames the first parameters corresponding to the first third of the images all satisfy the first threshold range, indicating frontal face images, and the first parameters corresponding to the last third of the images all fail to satisfy the first threshold range, indicating profile face images, then it can be determined that the face turned from frontal to profile in the X frames, that is, a head-turn movement.
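The segment-and-median test could be sketched as follows, with the threshold range given as a [low, high] interval; the segmentation count and the first/last-third comparison are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def detect_action(params, low, high, num_segments=6):
    """Sketch of the sequence test: split the per-frame posture parameters
    of the last X frames into Y segments, take each segment's median, and
    report a movement when the sequence passes from inside the threshold
    range to outside it (or vice versa)."""
    segs = np.array_split(np.asarray(params, dtype=float), num_segments)
    medians = [float(np.median(s)) for s in segs if len(s)]
    in_range = [low <= m <= high for m in medians]
    if not in_range:
        return False
    k = max(len(in_range) // 3, 1)          # compare first and last thirds
    starts_in = all(in_range[:k]) and not any(in_range[-k:])
    ends_in = all(in_range[-k:]) and not any(in_range[:k])
    return starts_in or ends_in
```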
In the present embodiment, determining the second movement based on the second parameter includes: based on the multiple second parameters respectively corresponding to the multiple frames of images, judging whether the second parameters corresponding to a third part of the images all satisfy a second threshold range and whether the second parameters corresponding to a fourth part of the images all fail to satisfy the second threshold range; when the second parameters corresponding to the third part of the images all satisfy the second threshold range and the second parameters corresponding to the fourth part of the images all fail to satisfy it, determining that the second parameters correspond to the second movement.
Specifically, the determination is similar to that of the first movement. Although the second parameter characterizing the second posture was described above as obtained by processing two frames of images, in a specific application the second parameter is obtained by processing the multiple frames of images contained in the acquired image data; that is, a second parameter can be obtained for each frame of image, so the multiple frames correspond to multiple second parameters, from which a parameter sequence can be obtained. Analyzing the parameter sequence: if the values change from low to high, it can be determined that the eyes changed from open to closed; correspondingly, if the values change from high to low, it can be determined that the eyes changed from closed to open. Of course, conversely, if another data processing method makes a larger second parameter indicate eyes closer to open and a smaller one eyes closer to closed, then a change from low to high determines that the eyes changed from closed to open, and a change from high to low that they changed from open to closed. Since the blink movement is fast, the parameter sequence may contain only two second parameters in discrete states, which are sufficient to present the low-to-high or high-to-low characteristic. On this basis, the corresponding second movement can be determined from the change of the values in the parameter sequence.
In a specific implementation, for the acquired multiple frames of images, the X frames before the current frame are chosen and uniformly cut into Y segments, where X and Y are positive integers and Y is less than X. Since each segment contains at least two frames of images, at least two second parameters can be obtained for each segment; the median of these second parameters is chosen as the second parameter of that segment, so the Y segments yield Y corresponding second parameters. The Y second parameters are concatenated into a parameter sequence, and it is judged whether the parameter change in the sequence satisfies a preset rule, for example whether the parameters change from high to low or from low to high; the corresponding second movement is then determined from the parameter change. In another embodiment, for the multiple second parameters corresponding to the multiple frames of images, if it is determined that the eyes contained in the front part of the images are open and the eyes contained in the rear part are closed, or that the eyes in the front part are closed and the eyes in the rear part are open, the second movement corresponding to the second parameters can be determined. For example, if it is judged that in the X frames the second parameters corresponding to the first third of the images all satisfy the second threshold range, indicating open eyes, and the second parameters corresponding to the last third of the images all fail to satisfy the second threshold range, indicating closed eyes, then a blink movement can be determined in the X frames; the usage sketch below illustrates both determinations.
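Under the assumptions of the sketch above, both movements can be detected with the same routine, differing only in thresholds and window length; all values below are purely illustrative.

```python
# Per-frame parameters from the two classifiers over the last X frames
# (illustrative values: frontal -> profile, then eyes open -> shut).
first_params = [0.9, 0.88, 0.91, 0.85, 0.4, 0.2, 0.15, 0.1]
second_params = [0.95, 0.9, 0.88, 0.2, 0.1, 0.15]

head_turn = detect_action(first_params, low=0.6, high=1.0, num_segments=4)
blink = detect_action(second_params, low=0.5, high=1.0, num_segments=3)
print(head_turn, blink)   # True True for the illustrative sequences
```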
In the present embodiment, since the closing and opening of the eyes are very rapid, the preconfigured second threshold range differs from the first threshold range.
In the present embodiment, for the extraction of the textural features (including the first textural feature and the second textural feature) in the region, the textural features are not limited to LBP/LTP features; other textural features that can describe image changes, such as the histogram of oriented gradients (HOG, Histogram of Oriented Gradient) feature, may also be used. The machine learning training of the sample features may be carried out with a support vector machine (SVM, Support Vector Machine) learning model for classification training, but is not limited to SVM classification; other classification methods such as neural networks and linear projection can also realize it.
The living body verification method described in the present embodiment is not limited to application in a mobile terminal; it can also be applied to living body face verification in any setting, including but not limited to a personal computer (PC), a server, or an embedded device. Since the technical solution of the embodiment of the present invention substitutes region tracking for face recognition detection, the eyes do not need to be located in the blink judgment; in addition, a preconfigured computation model (for example, an SVM model) is needed only when calculating the parameters corresponding to posture from the textural features, and this computation model is very small (the model file can be reduced to 160k). In a specific implementation the SVM parameters can even be written directly into the code, which can be understood as not needing a computation model at all. Therefore, the algorithm file carried by the living body verification scheme provided by the embodiment of the present invention is small, which well meets the storage space requirements of a mobile terminal; and the amount of data processing is substantially reduced, which also meets the processing capability of a mobile terminal.
In the present embodiment, after the first movement and/or the second movement are obtained in the above way, it is judged whether the first movement and/or the second movement match the movement corresponding to the output action command; if they match, the face in the image data is shown to be a living face rather than a pre-captured picture or a face from a video. In the prior art, the ways of passing living body verification with a counterfeit living face (which may be called attacks) mainly include: 1. an ordinary photo; 2. a photo containing a certain class of movement; 3. a video containing a certain class of movement; 4. a video spliced according to the action sequence. This has led researchers to propose different algorithms to optimize movement judgment against different attacks. Existing living body face verification techniques rely on precise facial feature point positioning; on the mobile terminal side, the computation model files required for fast and accurate feature point positioning are too large, and the processing speed is not fast enough. Taking mainstream positioning algorithms as examples, the explicit shape regression (ESR) algorithm can reach a processing time of 10 milliseconds, but its computation model file can reach tens of megabytes, while the active appearance model (AAM) algorithm has a model file of only several hundred kilobytes, but its per-frame processing time makes real-time processing difficult. The technical solution of the embodiment of the present invention substitutes region tracking for face recognition detection and does not need to locate the eyes in the blink judgment; in addition, a preconfigured computation model (for example, an SVM model) is needed only when calculating the parameters corresponding to posture from the textural features, and this computation model is very small (the model file can be reduced to 160k); in a specific implementation the SVM parameters can be written directly into the code, which can even be understood as not needing a computation model. Therefore, the algorithm file carried by the living body verification scheme provided by the embodiment of the present invention is small, which well meets the storage space requirements of a mobile terminal; and the amount of data processing is substantially reduced, which also meets the processing capability of a mobile terminal.
To verify the effectiveness of the living body verification scheme provided by the embodiment of the present invention, attack tests and real-person experience tests were carried out for the different movements individually and in series, as shown in Table 1. As shown in Table 1, when an attacker attempts to pass the identity verification with a mobile phone photo, a printed photo, or a video, the overall pass rate of all three attack modes is zero; in particular, the number of passes for the blink movement and the head-turn movement under the three attack modes is zero. This shows that the living body verification scheme of the embodiment of the present invention can effectively identify attacks with false identities such as mobile phone photos, printed photos, or videos, greatly improving the accuracy of identity verification.
Table 1
Table 2 illustrates the actual detection results of the embodiment of the present invention under different testing conditions. As shown in Table 2, the testing conditions may include the light condition, whether the test subject wears glasses, and whether the glasses worn have frames. The light condition may include an ordinary light condition, a strong light condition, and a dim light condition; whether the light condition is ordinary light, strong light, or dim light can be decided from a detected light intensity parameter. A first threshold and a second threshold can be preconfigured, the first threshold being greater than the second threshold: when the detected light intensity parameter is greater than the first threshold, the light condition is determined to be strong light; when it is less than the second threshold, the light condition is determined to be dim light; and when it lies between the two thresholds, the light condition is determined to be ordinary light. On the other hand, whether the test subject wears glasses and whether the glasses have frames affect the extraction of the eye textural features and the judgment of the blink movement. As can be seen from Table 2, taking the first class, ordinary light with glasses, as an example, 23/25 means that 23 of 25 tests passed, showing that in the verification of a real living person the pass rate remains high even when the user wears glasses; repeated verification failures do not occur, so the user experience is not adversely affected. There, detection requires 1 attempt on average with an average time of 1 second (s) per detection; detection of the blink movement requires 2 attempts on average with an average time of 2 s per detection; and detection of the head-turn movement requires 2 attempts on average with an average time of 2 s per detection.
Table 2
The embodiment of the invention also provides a living body verification device. Fig. 5 is a schematic diagram of the composition of the living body verification device of the embodiment of the present invention; as shown in Fig. 5, the device includes: a detection unit 31, a tracking unit 32, a feature extraction unit 33, a computing unit 34, a movement judging unit 35 and a verification unit 36; wherein,
the detection unit 31 is configured to obtain image data based on an action command, parse the image data, and identify the region characterizing the face position in the image data;
the tracking unit 32 is configured to track the region based on the change of the face position in the multiple frames of images contained in the image data identified by the detection unit 31;
the feature extraction unit 33 is configured to extract the textural features in the region tracked by the tracking unit 32;
the computing unit 34 is configured to calculate the parameter characterizing posture based on the textural features extracted by the feature extraction unit 33;
the movement judging unit 35 is configured to determine the movement based on the parameter obtained by the computing unit 34;
the verification unit 36 is configured to determine that the living body verification has passed when the movement matches the movement corresponding to the action command.
The tracking unit 32 is configured to extract a first frame image and a second frame image from the multiple frames of images, identify the first region characterizing the face position in the first frame image, and obtain the first coordinate range corresponding to the first region; obtain the initial coordinate range in the second frame image corresponding to the first coordinate range; calculate the offset parameter corresponding to the initial coordinate range; and obtain the second coordinate range based on the initial coordinate range and the corresponding offset parameter, recording the second coordinate range as the region characterizing the face position after tracking; wherein the offset parameter characterizes the degree of offset of the second coordinate range relative to the first coordinate range.
In the embodiment of the present invention, the detection unit 31 further includes an image acquisition unit, through which the image data is obtained; the obtained image data includes multiple frames of images. The detection unit 31 identifies the region characterizing the face position in the image data, specifically by identifying the region characterizing the face position in the first frame image of the image data; on the basis of the region characterizing the face position in the first frame image, the tracking unit 32 tracks the region based on the change of the face position in the multiple frames of images.
Regarding the tracking of the face region, as an implementation, the tracking unit 32 is configured to choose a first group of N characteristic points by a preset step within the first region corresponding to the first coordinate range, and obtain a first coordinate of a first characteristic point in the first group of N characteristic points, N being a positive integer and the first characteristic point being any characteristic point in the first group of N characteristic points; obtain a second group of N characteristic points in the second frame image, the second coordinate of a second characteristic point in the second group of N characteristic points being identical to the first coordinate of the corresponding first characteristic point in the first group of N characteristic points; and determine the initial coordinate range based on the second coordinates of the characteristic points in the second group of N characteristic points.
In the present embodiment, in order to reduce the face detection load and at the same time defend against spliced video attacks, the embodiment of the present invention substitutes face tracking for face detection. Specifically, the detection unit 31 performs face detection on the first frame image of the image data; the face in the first frame image may specifically be identified by facial feature point detection, whereupon the first region characterizing the face position in the first frame image is determined, the first region being expressed by the first coordinate range. In a specific application, the first region is marked by a face frame displayed in the image display unit of the device, for example as shown in Fig. 4a; the coordinate range corresponding to the face frame matches the first coordinate range. Further, the tracking unit 32 performs tracking on the basis of the first region in the first frame image. The first frame image may be the first frame in the image data collected for a target person; the present embodiment is described with the processing of the first two frames as an example. Of course, in actual image data processing multiple frames of image data are processed, and the processing of the multiple frames can refer to the processing described in the present embodiment for the first two frames.
In the present embodiment, the tracking unit 32 is configured to choose the first group of N characteristic points by the preset step within the first region corresponding to the first coordinate range and obtain the first coordinate of the first characteristic point in the first group of N characteristic points, N being a positive integer and the first characteristic point being any characteristic point in the first group; obtain the second group of N characteristic points in the second frame image, the second coordinate of the second characteristic point in the second group being identical to the first coordinate of the corresponding first characteristic point in the first group; and determine the initial coordinate range based on the second coordinates of the characteristic points in the second group of N characteristic points.
Specifically, for the first region in the first frame image, the tracking unit 32 chooses the first group of N characteristic points by the preset step and obtains the coordinates of each characteristic point in the first group. For example, if the first region measures m × n, where m is less than or equal to the length/width of the first frame image and n is less than or equal to the width/length of the first frame image, then 100 characteristic points can be chosen uniformly in the first region with a step of m × n/(10 × 10). The step is not limited to this example; it may be configured at equal or unequal intervals according to actual demand (see the sketch after this paragraph). Further, the tracking unit 32 obtains the second group of N characteristic points in the second frame image according to the coordinates of the characteristic points of the first group in the first frame image, so that the coordinate of each characteristic point in the second group is identical to the coordinate of the corresponding characteristic point in the first group. For example, if the characteristic points of the first group and the second group are arranged and labelled in the same order, from top to bottom and from left to right, then the coordinate of the first characteristic point in the second group is identical to the coordinate of the first characteristic point in the first group, the coordinate of the second characteristic point in the second group is identical to that of the second characteristic point in the first group, and so on; the coordinate (the second coordinate) of any characteristic point (the second characteristic point) in the second group is identical to the coordinate (the first coordinate) of the corresponding first characteristic point in the first group. Further, a coordinate range is determined based on the second coordinates of the characteristic points in the second group of N characteristic points, and this coordinate range is determined as the initial coordinate range.
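A sketch of the uniform selection, assuming the face region is stored as (x, y, w, h); for grid = 10 it yields the 100 points of the example. The representation is illustrative.

```python
import numpy as np

def grid_points(box, grid=10):
    """Sketch of the uniform point selection: lay a grid x grid lattice of
    characteristic points over the detected face region instead of locating
    facial landmarks."""
    x, y, w, h = box
    xs = np.linspace(x, x + w, grid, endpoint=False) + w / (2 * grid)
    ys = np.linspace(y, y + h, grid, endpoint=False) + h / (2 * grid)
    return [(yy, xx) for yy in ys for xx in xs]  # 100 (y, x) points for grid=10
```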
In the present embodiment, the tracking unit 32 is configured to calculate, for each characteristic point in the second group of N characteristic points, the first offset parameter relative to the corresponding characteristic point in the first group of N characteristic points, the first offset parameter characterizing the degree of difference between the second characteristic point and the first characteristic point that share the same coordinates; determine the multiple matching characteristic points based on the first offset parameters corresponding to the characteristic points in the second group, and calculate the second offset parameter of each matching characteristic point, the second offset parameter characterizing the degree of offset between the matching characteristic point and the corresponding third characteristic point in the second group of N characteristic points; calculate the third offset parameter of each matching characteristic point, the third offset parameter characterizing the degree of offset between the matching characteristic point and the corresponding fourth characteristic point in the first group of N characteristic points; and determine the offset parameter corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
Before calculating the first offset parameter of each characteristic point in the second group of N characteristic points, the tracking unit 32 processes the first frame image and the second frame image to obtain a first gradient image of the first frame image and a second gradient image of the second frame image. Specifically, a first pyramid image of the first frame image and a second pyramid image of the second frame image are established; the first gradient image of the first pyramid image and the second gradient image of the second pyramid image are then calculated. Establishing a pyramid image of an image (for example the first pyramid image of the first frame image, or the second pyramid image of the second frame image) can be understood as shrinking the image proportionally in order to obtain a more robust result: in a specific implementation, the image may first be smoothed and the smoothed image then sampled, yielding a pyramid image of reduced size. In the present embodiment, the gradient at each characteristic point of the first gradient image and the second gradient image can be expressed as:
Gx(y, x) = Img(y, x+1) - Img(y, x-1) (1)
Gy(y, x) = Img(y+1, x) - Img(y-1, x) (2)
Here Gx(y, x) indicates the gradient of characteristic point (y, x) along the x-axis; Gy(y, x) indicates the gradient of characteristic point (y, x) along the y-axis; Img(y, x) indicates the display parameter of characteristic point (y, x); correspondingly, Img(y, x+1) indicates the display parameter of characteristic point (y, x+1), Img(y, x-1) that of characteristic point (y, x-1), Img(y+1, x) that of characteristic point (y+1, x), and Img(y-1, x) that of characteristic point (y-1, x). As an implementation, the display parameter may specifically be the gray value, although other display parameters are possible.
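Expressions (1) and (2) amount to central differences of the gray values, as in the following sketch; the border handling is an illustrative choice.

```python
import numpy as np

def central_gradients(img):
    """Sketch of expressions (1) and (2): central differences of the grey
    values, leaving a one-pixel border at zero for simplicity."""
    g = img.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # Gx(y, x) = Img(y, x+1) - Img(y, x-1)
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # Gy(y, x) = Img(y+1, x) - Img(y-1, x)
    return gy, gx
```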
Further, the first offset parameter of each characteristic point in the second group of N characteristic points relative to the corresponding characteristic point in the first group can be calculated from the gradients of the corresponding characteristic points obtained from expressions (1) and (2). Here, the correspondence between a characteristic point in the second group and a characteristic point in the first group means that characteristic point 2 in the second group corresponds to characteristic point 1 in the first group when the coordinate of characteristic point 2 in the second frame image is identical to the coordinate of characteristic point 1 in the first frame image; the first offset parameter thus characterizes the relative offset of characteristic point 2 with respect to characteristic point 1. Specifically, for each characteristic point in the second group of N characteristic points, the neighbourhood of the characteristic point relative to the corresponding characteristic point in the first group is analyzed to obtain a difference parameter (diff) and gradient parameters (grad), and the first offset parameter is calculated from the difference parameter and the gradient parameters; the difference parameter characterizes the degree of difference between the characteristic point in the second group and the corresponding characteristic point in the first group. The difference parameter and the gradient parameters can be expressed as follows:
Diff = Img1(y1, x1) - Img2(y2, x2) (3)
Gx = Gx(y1, x1) + Gx(y2, x2) (4)
Gy = Gy(y1, x1) + Gy(y2, x2) (5)
Here Img1(y1, x1) indicates the display parameter of characteristic point (y1, x1) in the first group of N characteristic points, and Img2(y2, x2) indicates the display parameter of characteristic point (y2, x2) in the second group of N characteristic points; the display parameter may specifically be the gray value, although other display parameters are possible. Gx indicates the gradient along the x-axis and Gy the gradient along the y-axis; Gx(y1, x1) indicates the gradient along the x-axis of characteristic point (y1, x1) in the first frame image, and Gx(y2, x2) the gradient along the x-axis of the characteristic point (y2, x2) in the second frame image corresponding to characteristic point (y1, x1), the coordinate of (y1, x1) in the first frame image being identical to the coordinate of (y2, x2) in the second frame image. Correspondingly, Gy(y1, x1) indicates the gradient along the y-axis of characteristic point (y1, x1) in the first frame image, and Gy(y2, x2) the gradient along the y-axis of the corresponding characteristic point (y2, x2) in the second frame image.
Further, the first offset parameter of the corresponding characteristic point is calculated from the difference parameter and the gradient parameters; the first offset parameter can be expressed by the following formula:
(dy, dx) = f(Gxx, Gxy, Gyy, Diff) (6)
Here, the gradient parameters of characteristic point 1 in the first frame image are determined and denoted Gx1 and Gy1, and the gradient parameters of the corresponding characteristic point 2 in the second frame image are determined and denoted Gx2 and Gy2; a matrix operation is then carried out on (Gx1, Gy1) and (Gx2, Gy2) to combine the gradients of characteristic point 1 and characteristic point 2. In expression (6), Gxx indicates the operation result of the x-axis gradient of characteristic point 1 and the x-axis gradient of characteristic point 2; Gyy indicates the operation result of the y-axis gradient of characteristic point 1 and the y-axis gradient of characteristic point 2; Gxy indicates the operation result of the x-axis gradient of characteristic point 1 and the y-axis gradient of characteristic point 2, or of the y-axis gradient of characteristic point 1 and the x-axis gradient of characteristic point 2; and f(·) indicates a certain operation rule.
In a specific implementation, based on several iterations of expression (6), the matching characteristic point (y0, x0) on the second frame image of a characteristic point (y1, x1) in the first frame image can be obtained; the matching characteristic point satisfies the following expression:
(y0, x0) = (y2 + dy, x2 + dx) (7)
However, since the position of the face in the image is changing, only some of the characteristic points in the first group of N characteristic points can find matching characteristic points in the second frame image. The iterative update of expression (6) can be made concrete as shown below.
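Expression (6) leaves f(·) abstract; a common concrete choice, consistent with KLT-style tracking, is to solve the 2×2 normal equations over a small window around the point. The sketch below makes that assumption and is not claimed by the patent; it reuses central_gradients() from the earlier sketch.

```python
import numpy as np

def lk_step(patch1, patch2, gy1, gx1, gy2, gx2):
    """One Lucas-Kanade style update, assuming f(.) in expression (6)
    solves the 2x2 normal equations over a small window. patch1/patch2
    are grey values around the point in frames 1 and 2; the g* arrays
    are their gradients."""
    diff = patch1.astype(np.float64) - patch2.astype(np.float64)  # expr. (3)
    gx = gx1 + gx2                                                # expr. (4)
    gy = gy1 + gy2                                                # expr. (5)
    gxx = np.sum(gx * gx)
    gyy = np.sum(gy * gy)
    gxy = np.sum(gx * gy)
    bx = np.sum(diff * gx)
    by = np.sum(diff * gy)
    det = gxx * gyy - gxy * gxy
    if abs(det) < 1e-9:
        return 0.0, 0.0                    # degenerate window, no update
    dx = (gyy * bx - gxy * by) / det       # solve [[gxx, gxy], [gxy, gyy]] d = b
    dy = (gxx * by - gxy * bx) / det
    return dy, dx                          # the (dy, dx) of expression (6)
```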
As an implementation, the tracking unit 32 is configured to determine, based on the first offset parameter corresponding to each characteristic point in the second group of N characteristic points, the target characteristic point corresponding to each characteristic point and obtain a third coordinate of the target characteristic point; determine the initial characteristic point in the first frame image corresponding to the target characteristic point, the fourth coordinate of the initial characteristic point being identical to the third coordinate of the target characteristic point; obtain a fourth offset parameter between the initial characteristic point and the target characteristic point; determine that the target characteristic point is a matching characteristic point when the fourth offset parameter does not reach a preset threshold; and determine that the target characteristic point is not a matching characteristic point when the fourth offset parameter reaches the preset threshold.
Specifically, in this embodiment, after obtaining by the above technical solution the matching characteristic point (y0, x0) in the second frame image corresponding to the characteristic point (y1, x1) in the first frame image, the tracking unit 32 uses the same technical solution to obtain the matching characteristic point (y3, x3) in the first frame image corresponding to the characteristic point (y0, x0) in the second frame image, and obtains the offset parameter between the matching characteristic point (y3, x3) and the characteristic point (y1, x1), this offset parameter being the fourth offset parameter. When the fourth offset parameter does not reach the preset threshold, the deviation of the bidirectionally matched characteristic points is small and the matching succeeds; correspondingly, when the fourth offset parameter reaches the preset threshold, the deviation of the bidirectionally matched characteristic points is large and the matching fails. Obtaining the matching characteristic points by bidirectional matching in this way greatly strengthens their robustness; a sketch of the check follows.
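A sketch of the forward-backward check, assuming single-point trackers built from repeated lk_step() updates; the threshold value and the function names are illustrative.

```python
import numpy as np

def bidirectional_match(track_forward, track_backward, p1, threshold=1.0):
    """Sketch of the bidirectional check: track p1 from frame 1 to frame 2
    (giving p0), track p0 back to frame 1 (giving p3), and accept the match
    only if p3 landed close to p1."""
    p0 = track_forward(p1)                        # (y0, x0) in frame 2
    p3 = track_backward(p0)                       # (y3, x3) back in frame 1
    fb_error = np.hypot(p3[0] - p1[0], p3[1] - p1[1])  # fourth offset parameter
    return (p0, fb_error) if fb_error < threshold else (None, fb_error)
```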
In the present embodiment, the tracking unit 32 determines the multiple matching characteristic points based on the first offset parameter corresponding to each characteristic point in the second group of N characteristic points, and calculates the second offset parameter of each matching characteristic point; the second offset parameter characterizes the degree of displacement offset between the matching characteristic point and the corresponding third characteristic point in the second group of N characteristic points, which can equally be understood as the degree of displacement offset between each matching characteristic point and the corresponding characteristic point in the first group of N characteristic points. For example, for a characteristic point (y1, x1) in the first group and the characteristic point (y2, x2) in the second group with the same coordinates, the matching characteristic point (y0, x0) is calculated; the second offset parameter then indicates the relative displacement offset between (y0, x0) and (y2, x2), which can also be understood as the relative displacement offset between (y0, x0) and (y1, x1).
In the present embodiment, as an implementation, the tracking unit 32 is configured to extract multiple groups of first characteristic point pairs from the matching characteristic points contained in the second frame image, each first characteristic point pair including a first matching characteristic point and a second matching characteristic point, and to extract multiple groups of second characteristic point pairs from the source characteristic points in the first frame image corresponding to the matching characteristic points, each second characteristic point pair including a first source characteristic point and a second source characteristic point, where the first matching characteristic point and the second matching characteristic point are any two of the multiple matching characteristic points; to calculate the first distance between the first matching characteristic point and the second matching characteristic point and the second distance between the first source characteristic point and the second source characteristic point; to obtain the relative parameter between the first distance and the second distance; and to record the relative parameter as the third offset parameter of the first matching characteristic point and the second matching characteristic point.
Specifically, the tracking unit 32 extracts, pair by pair, the matching characteristic points contained in the second frame image together with the corresponding source characteristic points in the first frame image; the two matching characteristic points extracted from the second frame image (the first and second matching characteristic points) correspond one-to-one with the two source characteristic points in the first frame image (the first and second source characteristic points). It then calculates the first distance between the first matching characteristic point and the second matching characteristic point and the second distance between the first source characteristic point and the second source characteristic point; the first distance and the second distance may be Euclidean distances, or any other distance that characterizes the relative positional relationship between two characteristic points. It further obtains the relative parameter between the first distance and the second distance and records it as the third offset parameter of the first matching characteristic point and the second matching characteristic point; the third offset parameter characterizes the zoom degree of the face region. As an implementation, the relative parameter may specifically be the ratio of the first distance to the second distance, although it may also be obtained by any other processing that characterizes the relative relationship between the two distances.
In the present embodiment, as an implementation, the tracking unit 32 is configured to process the multiple second offset parameters according to a first preset processing rule to obtain a particular offset parameter, to process the multiple relative parameters according to a second preset processing rule to obtain a specific relative parameter, and to take the particular offset parameter and the specific relative parameter as the offset parameters corresponding to the initial coordinate range.
Specifically, a corresponding second offset parameter can be calculated for each matching feature point in the second frame image. Ideally, the second offset parameters obtained for all matching feature points in the second frame image are equal; however, various factors (such as the movement of the face and deviations introduced during data processing) cause errors, so the second offset parameters obtained for the matching feature points in the second frame image may be unequal. To make the processing result more robust, the tracking unit 32 sorts the second offset parameters obtained for all matching feature points in the second frame image and selects the median of the N second offset parameters as the particular offset parameter; the particular offset parameter can be understood as characterizing the displacement of the face frame. On the other hand, in the ideal case the relative parameters corresponding to any two matching feature points in the second frame image are equal, but the same error sources (such as the movement of the face and deviations in data processing) may make them unequal. To make the processing result more robust, the tracking unit 32 sorts the relative parameters corresponding to the pairs of matching feature points in the second frame image and selects the median of the multiple relative parameters as the specific relative parameter; the specific relative parameter can be understood as characterizing the zoom degree of the face frame.
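The sort-and-take-the-middle-value selection is the median; a sketch of the aggregation under that reading (names are ours):

```python
import numpy as np

def aggregate(second_offsets, ratios):
    """Robust aggregation: the particular offset parameter is the median
    per-point displacement; the specific relative parameter is the median
    pairwise distance ratio."""
    dy, dx = np.median(np.asarray(second_offsets, np.float32), axis=0)
    scale = float(np.median(ratios))
    return (float(dy), float(dx)), scale
```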
Further, in this embodiment the tracking unit 32 takes the particular offset parameter and the specific relative parameter as the offset parameters corresponding to the initial coordinate range, obtains the second coordinate range based on the initial coordinate range and the corresponding offset parameters, and records the second coordinate range as the tracked region characterizing the face position. Reference can be made to Fig. 4b, in which the solid box is the tracking result obtained with the technical solution of the embodiment of the present invention, while the dotted box is the result obtained directly with a face detection scheme. Compared with the technical solution of detecting facial feature points, the living body face tracking scheme of the embodiment of the present invention describes the change of the face, specifically the change of the face position, more accurately. As an implementation, the face tracking scheme of the embodiment of the present invention can be realized with a KLT (Kanade-Lucas-Tomasi) tracker; it is of course not limited to the KLT tracker, and other fast iterative algorithms can also realize the living body face tracking of the embodiment of the present invention. The present embodiment directly selects feature points uniformly in the region characterizing the face and tracks the face frame, which greatly improves the face tracking speed and avoids the tracking failure caused by too few feature points. Moreover, the bidirectional matching also greatly improves the robustness of the face tracking.
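For orientation, the following sketch strings the steps together with OpenCV's pyramidal Lucas-Kanade tracker, one possible KLT realization; the grid spacing, the forward-backward threshold and the centre-based box update below are our assumptions, not prescribed by the embodiment:

```python
import cv2
import numpy as np

def track_face_box(prev_gray, cur_gray, box, step=8, fb_thresh=1.0):
    """Track a face box (x, y, w, h) from prev_gray into cur_gray using
    uniformly sampled grid points, pyramidal LK, and a forward-backward
    (bidirectional matching) consistency check."""
    x, y, w, h = box
    ys, xs = np.mgrid[y:y + h:step, x:x + w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], 1).astype(np.float32).reshape(-1, 1, 2)

    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, fwd, None)
    fb_err = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
    if ok.sum() < 2:
        return None                            # too few points: tracking failed

    src, dst = pts.reshape(-1, 2)[ok], fwd.reshape(-1, 2)[ok]
    dx, dy = np.median(dst - src, axis=0)      # particular offset parameter
    ii, jj = np.triu_indices(len(src), k=1)    # all point pairs
    d_dst = np.linalg.norm(dst[ii] - dst[jj], axis=1)
    d_src = np.linalg.norm(src[ii] - src[jj], axis=1)
    good = d_src > 1e-6
    if not np.any(good):
        return None
    scale = float(np.median(d_dst[good] / d_src[good]))  # specific relative parameter
    cx, cy = x + w / 2 + dx, y + h / 2 + dy    # shift, then rescale about the centre
    return (cx - w * scale / 2, cy - h * scale / 2, w * scale, h * scale)
```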
In the present embodiment, the feature extraction unit 33 is configured to extract a first texture feature and/or a second texture feature in the region;
the computing unit 34 is configured to calculate a first parameter characterizing a first posture based on the first texture feature, and/or to calculate a second parameter characterizing a second posture based on the second texture feature;
the movement judging unit 35 is configured to determine a first movement based on the first parameter, and/or to determine a second movement based on the second parameter.
As an implementation, the feature extraction unit 33 is configured to process the feature points in the region according to a third preset processing rule to obtain a first process parameter characterizing the degree of difference between each feature point in the region and the feature points adjacent to it, and to analyze the first process parameter to obtain the first texture feature characterizing face texture edges. It is further configured to extract a first partial region in the region, to process the feature points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing the degree of difference between each feature point in the first partial region and the feature points adjacent to it, and to analyze the second process parameter to obtain the second texture feature characterizing eye texture edges.
In the present embodiment, the feature extraction unit 33 extracts different texture features for different movements, denoted here as the first texture feature and the second texture feature. The present embodiment classifies mainly two kinds of movements: considering that the head-turning movement and the blinking movement produce comparatively large posture changes of the face, the present embodiment classifies the head-turning movement and the blinking movement. It is to be understood that the first texture feature characterizes face texture edges and the second texture feature characterizes eye texture edges.
Specifically, for the first texture feature the processing of the feature extraction unit 33 includes: extracting from the second frame image the region characterizing the face position (this region can be the second area obtained after tracking), and processing that region according to the third preset processing rule. As an implementation, processing the region according to the third preset processing rule includes shrinking the region, for example to a (64, 64) range, and processing the shrunken image by binarization or by ternarization. Taking binarization as an example, the device can perform LBP (local binary pattern) processing on the second area in the second frame image. For instance, the device extracts from the second frame image the process image matched to the second area, performs gray processing on the process image to obtain its gray image, and then determines the relative gray relationship between each feature point in the gray image and its eight adjacent feature points. As shown in Fig. 4c for a three-by-three matrix of feature points, the gray value of each feature point is, for example, as shown in Fig. 4c, and the gray values are expressed numerically as shown in Fig. 4d. Further, the gray value of each of the eight adjacent feature points is compared with the gray value of the central feature point: if the gray value of an adjacent feature point is greater than or equal to that of the central feature point, the value of that adjacent feature point is recorded as 1; conversely, if it is smaller, the value is recorded as 0, as shown in Fig. 4e. The values of the adjacent feature points are then concatenated to obtain an 8-bit binary character string, which can be understood as a gray value distributed in (0, 255). In the specific implementation, referring to Fig. 4e, if the top-left feature point is taken as the starting feature point and the neighbours are read clockwise, the resulting 8-bit string is 10001111. In this way the binary character string corresponding to each feature point (i.e. each central feature point) in the process image is obtained.
Further, in order to remove redundancy, the binary character strings in which the number of 0/1 transitions is fewer than 2 are counted for each feature point. For example, in the string 10001111 the bits change once between the first and second positions and once between the fourth and fifth positions, two changes in total, so it does not satisfy the condition "fewer than 2 transitions of 0 and 1". In the string 00001111, by contrast, the bits change only once, between the fourth and fifth positions, so it satisfies the condition. The counted binary character strings are then mapped into the (0, 58) range, and the mapped data can serve as the LBP data; this greatly reduces the amount of data to be processed. The above processing can be realized by the following formulas:
LBP = [code0, code1, ..., code7]    (8)

code(m, n) = Img(y+m, x+n) > Img(y, x) ? 1 : 0    (9)
Wherein, in expression (8), LBP denotes the relative relationship between the display parameter of the first feature point in the region and the display parameters of its adjacent feature points; the first feature point is any feature point in the region; code0, code1, ..., code7 denote the display parameters of the feature points adjacent to the first feature point. Expression (9) indicates that the gray value of feature point (y+m, x+n) is compared with the gray value of feature point (y, x): if the gray value of feature point (y+m, x+n) is greater than that of feature point (y, x), the binary character code(m, n) of feature point (m, n) is recorded as 1; otherwise it is recorded as 0.
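A compact sketch of this per-pixel coding follows, using a clockwise scan starting at the top-left neighbour to match the 10001111 example (the prose compares with >=, expression (9) with a strict >; ties are rare in practice):

```python
import numpy as np

# Clockwise neighbour offsets (dy, dx), starting at the top-left corner.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(gray, y, x):
    """8-bit LBP code of the centre pixel at (y, x): each neighbour
    contributes 1 if its gray value is >= the centre value, else 0."""
    centre = gray[y, x]
    code = 0
    for dy, dx in NEIGHBOURS:
        code = (code << 1) | int(gray[y + dy, x + dx] >= centre)
    return code                                # an integer in [0, 255]
```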
Further, the LBP values distributed in (0, 58) over a given image region are collected into a statistical histogram. Since the histogram has 59 dimensions, a 59-dimensional vector is obtained after counting, and this vector serves as the first texture feature of the region.
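The mapping into (0, 58) matches the standard uniform-LBP convention, under which the 58 patterns with at most two circular 0/1 transitions each receive their own bin and all remaining patterns share one; a sketch under that assumption:

```python
import numpy as np

def circular_transitions(code):
    """Number of 0/1 transitions in the 8-bit pattern, read circularly."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

# 58 uniform patterns plus one shared bin (index 58) for the rest.
LOOKUP = np.full(256, 58, dtype=np.int32)
for bin_idx, c in enumerate(c for c in range(256) if circular_transitions(c) <= 2):
    LOOKUP[c] = bin_idx

def lbp_histogram(codes):
    """59-dimensional normalized histogram of LBP codes over a region,
    used as the first texture feature."""
    hist = np.bincount(LOOKUP[np.asarray(codes)], minlength=59).astype(np.float32)
    return hist / max(hist.sum(), 1.0)
```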
For the second texture feature: since the eye region is small and the blinking movement is fast, when extracting the second texture feature the eye region is first extracted, i.e. the first partial region in the region is extracted. The first partial region can be the upper half of the region; for example, the upper half or the upper third of the region is extracted as the first partial region. The extracted first partial region can be processed in the same manner as described above for the first texture feature to obtain the second texture feature. As another implementation, the device can perform LTP (local ternary pattern) processing on the first partial region. The LTP process is similar to the LBP process; the difference is that in the LBP process the relative relationship between the gray value of a feature point and those of its adjacent feature points is marked with 0 and 1, whereas in the LTP process of this implementation it is marked with 0, 1 and -1. The specific processing can be realized by the following formulas:
code(m, n) = Img(y+m, x+n) > Img(y, x) + Fuzzy ? 1 : 0    (10)

code(m, n) = Img(y+m, x+n) < Img(y, x) - Fuzzy ? -1 : 0    (11)

Fuzzy = ratio × Img(y, x)    (12)
Wherein Img(y, x) denotes the gray value of feature point (y, x); the feature point (y, x) can be the central feature point of a three-by-three matrix of feature points. Img(y+m, x+n) denotes the gray value of feature point (y+m, x+n), which is specifically a feature point adjacent to feature point (y, x). ratio denotes a scale parameter that can be pre-configured; Fuzzy denotes the product of the gray value of feature point (y, x) and ratio, so the larger the gray value of feature point (y, x), the larger the value of Fuzzy. Expression (10) means that the gray value of feature point (y+m, x+n) is compared with the sum of the gray value of feature point (y, x) and Fuzzy: if the gray value of feature point (y+m, x+n) is greater than that sum, the character code(m, n) of feature point (m, n) is recorded as 1, otherwise as 0. Expression (11) means that the gray value of feature point (y+m, x+n) is compared with the difference between the gray value of feature point (y, x) and Fuzzy: if the gray value of feature point (y+m, x+n) is smaller than that difference, the character code(m, n) of feature point (m, n) is recorded as -1, otherwise as 0.
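A sketch of the ternary coding of expressions (10) to (12); the value of ratio is pre-configured, and 0.1 below is only a placeholder:

```python
import numpy as np

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def ltp_codes(gray, y, x, ratio=0.1):
    """Ternary LTP codes of the centre pixel at (y, x).

    Fuzzy = ratio * Img(y, x), so brighter centre pixels get a wider
    tolerance band; neighbours inside the band code to 0."""
    centre = float(gray[y, x])
    fuzzy = ratio * centre                     # expression (12)
    codes = []
    for dy, dx in NEIGHBOURS:
        g = float(gray[y + dy, x + dx])
        if g > centre + fuzzy:                 # expression (10)
            codes.append(1)
        elif g < centre - fuzzy:               # expression (11)
            codes.append(-1)
        else:
            codes.append(0)
    return codes
```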
In the present embodiment, as an implementation, the computing unit 34 is configured to input the first texture feature into a pre-configured first classification model to obtain the first parameter characterizing the first posture, and/or to input the second texture feature into a pre-configured second classification model to obtain the second parameter characterizing the second posture.
Specifically, the device collects a large amount of sample data in advance (the sample data can specifically be first texture features obtained in the manner described above) together with the corresponding posture class labels, and performs machine-learning training on the sample data and the corresponding posture class labels to obtain the first classification model. After the first texture feature is obtained, it is input into the first classification model to obtain the first posture corresponding to the first texture feature; the first posture indicates, for example, frontal or profile, and can be expressed by the first parameter. In practical applications, the magnitude of the first parameter can indicate whether the first posture approaches frontal or profile: for example, the larger the first parameter, the closer the first posture is to frontal; correspondingly, the smaller the first parameter, the closer the first posture is to profile. On the other hand, the device likewise collects a large amount of sample data in advance (the sample data can specifically be second texture features obtained in the manner described above) together with the corresponding posture class labels, and performs machine-learning training on them to obtain the second classification model. After the second texture feature is obtained, it is input into the second classification model to obtain the second posture corresponding to the second texture feature; the second posture indicates, for example, eyes open or eyes closed, and can be expressed by the second parameter. In practical applications, the magnitude of the second parameter can indicate whether the second posture approaches eyes open or eyes closed: for example, the larger the second parameter, the closer the second posture is to eyes open; correspondingly, the smaller the second parameter, the closer the second posture is to eyes closed.
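As an orientation-only sketch of this training and scoring step: the patent does not fix the learning algorithm, so the SVM below is our choice, and the random arrays merely stand in for the collected sample data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 59))            # stand-in for 59-dim first texture features
y = rng.integers(0, 2, 200)          # stand-in labels: 1 = frontal, 0 = profile

first_model = SVC(probability=True).fit(X, y)   # "first classification model"

def first_parameter(hist59):
    """Larger values mean the first posture is closer to frontal."""
    return float(first_model.predict_proba([hist59])[0, 1])
```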
In the present embodiment, as an implementation, the movement judging unit 35 is configured to judge, based on the multiple first parameters corresponding to the multiple frame images, whether the first parameters corresponding to a first part of the images in the multiple frame images all satisfy a first threshold range and whether the first parameters corresponding to a second part of the images do not satisfy the first threshold range; when the first parameters corresponding to the first part of the images all satisfy the first threshold range and the first parameters corresponding to the second part of the images do not satisfy the first threshold range, it determines that the first parameters correspond to the first movement. And/or it is configured to judge, based on the multiple second parameters corresponding to the multiple frame images, whether the second parameters corresponding to a third part of the images all satisfy a second threshold range and whether the second parameters corresponding to a fourth part of the images do not satisfy the second threshold range; when the second parameters corresponding to the third part of the images all satisfy the second threshold range and the second parameters corresponding to the fourth part of the images do not satisfy the second threshold range, it determines that the second parameters correspond to the second movement.
Specifically, the first parameter characterizing the first posture described above was obtained by processing two frame images; in a specific application, first parameters are obtained by processing the multiple frame images included in the acquired image data. That is, a first parameter can be obtained for each frame image, so the multiple frame images yield multiple corresponding first parameters, and a parameter sequence can be formed from these first parameters. Analyzing the parameter sequence: if the values change from low to high, it can be determined that the face turns from profile to frontal; correspondingly, if the values change from high to low, it can be determined that the face turns from frontal to profile. Of course, the opposite convention is possible: if another data processing method makes a larger first parameter indicate a face closer to profile and a smaller first parameter indicate a face closer to frontal, then, when analyzing the parameter sequence, a change from low to high indicates that the face turns from frontal to profile, and a change from high to low indicates that the face turns from profile to frontal. On this basis, the corresponding first movement can be determined from the change of the values in the parameter sequence.
In the specific implementation, for the acquired multiple frame images, the X frames preceding the current frame are selected and uniformly cut into Y segments, where X and Y are positive integers and Y is smaller than X. Since each segment contains at least two frame images, at least two first parameters can be obtained for each segment; the median of these first parameters is taken as the first parameter of the segment, so that the Y segments yield Y corresponding first parameters. The Y first parameters are strung together into a parameter sequence, and it is judged whether the parameter change in the sequence satisfies a preset rule; for example, if the parameters change from high to low or from low to high, the corresponding first movement is determined based on that change. In another implementation, for the multiple first parameters corresponding to the multiple frame images, if it is determined that the faces contained in a front part of the images are frontal and the faces contained in a rear part of the images are in profile, or that the faces in a front part are in profile and the faces in a rear part are frontal, the first movement corresponding to the first parameters can be determined. For example, if it is judged that, among X frame images, the first parameters corresponding to the first third of the images all satisfy the first threshold range, indicating frontal face images, while the first parameters corresponding to the last third do not satisfy the first threshold range, indicating profile face images, it can be determined that the face turns from frontal to profile over the X frame images, i.e. a head-turning movement can be determined.
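A sketch of this segment-and-compare decision; X, Y and the two thresholds are illustrative placeholders, and at least X per-frame parameters are assumed to have been collected:

```python
import numpy as np

def detect_head_turn(first_params, x=30, y=6, hi=0.7, lo=0.3):
    """Detect a frontal-to-profile head turn from per-frame first parameters.

    The last x parameters are cut evenly into y segments, each summarized
    by its median; a turn is declared when the first third of the segments
    satisfies the threshold range (frontal) and the last third does not."""
    seq = np.asarray(first_params[-x:], np.float32)
    medians = [float(np.median(s)) for s in np.array_split(seq, y)]
    k = max(1, y // 3)
    return all(m >= hi for m in medians[:k]) and all(m <= lo for m in medians[-k:])
```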
In the present embodiment, the detection unit 31, tracking unit 32, feature extraction unit 33, computing unit 34, movement judging unit 35 and authentication unit 36 in the living body verification device can, in practical applications, be realized by a central processing unit (CPU, Central Processing Unit), a digital signal processor (DSP, Digital Signal Processor), a micro control unit (MCU, Microcontroller Unit) or a programmable gate array (FPGA, Field-Programmable Gate Array) in the device.
The embodiment of the present invention also provides a living body verification device; Fig. 6 shows an example of the living body verification device as a hardware entity. The device includes a processor 61, a storage medium 62, a camera 65 and at least one external communication interface 63; the processor 61, the storage medium 62, the camera 65 and the external communication interface 63 are connected by a bus 64.
The living body verification method of the embodiment of the present invention can be integrated into the living body verification device in the form of an algorithm library of any format; it can specifically be integrated into a client that runs on the living body verification device. In practical applications, the algorithm can be packaged together with the client: when the user activates the client, i.e. turns on the living body authentication function, the client calls the algorithm library and starts the camera, takes the image data acquired by the camera as source data, and performs the movement determination according to the acquired source data.
It needs to be noted that the above description of the device is similar to the description of the method above, including the description of beneficial effects, and is not repeated here. For technical details not disclosed in the device embodiment of the present invention, please refer to the description of the method embodiment of the present invention.
It should be understood by those skilled in the art that the embodiments of the present invention can be provided as a method, a device or a computer program product. Therefore, the present invention can take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) that contain computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device and computer program product according to the embodiments of the present invention. It should be understood that each process and/or box in the flowcharts and/or block diagrams, and combinations of processes and/or boxes in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more processes of the flowcharts and/or one or more boxes of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more processes of the flowcharts and/or one or more boxes of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more processes of the flowcharts and/or one or more boxes of the block diagrams.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.

Claims (20)

1. A living body verification method, characterized in that the method comprises:
obtaining image data based on an action instruction, parsing the image data, and identifying a region characterizing the position of a face in the image data;
based on the change of the face position in the multiple frame images included in the image data, uniformly selecting feature points in the region and tracking the region using a preset tracking algorithm;
processing the feature points in the region according to a third preset processing rule to obtain a first process parameter characterizing the degree of difference between each feature point in the region and the feature points adjacent to that feature point, and analyzing the first process parameter to obtain a first texture feature characterizing face texture edges; and/or
extracting a first partial region in the region, processing the feature points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing the degree of difference between each feature point in the first partial region and the feature points adjacent to that feature point, and analyzing the second process parameter to obtain a second texture feature characterizing eye texture edges;
calculating a first parameter characterizing a first posture based on the first texture feature, and/or calculating a second parameter characterizing a second posture based on the second texture feature;
determining a first movement based on the first parameter, and/or determining a second movement based on the second parameter;
when the first movement and/or the second movement match the movement corresponding to the action instruction, determining that the living body verification passes.
2. The method according to claim 1, characterized in that tracking the region based on the change of the face position in the multiple frame images included in the image data comprises:
extracting a first frame image and a second frame image from the multiple frame images, identifying a first area characterizing the face position in the first frame image, and obtaining a first coordinate range corresponding to the first area;
obtaining an initial coordinate range in the second frame image corresponding to the first coordinate range;
calculating offset parameters corresponding to the initial coordinate range;
obtaining a second coordinate range based on the initial coordinate range and the corresponding offset parameters, and recording the second coordinate range as the tracked region characterizing the face position;
wherein the offset parameters characterize the degree of offset of the second coordinate range relative to the first coordinate range.
3. The method according to claim 2, characterized in that obtaining the initial coordinate range in the second frame image corresponding to the first coordinate range comprises:
selecting a first group of N feature points in the first area corresponding to the first coordinate range at a preset step, and obtaining the first coordinate of a first feature point in the first group of N feature points, N being a positive integer, wherein the first feature point is any feature point in the first group of N feature points;
obtaining a second group of N feature points in the second frame image, the second coordinate of a second feature point in the second group of N feature points being identical to the first coordinate of the corresponding first feature point in the first group of N feature points;
determining the initial coordinate range based on the second coordinates of the feature points in the second group of N feature points.
4. The method according to claim 3, characterized in that calculating the offset parameters corresponding to the initial coordinate range comprises:
calculating a first offset parameter of each feature point in the second group of N feature points relative to the corresponding feature point in the first group of N feature points, the first offset parameter characterizing the degree of difference between the second feature point and the first feature point having identical coordinates;
determining multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points, and calculating a second offset parameter of each matching feature point among the multiple matching feature points, the second offset parameter characterizing the degree of offset between the matching feature point and the third feature point, in the second group of N feature points, corresponding to the matching feature point;
calculating a third offset parameter of each matching feature point among the multiple matching feature points, the third offset parameter characterizing the degree of offset between the matching feature point and the fourth feature point, in the first group of N feature points, corresponding to the matching feature point;
determining the offset parameters corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
5. The method according to claim 4, characterized in that calculating the third offset parameter of each matching feature point among the multiple matching feature points comprises:
extracting multiple groups of first feature point pairs from the multiple matching feature points included in the second frame image, each first feature point pair comprising a first matching feature point and a second matching feature point, and extracting multiple groups of second feature point pairs from the source feature points in the first frame image corresponding to the multiple matching feature points, each second feature point pair comprising a first source feature point and a second source feature point, wherein the first matching feature point and the second matching feature point are any two feature points among the multiple matching feature points;
calculating a first distance between the first matching feature point and the second matching feature point, and calculating a second distance between the first source feature point and the second source feature point;
obtaining a relative parameter between the first distance and the second distance, and recording the relative parameter as the third offset parameter of the first matching feature point and the second matching feature point.
6. The method according to claim 5, characterized in that determining the offset parameters corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter comprises:
processing the multiple second offset parameters according to a first preset processing rule to obtain a particular offset parameter;
processing the multiple relative parameters according to a second preset processing rule to obtain a specific relative parameter;
taking the particular offset parameter and the specific relative parameter as the offset parameters corresponding to the initial coordinate range.
7. The method according to claim 4, characterized in that determining multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points comprises:
determining the target feature point corresponding to each feature point based on the first offset parameter corresponding to each feature point in the second group of N feature points, and obtaining the third coordinate of the target feature point;
determining an initial feature point in the first frame image corresponding to the target feature point, the fourth coordinate of the initial feature point being identical to the third coordinate of the target feature point;
obtaining a fourth offset parameter of the initial feature point relative to the target feature point;
when the fourth offset parameter reaches a preset threshold, determining that the target feature point is a matching feature point;
when the fourth offset parameter does not reach the preset threshold, determining that the target feature point is not a matching feature point.
8. The method according to claim 1, characterized in that calculating the first parameter characterizing the first posture based on the first texture feature comprises:
inputting the first texture feature into a pre-configured first classification model to obtain the first parameter characterizing the first posture.
9. The method according to claim 1, characterized in that calculating the second parameter characterizing the second posture based on the second texture feature comprises:
inputting the second texture feature into a pre-configured second classification model to obtain the second parameter characterizing the second posture.
10. The method according to claim 1, characterized in that determining the first movement based on the first parameter comprises:
based on the multiple first parameters corresponding to the multiple frame images, judging whether the first parameters corresponding to a first part of the images in the multiple frame images all satisfy a first threshold range and whether the first parameters corresponding to a second part of the images in the multiple frame images do not satisfy the first threshold range;
when the first parameters corresponding to the first part of the images all satisfy the first threshold range and the first parameters corresponding to the second part of the images do not satisfy the first threshold range, determining that the first parameters correspond to the first movement.
11. The method according to claim 1, characterized in that determining the second movement based on the second parameter comprises:
based on the multiple second parameters corresponding to the multiple frame images, judging whether the second parameters corresponding to a third part of the images in the multiple frame images all satisfy a second threshold range and whether the second parameters corresponding to a fourth part of the images in the multiple frame images do not satisfy the second threshold range;
when the second parameters corresponding to the third part of the images all satisfy the second threshold range and the second parameters corresponding to the fourth part of the images do not satisfy the second threshold range, determining that the second parameters correspond to the second movement.
12. A living body verification device, characterized in that the device comprises: a detection unit, a tracking unit, a feature extraction unit, a computing unit, a movement judging unit and an authentication unit; wherein
the detection unit is configured to obtain image data based on an action instruction, parse the image data, and identify a region characterizing the position of a face in the image data;
the tracking unit is configured to, based on the change of the face position in the multiple frame images included in the image data identified by the detection unit, uniformly select feature points in the region and track the region using a preset tracking algorithm;
the feature extraction unit is configured to process the feature points in the region according to a third preset processing rule to obtain a first process parameter characterizing the degree of difference between each feature point in the region and the feature points adjacent to that feature point, and analyze the first process parameter to obtain a first texture feature characterizing face texture edges; and is further configured to extract a first partial region in the region, process the feature points in the first partial region according to a fourth preset processing rule to obtain a second process parameter characterizing the degree of difference between each feature point in the first partial region and the feature points adjacent to that feature point, and analyze the second process parameter to obtain a second texture feature characterizing eye texture edges;
the computing unit is configured to calculate a first parameter characterizing a first posture based on the first texture feature, and/or calculate a second parameter characterizing a second posture based on the second texture feature;
the movement judging unit is configured to determine a first movement based on the first parameter, and/or determine a second movement based on the second parameter;
the authentication unit is configured to determine that the living body verification passes when the first movement and/or the second movement match the movement corresponding to the action instruction.
13. The device according to claim 12, characterized in that the tracking unit is configured to extract a first frame image and a second frame image from the multiple frame images, identify a first area characterizing the face position in the first frame image, and obtain a first coordinate range corresponding to the first area; obtain an initial coordinate range in the second frame image corresponding to the first coordinate range; calculate offset parameters corresponding to the initial coordinate range; and obtain a second coordinate range based on the initial coordinate range and the corresponding offset parameters, and record the second coordinate range as the tracked region characterizing the face position; wherein the offset parameters characterize the degree of offset of the second coordinate range relative to the first coordinate range.
14. The device according to claim 13, characterized in that the tracking unit is configured to select a first group of N feature points in the first area corresponding to the first coordinate range at a preset step, and obtain the first coordinate of a first feature point in the first group of N feature points, N being a positive integer, wherein the first feature point is any feature point in the first group of N feature points; obtain a second group of N feature points in the second frame image, the second coordinate of a second feature point in the second group of N feature points being identical to the first coordinate of the corresponding first feature point in the first group of N feature points; and determine the initial coordinate range based on the second coordinates of the feature points in the second group of N feature points.
15. The device according to claim 14, characterized in that the tracking unit is configured to calculate a first offset parameter of each feature point in the second group of N feature points relative to the corresponding feature point in the first group of N feature points, the first offset parameter characterizing the degree of difference between the second feature point and the first feature point having identical coordinates; determine multiple matching feature points based on the first offset parameter corresponding to each feature point in the second group of N feature points, and calculate a second offset parameter of each matching feature point among the multiple matching feature points, the second offset parameter characterizing the degree of offset between the matching feature point and the third feature point, in the second group of N feature points, corresponding to the matching feature point; calculate a third offset parameter of each matching feature point among the multiple matching feature points, the third offset parameter characterizing the degree of offset between the matching feature point and the fourth feature point, in the first group of N feature points, corresponding to the matching feature point; and determine the offset parameters corresponding to the initial coordinate range according to the second offset parameter and the third offset parameter.
16. The device according to claim 15, characterized in that the tracking unit is configured to extract multiple groups of first feature point pairs from the multiple matching feature points included in the second frame image, each first feature point pair comprising a first matching feature point and a second matching feature point, and extract multiple groups of second feature point pairs from the source feature points in the first frame image corresponding to the multiple matching feature points, each second feature point pair comprising a first source feature point and a second source feature point, wherein the first matching feature point and the second matching feature point are any two feature points among the multiple matching feature points; calculate a first distance between the first matching feature point and the second matching feature point, and calculate a second distance between the first source feature point and the second source feature point; obtain a relative parameter between the first distance and the second distance; and record the relative parameter as the third offset parameter of the first matching feature point and the second matching feature point.
17. The device according to claim 16, characterized in that the tracking unit is configured to process the multiple second offset parameters according to a first preset processing rule to obtain a particular offset parameter; process the multiple relative parameters according to a second preset processing rule to obtain a specific relative parameter; and take the particular offset parameter and the specific relative parameter as the offset parameters corresponding to the initial coordinate range.
18. The device according to claim 15, characterized in that the tracking unit is configured to determine the target feature point corresponding to each feature point based on the first offset parameter corresponding to each feature point in the second group of N feature points, and obtain the third coordinate of the target feature point; determine an initial feature point in the first frame image corresponding to the target feature point, the fourth coordinate of the initial feature point being identical to the third coordinate of the target feature point; obtain a fourth offset parameter of the initial feature point relative to the target feature point; when the fourth offset parameter reaches a preset threshold, determine that the target feature point is a matching feature point; and when the fourth offset parameter does not reach the preset threshold, determine that the target feature point is not a matching feature point.
19. The device according to claim 18, characterized in that the computing unit is configured to input the first texture feature into a pre-configured first classification model to obtain the first parameter characterizing the first posture; and/or input the second texture feature into a pre-configured second classification model to obtain the second parameter characterizing the second posture.
20. The device according to claim 18, characterized in that the movement judging unit is configured to judge, based on the multiple first parameters corresponding to the multiple frame images, whether the first parameters corresponding to a first part of the images in the multiple frame images all satisfy a first threshold range and whether the first parameters corresponding to a second part of the images do not satisfy the first threshold range, and, when the first parameters corresponding to the first part of the images all satisfy the first threshold range and the first parameters corresponding to the second part of the images do not satisfy the first threshold range, determine that the first parameters correspond to the first movement; and/or to judge, based on the multiple second parameters corresponding to the multiple frame images, whether the second parameters corresponding to a third part of the images all satisfy a second threshold range and whether the second parameters corresponding to a fourth part of the images do not satisfy the second threshold range, and, when the second parameters corresponding to the third part of the images all satisfy the second threshold range and the second parameters corresponding to the fourth part of the images do not satisfy the second threshold range, determine that the second parameters correspond to the second movement.
CN201710533353.1A 2017-07-03 2017-07-03 A kind of living body verification method and equipment Active CN107316029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710533353.1A CN107316029B (en) 2017-07-03 2017-07-03 A kind of living body verification method and equipment

Publications (2)

Publication Number Publication Date
CN107316029A CN107316029A (en) 2017-11-03
CN107316029B true CN107316029B (en) 2018-11-23

Family

ID=60181058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710533353.1A Active CN107316029B (en) 2017-07-03 2017-07-03 A kind of living body verification method and equipment

Country Status (1)

Country Link
CN (1) CN107316029B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021892B (en) * 2017-12-06 2021-11-19 上海师范大学 Human face living body detection method based on extremely short video
KR102374747B1 (en) * 2017-12-15 2022-03-15 삼성전자주식회사 Method and device to recognize object
CN108304708A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Mobile terminal, face unlocking method and related product
CN110688878B (en) * 2018-07-06 2021-05-04 北京三快在线科技有限公司 Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device
CN109886080A (en) * 2018-12-29 2019-06-14 深圳云天励飞技术有限公司 Human face in-vivo detection method, device, electronic equipment and readable storage medium storing program for executing
CN110276313B (en) * 2019-06-25 2022-04-22 杭州网易智企科技有限公司 Identity authentication method, identity authentication device, medium and computing equipment
CN111860394A (en) * 2020-07-28 2020-10-30 成都新希望金融信息有限公司 Gesture estimation and gesture detection-based action living body recognition method
CN112699857A (en) * 2021-03-24 2021-04-23 北京远鉴信息技术有限公司 Living body verification method and device based on human face posture and electronic equipment
CN116152936A (en) * 2023-02-17 2023-05-23 深圳市永腾翼科技有限公司 Face identity authentication system with interactive living body detection and method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face
CN105868733A (en) * 2016-04-21 2016-08-17 腾讯科技(深圳)有限公司 Face in-vivo validation method and device
CN106599772A (en) * 2016-10-31 2017-04-26 北京旷视科技有限公司 Living body authentication method, identity authentication method and device

Also Published As

Publication number Publication date
CN107316029A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN107316029B (en) A kind of living body verification method and equipment
CN111340008B (en) Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN106897658B (en) Method and device for identifying human face living body
JP5010905B2 (en) Face recognition device
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
CN106022317A (en) Face identification method and apparatus
CN110222573B (en) Face recognition method, device, computer equipment and storage medium
US20230033052A1 (en) Method, apparatus, device, and storage medium for training image processing model
WO2016084072A1 (en) Anti-spoofing system and methods useful in conjunction therewith
CN107194361A (en) Two-dimentional pose detection method and device
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN105335719A (en) Living body detection method and device
CN109034095A (en) A kind of face alignment detection method, apparatus and storage medium
CN111414858B (en) Face recognition method, target image determining device and electronic system
CN110428399A (en) Method, apparatus, equipment and storage medium for detection image
Smith-Creasey et al. Continuous face authentication scheme for mobile devices with tracking and liveness detection
CN107463865A (en) Face datection model training method, method for detecting human face and device
CN108875474A (en) Assess the method, apparatus and computer storage medium of face recognition algorithms
US11620854B2 (en) Evaluating the security of a facial recognition system using light projections
KR20200029659A (en) Method and apparatus for face recognition
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
CN109871845A (en) Certificate image extracting method and terminal device
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN111784658A (en) Quality analysis method and system for face image
CN107369086A (en) A kind of identity card stamp system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant