CN105447432A - Face anti-fake method based on local motion pattern - Google Patents

Face anti-fake method based on local motion pattern

Info

Publication number
CN105447432A
CN105447432A
Authority
CN
China
Prior art keywords
face
motion
local
key point
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410428040.6A
Other languages
Chinese (zh)
Other versions
CN105447432B (en)
Inventor
杨健伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yang Jianwei
Original Assignee
Qiansou Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiansou Inc filed Critical Qiansou Inc
Priority to CN201410428040.6A priority Critical patent/CN105447432B/en
Publication of CN105447432A publication Critical patent/CN105447432A/en
Application granted granted Critical
Publication of CN105447432B publication Critical patent/CN105447432B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a face anti-spoofing method based on local motion patterns, comprising the following steps: (1) detecting the face region in an image collected by a camera, and locating the face key points; (2) collecting motion information of the face region and the non-face region within the local area around each face key point; (3) computing the local motion pattern of the face from the local motion information obtained at all key points; and (4) using a pre-configured pattern classifier to judge, based on the local motion pattern of the face, whether the face is fake. The beneficial effect is that the method can be effectively combined with a practical face recognition system to quickly and effectively distinguish a real face from a fake one, with little need for user interaction.

Description

A face anti-spoofing method based on local motion patterns
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and to face anti-spoofing research within biometric recognition; in particular, it relates to a face anti-spoofing method based on local motion patterns.
Background technology
At present, biometric recognition technology is widely used in many aspects of daily life. Face recognition in particular, being convenient, user-friendly, and contactless, has advanced by leaps and bounds in recent years; this progress spans face detection, facial feature extraction, classifier design, hardware manufacture, and other research areas. However, face-based biometric recognition still faces several practical challenges, the most prominent being the security of the recognition system. As a device for identity verification, such a system can easily be deceived by an impostor posing as a legitimate user: most current face recognition systems cannot distinguish a real face from a photograph, so anyone who obtains a photo of a legitimate user can easily fool them, and today's flourishing social networks make this attack unusually easy to mount. In addition, attacks can be launched with recorded videos or forged masks.
Face anti-spoofing, also known as face liveness detection, has gradually attracted attention from both academia and industry. Its fundamental purpose is to distinguish images of real faces from the forged ones described above, detecting attacks on a face recognition system and thereby improving its security. According to the cues used, face anti-spoofing methods fall into three classes:
1. Methods based on skin reflectance: starting from the reflectance characteristics of facial skin, some researchers perform anti-spoofing with multispectral acquisition, exploiting the fact that real and forged facial skin reflect differently under different spectra; the research problem is to find the spectral bands that maximize this difference. These methods have the following clear shortcomings: 1) they have only been tested on very small datasets, so their performance cannot be comprehensively assessed; 2) the chosen spectral bands cannot be sensed by conventional cameras, so special sensor devices must be deployed, adding hardware cost; 3) the extra sensors require purpose-built signal conversion circuits, adding compatibility problems with existing systems.
2. Methods based on texture differences: micro-texture-based face anti-spoofing rests on an assumption: a forged face captured by a given device loses detail, or differs in detail, compared with a real face captured by the same device, and these differences in detail produce differences in image micro-texture. The assumption holds in most cases, since a forged face is produced from a picture of the real face. For a printed photo, the attacker first prints the photo on paper and then presents it to the face recognition system; at least two links in this chain introduce differences. One is printing: a printer cannot reproduce the photo content without distortion. The other is the secondary imaging of the printed photo: the capture device cannot perfectly record its content. In addition, a real face and a printed face differ in surface geometry and in local specular highlights, both of which also cause micro-texture differences.
3. Motion-based methods: these methods judge whether the captured subject is a real face by detecting physiological responses. Since a real face moves more independently than a forgery, these methods ask the user to perform specified actions and use them as the basis of the judgment; common interactions include blinking, head shaking, and mouth movements. Besides detection of such local motions, some methods judge from the motion of the whole head; they work because the three-dimensional structures of a photo and a real face differ markedly, so the observed head motion patterns also differ. To improve anti-spoofing performance further, a multi-modal method has been proposed that asks the user to read specified text aloud and judges authenticity by whether the user's lip motion matches the corresponding speech content. However, because such interactive methods require the user to perform specific actions, they demand too much of the user and degrade the user experience; the long authentication time is another major drawback.
Among the three classes above, motion-based methods have the advantage of being largely unaffected by illumination conditions and image quality. However, when extracting motion features, these methods do not accurately localize the individual regions of the face, and therefore cannot accurately describe its true motion state. For example, some methods roughly divide the image into a rectangular face region and a background region and judge authenticity by contrasting their motion states; but a face region delimited by a rectangle contains a large amount of background, so a real face may well be mistaken for a forgery, while a forged face can easily fool such a system by being folded or twisted. Therefore, accurately localizing the face and non-face regions, and finding the most discriminative local areas from which to extract strong local motion pattern information, are the keys to making a face anti-spoofing system practical.
Summary of the invention
The object of the invention is to provide a face anti-spoofing method based on local motion patterns that overcomes the above shortcomings of the prior art.
This object is achieved through the following technical solution:
A face anti-spoofing method based on local motion patterns, comprising:
analyzing a video image gathered in advance to determine the face region, and analyzing the face region to determine each face key point within it;
obtaining the motion direction and amplitude of the pixels in the video image from the video frames corresponding to it;
analyzing the face key points according to the motion direction and amplitude of the pixels, determining the motion direction and amplitude within the local area around each face key point, and from this information determining the relations between the motion directions and between the amplitudes of the local areas, thereby obtaining the local motion pattern of the face;
classifying the obtained local motion pattern of the face with a pre-configured pattern classifier, and verifying, according to the classification result, whether the face in the video image is genuine.
Further, the face region is obtained by a face detector or specified manually.
Further, analyzing the face region to determine each face key point within it comprises:
determining the position of each face key point in the face region from the position of the face region and predefined initial key point positions;
extracting, for each face key point, the image features at the corresponding position in the video image;
updating, according to these image features and a pre-configured algorithm model, the positions of the face key points in the video image;
stopping the above process once a preset condition is met.
Further, analyzing the face key points according to the motion direction and amplitude of the pixels, and determining the motion direction and amplitude within the local area around each face key point, comprises:
accurately dividing the head region in the video image according to the positions of the precisely located face key points, and determining the corresponding image mask of the head region;
extracting, from the image mask and the motion direction and amplitude of the pixels, the motion direction and amplitude of the head region and the non-head region within the local area around each key point.
Further, accurately dividing the head region in the video image according to the key point positions and determining the corresponding image mask comprises:
determining the face envelope corresponding to the positions of the precisely located face key points, and taking the region enclosed by this envelope as the face region of the video image;
mirroring the face envelope about the line connecting its two endpoints, joining the envelope with its mirror image to obtain a closed curve, and taking the region enclosed by this curve as the head region of the video image;
determining the image masks of the face region and the head region of the video image from their respective positions.
Provided that precise face and head boundaries can be obtained, the number and positions of the required key points can be chosen arbitrarily.
Further, extracting the motion direction and amplitude of the head region and the non-head region within the local area around each key point, from the image mask and the motion direction and amplitude of the pixels, comprises:
determining the local area corresponding to each key point according to a pre-configured local-area size parameter;
marking the pixels of the local area falling inside the head region as the foreground area, and those falling outside it as the background area, according to the image mask;
accumulating, from the motion direction and amplitude of the pixels, the motion direction and amplitude of the foreground and background areas of the local area.
Further, computing the relations between the motion directions and amplitudes of the different regions from the motion direction and amplitude within the local area around each key point, to obtain the local motion pattern of the face, comprises:
computing, from the motion direction and amplitude of the foreground and background within each key point's local area, the relations of motion direction and amplitude between local foreground regions, between local background regions, and between local foreground and background regions;
determining the local motion pattern of the face from these computed relations.
Further, computing the relations of motion direction and amplitude between local foreground regions, between local background regions, and between local foreground and background regions comprises:
quantizing the motion directions into a number of intervals, based on the motion direction and amplitude of the foreground and background areas, and accumulating the motion amplitudes of the pixels within each local area to obtain a motion-information histogram;
determining, from these histograms, the correlation coefficient between the histograms of any two local areas and the ratio between their motion amplitudes.
Further, determining the local motion pattern of the face from the computed relations comprises:
combining the correlation coefficients and motion-amplitude ratios between all local areas to obtain the local motion pattern of the face.
The beneficial effects of the invention are as follows: by accurately localizing the face and head regions, the method extracts highly discriminative local motion patterns of the face and can quickly and effectively distinguish real from fake face images. It remedies the inability of existing methods to accurately extract face and head motion, while using local motion pattern information that represents the motion state of the face more efficiently. The method is largely unaffected by the capture environment and the quality of the capture device, and likewise largely unaffected by how realistic the forged photo is or how much the forged face is deformed; it can effectively distinguish a real face in front of the camera from a forged one.
Accompanying drawing explanation
To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the application; those of ordinary skill in the art can derive other drawings from them without creative work.
Fig. 1 is a flowchart of the face anti-spoofing method based on local motion patterns provided by an embodiment of the invention;
Fig. 2 is a flowchart of face key point localization using a cascaded boosted regression model in the face anti-spoofing method provided by an embodiment of the invention;
Fig. 3 is a flowchart of the face local motion pattern extraction of the face anti-spoofing method provided by an embodiment of the invention.
Embodiment
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on these embodiments fall within the protection scope of the invention.
The face anti-spoofing method based on local motion patterns described in this embodiment, shown in the flowchart of Fig. 1, comprises the following steps:
Step 1: analyze the video image gathered in advance to determine the face region, and analyze the face region to determine each face key point within it; the face region is obtained by a face detector or specified manually.
Face key points (face landmarks) are positions with a definite semantic meaning, mainly on the cheeks, eyes, eyebrows, nose, and mouth. After a face detection algorithm locates the face region in the image, various methods can be used to localize the key points. Current face key point localization methods fall into several classes; the more common ones include the Active Shape Model (ASM), the Active Appearance Model (AAM), the Constrained Local Model (CLM), and the cascaded boosted shape regression model. As shown in Fig. 2, this application chooses the cheek key points as the face key points; taking the cascaded boosted regression model as an example, the basic procedure of face key point localization is as follows:
Step 1-1: initialize the key point positions from the position of the face region: analyze the video image to determine the position of the face region, and determine the position of each face key point on the face region from the predefined initial key point positions; usually, the shape of a frontal face is used for initialization;
Step 1-2: analyze the position of each face key point on the face region and extract the image features at the corresponding positions in the video image;
Step 1-3: from these image features and the pre-configured regression model, determine refined positions of the face key points on the video image;
Step 1-4: jump back to step 1-2 and carry out the next round of regression, until a stopping condition is met.
In step 1, face detection must first be performed on the currently captured image; if no face is detected, the next frame is captured; if multiple faces are detected, the face with the largest detection box is selected for anti-spoofing analysis.
With the above key point localization, the positions {p_k = (x_k, y_k)}, k = 1, ..., K, of the K face key points are obtained; specifically, this embodiment chooses K = 17 key points in order along the cheeks.
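As a toy illustration of the cascaded regression loop in steps 1-1 to 1-4, the sketch below refines a shape by adding stage-wise updates. The feature extractors and regressors here are hypothetical stand-ins (each stage simply moves halfway toward a known target); a real system would learn the regressors from annotated face images:

```python
def cascaded_regression(shape, stages):
    """Refine a landmark shape through a cascade of regression stages.

    shape  -- flat list of landmark coordinates
    stages -- list of (extract, regress) pairs: extract computes
              shape-indexed features, regress maps them to a shape update
    """
    for extract, regress in stages:
        feats = extract(shape)
        delta = regress(feats)
        shape = [c + d for c, d in zip(shape, delta)]
    return shape

# Toy cascade: each "regressor" moves halfway toward a known target shape,
# standing in for a learned linear regressor.
target = [10.0, 20.0]                 # hypothetical single-landmark target
def make_stage():
    extract = lambda s: s             # feature = current coordinates
    regress = lambda f: [0.5 * (t - c) for t, c in zip(target, f)]
    return extract, regress

init = [0.0, 0.0]                     # mean-shape initialization
refined = cascaded_regression(init, [make_stage() for _ in range(4)])
```

After four stages the shape has closed 15/16 of the gap to the target, mirroring how each round of regression in step 1-4 refines the previous estimate.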
Step 2: from the video frames corresponding to the video image gathered in step 1, extract the motion direction and amplitude of the pixels in the current image.
The motion information of an image refers to the change in position of its pixels relative to the previous frame, or an earlier frame, captured by the camera, expressed as motion direction and motion amplitude. At present, pixel motion is mainly obtained from optical flow, a concept first proposed around 1950 by Gibson and others; it describes the motion produced by moving foreground targets in the scene, by camera motion, or by the combination of both. Several optical flow algorithms exist, such as the Lucas-Kanade algorithm, the Horn-Schunck algorithm, and the polynomial-expansion method proposed by Gunnar Farneback; the first extracts sparse optical flow, while the latter two compute dense optical flow.
The application adopts the Gunnar Farneback algorithm; given the current frame and the previous frame, it computes the motion pattern of every pixel of the current frame. For the i-th pixel, its motion pattern is expressed as (u_i, v_i), where u_i is the motion amplitude along the x direction of the image coordinate system and v_i the motion amplitude along the y direction.
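In practice the dense Farneback flow can be computed with an off-the-shelf library (for example OpenCV's calcOpticalFlowFarneback). The conversion of a per-pixel motion vector (u_i, v_i) into the direction and amplitude used by the later steps can be sketched as follows (a minimal illustration, not the patented implementation):

```python
import math

def flow_to_polar(u, v):
    """Convert a per-pixel flow vector (u, v) to (direction_deg, magnitude).

    Direction is in [0, 360) degrees in image coordinates; magnitude is
    the Euclidean length of the motion vector.
    """
    magnitude = math.hypot(u, v)
    direction = math.degrees(math.atan2(v, u)) % 360.0
    return direction, magnitude

# Motion of one pixel between two frames: 3 px right, 4 px down.
d, m = flow_to_polar(3.0, 4.0)
```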
Step 3: based on the pixel motion directions and amplitudes computed via optical flow in step 2, analyze the face key points, determine the motion direction and amplitude within each key point's local area, and from these determine the relations between the directions and amplitudes, obtaining the local motion pattern of the face; that is, extract the motion information of the local areas around the 17 face key points of step 1. As shown in Fig. 3, for accurate anti-spoofing the motion pattern is extracted in the following concrete steps:
Step 3-1: according to the positions of the precisely located face key points, accurately divide the face region and head region of the video image and determine their image masks, as follows:
Step 3-1-1: determine the face envelope corresponding to the key point positions, and take the region it encloses as the face region of the video image;
Step 3-1-2: mirror the face envelope obtained in step 3-1-1 about the line connecting its two endpoints, and join the envelope with its mirror image to obtain a closed curve; the region enclosed by this curve is the head region of the video image;
Step 3-1-3: determine the image masks of the face region and head region of the video image from their respective positions.
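The mirroring construction of step 3-1-2 can be sketched as follows: each interior envelope point is reflected across the line through the two envelope endpoints, and the reflected points are appended to close the contour. This is a minimal geometric sketch with illustrative function names; rasterizing the closed contour into a mask (e.g. with a polygon-fill routine) is left out:

```python
def reflect_across_line(p, a, b):
    """Reflect point p across the line through a and b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = ((px - ax) * dx + (py - ay) * dy) / denom   # projection parameter
    fx, fy = ax + t * dx, ay + t * dy               # foot of perpendicular
    return 2 * fx - px, 2 * fy - py

def close_head_contour(envelope):
    """Join a cheek envelope with its mirror image about the line through
    its two endpoints, yielding a closed head contour."""
    a, b = envelope[0], envelope[-1]
    mirrored = [reflect_across_line(p, a, b) for p in envelope[-2:0:-1]]
    return envelope + mirrored

# Toy envelope: endpoints on the x-axis, one cheek point below it.
contour = close_head_contour([(-1.0, 0.0), (0.0, -1.0), (1.0, 0.0)])
```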
Step 3-2: based on the image masks obtained in step 3-1 and the motion direction and amplitude of the pixels, extract the motion direction and amplitude of the foreground and background areas within the local area around each key point; for key point k, the concrete steps are as follows:
Step 3-2-1: determine the local area of each key point from the pre-configured local-area size parameter; with W and H denoting the width and height of the face region, the local rectangle R_k centered on key point k is 0.2 × W wide and 0.2 × H high;
Step 3-2-2: compute the optical flow direction and amplitude of all pixels of the rectangle, expressed as {(u_i, v_i)};
Step 3-2-3: within the local area enclosed by rectangle R_k, determine which pixels fall inside and outside the face and head region, defining them as the foreground and background sets F_k and B_k respectively;
Step 3-2-4: accumulate the motion information of F_k and B_k separately: first, uniformly quantize the optical flow direction (0° to 360°) into 18 intervals; then accumulate the sum of the optical flow amplitudes of the pixels falling in each interval;
this yields two 18-dimensional histograms, denoted h_k^F and h_k^B.
With this method, the motion information at the 17 face key points, i.e. the motion directions and amplitudes, is obtained; this information is then used to extract the local motion pattern.
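The quantization of step 3-2-4 can be sketched as a magnitude-weighted direction histogram. This is an illustrative sketch that assumes the per-pixel flow has already been converted to (direction in degrees, magnitude) pairs:

```python
def motion_histogram(flows, bins=18):
    """Accumulate a magnitude-weighted flow-direction histogram.

    flows -- iterable of (direction_deg, magnitude) pairs for the pixels
             of one local foreground or background area.
    Directions in [0, 360) are uniformly quantized into `bins` intervals
    (20 degrees each for 18 bins), and each pixel's flow magnitude is
    added to its interval's total.
    """
    hist = [0.0] * bins
    width = 360.0 / bins
    for direction, magnitude in flows:
        hist[int(direction // width) % bins] += magnitude
    return hist

# Three pixels: two moving near 0-20 degrees, one near 200 degrees.
h = motion_histogram([(10.0, 2.0), (15.0, 1.0), (200.0, 4.0)])
```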
Step 3-3: based on the motion direction and amplitude of the foreground and background at each key point extracted in step 3-2, compute the relations of motion direction and amplitude between local foreground regions, between local background regions, and between local foreground and background regions, thereby obtaining the local motion pattern of the current face, as follows:
Step 3-3-1: compute the correlation coefficient between any two of the histograms extracted from the local foreground or background areas of the key points;
Step 3-3-2: compute the ratio between any two of the motion amplitudes extracted from the local foreground or background areas of the key points;
Step 3-3-3: combine all correlation coefficients from step 3-3-1 and all motion-amplitude ratios from step 3-3-2 as the local motion pattern of the current face.
Step 3-2 yields 34 histograms of 18 dimensions each, representing the local motion information at the key points; the invention then quantifies the local motion pattern of the face by computing the pairwise correlation coefficients and amplitude ratios of these 34 histograms. In step 3-3-1, for any two histograms expressed as vectors h_a and h_b, the correlation coefficient is computed as:
ρ(h_a, h_b) = Σ_i (h_a(i) − m_a)(h_b(i) − m_b) / √( Σ_i (h_a(i) − m_a)² · Σ_i (h_b(i) − m_b)² )   (1)
where m_a and m_b are the means of h_a and h_b respectively. This formula yields 34 × 33 / 2 = 561 correlation coefficients; likewise, step 3-3-2 yields 561 amplitude ratios, where the amplitude of a histogram is the average optical flow amplitude of its pixels, i.e. the sum of the flow amplitudes of all pixels in the area divided by the number of pixels. Together, the correlation coefficients and amplitude ratios form a 1122-dimensional feature representing the local motion pattern of the face.
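The feature construction of steps 3-3-1 to 3-3-3 can be sketched as follows. This is a small illustration: with the embodiment's 34 histograms the output would be 1122-dimensional, while the 3 toy histograms here give 6 dimensions; the guard against division by zero in the ratio is an added assumption:

```python
import math

def correlation(h1, h2):
    """Pearson correlation coefficient between two histograms, as in (1)."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = math.sqrt(sum((a - m1) ** 2 for a in h1) *
                    sum((b - m2) ** 2 for b in h2))
    return num / den if den else 0.0

def local_motion_pattern(hists, amplitudes):
    """Concatenate pairwise histogram correlations and amplitude ratios."""
    feat = []
    for i in range(len(hists)):
        for j in range(i + 1, len(hists)):
            feat.append(correlation(hists[i], hists[j]))
    for i in range(len(amplitudes)):
        for j in range(i + 1, len(amplitudes)):
            feat.append(amplitudes[i] / max(amplitudes[j], 1e-8))
    return feat

hists = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
feat = local_motion_pattern(hists, [2.0, 1.0, 1.0])
```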
Step 4: after the local motion pattern of the face is obtained in step 3, classify it with the pre-configured pattern classifier, and verify from the classification result whether the face in the video image is genuine.
The pattern classifier judges the authenticity of the currently captured face image: once the local motion pattern, i.e. the 1122-dimensional feature vector, is extracted from the current face image, a pre-trained Support Vector Machine (SVM) classification model judges whether the current input image is genuine.
In step 4, the SVM classification model must be trained in advance. To this end, video sequences of 20 real faces and 20 forged faces, each 30 s long, are captured with a camera. When capturing the real-face sequences, the subjects are asked to move their heads and faces slightly, e.g. shaking the head, nodding, smiling, or speaking. The forged-face sequences fall into two classes: sequences captured from printed photos, and sequences captured from a tablet computer's display. During capture, the forged face may be static, may move in arbitrary ways, or may be twisted and deformed.
After these video sequences are acquired, the local motion pattern features of the face regions are extracted from them by the same steps 1, 2, and 3, and a two-class classifier is trained with a linear SVM.
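At verification time a trained linear SVM reduces to a linear decision function over the feature vector; the sketch below illustrates this. The weight vector and bias would come from the training described above (for example via a liblinear-style solver), and the sign convention mapping the score to "real" versus "fake" is an assumption:

```python
def svm_decision(w, b, x):
    """Decision function of a trained linear SVM.

    The face is judged real when w.x + b >= 0, fake otherwise
    (sign convention assumed; w and b are hypothetical trained values).
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "real" if score >= 0 else "fake"

# Toy 2-dimensional stand-in for the 1122-dimensional feature vector.
label1 = svm_decision([1.0, -1.0], 0.0, [2.0, 1.0])
label2 = svm_decision([1.0, -1.0], 0.0, [0.0, 1.0])
```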
The invention is not limited to the above preferred embodiment; anyone may derive other products in various forms under its teaching, but any technical solution identical or similar to that of this application, regardless of changes in shape or structure, falls within the protection scope of the invention.

Claims (9)

1. A face anti-spoofing method based on local motion patterns, characterized by comprising:
analyzing a video image gathered in advance to determine the face region, and analyzing the face region to determine each face key point within it;
obtaining the motion direction and amplitude of the pixels in the video image from the video frames corresponding to it;
analyzing the face key points according to the motion direction and amplitude of the pixels, determining the motion direction and amplitude within the local area around each face key point, and from this information determining the relations between the motion directions and between the amplitudes of the local areas, thereby obtaining the local motion pattern of the face;
classifying the obtained local motion pattern of the face with a pre-configured pattern classifier, and verifying, according to the classification result, whether the face in the video image is genuine.
2. The face anti-counterfeiting method based on local motion patterns according to claim 1, characterized in that said face region is obtained by a face detector or is specified manually.
3. The face anti-counterfeiting method based on local motion patterns according to claim 1, characterized in that analyzing said face region to determine each face key point within said face region comprises:
determining the position of each face key point within said face region from the position of said face region and the predefined initial position information of the face key points;
extracting, according to the position of each face key point within said face region, the video image features corresponding to the positions of said face key points on said video image;
updating, according to said video image features and by means of a pre-configured algorithm model, the positions of the face key points corresponding to said face region on said video image;
stopping the above process once a pre-set condition is met.
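The extract-features/update-positions/stop loop of claim 3 has the shape of an iterative refinement (cascaded-regression-style key-point alignment). The generic loop below illustrates only that control flow; the "pre-trained regression stage" is stood in for by a hypothetical `step_fn` that moves points halfway toward a target, since the actual algorithm model is not given in the claim.

```python
import numpy as np

def refine_keypoints(init_pts, step_fn, tol=1e-3, max_iter=20):
    """Iteratively update key-point positions: step_fn plays the role of a
    pre-trained stage mapping current positions (via image features) to a
    position update; iteration stops when the update is small enough
    (the claim's 'pre-set condition')."""
    pts = init_pts.copy()
    for _ in range(max_iter):
        delta = step_fn(pts)
        pts = pts + delta
        if np.linalg.norm(delta) < tol:
            break
    return pts

# Hypothetical two key points; each "stage" moves 50% of the way to target.
target = np.array([[10.0, 12.0], [20.0, 18.0]])
step = lambda p: 0.5 * (target - p)
refined = refine_keypoints(np.zeros((2, 2)), step)
```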
4. The face anti-counterfeiting method based on local motion patterns according to claim 1, characterized in that analyzing said face key points according to the obtained motion direction and amplitude information of the pixels, and determining the motion direction and amplitude information within the local regions where said face key points lie, comprises:
accurately partitioning the head region in said video image according to the positions of said accurate face key points, and determining the corresponding image mask of the head region in said video image;
extracting, according to said image mask and the obtained motion direction and amplitude information of the pixels, the motion direction and amplitude information of the head region and of the non-head region within the local region around each accurate face key point.
5. The face anti-counterfeiting method based on local motion patterns according to claim 4, characterized in that accurately partitioning the head region in said video image according to the positions of said face key points and determining the corresponding image mask comprises:
determining, from the positions of said accurate face key points, the face envelope corresponding to those positions, and taking the region enclosed by this face envelope as the face region of said video image;
mirroring said face envelope about the line connecting its two ends in said video image, and joining said face envelope with its mirror image to obtain a closed curve, the region enclosed by said curve being taken as the head region of said video image;
determining, from the positions of the face region and of the head region of said video image, the corresponding image masks of the face region and of the head region of said video image.
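The mirroring in claim 5 reflects the face envelope across the line through its two endpoints, so the envelope plus its mirror image close into an approximate head contour. A small numerical sketch with hypothetical envelope coordinates (the five points below are invented for illustration):

```python
import numpy as np

def mirror_points(points, a, b):
    """Reflect 2-D points across the line through endpoints a and b."""
    d = (b - a) / np.linalg.norm(b - a)      # unit direction of the line
    rel = points - a
    foot = a + np.outer(rel @ d, d)          # foot of perpendicular per point
    return 2.0 * foot - points               # point reflection about the foot

# Hypothetical lower-face envelope, left temple to right temple (chin dips down).
envelope = np.array([[0.0, 0.0], [1.0, -1.5], [2.0, -2.0], [3.0, -1.5], [4.0, 0.0]])
mirrored = mirror_points(envelope, envelope[0], envelope[-1])

# Closed head contour: envelope, then the mirror image traversed in reverse
# (shared endpoints dropped).  Rasterizing this polygon yields the head mask.
head_contour = np.vstack([envelope, mirrored[::-1][1:-1]])
```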
6. The face anti-counterfeiting method based on local motion patterns according to claim 4, characterized in that extracting, according to said image mask and the obtained motion direction and amplitude information of the pixels, the motion direction and amplitude information of the head region and of the non-head region within the local region around each accurate face key point comprises:
determining the local region corresponding to each accurate face key point according to pre-configured parameters for the local region size;
calibrating, according to said image mask, the pixels of said local region that fall within the head region as the foreground region, and the pixels of said local region that fall outside the head region as the background region;
accumulating, according to the obtained motion direction and amplitude information of the pixels, the respective motion direction and amplitude information of the foreground and background regions within said local region.
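The foreground/background split of claim 6 can be sketched as follows: take a fixed-size window around a key point and partition its pixels by the head mask. The window half-size and the toy mask geometry are assumptions for illustration.

```python
import numpy as np

def split_local_region(amp, ang, head_mask, cx, cy, half=8):
    """Within a (2*half) x (2*half) window centred on key point (cx, cy),
    separate pixel motion into head (foreground) and non-head (background)
    samples using the head-region mask."""
    win = (slice(cy - half, cy + half), slice(cx - half, cx + half))
    m = head_mask[win].astype(bool)
    fg = (ang[win][m], amp[win][m])       # motion inside the head region
    bg = (ang[win][~m], amp[win][~m])     # motion outside the head region
    return fg, bg

# Toy 32x32 motion field; head occupies the left half of the frame.
amp = np.ones((32, 32))
ang = np.zeros((32, 32))
mask = np.zeros((32, 32), dtype=np.uint8)
mask[:, :16] = 1
(fg_ang, fg_amp), (bg_ang, bg_amp) = split_local_region(amp, ang, mask, 16, 16)
```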
7. The face anti-counterfeiting method based on local motion patterns according to claim 6, characterized in that calculating, from the motion direction and amplitude information of the local regions around said face key points, the relations between the motion directions and amplitudes of the different regions, so as to obtain the local motion pattern of the face, comprises:
calculating, from the motion direction and amplitude information of the foreground and background within the local region around each face key point, the relations of motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions;
determining the local motion pattern of the face from the calculated relations of motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions.
8. The face anti-counterfeiting method based on local motion patterns according to claim 7, characterized in that calculating, from the motion direction and amplitude information of the foreground and background within the local region around each face key point, the relations of motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions comprises:
quantizing the motion directions into a number of intervals, based on the motion direction and amplitude information of the foreground and background within said local regions, and accumulating the motion amplitudes of the pixels in each local region to obtain a motion-information histogram;
determining, from said motion-information histograms, the correlation coefficient between the motion-information histograms of any two of said local regions and the ratio between their motion amplitudes.
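A minimal numpy sketch of claim 8's features: an amplitude-weighted direction histogram per region, then the correlation coefficient and amplitude ratio between two regions. The bin count of 8 is an assumption (the claim only says "some intervals").

```python
import numpy as np

def motion_histogram(direction, amplitude, bins=8):
    """Quantize directions into bins over [0, 2*pi) and accumulate the
    motion amplitude of the pixels falling in each bin."""
    hist, _ = np.histogram(direction, bins=bins, range=(0.0, 2.0 * np.pi),
                           weights=amplitude)
    return hist

def region_relation(dir1, amp1, dir2, amp2, bins=8):
    """Relation between two local regions: histogram correlation
    coefficient and total motion-amplitude ratio."""
    h1 = motion_histogram(dir1, amp1, bins)
    h2 = motion_histogram(dir2, amp2, bins)
    corr = np.corrcoef(h1, h2)[0, 1]
    ratio = amp1.sum() / max(amp2.sum(), 1e-8)  # guard against a static region
    return corr, ratio

# Sanity check: a region compared with itself is perfectly correlated.
rng = np.random.default_rng(1)
d = rng.uniform(0.0, 2.0 * np.pi, 100)
a = rng.uniform(0.5, 1.5, 100)
corr, ratio = region_relation(d, a, d, a)
```

For a real face, foreground and background histograms tend to decorrelate (the head moves independently of the scene), whereas a replayed photo or screen moves as one rigid plane, which is what the classifier of claim 1 exploits.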
9. The face anti-counterfeiting method based on local motion patterns according to claim 8, characterized in that determining the local motion pattern of the face from the calculated relations of motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions comprises:
combining, according to said correlation coefficients and said motion amplitude ratios, the correlation coefficients and motion amplitude ratios between all local regions, so as to obtain the local motion pattern of the face.
CN201410428040.6A 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode Expired - Fee Related CN105447432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410428040.6A CN105447432B (en) 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode

Publications (2)

Publication Number Publication Date
CN105447432A true CN105447432A (en) 2016-03-30
CN105447432B CN105447432B (en) 2019-09-13

Family

ID=55557594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410428040.6A Expired - Fee Related CN105447432B (en) 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode

Country Status (1)

Country Link
CN (1) CN105447432B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169827A (en) * 2007-12-03 2008-04-30 北京中星微电子有限公司 Method and device for tracking characteristic point of image
US20130243274A1 (en) * 2012-03-15 2013-09-19 Hiroshi Sukegawa Person Image Processing Apparatus and Person Image Processing Method
CN102750518A (en) * 2012-05-30 2012-10-24 深圳光启创新技术有限公司 Face verification system and method based on visible light communications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
余棉水等: "基于光流的动态人脸表情识别", 《微电子学与计算机》 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228137A (en) * 2016-07-26 2016-12-14 广州市维安科技股份有限公司 A kind of ATM abnormal human face detection based on key point location
US11080517B2 (en) 2017-03-16 2021-08-03 Beijing Sensetime Technology Development Co., Ltd Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
US11482040B2 (en) 2017-03-16 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
WO2018166515A1 (en) * 2017-03-16 2018-09-20 北京市商汤科技开发有限公司 Anti-counterfeiting human face detection method and system, electronic device, program and medium
CN107358155A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 Method and device for detecting ghost face action and method and system for recognizing living body
CN107688781A (en) * 2017-08-22 2018-02-13 北京小米移动软件有限公司 Face identification method and device
CN107643826A (en) * 2017-08-28 2018-01-30 天津大学 A kind of unmanned plane man-machine interaction method based on computer vision and deep learning
CN108537131B (en) * 2018-03-15 2022-04-15 中山大学 Face recognition living body detection method based on face characteristic points and optical flow field
CN108537131A (en) * 2018-03-15 2018-09-14 中山大学 A kind of recognition of face biopsy method based on human face characteristic point and optical flow field
CN108846321A (en) * 2018-05-25 2018-11-20 北京小米移动软件有限公司 Identify method and device, the electronic equipment of face prosthese
CN108846321B (en) * 2018-05-25 2022-05-03 北京小米移动软件有限公司 Method and device for identifying human face prosthesis and electronic equipment
CN109583391B (en) * 2018-12-04 2021-07-16 北京字节跳动网络技术有限公司 Key point detection method, device, equipment and readable medium
CN109583391A (en) * 2018-12-04 2019-04-05 北京字节跳动网络技术有限公司 Critical point detection method, apparatus, equipment and readable medium
CN109766785A (en) * 2018-12-21 2019-05-17 中国银联股份有限公司 A kind of biopsy method and device of face
CN109766785B (en) * 2018-12-21 2023-09-01 中国银联股份有限公司 Living body detection method and device for human face
CN110223322A (en) * 2019-05-31 2019-09-10 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
CN111460419A (en) * 2020-03-31 2020-07-28 周亚琴 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN111626101A (en) * 2020-04-13 2020-09-04 惠州市德赛西威汽车电子股份有限公司 Smoking monitoring method and system based on ADAS
CN111611873A (en) * 2020-04-28 2020-09-01 平安科技(深圳)有限公司 Face replacement detection method and device, electronic equipment and computer storage medium
CN111611873B (en) * 2020-04-28 2024-07-16 平安科技(深圳)有限公司 Face replacement detection method and device, electronic equipment and computer storage medium
CN112287909B (en) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements
CN112287909A (en) * 2020-12-24 2021-01-29 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements

Also Published As

Publication number Publication date
CN105447432B (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN105447432A (en) Face anti-fake method based on local motion pattern
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
US10672140B2 (en) Video monitoring method and video monitoring system
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN108182409B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN109598242B (en) Living body detection method
US10943095B2 (en) Methods and systems for matching extracted feature descriptors for enhanced face recognition
CN104036236B (en) A kind of face gender identification method based on multiparameter exponential weighting
CN105205480B (en) Human-eye positioning method and system in a kind of complex scene
WO2018119668A1 (en) Method and system for recognizing head of pedestrian
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN105205455A (en) Liveness detection method and system for face recognition on mobile platform
CN102332095A (en) Face motion tracking method, face motion tracking system and method for enhancing reality
CN105243376A (en) Living body detection method and device
CN108416291B (en) Face detection and recognition method, device and system
Phimoltares et al. Face detection and facial feature localization without considering the appearance of image context
CN102622584A (en) Method for detecting mask faces in video monitor
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN104008364A (en) Face recognition method
CN103544478A (en) All-dimensional face detection method and system
Wang et al. An intelligent recognition framework of access control system with anti-spoofing function
JP2013228847A (en) Facial expression analyzing device and facial expression analyzing program
CN106156739B (en) A kind of certificate photo ear detection and extracting method based on face mask analysis
KR101344851B1 (en) Device and Method for Processing Image
CN104573628A (en) Three-dimensional face recognition method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160531

Address after: 425000 Yongzhou City, Hunan province Lengshuitan District Fushan Road Pearl Street No. 127

Applicant after: Yang Jianwei

Address before: 100084 B1 floor, block A, Wan Lin Building, No. 88, Nongda South Road, Beijing, Haidian District

Applicant before: QIANSOU INC.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190913

Termination date: 20210827
