CN104615997A - Human face anti-fake method based on multiple cameras - Google Patents

Human face anti-fake method based on multiple cameras Download PDF

Info

Publication number
CN104615997A
CN104615997A CN201510080965.0A
Authority
CN
China
Prior art keywords
face
camera
counterfeit
symmetrical
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510080965.0A
Other languages
Chinese (zh)
Other versions
CN104615997B (en)
Inventor
赵启军 (Zhao Qijun)
陈虎 (Chen Hu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Chuanda Zhisheng Software Co Ltd
Wisesoft Co Ltd
Original Assignee
Sichuan Chuanda Zhisheng Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Chuanda Zhisheng Software Co Ltd filed Critical Sichuan Chuanda Zhisheng Software Co Ltd
Priority to CN201510080965.0A priority Critical patent/CN104615997B/en
Publication of CN104615997A publication Critical patent/CN104615997A/en
Application granted granted Critical
Publication of CN104615997B publication Critical patent/CN104615997B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Abstract

The invention relates to the field of face recognition, and in particular to a face anti-fake method based on multiple cameras. Multiple cameras are arranged to capture images from different angles, and the images taken by the different cameras are compared in terms of a face view-angle feature to achieve anti-spoofing for face recognition. The hardware configuration is simple, no calibration is required, and the method is easy to integrate with existing face recognition systems; at the same time, multiple facial features can be fused in the judgment, which effectively improves the anti-fake accuracy.

Description

Face anti-fake method based on multiple cameras
Technical field
The present invention relates to the field of face recognition, and in particular to a face anti-fake method based on multiple cameras.
Background art
Current two-dimensional face recognition technology is widely used in applications such as attendance checking and access control. In practice, however, a two-dimensional face recognition system faces difficult problems, such as being cheated with a photo or a video of a face. Anti-fake methods for face recognition systems arose to address this problem.
Conventional anti-fake methods for face recognition systems fall into two broad classes: those using additional hardware and those using software algorithms. Methods using additional hardware generally add an extra near-infrared sensor to judge whether the object to be identified is a real face.
Methods using software algorithms distinguish a real face from a forged one (a photo and the like) by analysing features of the image or video of the object to be identified, for example by detecting blinking, expression changes, or other behaviour. These methods either introduce entirely separate extra devices, adding complexity and cost to the system, or depend on features of a single image or a continuously captured image sequence, and may even require the user to perform certain actions on instruction, which causes inconvenience in use and also affects the anti-fake precision.
Summary of the invention
The object of the present invention is to overcome the problems that current multi-camera face anti-fake methods rely on additional hardware or on the user performing instructed actions, and to provide a face anti-fake method based on multiple cameras that further improves both the anti-fake capability and the ease of use of a face recognition system. The method comprises the following steps:
(1) Arrange n cameras symmetrically and capture face images with them, where n is a natural number not less than 2.
(2) Perform facial feature point localisation on the face image captured by each camera, obtaining the coordinates of each facial feature point.
(3) Select any three facial feature points that are not collinear, the first with coordinates (x_1, y_1), the second (x_2, y_2), the third (x_3, y_3), and compute the face view-angle feature dβ_i of each image by the following formulas, where arctan denotes the two-argument arctangent:
β_1 = arctan(y_2 - y_1, x_2 - x_1) × 180/π;
β_2 = arctan(y_3 - (y_1 + y_2)/2, x_3 - (x_1 + x_2)/2) × 180/π;
dβ_i = |β_2 - β_1|.
(4) Compare the face view-angle features of the face images captured by symmetric camera pairs to derive the face view-angle feature difference Δ, the mean of |dβ_i - dβ_j| over the symmetric camera pairs.
(5) Compare Δ with a preset threshold T_Δ: when Δ > T_Δ, the face is judged to be real; otherwise it is judged to be fake.
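Steps (1) to (5) can be sketched as follows. This is a minimal illustration, not the patented implementation: the landmark coordinates, the camera pairing, and the threshold value are assumed inputs, and the two-argument arctangent is Python's `math.atan2`.

```python
import math

def view_angle_feature(p1, p2, p3):
    """View-angle feature d_beta for one image, following step (3):
    p1 and p2 are the two pupil centres, p3 the nose tip (any three
    non-collinear landmarks work).  Angles are in degrees."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    beta1 = math.atan2(y2 - y1, x2 - x1) * 180 / math.pi
    beta2 = math.atan2(y3 - (y1 + y2) / 2, x3 - (x1 + x2) / 2) * 180 / math.pi
    return abs(beta2 - beta1)

def is_real_face(landmarks_per_camera, pairs, t_delta):
    """Steps (4)-(5): average |d_beta_i - d_beta_j| over the symmetric
    camera pairs and compare the result with the threshold T_delta."""
    d = [view_angle_feature(*lm) for lm in landmarks_per_camera]
    delta = sum(abs(d[i] - d[j]) for i, j in pairs) / len(pairs)
    return delta > t_delta
```

For a flat photo the landmark configuration changes little between viewpoints, so Δ stays small; a real, three-dimensional face yields a larger Δ, which is the cue the threshold test exploits.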
Further, the facial feature points comprise the nose tip, the mouth corners, the pupil centres, and the eye corners.
Preferably, the three facial feature points selected in step (3) are the two pupil centres and the nose tip.
Further, the method also comprises judging real versus fake by a facial feature similarity S, as follows:
(6) Extract a texture feature value T_i ∈ R^m from the face image captured by each camera, where i is the camera index and m is the feature dimension.
(7) Compute the face similarity S_k between the face image captured by each camera and that captured by the camera in the position symmetric to it, using either the chi-square distance
S_k = Σ_{h=1}^{m} (T_{i,h} - T_{j,h})^2 / (T_{i,h} + T_{j,h})
or the Minkowski distance
S_k = ( Σ_{h=1}^{m} |T_{i,h} - T_{j,h}|^p )^(1/p),
where T_i and T_j are the texture feature values of the two position-symmetric cameras, k ranges from 1 to ⌊n/2⌋, and the value of p is 1 or 2: when p = 2 the distance is the Euclidean distance, and when p = 1 it is the L1 distance.
(8) Take the mean of the computed similarities S_k as the face similarity S of this shot: S = ( Σ_{k=1}^{⌊n/2⌋} S_k ) / ⌊n/2⌋, where ⌊n/2⌋ denotes n/2 rounded down.
(9) When S > T_S, the face is judged to be fake; otherwise it is judged to be real, where T_S is a preset facial feature similarity threshold.
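The similarity of steps (6) to (9) can be sketched as below. The formula in the translated text is garbled; this sketch uses the Minkowski-distance reading (p = 2 Euclidean, p = 1 L1), and the function names and camera-pair encoding are assumptions for illustration.

```python
import numpy as np

def pair_similarity(t_i, t_j, p=2):
    """S_k between the texture features of two position-symmetric cameras,
    as a Minkowski distance: p = 2 gives the Euclidean distance, p = 1
    the L1 distance."""
    t_i = np.asarray(t_i, dtype=float)
    t_j = np.asarray(t_j, dtype=float)
    return float(np.sum(np.abs(t_i - t_j) ** p) ** (1.0 / p))

def shot_similarity(features, pairs, p=2):
    """Step (8): mean of S_k over the floor(n/2) symmetric camera pairs."""
    s = [pair_similarity(features[i], features[j], p) for i, j in pairs]
    return sum(s) / len(s)
```

The step (9) decision is then simply `is_fake = shot_similarity(features, pairs) > t_s` for a preset threshold `t_s`.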
Further, in step (6), the texture feature value is obtained by taking an initial value with local binary patterns (LBP), histogram-of-oriented-gradients (HOG) patterns, or Gabor filter patterns, and then subjecting that initial value to dimensionality reduction or transformation via principal component analysis (PCA) or linear discriminant analysis (LDA).
In some embodiments, when Δ > T_Δ and S ≤ T_S, the face is judged to be a real face; otherwise it is judged to be a forged face.
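The texture descriptors named here (LBP, HOG, Gabor) are standard. As one hedged example, a bare-bones 8-neighbour LBP histogram can be computed as follows; a production system would more likely use a library routine such as `skimage.feature.local_binary_pattern` and then apply PCA or LDA, as the text suggests.

```python
import numpy as np

def lbp_histogram(img):
    """Minimal 8-neighbour local binary pattern histogram: each interior
    pixel gets an 8-bit code, one bit per neighbour that is >= the centre,
    and the normalised 256-bin code histogram is the texture feature."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()                  # normalised 256-bin descriptor
```

The resulting 256-dimensional vector plays the role of the initial texture feature value T_i before any PCA/LDA step.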
In other embodiments, when α(Δ - T_Δ) + β(T_S - S) > T, the face is judged to be a real face; otherwise it is judged to be a forged face, where α and β are constants satisfying α + β = 1, and T is a preset threshold.
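The weighted-fusion rule above is a one-liner in code; the default weights and threshold below are illustrative assumptions, since the patent only requires α + β = 1 and a preset T.

```python
def fused_decision(delta, s, t_delta, t_s, alpha=0.5, beta=0.5, t=0.0):
    """Weighted fusion of the two cues:
    alpha*(delta - T_delta) + beta*(T_S - s) > T  ->  real face.
    Default weights and threshold are illustrative assumptions."""
    assert abs(alpha + beta - 1.0) < 1e-9  # the patent requires alpha + beta = 1
    return alpha * (delta - t_delta) + beta * (t_s - s) > t
```

Unlike the conjunctive rule (both tests must pass), this fusion lets a strong margin on one cue compensate for a weak margin on the other.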
Further, the predetermined angle between adjacent cameras among the n cameras is 0 to 45 degrees.
In some embodiments, the n cameras are at the same horizontal height.
In other embodiments, when the number of cameras is even, the cameras are pairwise symmetric about a common axis and form symmetric groups, and the cameras of different symmetric groups are at different horizontal heights.
Further, the angles between the two cameras of different symmetric groups are identical or different.
In summary, owing to the above technical scheme, the beneficial effects of the invention are: the invention uses multiple cameras to capture images from different angles and compares the face view-angle features of the images taken by the different cameras to achieve anti-spoofing for face recognition; the hardware configuration is simple, no calibration is needed, and the method is easy to integrate with existing face recognition systems. Moreover, multiple facial feature similarities can be fused in the judgment, effectively improving the anti-fake accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the face view-angle feature anti-fake method provided by the invention.
Fig. 2 is a flow chart of the combined use of the dual anti-fake method provided in embodiment 2 of the invention.
Fig. 3 is a flow chart of the face-similarity anti-fake method within the dual anti-fake method provided in embodiment 2 of the invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings.
To make the object, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
Embodiment 1: as shown in Fig. 1, when the number of cameras is even, taking 2 cameras as an example: the 2 cameras are arranged at the same horizontal height at a predetermined angle (preferably between 0 and 45 degrees; in this embodiment the predetermined angle between the two cameras is 30 degrees) and capture face images (the capture should be symmetric with respect to the face). The method comprises the following steps of judging real versus fake by the face view-angle feature difference Δ:
S101: Perform facial feature point localisation on the face image captured by each camera, i.e. determine the coordinate position of each facial feature point, such as the nose tip, the mouth corners, the pupil centres, and the eye corners.
S102: Select the two pupil centres and the nose tip as the three facial feature points, the first pupil centre at (x_1, y_1), the second pupil centre at (x_2, y_2), and the nose tip at (x_3, y_3), and compute the face view-angle feature dβ_i by the following formulas:
Angle between the line joining the two pupil centres and the horizontal axis: β_1 = arctan(y_2 - y_1, x_2 - x_1) × 180/π;
Angle between the horizontal axis and the line joining the nose tip to the midpoint of the interpupillary line:
β_2 = arctan(y_3 - (y_1 + y_2)/2, x_3 - (x_1 + x_2)/2) × 180/π;
Face view-angle feature: dβ_i = |β_2 - β_1|.
S103: Compare the face view-angle features dβ_1 and dβ_2 of the face images captured by the two cameras to obtain the face view-angle feature difference Δ = |dβ_1 - dβ_2|.
S104: Compare Δ with the preset threshold T_Δ: when Δ > T_Δ, the face is judged to be real; otherwise it is judged to be fake.
Embodiment 2: when the number of cameras is odd, taking 5 cameras as an example, the differences from embodiment 1 are as follows:
The 5 cameras are arranged at the same horizontal height at a predetermined angle (preferably between 0 and 45 degrees; in this embodiment the predetermined angle between adjacent cameras is 10 degrees) and capture face images. The 5 cameras are labelled in order as camera 1, camera 2, camera 3, camera 4, and camera 5; during capture the face should directly face camera 3 in the middle, so that camera 1 is symmetric with camera 5 and camera 2 is symmetric with camera 4.
S101: Perform facial feature point localisation on the face image captured by each camera, i.e. determine the coordinate position of each facial feature point, such as the nose tip, the mouth corners, the pupil centres, and the eye corners.
S102: Select the two pupil centres and the nose tip as the three facial feature points, the first pupil centre at (x_1, y_1), the second pupil centre at (x_2, y_2), and the nose tip at (x_3, y_3), and compute the face view-angle feature dβ_i by the following formulas:
Angle between the line joining the two pupil centres and the horizontal axis: β_1 = arctan(y_2 - y_1, x_2 - x_1) × 180/π;
Angle between the horizontal axis and the line joining the nose tip to the midpoint of the interpupillary line:
β_2 = arctan(y_3 - (y_1 + y_2)/2, x_3 - (x_1 + x_2)/2) × 180/π;
Face view-angle feature: dβ_i = |β_2 - β_1|.
S103: Among the face view-angle features dβ_1, dβ_2, dβ_3, dβ_4, dβ_5 of the face images captured by the 5 cameras, compare the features of symmetric camera positions to obtain the face view-angle feature difference Δ = (|dβ_1 - dβ_5| + |dβ_2 - dβ_4|)/2; the view-angle feature of camera 3, which has no symmetric counterpart, does not take part in the calculation. Since this embodiment has only 5 cameras, the mutually symmetric cameras are camera 1 with camera 5 and camera 2 with camera 4 (during capture the cameras should be symmetric with respect to the face).
S104: Compare Δ with the preset threshold T_Δ: when Δ > T_Δ, the face is judged to be real; otherwise it is judged to be fake.
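The pairing rule of this embodiment (outermost cameras pair up; with an odd camera count the middle camera is skipped) can be sketched as follows. Camera indices are 0-based here, unlike the 1-based labels in the text, and the function names are illustrative.

```python
def symmetric_pairs(n):
    """Mirror-image camera pairs for n cameras numbered 0..n-1 along an
    arc facing the subject; with odd n the middle camera has no partner
    (embodiment 2: n = 5 gives pairs (0, 4) and (1, 3))."""
    return [(i, n - 1 - i) for i in range(n // 2)]

def view_angle_difference(d_betas):
    """Delta: mean of |d_beta_i - d_beta_j| over the symmetric pairs."""
    pairs = symmetric_pairs(len(d_betas))
    return sum(abs(d_betas[i] - d_betas[j]) for i, j in pairs) / len(pairs)
```

The same helper covers embodiments 1 and 3 as well, since n = 2 yields the single pair (0, 1) and n = 4 yields (0, 3) and (1, 2).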
Embodiment 3: when the number of cameras is even, taking 4 cameras as an example: the 4 cameras are pairwise symmetric about a common axis and form 2 symmetric groups, with the cameras of the 2 symmetric groups at different horizontal heights. The angles between the two cameras of the different symmetric groups differ: in this embodiment cameras 1 and 4 form one symmetric group with an angle of 30 degrees between them, and cameras 2 and 3 form another symmetric group with an angle of 45 degrees between them; during capture the face should directly face the common axis of symmetry of the two camera pairs. The differences from embodiment 1 are as follows:
S103: Among the face view-angle features dβ_1, dβ_2, dβ_3, dβ_4 of the face images captured by the 4 cameras, compare the features of symmetric camera positions to obtain the face view-angle feature difference Δ = (|dβ_1 - dβ_4| + |dβ_2 - dβ_3|)/2.
S104: Compare Δ with the preset threshold T_Δ: when Δ > T_Δ, the face is judged to be real; otherwise it is judged to be fake.
Embodiment 4: as shown in Fig. 2 and Fig. 3, in this embodiment, besides the steps described in embodiment 1 (referred to as the face view-angle feature calculation), the following steps compute the facial feature similarity S:
S201: Extract a texture feature value T_i ∈ R^m from the face image captured by each camera, where i is the camera index; in this embodiment the texture feature values of the face images captured by the two cameras are T_1 and T_2, m is the feature dimension, and R is the set of real numbers.
S202: Compute the face similarity S_k between the face image captured by each camera and that captured by the camera in the symmetric position. Since this embodiment has only two cameras, the computing formula is either the chi-square distance
S_k = Σ_{h=1}^{m} (T_{1,h} - T_{2,h})^2 / (T_{1,h} + T_{2,h})
or the Minkowski distance
S_k = ( Σ_{h=1}^{m} |T_{1,h} - T_{2,h}|^p )^(1/p),
where T_1 and T_2 are the texture feature values of the two position-symmetric cameras, and the value of p is 1 or 2: when p = 2 the distance is the Euclidean distance, and when p = 1 it is the L1 distance.
S203: Take the mean of the computed similarities S_k as the face similarity S of this shot; since this embodiment has only 2 cameras, S is obtained directly from S202.
S204: When S > T_S, the face is judged to be fake; otherwise it is judged to be real, where T_S is the preset facial feature similarity threshold.
In this embodiment, since the facial feature similarity calculation and the face view-angle feature calculation of embodiment 1 are both performed, a comprehensive judgment step is also included: only when both tests judge the face to be real (i.e. when Δ > T_Δ and simultaneously S ≤ T_S) is the current face judged to be a real face; otherwise it is judged to be a forged face.
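Embodiment 4's conjunctive rule (both cues must pass) reduces to a two-clause predicate; the sketch below is illustrative, with assumed parameter names.

```python
def combined_decision(delta, s, t_delta, t_s):
    """Conjunctive rule of this embodiment: the face is accepted as real
    only when BOTH cues pass, i.e. delta > T_delta and s <= T_S."""
    return delta > t_delta and s <= t_s
```

Requiring both cues makes the acceptance stricter than either test alone, which trades some false rejections for a lower spoof-acceptance rate.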
Embodiment 5: when the number of cameras is odd, taking 5 cameras as an example, besides the steps described in embodiment 2 (referred to as the face view-angle feature calculation), the following steps compute the facial feature similarity S:
S201: In this embodiment the texture feature values of the face images captured by the 5 cameras are T_1, T_2, T_3, T_4, T_5, where m is the feature dimension and R is the set of real numbers.
S202: Compute the face similarity S_k between the face image captured by each camera and that captured by the camera in the symmetric position. Since this embodiment has only 5 cameras, the mutually symmetric cameras are camera 1 with camera 5 and camera 2 with camera 4; middle camera 3 has no symmetric counterpart, so the image it captures does not take part in the similarity calculation. The computing formula is either the chi-square distance
S_k = Σ_{h=1}^{m} (T_{i,h} - T_{j,h})^2 / (T_{i,h} + T_{j,h})
or the Minkowski distance
S_k = ( Σ_{h=1}^{m} |T_{i,h} - T_{j,h}|^p )^(1/p),
where T_i and T_j are the texture feature values of two position-symmetric cameras and k ranges from 1 to ⌊n/2⌋ = 2; specifically:
S_1 = Σ_{h=1}^{m} (T_{1,h} - T_{5,h})^2 / (T_{1,h} + T_{5,h}) or ( Σ_{h=1}^{m} |T_{1,h} - T_{5,h}|^p )^(1/p);
S_2 = Σ_{h=1}^{m} (T_{2,h} - T_{4,h})^2 / (T_{2,h} + T_{4,h}) or ( Σ_{h=1}^{m} |T_{2,h} - T_{4,h}|^p )^(1/p).
The value of p is 1 or 2: when p = 2 the distance is the Euclidean distance, and when p = 1 it is the L1 distance.
S203: Take the mean of the computed similarities S_k as the face similarity S of this shot; in this embodiment S = (S_1 + S_2)/2.
S204: When S > T_S, the face is judged to be fake; otherwise it is judged to be real, where T_S is the preset facial feature similarity threshold.
In this embodiment, since the facial feature similarity calculation and the face view-angle feature calculation of embodiment 2 are both performed, a comprehensive judgment step is also included: only when both tests judge the face to be real (i.e. when Δ > T_Δ and simultaneously S ≤ T_S) is the current face judged to be a real face; otherwise it is judged to be a forged face.
Embodiment 6: when the number of cameras is even, taking 4 cameras as an example, besides the steps described in embodiment 3 (referred to as the face view-angle feature calculation), the following steps compute the facial feature similarity S:
S201: In this embodiment the texture feature values of the face images captured by the 4 cameras are T_1, T_2, T_3, T_4, where m is the feature dimension and R is the set of real numbers.
S202: Compute the face similarity S_k between the face image captured by each camera and that captured by the camera in the symmetric position. Since this embodiment has only 4 cameras, the mutually symmetric cameras are camera 1 with camera 4 and camera 2 with camera 3. The computing formula is either the chi-square distance
S_k = Σ_{h=1}^{m} (T_{i,h} - T_{j,h})^2 / (T_{i,h} + T_{j,h})
or the Minkowski distance
S_k = ( Σ_{h=1}^{m} |T_{i,h} - T_{j,h}|^p )^(1/p),
where T_i and T_j are the texture feature values of two position-symmetric cameras and k ranges from 1 to ⌊n/2⌋ = 2; specifically:
S_1 = Σ_{h=1}^{m} (T_{1,h} - T_{4,h})^2 / (T_{1,h} + T_{4,h}) or ( Σ_{h=1}^{m} |T_{1,h} - T_{4,h}|^p )^(1/p);
S_2 = Σ_{h=1}^{m} (T_{2,h} - T_{3,h})^2 / (T_{2,h} + T_{3,h}) or ( Σ_{h=1}^{m} |T_{2,h} - T_{3,h}|^p )^(1/p).
The value of p is 1 or 2: when p = 2 the distance is the Euclidean distance, and when p = 1 it is the L1 distance.
S203: Take the mean of the computed similarities S_k as the face similarity S of this shot; in this embodiment S = (S_1 + S_2)/2.
S204: When S > T_S, the face is judged to be fake; otherwise it is judged to be real, where T_S is the preset facial feature similarity threshold.
In this embodiment, since the facial feature similarity calculation and the face view-angle feature calculation of embodiment 3 are both performed, a comprehensive judgment step is also included: only when both tests judge the face to be real (i.e. when Δ > T_Δ and simultaneously S ≤ T_S) is the current face judged to be a real face; otherwise it is judged to be a forged face.
Embodiment 7: as shown in Fig. 1, Fig. 2, and Fig. 3, this embodiment differs from embodiments 4, 5, and 6 in the comprehensive judgment step, which is: when α(Δ - T_Δ) + β(T_S - S) > T, the face is judged to be a real face; otherwise it is judged to be a forged face, where α and β are constants satisfying α + β = 1, and T is a preset threshold.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within the protection scope of the invention.

Claims (10)

1. A face anti-fake method based on multiple cameras, characterised in that it comprises the following steps:
(1) arranging n cameras symmetrically and capturing face images with them, n being a natural number not less than 2;
(2) performing facial feature point localisation on the face image captured by each camera, obtaining the coordinates of each facial feature point;
(3) selecting any three facial feature points that are not collinear, the first with coordinates (x_1, y_1), the second (x_2, y_2), the third (x_3, y_3), and computing the face view-angle feature dβ_i of each image by the following formulas, where arctan denotes the two-argument arctangent:
β_1 = arctan(y_2 - y_1, x_2 - x_1) × 180/π;
β_2 = arctan(y_3 - (y_1 + y_2)/2, x_3 - (x_1 + x_2)/2) × 180/π;
dβ_i = |β_2 - β_1|;
(4) comparing the face view-angle features of the face images captured by symmetric camera pairs to derive the face view-angle feature difference Δ;
(5) comparing Δ with a preset threshold T_Δ: when Δ > T_Δ, the face is judged to be real; otherwise it is judged to be fake.
2. The face anti-fake method based on multiple cameras of claim 1, characterised in that the facial feature points comprise the nose tip, the mouth corners, the pupil centres, and the eye corners.
3. The face anti-fake method based on multiple cameras of claim 2, characterised in that the three facial feature points selected in step (3) are the two pupil centres and the nose tip.
4. The face anti-fake method based on multiple cameras of claim 1, characterised in that it further comprises judging real versus fake by a facial feature similarity S, as follows:
(6) extracting a texture feature value T_i ∈ R^m from the face image captured by each camera, i being the camera index, m the feature dimension, and R the set of real numbers;
(7) computing the face similarity S_k between the face image captured by each camera and that captured by the camera in the position symmetric to it, the computing formula being either the chi-square distance
S_k = Σ_{h=1}^{m} (T_{i,h} - T_{j,h})^2 / (T_{i,h} + T_{j,h})
or the Minkowski distance
S_k = ( Σ_{h=1}^{m} |T_{i,h} - T_{j,h}|^p )^(1/p),
where T_i and T_j are the texture feature values of two position-symmetric cameras, k ranges from 1 to ⌊n/2⌋, and the value of p is 1 or 2;
(8) taking the mean of the computed similarities S_k as the face similarity S of this shot;
(9) when S > T_S, the face is judged to be fake; otherwise it is judged to be real, T_S being a preset facial feature similarity threshold.
5. The face anti-fake method based on multiple cameras of claim 4, characterised in that in step (6) the texture feature value is obtained by taking an initial value with local binary patterns (LBP), histogram-of-oriented-gradients (HOG) patterns, or Gabor filter patterns, and subjecting that initial value to dimensionality reduction or transformation via principal component analysis (PCA) or linear discriminant analysis (LDA).
6. The face anti-fake method based on multiple cameras of claim 4, characterised in that when Δ > T_Δ and S ≤ T_S the face is judged to be a real face, and otherwise it is judged to be a forged face.
7. The face anti-fake method based on multiple cameras of claim 4, characterised in that when α(Δ - T_Δ) + β(T_S - S) > T the face is judged to be a real face, and otherwise it is judged to be a forged face, α and β being constants satisfying α + β = 1, and T being a preset threshold.
8. The face anti-fake method based on multiple cameras of claim 1, characterised in that there is a predetermined angle between adjacent cameras among the symmetrically arranged n cameras, this predetermined angle being 0 to 45 degrees.
9. The face anti-fake method based on multiple cameras of claim 1, characterised in that the n cameras are at the same horizontal height.
10. The face anti-fake method based on multiple cameras of claim 1, characterised in that when the number of cameras is even, the cameras are pairwise symmetric about a common axis and form symmetric groups, the cameras of different symmetric groups being at different horizontal heights;
the angles between the two cameras of different symmetric groups are identical or different.
CN201510080965.0A 2015-02-15 2015-02-15 Face anti-fake method based on multiple cameras Active CN104615997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510080965.0A CN104615997B (en) 2015-02-15 2015-02-15 Face anti-fake method based on multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510080965.0A CN104615997B (en) 2015-02-15 2015-02-15 Face anti-fake method based on multiple cameras

Publications (2)

Publication Number Publication Date
CN104615997A true CN104615997A (en) 2015-05-13
CN104615997B CN104615997B (en) 2018-06-19

Family

ID=53150434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510080965.0A Active CN104615997B (en) 2015-02-15 2015-02-15 Face anti-fake method based on multiple cameras

Country Status (1)

Country Link
CN (1) CN104615997B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881657A (en) * 2015-06-08 2015-09-02 微梦创科网络科技(中国)有限公司 Profile face identification method and system, and profile face construction method and system
CN106355139A (en) * 2016-08-22 2017-01-25 厦门中控生物识别信息技术有限公司 Facial anti-fake method and device
CN106372629A (en) * 2016-11-08 2017-02-01 汉王科技股份有限公司 Living body detection method and device
CN108229329A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
CN109359460A (en) * 2018-11-20 2019-02-19 维沃移动通信有限公司 A kind of face recognition method and terminal device
US10956714B2 (en) 2018-05-18 2021-03-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN112651319A (en) * 2020-12-21 2021-04-13 科大讯飞股份有限公司 Video detection method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916496A (en) * 2010-08-11 2010-12-15 无锡中星微电子有限公司 System and method for detecting driving posture of driver
US20110128385A1 (en) * 2009-12-02 2011-06-02 Honeywell International Inc. Multi camera registration for high resolution target capture
CN102254169A (en) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and multi-camera-based face recognition system
CN102693418A (en) * 2012-05-17 2012-09-26 上海中原电子技术工程有限公司 Multi-pose face identification method and system
CN102779274A (en) * 2012-07-19 2012-11-14 冠捷显示科技(厦门)有限公司 Intelligent television face recognition method based on binocular camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110128385A1 (en) * 2009-12-02 2011-06-02 Honeywell International Inc. Multi camera registration for high resolution target capture
CN101916496A (en) * 2010-08-11 2010-12-15 无锡中星微电子有限公司 System and method for detecting driving posture of driver
CN102254169A (en) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and multi-camera-based face recognition system
CN102693418A (en) * 2012-05-17 2012-09-26 上海中原电子技术工程有限公司 Multi-pose face identification method and system
CN102779274A (en) * 2012-07-19 2012-11-14 冠捷显示科技(厦门)有限公司 Intelligent television face recognition method based on binocular camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱同辉等 (Zhu Tonghui et al.): "多摄像机协同的最优人脸采集算法" [Optimal face acquisition algorithm with multi-camera collaboration], 《计算机工程》 [Computer Engineering] *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881657A (en) * 2015-06-08 2015-09-02 微梦创科网络科技(中国)有限公司 Profile face identification method and system, and profile face construction method and system
CN104881657B (en) * 2015-06-08 2019-01-25 微梦创科网络科技(中国)有限公司 Side face recognition methods, side face construction method and system
CN106355139A (en) * 2016-08-22 2017-01-25 厦门中控生物识别信息技术有限公司 Facial anti-fake method and device
CN106355139B (en) * 2016-08-22 2019-08-30 厦门中控智慧信息技术有限公司 Face method for anti-counterfeit and device
CN106372629A (en) * 2016-11-08 2017-02-01 汉王科技股份有限公司 Living body detection method and device
CN108229329A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Face false-proof detection method and system, electronic equipment, program and medium
US11482040B2 (en) 2017-03-16 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Face anti-counterfeiting detection methods and systems, electronic devices, programs and media
US10956714B2 (en) 2018-05-18 2021-03-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN109359460A (en) * 2018-11-20 2019-02-19 维沃移动通信有限公司 A kind of face recognition method and terminal device
CN112651319A (en) * 2020-12-21 2021-04-13 科大讯飞股份有限公司 Video detection method and device, electronic equipment and storage medium
CN112651319B (en) * 2020-12-21 2023-12-05 科大讯飞股份有限公司 Video detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104615997B (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN104615997A (en) Human face anti-fake method based on multiple cameras
KR102596897B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
US8929595B2 (en) Dictionary creation using image similarity
Berretti et al. A set of selected SIFT features for 3D facial expression recognition
Fan et al. Identifying first-person camera wearers in third-person videos
CN103207898B (en) A kind of similar face method for quickly retrieving based on local sensitivity Hash
Dominio et al. Hand gesture recognition with depth data
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
US8577094B2 (en) Image template masking
CN103020607B (en) Face recognition method and face recognition device
Lee et al. Place recognition using straight lines for vision-based SLAM
CN104751108A (en) Face image recognition device and face image recognition method
CN105184304A (en) Image Recognition Device And Method For Registering Feature Data In Image Recognition Device
CN103577815A (en) Face alignment method and system
TW201917636A (en) A method of face recognition based on online learning
US9002115B2 (en) Dictionary data registration apparatus for image recognition, method therefor, and program
Shao et al. Identity and kinship relations in group pictures
CN113505717B (en) Online passing system based on face and facial feature recognition technology
Karmakar et al. Face recognition using face-autocropping and facial feature points extraction
CN106980818B (en) Personalized preprocessing method, system and terminal for face image
Liu et al. Product recognition for unmanned vending machines
CN109146913A (en) A kind of face tracking method and device
CN104573682A (en) Face anti-counterfeiting method based on face similarity
Praseeda Lekshmi et al. Analysis of facial expressions from video images using PCA
Biswas et al. Extraction of regions of interest from face images using cellular analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant