CN104573644A - Multi-mode face identification method - Google Patents

Multi-mode face identification method Download PDF

Info

Publication number
CN104573644A
CN104573644A CN201410836731.XA
Authority
CN
China
Prior art keywords
face
identification method
matching
database
remove
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410836731.XA
Other languages
Chinese (zh)
Inventor
孙伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Is Auspicious For Opening Up New Science And Technology Development Co Ltd
Original Assignee
Tianjin Is Auspicious For Opening Up New Science And Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Is Auspicious For Opening Up New Science And Technology Development Co Ltd filed Critical Tianjin Is Auspicious For Opening Up New Science And Technology Development Co Ltd
Priority to CN201410836731.XA priority Critical patent/CN104573644A/en
Publication of CN104573644A publication Critical patent/CN104573644A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-mode face identification method comprising three steps. First, the face to be matched is compared with the faces in a database by global spectral-face matching. Second, if this matching fails, the two face patterns are judged not to match; if it succeeds, geometric-feature matching is performed to decide whether the two face patterns match. Finally, face data that passes neither global spectral-face matching nor geometric-feature matching is transferred into a new-face memory database within the database. The method distinguishes the overall differences between faces by global spectral-face matching and the small local differences by local geometric-feature matching, improving face identification performance. A newly acquired face that matches no face in the original database is stored in the new-face memory module of the database, so that the original image can be recalled quickly from that module when matching is attempted again.

Description

Multi-mode face identification method
Technical field
The present invention relates to face identification methods, and in particular to a multi-mode face identification method that combines global feature matching with local feature matching.
Background technology
There are many face recognition methods, roughly divided into two classes: matching based on global features and matching based on local features. Global feature matching methods include PCA, 2DPCA, LDA and spectral-face methods; local feature matching methods include geometric features, wavelet features and LBP features. In practice, face patterns often undergo non-rigid deformation (for example smiling or frowning), and using a single global or local feature matching method cannot account for the effect of both overall face differences and local variations on recognition accuracy. Moreover, a new face that matches no face in the database is not memorised, which hampers any later investigation of whether that face has appeared before, or requires the face and its related data to be entered into the recorded face database all over again.
Summary of the invention
To overcome the above technical shortcomings, the multi-mode face identification method of the present invention uses a global spectral-face method to distinguish the overall differences between faces, then local geometric-feature matching to distinguish small local differences, thereby improving face recognition performance; all face data that fails to match is transferred to a new-face memory database.
The technical solution used in the present invention is:
The multi-mode face identification method comprises three steps. First step: the face to be matched is compared with the faces in the database by global spectral-face matching; if they do not match, the two face patterns are judged not to match; if they match, the second step is performed. Second step: geometric-feature matching decides whether the two face patterns match. Third step: face data that matches under neither global spectral-face matching nor geometric-feature matching is transferred into the new-face memory database within the database.
First step: global spectral-face matching. Several levels of wavelet decomposition are first applied to the face image, and a Fourier transform is then applied to the final low-frequency subimage, yielding a low-dimensional representation of the original face image whose amplitude spectrum is shift-invariant.
(1) wavelet decomposition:
(2) Feature extraction: a Fourier transform is applied to the low-frequency subimage obtained from the wavelet decomposition. For a two-dimensional image signal f(x, y), the two-dimensional Fourier transform is:
I(u, v) = F[f(x, y)] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) e^{−2πj(ux+vy)} dx dy
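Steps (1) and (2) can be sketched in Python. This is a minimal illustration, not the patent's implementation: it assumes a plain Haar low-pass (2×2 block averages) for the wavelet stage and uses the FFT amplitude spectrum as the "spectral-face" feature; all function names are invented for illustration.

```python
import numpy as np

def haar_ll(img):
    """One level of 2-D Haar decomposition; keep only the low-frequency
    (LL) subimage, here the mean of each 2x2 block up to a scale factor."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

def spectral_face_feature(img, levels=2):
    """`levels` wavelet decompositions, then the Fourier amplitude
    spectrum of the final low-frequency subimage. The amplitude spectrum
    is invariant to (circular) shifts of the subimage."""
    ll = np.asarray(img)
    for _ in range(levels):
        ll = haar_ll(ll)
    return np.abs(np.fft.fft2(ll)).ravel()      # low-dimensional feature vector
```

Because only the amplitude of the spectrum is kept, a spatial shift of the subimage (which changes only the phase of the Fourier transform) leaves the feature vector unchanged, which is the shift-invariance property the step relies on.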
(3) Feature matching: the global spectral-face features of the two faces are matched with a distance measure. A distance measure is a function of the differences between the corresponding components of two vectors; its basis is the distance between the two vector end-points. Let the global spectral-face features of the two faces be:
x = (x₁, x₂, …, xₙ)′, y = (y₁, y₂, …, yₙ)′; the Euclidean distance between them is:
d(x, y) = ‖x − y‖ = [Σ_{i=1}^{n} (xᵢ − yᵢ)²]^{1/2}. The smaller the Euclidean distance, the more similar the two face patterns. A threshold T is set: when the Euclidean distance is below the empirical value T, the two face patterns are considered a possible match and the local feature matching below is carried out as a fine match; otherwise, the two face patterns are judged not to match.
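As a minimal sketch of this coarse test (names illustrative, not from the patent):

```python
import numpy as np

def global_spectral_match(x, y, T):
    """Coarse match: Euclidean distance d(x, y) = ||x - y|| between two
    global spectral-face feature vectors. Below the empirical threshold T
    the pair is treated as a possible match to be handed to the fine
    (local geometric) matching stage."""
    d = float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)))
    return d < T, d
```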
Second step: local geometric-feature matching. The eyes, nose and mouth are strongly affected by changes of facial expression and are also very sensitive to occlusion and jewellery interference; only the eyebrow region is comparatively stable, so local geometric features are extracted from the eyebrow region. The steps are face segmentation, eyebrow localisation, feature extraction and feature matching.
(1) Face segmentation: the face is first divided into multiple subregions, within each of which the illumination is relatively uniform; image segmentation is then performed within each subregion, and finally the subregion results are stitched together to give the overall segmentation of the face. Subregion segmentation uses the OTSU method: a grey value is chosen that divides the image into background and target classes, the pixel count and mean grey level of each class are computed, and their between-class variance is calculated; the grey value maximising the between-class variance is taken as the threshold for segmenting the image. Let p(i) = nᵢ/N (i = 0, 1, …, L−1) be the probability of grey value i in the image, where N is the total number of pixels, nᵢ is the number of pixels with grey value i, and L is the maximum grey value. With threshold x, the image is divided into target (M) and background (B) classes, where the target grey values are below the background grey values:
w_M(x) = Σ_{i=0}^{x} nᵢ/N,  w_B(x) = Σ_{i=x+1}^{L−1} nᵢ/N,
μ_M(x) = (Σ_{i=0}^{x} i·p(i)) / w_M(x),  μ_B(x) = (Σ_{i=x+1}^{L−1} i·p(i)) / w_B(x),
Φ(x) = w_B(x)·w_M(x)·(μ_B(x) − μ_M(x))².
All grey values are traversed and the threshold x maximising Φ(x) is chosen, x = Arg{max_{0≤x≤L−1} Φ(x)}; this is the optimal segmentation threshold, and the image is thresholded with it: pixels whose grey value is below the threshold are target, the other pixels are background;
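The OTSU step above can be sketched in a few lines of numpy. This is an illustrative implementation, not the patent's code; it uses the standard algebraically equivalent form (μ_T·ω − μ)² / (ω(1 − ω)) of the between-class variance Φ(x):

```python
import numpy as np

def otsu_threshold(gray, L=256):
    """OTSU: pick the grey level x maximising the between-class variance
    Phi(x) = w_B(x) * w_M(x) * (mu_B(x) - mu_M(x))^2."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                       # p(i) = n_i / N
    omega = np.cumsum(p)                        # w_M(x): target-class probability
    mu = np.cumsum(p * np.arange(L))            # cumulative first moment
    mu_T = mu[-1]                               # global mean grey level
    denom = omega * (1.0 - omega)               # w_M(x) * w_B(x)
    # equivalent form of the between-class variance; guard empty classes
    phi = np.where(denom > 0, (mu_T * omega - mu) ** 2 / denom, 0.0)
    return int(np.argmax(phi))                  # optimal threshold x

# Pixels with grey value <= threshold form the (dark) target, e.g. eyebrows.
```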
(2) Eyebrow localisation: the target blocks of the segmented image are labelled, the area of each block (its pixel count) is computed, and the position, width and height of each block's bounding rectangle are determined. The eyebrow position is decided from the geometric position of the eyebrows in the binarised face image;
(3) Feature extraction: descriptors invariant to scale, translation and rotation are used to extract image features; the Hu invariant moments of the image describe the eyebrow region. The procedure for computing the seven Hu invariant moments is: compute the image centroid, compute the required central moments, and compute the required normalised central moments. The centroid is ī = m₁₀/m₀₀, j̄ = m₀₁/m₀₀, where m_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} i^p j^q f(i, j); taking (p, q) = (1, 0), (0, 1), (0, 0) gives m₁₀, m₀₁, m₀₀;
the required central moments are u₂₀, u₀₂, u₁₁, u₁₂, u₂₁, u₃₀, u₀₃, computed as:
u_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} (i − ī)^p (j − j̄)^q f(i, j). Normalisation: η_pq = u_pq / u₀₀^r. The seven Hu invariant moments are:
φ₁ = η₂₀ + η₀₂,
φ₂ = (η₂₀ − η₀₂)² + 4η₁₁²,
φ₃ = (η₃₀ − 3η₁₂)² + (3η₂₁ − η₀₃)²,
φ₄ = (η₃₀ + η₁₂)² + (η₂₁ + η₀₃)²,
φ₅ = (η₃₀ − 3η₁₂)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] + (3η₂₁ − η₀₃)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²],
φ₆ = (η₂₀ − η₀₂)[(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²] + 4η₁₁(η₃₀ + η₁₂)(η₂₁ + η₀₃),
φ₇ = (3η₂₁ − η₀₃)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] − (η₃₀ − 3η₁₂)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²];
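The moment computation above translates directly to numpy. The sketch below is illustrative (function names invented); it computes the raw moments, the centroid, the normalised central moments η_pq with r = 1 + (p+q)/2, and the seven invariants:

```python
import numpy as np

def hu_moments(f):
    """Seven Hu invariant moments of an image region f(i, j)."""
    f = np.asarray(f, dtype=float)
    i, j = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m = lambda p, q: np.sum(i**p * j**q * f)      # raw moment m_pq
    m00 = m(0, 0)
    ic, jc = m(1, 0) / m00, m(0, 1) / m00         # centroid (i-bar, j-bar)
    def eta(p, q):                                 # normalised central moment
        u = np.sum((i - ic)**p * (j - jc)**q * f)
        return u / m00 ** (1 + (p + q) / 2.0)      # r = 1 + (p+q)/2
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = e20 + e02
    phi2 = (e20 - e02)**2 + 4 * e11**2
    phi3 = (e30 - 3*e12)**2 + (3*e21 - e03)**2
    phi4 = (e30 + e12)**2 + (e21 + e03)**2
    phi5 = ((e30 - 3*e12)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
            + (3*e21 - e03)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    phi6 = ((e20 - e02)*((e30 + e12)**2 - (e21 + e03)**2)
            + 4*e11*(e30 + e12)*(e21 + e03))
    phi7 = ((3*e21 - e03)*(e30 + e12)*((e30 + e12)**2 - 3*(e21 + e03)**2)
            - (e30 - 3*e12)*(e21 + e03)*(3*(e30 + e12)**2 - (e21 + e03)**2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```

Because the moments are central and normalised, translating the eyebrow blob inside the image leaves the seven values unchanged, which is what makes them usable as a matching descriptor.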
(4) Feature matching: features are matched by the minimum-Euclidean-distance criterion. Let φ_{i,k} denote the k-th Hu moment of the eyebrow region of a database face and φ_{j,k} the k-th Hu moment of the eyebrow region of the face to be identified; their difference d is: d = min_j Σ_{k=1}^{7} (φ_{i,k} − φ_{j,k})².
If this value is below an empirical threshold, the two eyebrow regions are considered to match; otherwise they are considered not to match.
A face that satisfies the local geometric-feature matching is considered a matched face; otherwise it is considered unmatched.
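A minimal sketch of this fine-matching criterion (names illustrative, not from the patent); it follows the patent's formula, i.e. the minimum sum of squared differences over the seven Hu moments:

```python
import numpy as np

def eyebrow_match(db_hu, probe_hu, D):
    """Fine match by the minimum-distance criterion:
    d = min over database faces of sum_k (phi_{i,k} - phi_{j,k})^2
    over the 7 Hu moments; regions match when d is below the value D."""
    db = np.atleast_2d(np.asarray(db_hu, dtype=float))
    probe = np.asarray(probe_hu, dtype=float)
    dists = ((db - probe) ** 2).sum(axis=1)   # squared distance per database face
    j = int(np.argmin(dists))                 # closest database eyebrow
    return dists[j] < D, j, float(dists[j])
```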
Third step: in the global spectral-face matching process, face feature data whose Euclidean distance exceeds the set threshold T is transferred into the new-face memory database within the database; in the geometric-feature matching process, when the distance over the seven Hu invariant moments exceeds the set threshold d, the extracted geometric feature data is transferred into the same face's entry in the new-face memory database.
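The new-face memory module can be sketched as a small store keyed by face, holding both kinds of unmatched feature data for quick recall. Everything here (class and method names, the keying scheme) is an illustrative assumption, not described in the patent:

```python
class NewFaceMemory:
    """Sketch of the 'new face memory' module: unmatched global spectral
    features and, for the same face, unmatched geometric features are
    stored together so the original data can be recalled quickly on a
    later matching attempt."""

    def __init__(self):
        self._store = {}          # face_id -> {"spectral": ..., "geometric": ...}

    def add_spectral(self, face_id, feat):
        self._store.setdefault(face_id, {})["spectral"] = feat

    def add_geometric(self, face_id, feat):
        self._store.setdefault(face_id, {})["geometric"] = feat

    def recall(self, face_id):
        return self._store.get(face_id)   # None if the face was never stored
```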
Beneficial effects of the invention: the method uses a global spectral-face method to distinguish the overall differences between faces and local geometric-feature matching to distinguish small local differences, improving face recognition performance. If a newly acquired face matches no face in the original database, it is stored in the new-face memory module of the database so that, on a later matching attempt, the original image can be recalled quickly from that module.
Embodiment
Embodiment 1
In a face recognition instance, the original image is examined, wavelet-decomposed images are produced, the global spectral feature is extracted and the spectral image is plotted. The two-dimensional Fourier transform used in extracting the global spectral feature is I(u, v) = F[f(x, y)] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) e^{−2πj(ux+vy)} dx dy. Global feature matching is then performed with a distance measure, a function of the differences between the corresponding components of two vectors, based on the distance between the two vector end-points. Let the global spectral-face features of the two faces be:
x = (x₁, x₂, …, xₙ)′, y = (y₁, y₂, …, yₙ)′; the Euclidean distance between them is:
d(x, y) = ‖x − y‖ = [Σ_{i=1}^{n} (xᵢ − yᵢ)²]^{1/2}. If the d value exceeds the preset value T, the global spectral features do not match, and the unmatched face's global spectral feature data is transferred to the new-face memory database.
Embodiment 2
When the d value in Embodiment 1 is below the preset value T, the global spectral features match and local geometric-feature matching is performed: the face is segmented, the eyebrow region and its binary image are extracted, and the Hu invariant moments of the image describe the eyebrow region. The procedure for computing the seven Hu invariant moments is: compute the image centroid, compute the required central moments, and compute the required normalised central moments. The centroid is ī = m₁₀/m₀₀, j̄ = m₀₁/m₀₀, where m_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} i^p j^q f(i, j); taking (p, q) = (1, 0), (0, 1), (0, 0) gives m₁₀, m₀₁, m₀₀;
the required central moments are u₂₀, u₀₂, u₁₁, u₁₂, u₂₁, u₃₀, u₀₃, computed as:
u_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} (i − ī)^p (j − j̄)^q f(i, j). Normalisation: η_pq = u_pq / u₀₀^r. The seven Hu invariant moments are:
φ₁ = η₂₀ + η₀₂,
φ₂ = (η₂₀ − η₀₂)² + 4η₁₁²,
φ₃ = (η₃₀ − 3η₁₂)² + (3η₂₁ − η₀₃)²,
φ₄ = (η₃₀ + η₁₂)² + (η₂₁ + η₀₃)²,
φ₅ = (η₃₀ − 3η₁₂)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] + (3η₂₁ − η₀₃)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²],
φ₆ = (η₂₀ − η₀₂)[(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²] + 4η₁₁(η₃₀ + η₁₂)(η₂₁ + η₀₃),
φ₇ = (3η₂₁ − η₀₃)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] − (η₃₀ − 3η₁₂)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²];
Feature matching is then carried out by the minimum-Euclidean-distance criterion. Let φ_{i,k} denote the k-th Hu moment of the eyebrow region of a database face and φ_{j,k} the k-th Hu moment of the eyebrow region of the face to be identified; their difference d is: d = min_j Σ_{k=1}^{7} (φ_{i,k} − φ_{j,k})². If this d value exceeds the empirical value D, the two eyebrow regions are judged not to match and the unmatched data is transferred to the new-face memory database.
Embodiment 3
When the d value in Embodiment 2 is below the empirical value D, the two eyebrow regions are judged to match and the face is identified.

Claims (10)

1. A multi-mode face identification method comprising three steps: a first step in which the face to be matched is compared with the faces in a database by global spectral-face matching, and if they do not match the two face patterns are judged not to match, while if they match the second step is performed; a second step of geometric-feature matching to judge whether the two face patterns match; and a third step in which face data matching under neither global spectral-face matching nor geometric-feature matching is transferred to a new-face memory database within the database.
2. The multi-mode face identification method according to claim 1, wherein in the global spectral-face matching several levels of wavelet decomposition are first applied to the face image, and a Fourier transform is then applied to the final low-frequency subimage, yielding a low-dimensional representation of the original face image.
3. The multi-mode face identification method according to claim 2, wherein in the wavelet decomposition the face image is decomposed by one level of wavelet transform into 4 sub-band images, and the low-frequency part of the sub-band images can be further wavelet-decomposed, realising multi-level wavelet decomposition; the wavelet transform formula is:
where the coefficients W(j, m, n) are the approximation of the image f(x, y) at scale j, with the Haar wavelet scaling function expressed as:
4. The multi-mode face identification method according to claim 2, wherein the Fourier transform applied to the final low-frequency subimage is:
I(u, v) = F[f(x, y)] = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) e^{−2πj(ux+vy)} dx dy.
5. The multi-mode face identification method according to claim 4, wherein the global spectral-face features are matched with a distance measure, a function of the differences between the corresponding components of two vectors, based on the distance between the two vector end-points; letting the global spectral-face features of the two faces be x = (x₁, x₂, …, xₙ)′ and y = (y₁, y₂, …, yₙ)′, the Euclidean distance between them is d(x, y) = ‖x − y‖ = [Σ_{i=1}^{n} (xᵢ − yᵢ)²]^{1/2}; the smaller the Euclidean distance, the more similar the two face patterns; a threshold T is set, and when the Euclidean distance is below the empirical value T the two face patterns are considered a possible match and the subsequent local feature matching is performed as a fine match; otherwise, the two face patterns are judged not to match.
6. The multi-mode face identification method according to claim 1, wherein the local geometric-feature matching extracts local geometric features mainly from the eyebrow region, the steps being face segmentation, eyebrow localisation, feature extraction and feature matching.
7. The multi-mode face identification method according to claim 6, wherein in the face segmentation the face is first divided into multiple subregions within which the illumination is relatively uniform, image segmentation is then performed within each subregion, and the subregion results are finally stitched together to obtain the overall segmentation of the face; subregion segmentation uses the OTSU method: a grey value is chosen that divides the image into background and target classes, the pixel count and mean grey level of each class are computed, their between-class variance is calculated, and the grey value maximising the between-class variance is taken as the threshold for segmenting the image.
8. The multi-mode face identification method according to claim 6, wherein the feature extraction uses the Hu invariant moments of the image to describe the eyebrow region; the procedure for computing the seven Hu invariant moments is: compute the image centroid, compute the required central moments, and compute the required normalised central moments; the centroid is ī = m₁₀/m₀₀, j̄ = m₀₁/m₀₀, where m_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} i^p j^q f(i, j); taking (p, q) = (1, 0), (0, 1), (0, 0) gives m₁₀, m₀₁, m₀₀;
the required central moments are u₂₀, u₀₂, u₁₁, u₁₂, u₂₁, u₃₀, u₀₃, computed as:
u_pq = Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} (i − ī)^p (j − j̄)^q f(i, j); normalisation: η_pq = u_pq / u₀₀^r; the seven Hu invariant moments are:
φ₁ = η₂₀ + η₀₂,
φ₂ = (η₂₀ − η₀₂)² + 4η₁₁²,
φ₃ = (η₃₀ − 3η₁₂)² + (3η₂₁ − η₀₃)²,
φ₄ = (η₃₀ + η₁₂)² + (η₂₁ + η₀₃)²,
φ₅ = (η₃₀ − 3η₁₂)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] + (3η₂₁ − η₀₃)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²],
φ₆ = (η₂₀ − η₀₂)[(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²] + 4η₁₁(η₃₀ + η₁₂)(η₂₁ + η₀₃),
φ₇ = (3η₂₁ − η₀₃)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] − (η₃₀ − 3η₁₂)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²].
9. The multi-mode face identification method according to claim 6, wherein the feature matching is carried out by the minimum-Euclidean-distance criterion: letting φ_{i,k} denote the k-th Hu moment of the eyebrow region of a database face and φ_{j,k} the k-th Hu moment of the eyebrow region of the face to be identified, their difference d is: d = min_j Σ_{k=1}^{7} (φ_{i,k} − φ_{j,k})².
10. The multi-mode face identification method according to claim 1, wherein in the third step, during the global spectral-face matching, face feature data whose Euclidean distance exceeds the set threshold T is transferred into the new-face memory database within the database, and during the geometric-feature matching, when the distance over the seven Hu invariant moments exceeds the set threshold d, the extracted geometric feature data is transferred into the same face's entry in the new-face memory database.
CN201410836731.XA 2014-12-29 2014-12-29 Multi-mode face identification method Pending CN104573644A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410836731.XA CN104573644A (en) 2014-12-29 2014-12-29 Multi-mode face identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410836731.XA CN104573644A (en) 2014-12-29 2014-12-29 Multi-mode face identification method

Publications (1)

Publication Number Publication Date
CN104573644A true CN104573644A (en) 2015-04-29

Family

ID=53089666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410836731.XA Pending CN104573644A (en) 2014-12-29 2014-12-29 Multi-mode face identification method

Country Status (1)

Country Link
CN (1) CN104573644A (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295672A (en) * 2015-06-12 2017-01-04 中国移动(深圳)有限公司 A kind of face identification method and device
CN106295672B (en) * 2015-06-12 2019-10-29 中移信息技术有限公司 A kind of face identification method and device
CN106778704A (en) * 2017-01-23 2017-05-31 安徽理工大学 A kind of recognition of face matching process and semi-automatic face matching system
CN108322606A (en) * 2018-01-30 2018-07-24 努比亚技术有限公司 A kind of screen opening method, terminal and computer readable storage medium
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion
CN109446893A (en) * 2018-09-14 2019-03-08 百度在线网络技术(北京)有限公司 Face identification method, device, computer equipment and storage medium
CN109635739A (en) * 2018-12-13 2019-04-16 深圳三人行在线科技有限公司 A kind of method for collecting iris and equipment
CN109670440A (en) * 2018-12-14 2019-04-23 央视国际网络无锡有限公司 The recognition methods of giant panda face and device
CN112766014A (en) * 2019-10-21 2021-05-07 深圳君正时代集成电路有限公司 Recognition method for automatic learning in face recognition
CN112766015A (en) * 2019-10-21 2021-05-07 深圳君正时代集成电路有限公司 Secondary recognition method for improving face recognition accuracy
CN113569593A (en) * 2020-04-28 2021-10-29 京东方科技集团股份有限公司 Intelligent vase system, flower identification and display method and electronic equipment
WO2021218624A1 (en) * 2020-04-28 2021-11-04 京东方科技集团股份有限公司 Intelligent vase system and flower identification and display method, and electronic device

Similar Documents

Publication Publication Date Title
CN104573644A (en) Multi-mode face identification method
Suruliandi et al. Local binary pattern and its derivatives for face recognition
CN106778517A (en) A kind of monitor video sequence image vehicle knows method for distinguishing again
Roy et al. Iris segmentation using variational level set method
Feng et al. A coarse-to-fine classification scheme for facial expression recognition
CN108764096B (en) Pedestrian re-identification system and method
CN111191655A (en) Object identification method and device
Smereka et al. Selecting discriminative regions for periocular verification
KR101326691B1 (en) Robust face recognition method through statistical learning of local features
Zhao et al. AU recognition on 3D faces based on an extended statistical facial feature model
Liang et al. Bayesian multi-distribution-based discriminative feature extraction for 3D face recognition
Gonzalez-Sosa et al. Image-based gender estimation from body and face across distances
Othman et al. A novel approach for occluded ear recognition based on shape context
Chouchane et al. 3D and 2D face recognition using integral projection curves based depth and intensity images
CN104573628A (en) Three-dimensional face recognition method
Yang et al. A comparative study of several feature extraction methods for person re-identification
Nsaef et al. Enhancement segmentation technique for iris recognition system based on Daugman's Integro-differential operator
JP5286574B2 (en) Object detection recognition apparatus, object detection recognition method, and program
Monteiro et al. Multimodal hierarchical face recognition using information from 2.5 D images
Li et al. 3D face recognition by constructing deformation invariant image
JP3859347B2 (en) Object recognition apparatus and method
Rai et al. Appearance based gender classification with PCA and (2D) 2 PC A on approximation face image
Naveena et al. Partial face recognition by template matching
Jariwala et al. A real time robust eye center localization using geometric eye model and edge gradients in unconstrained visual environment
Li et al. A pixel-wise, learning-based approach for occlusion estimation of iris images in polar domain

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150429