CN106056076B - Method for determining the illumination invariant of a face image under complex illumination - Google Patents


Publication number
CN106056076B
CN106056076B (application CN201610371321.1A)
Authority
CN
China
Prior art keywords
illumination
facial image
invariant
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610371321.1A
Other languages
Chinese (zh)
Other versions
CN106056076A (en)
Inventor
Cheng Yong (程勇)
Han Yuanchen (韩袁琛)
Cao Xuehong (曹雪虹)
Jiao Liangbao (焦良葆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201610371321.1A priority Critical patent/CN106056076B/en
Publication of CN106056076A publication Critical patent/CN106056076A/en
Application granted granted Critical
Publication of CN106056076B publication Critical patent/CN106056076B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation


Abstract

The invention discloses a method for determining the illumination invariant of a face image captured under complex illumination. First, the classical visible-light imaging model, the Lambertian model, is analyzed and the imaging principle of the target object is examined, providing the theoretical basis for a new illumination-estimation model. Second, in designing the illumination-estimation model, the lighting of a face image under complex illumination is divided into three regions: unshadowed, shadowed, and transitional, and these are handled as two cases. Third, because the illumination of adjacent pixels is correlated, the two classes of illumination estimates are fused into a single final estimate. Finally, the illumination invariant of the face image is derived from the simple classical Lambertian model. The method effectively eliminates the illumination differences of the original image, and the numerical range of the proposed illumination invariant lies between 0 and 1, consistent with the numerical range of the intrinsic face component.

Description

Method for determining the illumination invariant of a face image under complex illumination
Technical field
The present invention relates to a method for determining the illumination invariant of a face image under complex illumination, and belongs to the field of face recognition technology.
Background technique
In recent years, many methods have been proposed, in China and abroad, to eliminate the influence of complex illumination on face-recognition performance. Among them, extracting an illumination invariant from a face image under complex illumination is a classical and effective approach. Previously, in order to separate the illumination invariant from the imaging source in the multiplicative model, it was assumed that the illumination invariant varies quickly while the imaging source varies slowly; low-pass filtering was then applied to estimate the illumination and extract the invariant indirectly. Such methods fall into two patterns: direct and indirect extraction of the illumination invariant. The direct pattern extracts high-frequency features from the face image as the illumination invariant; effective high-frequency features mainly include gradient features, texture features, and transform-domain high-frequency features. The indirect pattern first estimates the illumination from the face image, then separates the illumination from the intrinsic face component to extract the invariant; effective illumination-estimation methods mainly include Gaussian filtering, weighted anisotropic smoothing, the logarithmic total-variation model, and transform-domain smoothing filters.
Although these methods have made progress in face recognition under complex illumination, they still have limitations. On the one hand, the assumption that the illumination-invariant features of a face vary quickly is one-sided: over most facial regions the invariant features, such as eyebrows, pupils, moles, and skin, vary slowly, and only between regions do they vary quickly. On the other hand, current low-pass filtering, smoothing filtering, and denoising models estimate the illumination (a blurred image) from the low-frequency content of the image; such an estimate contains too much intrinsic face information, satisfies only the slowly varying property of illumination, ignores the image-formation model, and is not directly related to the actual image illumination.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of existing techniques by providing a method for determining the illumination invariant of a face image under complex illumination. Building on the classical Lambertian model, the method no longer assumes a frequency characteristic for the intrinsic face component; instead, it starts from the image-formation principle, estimates the illumination of the face image more accurately, and extracts a more robust illumination invariant.
To solve the above technical problems, the present invention provides a method for determining the illumination invariant of a face image under complex illumination, comprising the following steps:
1) determine the complex-illumination face-image model by analyzing the Lambertian model;
2) design the illumination-estimation model and solve for the image illumination of the face image;
3) compute the facial illumination invariant from the complex-illumination face-image model of step 1) and the image illumination solved in step 2).
In the aforementioned step 1), the complex-illumination face-image model is:
F(x, y) = I(x, y) R(x, y)    (2)
where F(x, y) is the face image, R(x, y) is the facial illumination invariant, and I(x, y) is the image illumination of the face image.
In the aforementioned step 2), the illumination-estimation model is designed and the image illumination of the face image is solved as follows:
2-1) Design illumination-estimation model I for the regions where the illumination varies slowly and illumination-estimation model II for the regions where it varies quickly:
Illumination-estimation model I is defined as:
[equation (3); rendered as an image in the original document]
Illumination-estimation model II is defined as:
[equation (4); rendered as an image in the original document]
Fa(x, y) = Im(x, y) - F(x, y)    (5)
where Im(x, y) is the image illumination under model I, Is(x, y) is the image illumination under model II, and oi,j are the neighbours of the point (x, y) within the Ω1 neighbourhood; max(·) and min(·) denote the maximum and minimum of a data set;
2-2) Compute Im(x, y) and Is(x, y), and use illumination fusion to merge the two estimates within the face image F(x, y); the fused illumination estimate Ims(x, y) is defined as:
[equation (6); rendered as an image in the original document]
T = mean(Fg(x, y)) + k × (max(Fg(x, y)) - mean(Fg(x, y)))    (7)
Fg(x, y) = Fa(x, y) / Im(x, y)    (8)
where mean(·) denotes the average of a data set and k is an adjustable factor;
2-3) Design an adaptive anisotropic smoothing filter to establish the correlation between the image illumination of adjacent pixels, and define the final image illumination I(x, y) as:
[equation (9); rendered as an image in the original document]
where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution-kernel scale Ω2; P(x, y, Ω2) is the anisotropy template corresponding to Ims(x, y); and Ims(i, j) are the pixels of Ims(x, y) within the Ω2 neighbourhood.
The aforementioned adjustable factor k is taken as 0.6.
The aforementioned standard deviation ρ is taken as 1.
The aforementioned Ω1 and Ω2 neighbourhood windows are set to 3 × 3.
The aforementioned facial illumination invariant is expressed as:
R(x, y) = F(x, y) / I(x, y)    (11)
where F(x, y) is the face image, R(x, y) is the facial illumination invariant, and I(x, y) is the image illumination of the face image.
Advantageous effects of the invention: the method effectively eliminates the illumination differences of the original image, and the numerical range of the proposed illumination invariant lies between 0 and 1, consistent with the numerical range of the intrinsic face component.
Detailed description of the invention
Fig. 1 shows the illumination invariants extracted by the present invention from images of the Yale B+ face database in an embodiment.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings. The following embodiments are intended only to illustrate the technical solution of the present invention clearly, and not to limit its scope of protection.
The invention mainly comprises two parts: establishing the illumination-estimation model and extracting the illumination invariant. First, the classical visible-light imaging model, the Lambertian model, is analyzed and the imaging principle of the target object is examined, providing the theoretical basis for a new illumination-estimation model. Second, in designing the illumination-estimation model, the lighting of a face image under complex illumination is divided into three regions: unshadowed, shadowed, and transitional, and these are handled as two cases. Third, because the illumination of adjacent pixels is correlated, the two classes of illumination estimates defined below are fused into a single final estimate. Finally, the illumination invariant of the face image is derived from the simple classical Lambertian model. The specific steps are as follows:
1. Analysis of the Lambertian model:
An image is a measurement of the light intensity reflected from the surface of a target object onto an image-acquisition sensor. As the classical visible-light imaging model, the Lambertian model is widely used in face recognition under complex illumination. Formula (1) gives its description of how a target object is imaged:
G(x, y) = ρ(x, y) n(x, y)^T s    (1)
where ρ(x, y) and n(x, y)^T are the reflectance and normal vector of the object surface respectively, s is the imaging source, and G(x, y) is the image of the target object.
The reflectance and normal vector of the object surface are independent of the imaging source; they are intrinsic characteristics of the object (its illumination invariant). The imaging of the target object can therefore be described by the simple Lambertian model, i.e. a face image F(x, y) can be expressed as:
F(x, y) = I(x, y) R(x, y)    (2)
where R(x, y) is the intrinsic face component (the illumination invariant), whose numerical range is [0, 1], and I(x, y) is the imaging source (image illumination) of the face image.
The Lambertian model implies that: the face image is the product of the intrinsic face component and the imaging source; the numerical range of the intrinsic component is [0, 1]; the intensity of the face image is therefore no greater than the intensity of the imaging source; and the maximum of the face image is a closer estimate of the imaging source than previous illumination-estimation methods provide.
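The multiplicative model of formula (2) and the bound it implies can be checked with a minimal numeric sketch; the arrays below are synthetic illustrations, not data from the patent:

```python
import numpy as np

# Multiplicative Lambert model F = I * R, with R in [0, 1].
# Synthetic illustration; not the patent's data.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(4, 4))   # intrinsic face component
I = np.full((4, 4), 0.8)                 # smooth imaging-source field
F = I * R                                # observed face image

# Because R <= 1, every pixel of F is bounded above by the illumination,
# so a local maximum of F is a lower-bound estimate of I.
assert np.all(F <= I + 1e-12)
```

This bound is what justifies treating the maximum of the face image as the estimate closest to the imaging source.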
2. Design of the illumination-estimation model:
The lighting of a face image can be divided into three parts: the unshadowed region, the shadowed region, and the transitional region (lying between the unshadowed and shadowed regions). Their illumination behaves differently: in the unshadowed region the illumination is bright and varies slowly; in the shadowed region it is dark and varies slowly; in the transitional region it changes from bright to dark and varies quickly. Illumination-estimation models I and II are therefore designed separately for the slowly varying and the quickly varying regions:
Illumination-estimation model I is defined as:
[equation (3); rendered as an image in the original document]
Illumination-estimation model II is defined as:
[equation (4); rendered as an image in the original document]
Fa(x, y) = Im(x, y) - F(x, y)    (5)
where oi,j are the neighbours of the point (x, y) within the Ω1 neighbourhood; max(·), min(·), and mean(·) denote the maximum, minimum, and average of a data set respectively.
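The exact forms of equations (3) and (4) survive only as images in the source, so the following is a hypothetical sketch of a model-I style estimate: a local maximum over a 3 × 3 neighbourhood, consistent with the earlier observation that the maximum of the face image approaches the imaging source. The helper name `local_max` and the sample array are illustrative assumptions, not the patent's definition:

```python
import numpy as np

def local_max(F, w=3):
    """Local maximum of F over a w x w neighbourhood (windows are clipped
    at the borders). A sketch of a model-I style illumination estimate;
    the patent's actual equation (3) is not reproduced in the source."""
    r = w // 2
    H, W = F.shape
    out = np.empty_like(F, dtype=float)
    for x in range(H):
        for y in range(W):
            out[x, y] = F[max(0, x - r):x + r + 1,
                          max(0, y - r):y + r + 1].max()
    return out

F = np.array([[0.1, 0.5, 0.2],
              [0.4, 0.9, 0.3],
              [0.2, 0.6, 0.1]])
Im = local_max(F)   # candidate illumination estimate
Fa = Im - F         # auxiliary difference image of eq. (5)
assert np.all(Fa >= 0)   # a local max never undershoots its centre pixel
```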
After Im(x, y) and Is(x, y) have been computed for F(x, y), the illumination estimate is improved by illumination fusion. In this step, image segmentation is used to distinguish the shadow edges from the other regions, and the fused illumination estimate Ims(x, y) within the face image F(x, y) is defined as:
[equation (6); rendered as an image in the original document]
T = mean(Fg(x, y)) + k × (max(Fg(x, y)) - mean(Fg(x, y)))    (7)
Fg(x, y) = Fa(x, y) / Im(x, y)    (8)
where mean(·) denotes the average of a data set and k ∈ [0, 1] is an adjustable factor.
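Equations (7) and (8) are explicit, so the threshold used in the fusion step can be sketched directly; the fusion rule of equation (6) itself is an image in the source and is not implemented here, and the inputs Fa and Im below are hypothetical:

```python
import numpy as np

def fusion_threshold(Fa, Im, k=0.6):
    """Eq. (8): Fg = Fa / Im, then eq. (7): the adaptive threshold T.
    k in [0, 1] is the adjustable factor (0.6 in the patent)."""
    Fg = Fa / Im
    T = Fg.mean() + k * (Fg.max() - Fg.mean())
    return Fg, T

Fa = np.array([[0.0, 0.2],
               [0.4, 0.8]])
Im = np.ones((2, 2))          # trivial illumination so that Fg == Fa
Fg, T = fusion_threshold(Fa, Im)
# mean(Fg) = 0.35, max(Fg) = 0.8, so T = 0.35 + 0.6 * 0.45 = 0.62
assert abs(T - 0.62) < 1e-9
```

Pixels with Fg above T would plausibly be treated as lying on a shadow edge, though the precise fusion rule of equation (6) is not recoverable from the source.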
Since the illumination of neighbouring pixels is strongly correlated, an adaptive anisotropic smoothing filter is designed to establish the correlation between the illumination of adjacent pixels, and the final image-illumination estimate I(x, y) is defined as:
[equation (9); rendered as an image in the original document]
where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution-kernel scale Ω2; P(x, y, Ω2) is the anisotropy template corresponding to Ims(x, y); and Ims(i, j) are the pixels of Ims(x, y) within the Ω2 neighbourhood.
In the present invention, the adjustable factor k and the standard deviation ρ are set to 0.6 and 1 respectively, and the Ω1 and Ω2 neighbourhood windows are set to 3 × 3.
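Equation (9) is likewise an image in the source; as a stand-in, the sketch below applies a plain 3 × 3 Gaussian smoothing (ρ = 1) to a fused estimate Ims. It captures the stated goal of correlating the illumination of adjacent pixels, but omits the adaptive anisotropy template P, so it is a simplification, not the patent's filter:

```python
import numpy as np

def gaussian_kernel(size=3, rho=1.0):
    """size x size Gaussian kernel with standard deviation rho, sum 1."""
    r = size // 2
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * rho ** 2))
    return k / k.sum()

def smooth_illumination(Ims, rho=1.0):
    """Gaussian smoothing over the 3 x 3 neighbourhood Omega_2; a
    simplified stand-in for the adaptive anisotropic filter of eq. (9)."""
    K = gaussian_kernel(3, rho)
    pad = np.pad(Ims, 1, mode='edge')
    out = np.empty_like(Ims, dtype=float)
    for x in range(Ims.shape[0]):
        for y in range(Ims.shape[1]):
            out[x, y] = (K * pad[x:x + 3, y:y + 3]).sum()
    return out

Ims = np.full((5, 5), 0.7)
I_est = smooth_illumination(Ims)
assert np.allclose(I_est, 0.7)   # a constant field is left unchanged
```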
3. Deriving the illumination invariant:
Once the illumination of the face image has been estimated, the illumination invariant can be derived from the Lambertian model described by formula (2). The illumination invariant of the face image F(x, y) is expressed as:
R(x, y) = F(x, y) / I(x, y)    (11)
Experiments verify that the method effectively eliminates the illumination differences of the original image, and that the numerical range of the illumination invariant R lies between 0 and 1, consistent with the numerical range of the intrinsic face component.
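Equation (11) can be sketched end to end: divide the face image by the estimated illumination. The epsilon guard against division by zero and the clip to [0, 1] are implementation choices of this sketch, not part of the patent text:

```python
import numpy as np

def illumination_invariant(F, I, eps=1e-6):
    """Eq. (11): R = F / I. The eps guard avoids division by zero, and
    the clip keeps R inside [0, 1], the claimed range of the intrinsic."""
    return np.clip(F / np.maximum(I, eps), 0.0, 1.0)

# Synthetic check: build F = I * R_true and recover R_true exactly.
I = np.array([[0.5, 1.0],
              [0.8, 0.4]])
R_true = np.array([[0.6, 0.3],
                   [0.9, 0.2]])
F = I * R_true
R = illumination_invariant(F, I)
assert np.allclose(R, R_true)
assert R.min() >= 0.0 and R.max() <= 1.0
```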
Embodiment:
To verify the validity of the method, the Yale B and extended Yale B databases are combined into the Yale B+ face database for the experiments. The complex illumination patterns of this database remain a challenging problem for illumination-robust face-recognition algorithms. In the recognition stage, principal component analysis (PCA) is used for feature extraction, and a nearest-neighbour classifier based on Euclidean distance is used for classification. The proposed algorithm is compared against current advanced algorithms: MSR, Gradientfaces, and the method of Guo (denoted S&L in the tables), and the corresponding recognition results are reported.
The Yale B+ face database contains 38 subjects under 64 illumination patterns, 2432 images in total. All images are resized to 100 × 100. According to the angle between the light source and the central facial axis, the database is divided into 5 subsets. Fig. 1 shows 5 images of one subject, one from each subset, together with the illumination invariants extracted by the present invention; it can be seen that the method effectively eliminates the influence of different illuminations on the intrinsic face component.
First, each of the 5 subsets in turn is selected as the training set, with the other four subsets as the test set; Tables 1-5 give the experimental results of the different algorithms. The recognition rate of the proposed algorithm is higher than that of the other algorithms, and markedly so when subset 5 is the training set. Then, to verify the efficiency of the proposed algorithm, one image per subject is selected at random as the training set (38 face images in total) and the remaining images form the test set (2394 face images in total); the experiment is repeated 60 times. The average recognition rates and standard deviations of the different algorithms are shown in Table 6: the average recognition rate of the proposed algorithm is clearly higher than that of the other algorithms, and its standard deviation is the smallest.
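The recognition stage described above, PCA features plus a Euclidean nearest-neighbour classifier, can be sketched with synthetic data; the helper names and the toy dataset are assumptions, and the patent's actual experiments use 100 × 100 Yale B+ images:

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA: mean plus the top right-singular vectors of centred X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, comps):
    return (X - mu) @ comps.T

def nn_classify(train_feats, train_labels, test_feats):
    """Euclidean nearest-neighbour classification."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :],
                       axis=2)
    return train_labels[d.argmin(axis=1)]

# Three well-separated synthetic "subjects", five samples each.
rng = np.random.default_rng(1)
centers = rng.normal(size=(3, 20))
X_train = np.repeat(centers, 5, axis=0) + 0.05 * rng.normal(size=(15, 20))
y_train = np.repeat(np.arange(3), 5)
X_test = centers + 0.05 * rng.normal(size=(3, 20))

mu, comps = pca_fit(X_train, 5)
pred = nn_classify(pca_transform(X_train, mu, comps), y_train,
                   pca_transform(X_test, mu, comps))
assert np.array_equal(pred, np.arange(3))
```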
Table 1: recognition rate (%) of the different algorithms with subset 1 as the training set.
Method Set 2 Set 3 Set 4 Set 5 Entire set
MSR 99.78 95.49 94.52 94.04 95.71
Gradientfaces 100.00 98.87 87.28 94.74 95.29
S&L 100.00 97.56 95.83 93.21 96.26
The method of the present invention 100.00 99.81 98.90 98.06 99.08
Table 2: recognition rate (%) of the different algorithms with subset 2 as the training set.
Method Set 1 Set 3 Set 4 Set 5 Entire set
MSR 97.74 94.17 93.64 90.31 93.12
Gradientfaces 99.25 95.30 92.54 93.91 94.69
S&L 98.12 96.62 96.05 90.58 94.48
The method of the present invention 100.00 98.12 99.56 98.02 98.74
Table 3: recognition rate (%) of the different algorithms with subset 3 as the training set.
Method Set 1 Set 2 Set 4 Set 5 Entire set
MSR 99.62 98.25 96.27 97.65 97.74
Gradientfaces 100.00 100.00 98.03 99.03 99.16
S&L 99.25 98.90 95.18 97.65 97.58
The method of the present invention 99.25 98.90 99.34 99.31 99.21
Table 4: recognition rate (%) of the different algorithms with subset 4 as the training set.
Method Set 1 Set 2 Set 3 Set 5 Entire set
MSR 95.87 96.71 94.17 99.31 96.86
Gradientfaces 100.00 99.56 97.37 99.72 99..9
S&L 99.25 98.68 94.93 99.45 98.03
The method of the present invention 98.50 100.00 99.25 99.31 99.34
Table 5: recognition rate (%) of the different algorithms with subset 5 as the training set.
Method Set 1 Set 2 Set 3 Set 4 Entire set
MSR 96.62 91.45 92.67 99.34 94.74
Gradientfaces 96.24 91.67 90.23 99.56 94.04
S&L 98.50 88.38 89.29 98.90 93.04
The method of the present invention 100.00 99.78 100.00 100.00 99.94
Table 6: average recognition rate (%) of the different algorithms with one randomly selected image per subject as the training set.
Method Set 1 Set 2 Set 3 Set 4 Set 5 Entire set
MSR 84.19 81.36 74.37 74.21 80.20 78.45
Gradientfaces 87.65 80.16 73.94 82.18 90.19 82.95
S&L 85.63 79.09 70.25 71.76 79.81 76.73
The method of the present invention 94.44 92.35 91.48 93.01 95.06 93.32
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principles of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A method for determining the illumination invariant of a face image under complex illumination, characterized by comprising the following steps:
1) determining a complex-illumination face-image model by analyzing the Lambertian model;
2) designing an illumination-estimation model and solving for the image illumination of the face image, as follows:
2-1) designing illumination-estimation model I for the regions where the illumination varies slowly and illumination-estimation model II for the regions where it varies quickly:
illumination-estimation model I is defined as:
[equation (3); rendered as an image in the original document]
illumination-estimation model II is defined as:
[equation (4); rendered as an image in the original document]
Fa(x, y) = Im(x, y) - F(x, y)    (5)
where Im(x, y) is the image illumination under model I, Is(x, y) is the image illumination under model II, and oi,j are the neighbours of the point (x, y) within the Ω1 neighbourhood; max(·) and min(·) denote the maximum and minimum of a data set;
2-2) computing Im(x, y) and Is(x, y) and using illumination fusion to merge the two estimates within the face image F(x, y); the fused illumination estimate Ims(x, y) is defined as:
[equation (6); rendered as an image in the original document]
T = mean(Fg(x, y)) + k × (max(Fg(x, y)) - mean(Fg(x, y)))    (7)
Fg(x, y) = Fa(x, y) / Im(x, y)    (8)
where mean(·) denotes the average of a data set and k is an adjustable factor;
2-3) designing an adaptive anisotropic smoothing filter to establish the correlation between the image illumination of adjacent pixels, and defining the final image illumination I(x, y) as:
[equation (9); rendered as an image in the original document]
where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution-kernel scale Ω2; P(x, y, Ω2) is the anisotropy template corresponding to Ims(x, y); and Ims(i, j) are the pixels of Ims(x, y) within the Ω2 neighbourhood;
3) computing the facial illumination invariant from the complex-illumination face-image model of step 1) and the image illumination solved in step 2).
2. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that, in said step 1), the complex-illumination face-image model is:
F(x, y) = I(x, y) R(x, y)    (2)
where F(x, y) is the face image, R(x, y) is the facial illumination invariant, and I(x, y) is the image illumination of the face image.
3. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that the adjustable factor k is taken as 0.6.
4. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that the standard deviation ρ is taken as 1.
5. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that the Ω1 and Ω2 neighbourhood windows are set to 3 × 3.
6. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that the facial illumination invariant is expressed as:
R(x, y) = F(x, y) / I(x, y)    (11)
where F(x, y) is the face image, R(x, y) is the facial illumination invariant, and I(x, y) is the image illumination of the face image.
CN201610371321.1A 2016-05-30 2016-05-30 Method for determining the illumination invariant of a face image under complex illumination Active CN106056076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610371321.1A CN106056076B (en) 2016-05-30 2016-05-30 Method for determining the illumination invariant of a face image under complex illumination


Publications (2)

Publication Number Publication Date
CN106056076A CN106056076A (en) 2016-10-26
CN106056076B true CN106056076B (en) 2019-06-14

Family

ID=57171435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610371321.1A Active CN106056076B (en) 2016-05-30 2016-05-30 Method for determining the illumination invariant of a face image under complex illumination

Country Status (1)

Country Link
CN (1) CN106056076B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239729B (en) * 2017-04-10 2020-09-01 南京工程学院 Illumination face recognition method based on illumination estimation
CN107451591A (en) * 2017-06-27 2017-12-08 Chongqing Three Gorges University A facial illumination-invariant feature-extraction method using Wallis operators
CN108335315A (en) * 2017-12-28 2018-07-27 State Grid Beijing Electric Power Company Method and apparatus for determining illumination-variation regions

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2005365A2 (en) * 2006-04-13 2008-12-24 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
EP2580740A2 (en) * 2010-06-10 2013-04-17 Tata Consultancy Services Limited An illumination invariant and robust apparatus and method for detecting and recognizing various traffic signs
CN103530634A (en) * 2013-10-10 2014-01-22 中国科学院深圳先进技术研究院 Face characteristic extraction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8175390B2 (en) * 2008-03-28 2012-05-08 Tandent Vision Science, Inc. System and method for illumination invariant image segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on illumination-invariant feature-image extraction methods for face authentication; Kuang Ting; China Master's Theses Full-text Database, Information Science and Technology; 2014-02-15 (No. 2); pp. 12-13 and abstract

Also Published As

Publication number Publication date
CN106056076A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
Li et al. Robust and accurate iris segmentation in very noisy iris images
Sun et al. Local morphology fitting active contour for automatic vascular segmentation
CN101359365B (en) Iris positioning method based on maximum between-class variance and gray scale information
Chen et al. A highly accurate and computationally efficient approach for unconstrained iris segmentation
Liu et al. Detecting wide lines using isotropic nonlinear filtering
Esmaeili et al. Automatic detection of exudates and optic disk in retinal images using curvelet transform
CN100373397C (en) Pre-processing method for iris image
Liu et al. Active contour model driven by local histogram fitting energy
Zhang et al. Level set evolution driven by optimized area energy term for image segmentation
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
Li et al. Liver segmentation from CT image using fuzzy clustering and level set
CN107066969A A face recognition method
CN106056076B (en) 2016-05-30 Method for determining the illumination invariant of a face image under complex illumination
CN107239729B (en) Illumination face recognition method based on illumination estimation
Khotanlou et al. Automatic brain tumor segmentation using symmetry analysis and deformable models
CN103870820A (en) Illumination normalization method for extreme illumination face recognition
Lee et al. Multiscale morphology based illumination normalization with enhanced local textures for face recognition
Asmuni et al. An improved multiscale retinex algorithm for motion-blurred iris images to minimize the intra-individual variations
CN109523559A A noisy-image segmentation method based on an improved energy-functional model
Zhou et al. A novel approach for red lesions detection using superpixel multi-feature classification in color fundus images
CN106372593B (en) Optic disk area positioning method based on vascular convergence
Zheng et al. Illumination normalization via merging locally enhanced textures for robust face recognition
Ahmed et al. Retina based biometric authentication using phase congruency
Sontakke et al. Automatic ROI extraction and vein pattern imaging of dorsal hand vein images
CN104021387A (en) Face image illumination processing method based on visual modeling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant