CN103902958A - Method for face recognition - Google Patents

Method for face recognition

Info

Publication number
CN103902958A
CN103902958A (application CN201210578261.2A)
Authority
CN
China
Prior art keywords
image
face
module
picture
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210578261.2A
Other languages
Chinese (zh)
Inventor
屈景春
吴军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING KAIZE TECHNOLOGY Co Ltd
Original Assignee
CHONGQING KAIZE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING KAIZE TECHNOLOGY Co Ltd filed Critical CHONGQING KAIZE TECHNOLOGY Co Ltd
Priority to CN201210578261.2A priority Critical patent/CN103902958A/en
Publication of CN103902958A publication Critical patent/CN103902958A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for face recognition. The method comprises: a. an image acquisition module; b. an image preprocessing module; c. a face positioning module; d. a feature extraction module; and e. a recognition module. The face recognition method of the invention uses relatively advanced technology, giving the application a certain technical advantage; it relies on mature techniques, ensuring the security and reliability of the application; the application is easy to extend and maintain, which facilitates technology updates; it makes full use of existing resources and avoids unnecessary re-investment as far as possible; and the code is written rigorously and readably, with clear explanatory comments, so that later redevelopment of the application can be carried out responsibly.

Description

Method for face recognition
Technical field
The present invention relates to the field of face recognition, and in particular to a method for face recognition.
Background technology
With the rapid growth of demand in security access control and financial transactions, biometric identification technology has received renewed attention. Recent advances in microelectronics and vision systems have reduced the cost of high-performance automatic identification in this field to an acceptable level, and face recognition is one of the most widely applicable of all biometric identification methods. Face recognition is an emerging but still little-known technology; most people know its seemingly mysterious applications only from films, in which the police feed a covertly taken photograph of a suspect into a computer, compare it with a police database, and retrieve the suspect's details and criminal record. This is not an imaginary plot: abroad, face recognition has long been used extensively by security departments such as key national agencies, the army and the police. Domestically, research on face recognition began in the 1990s, and the technology is currently used mainly in fields such as public security, finance, network security, property management and attendance.
Face recognition is one of the most challenging problems in machine vision and pattern recognition, and at the same time it has wide application value. It is a very active research field that draws on digital image processing, pattern recognition, computer vision, neural networks, psychology, physiology, mathematics and other disciplines. Although research in this area has produced some gratifying results, face recognition technology still faces severe problems in practical applications: the gray-level distributions of different faces are very similar, the face itself is a deformable object, and the endless variations of expression, pose, hairstyle and make-up all cause considerable trouble for correct identification. How to identify a large number of people correctly and in real time remains a problem that urgently needs to be solved.
On the basis of face detection, facial key-feature detection attempts to locate the main facial feature points and the shape information of the major organs such as the eyes and mouth. Commonly used methods include gray-level projection analysis, template matching, deformable templates, the Hough transform, the Snake operator, elastic graph matching based on the Gabor wavelet transform, active shape models and active appearance models. The main idea of the deformable template is to define a parametrically described shape from prior knowledge of the feature to be detected; the parameters of the model reflect the variable aspects of the corresponding feature shape, such as position, size and angle, and are finally refined by dynamically adapting the template to the edges, peaks, valleys and gray-level distribution of the image. Because template deformation exploits the global information of the feature region, it can detect the corresponding feature shapes well. However, since the deformable template minimizes an energy function in parameter space with an optimization algorithm, it has two main drawbacks: first, it depends strongly on the initial parameter values and easily falls into local minima; second, the computation time is long. To address both problems, a coarse-to-fine detection algorithm is adopted: first, prior knowledge of the facial structure and the peak/valley and frequency characteristics of the facial gray-level distribution are used to roughly detect the approximate regions of the eyes, nose, mouth and chin and some key feature points; on this basis, good initial template parameters are then provided, which significantly improves the speed and accuracy of the algorithm. The eyes are the most important facial feature, and their accurate localization is the key to recognition. The eye localization technique based on region growing builds on face detection and makes full use of the fact that the eyes form gray-level valley regions to the upper left and upper right of the face center within the face region, so that the two pupil centers can be located quickly and accurately. The algorithm adopts a region-growing search strategy: within the rough face frame given by the face localization step it estimates the initial position of the nose, then defines two initial search rectangles and grows them toward the approximate positions of the left and right eyes. Using the characteristic that the gray level of the eyes is clearly lower than that of the rest of the face, the search rectangles find the edges of the eyes and finally locate the pupil centers. Experiments show that the algorithm adapts well to changes in face size, pose and illumination, but localization can be inaccurate when the eye shadow is heavy, and wearing black-framed glasses also affects the result.
A facial perception system based on visual-channel information comprises subsystems such as face detection and tracking, facial feature localization, face recognition, face classification (discrimination of age, race, gender, etc.), expression recognition and lip reading. As can be seen in Fig. 1-1, after face detection and tracking, facial feature localization is normally an indispensable link of facial perception and the basis of the subsequent work, so it is of great importance. Although face recognition cannot be said to be strictly required by the other facial perception modules, known identity information combined with prior knowledge of a particular person can certainly improve the reliability of expression analysis, lip reading, speech recognition, gesture recognition and even handwriting recognition. The most direct application of user identity verification by a computer is user-specific environment configuration, such as a personalized working environment, information sharing and privacy protection.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for face recognition.
The technical solution adopted by the present invention to solve this technical problem is a method for face recognition, characterized by the following concrete steps:
A. image acquisition module: a picture is obtained either by photographing with a camera or from a picture library; the obtained picture is displayed in the interface of the recognition software;
B. image preprocessing module: comprises image light compensation, image graying, Gaussian smoothing, histogram equalization, image contrast enhancement and binarization;
Light compensation module: owing to lighting conditions, the captured image may suffer from unbalanced illumination and therefore from color deviation. To offset the color deviation present in the whole image, the solution adopted by this system is to sort the brightness of all pixels in the image from high to low, take the top 5% of pixels, and then amplify the image linearly so that the mean brightness of these pixels reaches 255. In effect this adjusts the RGB values of the picture's pixels: the acquired picture may have unbalanced lighting that would impair feature extraction, and the system also uses the YCrCb color space later, so light compensation is necessary in order to show the facial features in the image as clearly as possible. YCrCb is a color space used in video systems; the Y component represents pixel brightness, Cr represents the red-difference component and Cb the blue-difference component, and Cr and Cb together are usually called chroma. The YCrCb color space is the color representation model adopted in the CCIR 601 encoding scheme, which targets studio-quality standards;
The image graying process of the graying module converts a color image into a black-and-white image; its purpose is to present the image information in a more concrete and simpler way, although doing so inevitably loses some image information. The aim is therefore to represent the complex information of the image as simply as possible during the transformation. In this system the graying of the image is realized by the following steps: color-to-gray conversion, gray-level scaling, gray-level linear transformation, gray-level linear clipping, and gray-level inversion;
The Gaussian smoothing module smooths the image. During image acquisition, irregular noise often appears in the image owing to various factors, and data may be lost during transmission and storage, all of which degrades image quality. The process of suppressing such noise is called smoothing. Smoothing reduces the visual noise of an image, and once the high-frequency components have been removed, the originally inconspicuous low-frequency components become easier to identify. Smoothing can be realized by convolution, and the horizontal projection and binarization performed after convolution smoothing give good image results. If the smoothing is inappropriate, however, details of the image itself such as contours and lines become blurred; to smooth out the noise while preserving image detail as far as possible, this system adopts Gaussian smoothing;
Histogram equalization: the purpose of histogram equalization is to convert the input image into an image that has the same number of pixels at each gray level. The central idea is to change the gray-level histogram of the original image from a relatively concentrated range into a uniform distribution over the whole gray range. The equalization is carried out with a histogram transformation formula: a point operation converts the input into an output image having the same number of pixels at every gray level. The transformation used for gray-level equalization is
DB = f(DA) = (DMax / A0) ∫0^DA H(u) du,
where DA is the gray level of the original pixel, DB the gray level after transformation, DMax the maximum gray value, A0 the image area and H(u) the histogram.
The image contrast enhancement module further processes the image by pulling the contrast apart again. It operates directly on the gray level of each pixel of the original image: an enhancement function maps the gray level of each pixel to a new gray value, and different analytic forms of the enhancement function give different processing effects. To bring out the features of the image step by step, the contrast of the image needs to be enhanced; this is done mainly by statistics over the gray values of the image. Gray values smaller than a lower threshold Low are regarded as irrelevant information and treated as black, gray values larger than an upper threshold High are regarded as irrelevant information and removed, and gray values in between are contrast-stretched in proportion to their position within the total gray range and saved as the new pixel information.
The binarization module converts the acquired multi-level grayscale image into a binary image so as to reduce the amount of computation needed for analysis, understanding and recognition. Binarization uses an algorithmically chosen threshold to change the pixel values of the image so that the whole picture contains only the two values black and white; such an image generally consists of black regions and white regions, each pixel can be represented by a single bit ("1" for black and "0" for white, or the reverse), and it is called a binary image. This facilitates the extraction of features. In this design, the between-class variance and the within-class variance are used to determine the binarization threshold (a sketch of this threshold selection is given after step e below).
C. face positioning module: the preprocessed face picture is positioned and the eyes, nose and mouth are marked so that features can be extracted. Because the eyes are symmetric they can be marked quickly; the nose lies below the eyes and the mouth below the nose, so once the eyes are marked the nose and mouth can be marked accordingly;
D. feature extraction module: the feature values of the eyes, nose and mouth are extracted from the located face picture;
E. recognition module: the feature values extracted from the picture are compared with the values stored in the back-end database to realize the recognition function.
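The patent gives no code for the binarization of step b; the following is a minimal illustrative sketch of threshold selection by maximizing the between-class variance over the gray-level histogram (Otsu's method), which matches the variance criterion mentioned above. The 256-bin histogram input and the function name otsuThreshold are assumptions made for illustration only.
#include <cstdint>
#include <vector>
// Sketch: choose a binarization threshold by maximizing the between-class
// variance over a 256-bin gray-level histogram (Otsu's method).
int otsuThreshold(const std::vector<std::uint32_t>& hist)   // hist has 256 bins
{
    std::uint64_t total = 0, weightedSum = 0;
    for (int g = 0; g < 256; ++g) {
        total += hist[g];
        weightedSum += static_cast<std::uint64_t>(g) * hist[g];
    }
    if (total == 0)
        return 128;                       // degenerate image: fall back to the middle gray level
    double sumBackground = 0.0, weightBackground = 0.0;
    double bestVariance = -1.0;
    int bestThreshold = 0;
    for (int t = 0; t < 256; ++t) {
        weightBackground += hist[t];      // pixels with gray level <= t
        if (weightBackground == 0.0)
            continue;
        double weightForeground = static_cast<double>(total) - weightBackground;
        if (weightForeground == 0.0)
            break;
        sumBackground += static_cast<double>(t) * hist[t];
        double meanBackground = sumBackground / weightBackground;
        double meanForeground = (static_cast<double>(weightedSum) - sumBackground) / weightForeground;
        // between-class variance for this candidate threshold
        double diff = meanBackground - meanForeground;
        double betweenVariance = weightBackground * weightForeground * diff * diff;
        if (betweenVariance > bestVariance) {
            bestVariance = betweenVariance;
            bestThreshold = t;
        }
    }
    return bestThreshold;                 // gray <= threshold forms one class, gray > threshold the other
}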
The concrete steps of the image preprocessing module are as follows: (1) selection of design principles; (2) selection of the image file format; (3) selection of the development tool; (4) algorithm selection and analysis.
The concrete steps of the image graying module are as follows: (1) color-to-gray conversion; (2) gray-level scaling; (3) gray-level linear transformation; (4) gray-level linear clipping; (5) gray-level inversion.
The concrete steps of the feature extraction module are as follows: (1) extract the distance between the two eyes; (2) the tilt of the eyes; (3) the centroids of the eyes and the mouth; (4) mark each feature with a rectangle.
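As an illustration of items (1)-(3), the following minimal sketch computes the inter-eye distance, the tilt of the eye line and the midpoint (centroid) from two located pupil centers. The Point structure and the function names are assumptions made for illustration and are not the patent's own code.
#include <cmath>
// assumed 2-D point type for located feature centers (e.g. pupil centers)
struct Point {
    double x;
    double y;
};
// distance between the two eyes (item (1))
double eyeDistance(const Point& leftEye, const Point& rightEye)
{
    double dx = rightEye.x - leftEye.x;
    double dy = rightEye.y - leftEye.y;
    return std::sqrt(dx * dx + dy * dy);
}
// tilt of the eye line with respect to the horizontal, in degrees (item (2))
double eyeTiltDegrees(const Point& leftEye, const Point& rightEye)
{
    double dx = rightEye.x - leftEye.x;
    double dy = rightEye.y - leftEye.y;
    return std::atan2(dy, dx) * 180.0 / 3.14159265358979323846;
}
// midpoint of two feature points, usable as the centroid of the eye pair or,
// with the mouth corners, of the mouth (item (3))
Point midpoint(const Point& a, const Point& b)
{
    return Point{ (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
}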
The beneficial effects of the invention are: the face recognition method of the present invention uses relatively advanced technology, ensuring that the application has a certain technical advantage; it uses mature techniques, ensuring the security and reliability of the application; the application is easy to extend and maintain, which facilitates technology updates; it makes full use of existing resources and reduces unnecessary re-investment as far as possible; and the code is written rigorously and readably, with clear explanatory comments, so that redevelopment of the application can be carried out responsibly.
Brief description of the drawings
The present invention is further described below in conjunction with the drawings and embodiments.
Fig. 1 is a structural block diagram of the present invention;
Fig. 2 is a structural block diagram of the image preprocessing module in Fig. 1.
Embodiment
The present invention is explained in further detail below in conjunction with the accompanying drawings. The drawings are simplified schematic diagrams that describe the basic structure of the invention only schematically, and therefore show only the parts relevant to the invention.
The method for face recognition shown in Fig. 1 comprises the following concrete steps:
A. image acquisition module: a picture is obtained either by photographing with a camera or from a picture library; the obtained picture is displayed in the interface of the recognition software;
B. image preprocessing module: comprises image light compensation, image graying, Gaussian smoothing, histogram equalization, image contrast enhancement and binarization. The idea of image light compensation is proposed mainly because color information such as the skin color is often affected by factors such as the color of the light source and the color deviation of the image acquisition device, so that the color of the whole image shifts in one direction; this is what is commonly meant by a picture looking too cold, too warm, yellowish or bluish, a phenomenon that is especially common in carefully produced photographs. Following the proposal of Anil K. Jain et al., to offset the color deviation present in the whole image, the brightness values of all pixels in the image (the brightness after nonlinear gamma correction) are sorted from high to low and the top 5% of pixels are taken. If there are enough of these pixels (for example, more than 100), their brightness is used as the "reference white": their R, G and B components are all adjusted to 255, and the color values of all other pixels of the image are rescaled by the same factor. The light compensation function is implemented as follows:
1. Edit the menu IDR_MAINFRAM: first add a menu item named "Preprocessing" and set it as a "pop-up" menu in its property column. Clicking this Preprocessing menu item pops up a new submenu; name this submenu "Light compensation" and set its ID to ID_READY_LIGHTINGCONPENSATE. The corresponding handler is implemented in the file FaceDetectView.cpp; add the following code to void CFaceDetectView::OnReadyLightingconpensate():
hDIBTemp = gDib.CopyHandle(hDIB);   // keep a copy of the original DIB
gDib.LightingCompensate(hDIB);      // apply light compensation to the DIB
GlobalUnlock(hDIB);                 // unlock the DIB memory
Invalidate();                       // redraw the view with the compensated image
The light compensation itself is performed by the LightingCompensate() function called in the code above, which is a member function of the DIB class. Its core code is as follows:
// The loop below applies light compensation to the image: each color
// component is multiplied by the compensation coefficient co and clamped to 255.
for (int i = 0; i < height; i++)
    for (int j = 0; j < width; j++)
    {
        // offset of the current pixel within the DIB data
        lOffset = this->PixelOffset(i, j, wBytesPerLine);
        // blue component
        colorb = *(lpData + lOffset);
        colorb *= co;
        if (colorb > 255)
            colorb = 255;
        *(lpData + lOffset) = colorb;
        // green component
        colorb = *(lpData + lOffset + 1);
        colorb *= co;
        if (colorb > 255)
            colorb = 255;
        *(lpData + lOffset + 1) = colorb;
        // red component
        colorb = *(lpData + lOffset + 2);
        colorb *= co;
        if (colorb > 255)
            colorb = 255;
        *(lpData + lOffset + 2) = colorb;
    }
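The compensation coefficient co used above is not shown in the patent; according to the description (the mean brightness of the brightest 5% of pixels is scaled up to 255), it could be computed as in the following minimal sketch, where the flat 8-bit brightness buffer and the function name computeLightingGain are assumptions made for illustration:
#include <algorithm>
#include <functional>
#include <vector>
// Sketch: compute the linear gain that maps the mean brightness of the
// brightest 5% of pixels to 255 (the "reference white" scaling described above).
double computeLightingGain(const std::vector<unsigned char>& brightness)
{
    if (brightness.empty())
        return 1.0;
    std::vector<unsigned char> sorted(brightness);
    std::sort(sorted.begin(), sorted.end(), std::greater<unsigned char>());
    // take the top 5% of pixels (at least one pixel)
    std::size_t count = std::max<std::size_t>(1, sorted.size() / 20);
    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += sorted[i];
    double mean = sum / count;
    // gain that brings the mean of the brightest pixels up to 255;
    // each R, G, B component is then multiplied by this gain and clamped, as in the loop above
    return mean > 0.0 ? 255.0 / mean : 1.0;
}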
The concrete steps of the image graying module are as follows:
(1) Color-to-gray conversion: the color image is converted into a grayscale image with the usual empirical formula gray = 0.39 × R + 0.50 × G + 0.11 × B, where gray is the gray value and R, G and B are the red, green and blue component values respectively;
(2) Gray-level scaling: the gray level of each original pixel is multiplied by a scaling factor, and the result is finally limited to the range [0, 255];
(3) Gray-level linear transformation: when the image is under-exposed or over-exposed during imaging, the contrast may be insufficient and details in the image become hard to distinguish; a linear stretch of the image gray levels can often improve the appearance of the image markedly. The gray-level linear transformation is calculated as
g = (d - c) / (b - a) × (f - a) + c, for a ≤ f ≤ b;
g = f, otherwise,
where f is the gray level of the original pixel and g is the gray level after transformation. The transformation maps original gray levels in the interval [a, b] linearly onto the interval [c, d], while gray levels outside [a, b] remain unchanged. Here a, b, c, d, f and g are integer values in [0, 255]; it can be seen that a is mapped to c and b is mapped to d;
(4) Gray-level linear clipping: if the gray level of the original pixel is less than a, the gray level of that pixel is set to c; if it is greater than b, it is set to d;
(5) Gray-level inversion: the gray level of each pixel is replaced by its complement, 255 minus the original value, which completes the graying of the image (the operations of steps (1)-(5) are sketched below).
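As a minimal illustration of steps (1)-(5), the following sketch expresses the per-pixel operations under the formulas stated above; the function names, the rounding and the assumption a < b are choices made for illustration, not the patent's code.
#include <algorithm>
// (1) color-to-gray conversion with the empirical formula of step (1)
inline int rgbToGray(int r, int g, int b)
{
    return static_cast<int>(0.39 * r + 0.50 * g + 0.11 * b + 0.5);
}
// (2) gray-level scaling by a factor, limited to [0, 255]
inline int scaleGray(int f, double factor)
{
    return std::min(255, std::max(0, static_cast<int>(f * factor + 0.5)));
}
// (3) linear transformation mapping [a, b] onto [c, d]; other values unchanged (assumes a < b)
inline int linearTransform(int f, int a, int b, int c, int d)
{
    if (f < a || f > b)
        return f;
    return c + static_cast<int>((d - c) * (f - a) / static_cast<double>(b - a) + 0.5);
}
// (4) linear clipping: below a -> c, above b -> d
inline int linearClip(int f, int a, int b, int c, int d)
{
    if (f < a) return c;
    if (f > b) return d;
    return f;
}
// (5) gray-level inversion
inline int invertGray(int f)
{
    return 255 - f;
}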
Gaussian smoothing: the template operation is a frequently used operation in digital image processing; smoothing, sharpening, thinning and edge detection of an image all make use of it. For example, a common smoothing algorithm adds the gray value of a pixel in the original image to the gray values of its eight neighbouring pixels, takes the mean (divides by 9) and uses the result as the gray value of that pixel in the new image. This operation can be written as
1/9 ×  | 1  1   1 |
       | 1  1•  1 |
       | 1  1   1 |
The array above is like a matrix and is conventionally called a template; the dot marks the central element, which is the element being processed. If the marked element is instead placed at the top-left corner of the template, the operation reads: add the gray value of a pixel in the original image to the gray values of the 8 pixels adjacent to its bottom right, then take the mean (divide by 9) as the gray value of that pixel in the new image. If the template is
| 2• |
| 1  |
it means: take twice the pixel's own gray value plus the gray value of the element below it as the new value; and the template
| 2  |
| 1• |
means: take the pixel's own gray value plus twice the gray value of the element above it as the new gray value. Conventionally the template is not allowed to move outside the image border, so the processed image may be smaller than the original;
"-" denotes border points at which the template operation cannot be carried out; the usual practice is simply to copy the gray value of the original image there and perform no further processing. The template operation thus realizes a neighbourhood computation: the result at a pixel depends not only on the gray level of that pixel but also on the values of its neighbouring points. The purpose of smoothing and the adopted solution are described in detail below.
During image acquisition, irregular random noise often appears in the image owing to various factors, such as loss or corruption of data during transmission and storage, all of which degrades image quality. The process of handling such noise points is called smoothing: smoothing reduces the visible noise of an image, and once the high-frequency components in the image have been removed, the originally inconspicuous low-frequency components become easier to identify. Noise points are generally isolated points whose gray levels differ markedly from those of their neighbouring pixels, and such abrupt gray-level changes are always high-frequency. Smoothing can be realized by convolution, and the cutoff frequency of the smoothing is determined by the size of the convolution kernel and its coefficients. A convolution kernel used for smoothing filtering is called a low-pass filter; low-pass filters have the following characteristics: (1) the number of rows and columns of the kernel is odd, usually a 3 × 3 matrix; (2) the convolution coefficients are symmetric about the central point; (3) all convolution coefficients are positive; (4) coefficients farther from the center are smaller or remain unchanged; (5) the result of the convolution does not change the brightness of the image. Convolution smoothing gives good image results for the subsequent horizontal projection and binarization: the horizontal projection curve looks smoother and the binarized image contains fewer isolated points. Several commonly used convolution kernels are given below:
LP1:                 LP2:                   LP3 (Gaussian):
| 1/9  1/9  1/9 |    | 1/10  1/10  1/10 |   | 1/16  2/16  1/16 |
| 1/9  1/9  1/9 |    | 1/10  1/5   1/10 |   | 2/16  4/16  2/16 |
| 1/9  1/9  1/9 |    | 1/10  1/10  1/10 |   | 1/16  2/16  1/16 |
A kernel is commonly applied as follows: multiply the pixel values in the neighbourhood of the central point by the corresponding coefficients of the matrix, add the products to obtain a value, multiply this value by the overall coefficient, and assign the result to the central point. In general, different kinds of noise call for different convolution operations. The convolution used here is the Gaussian convolution kernel, i.e. kernel LP3 above, which is obtained by sampling a two-dimensional Gaussian function. The advantages of the Gaussian smoothing algorithm are that the smoothed image shows little distortion and that the algorithm is fairly general and can remove different kinds of noise. Note that border points cannot be processed during smoothing, so the processing range should be kept inside the image boundary.
1. Its concrete programming is as follows: edit the menu IDR_MAINFRAM, add a submenu item named "Gaussian smoothing" under the "Preprocessing" menu, and set its ID to ID_READY_Template.
2. Add the button-click handler for the "Gaussian smoothing" menu item in the class CFaceDetectView; its code is as follows:
// perform the template (convolution) operation
Template(tem, 3, 3, xishu);
Invalidate(TRUE);
where tem is the template (kernel) array and xishu is the overall coefficient; Template() is the main function realizing Gaussian smoothing, and its core code is:
// Centered on the point (i, j), multiply each pixel in a window the same size
// as the template by the template coefficient at the corresponding position
// and accumulate the products.
sum = 0;
for (m = i - ((tem_h - 1) / 2); m <= i + ((tem_h - 1) / 2); m++)
{
    for (n = j - ((tem_w - 1) / 2); n <= j + ((tem_w - 1) / 2); n++)
    {
        sum += Gray[m][n] * tem[(m - i + ((tem_h - 1) / 2)) * tem_w + n - j + ((tem_w - 1) / 2)];
    }
}
// multiply the result by the overall coefficient
sum = (int)sum * xishu;
// take the absolute value
sum = fabs(sum);
// values below 0 are forced to 0
if (sum < 0)
    sum = 0;
// values above 255 are forced to 255
if (sum > 255)
    sum = 255;
HeightTemplate[i][j] = sum;
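For reference, if tem is an array of double coefficients and xishu a double scale factor (an assumption consistent with the code above, not a signature stated in the patent), the Gaussian kernel LP3 could be supplied to Template() as follows:
// Gaussian kernel LP3 expressed as integer weights with an overall coefficient of 1/16,
// matching the tem / xishu convention used above (illustrative only).
double tem[9] = { 1, 2, 1,
                  2, 4, 2,
                  1, 2, 1 };
double xishu = 1.0 / 16.0;
Template(tem, 3, 3, xishu);   // apply 3 x 3 Gaussian smoothing
Invalidate(TRUE);             // redraw the view with the smoothed image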
The purpose of histogram equalization is to convert the input image, by a point operation, into an output image that has the same number of pixels at every gray level (that is, an image whose histogram is flat). This is very effective for converting images into a consistent form before image comparison or segmentation.
According to the definition of the probability density function of an image (PDF, the histogram normalized to unit area):
P(x) = (1 / A0) × H(x)      (formula 5)
where H(x) is the histogram and A0 is the area of the image. Let the probability density function of the image before the transformation be Pr(r) and that of the transformed image be Ps(s), with transfer function s = f(r). From probability theory we obtain:
Ps(s) = Pr(r) × (dr / ds)      (formula 6)
Thus, if the probability density function of the transformed image is to equal 1 (that is, the histogram is to be flat), the transformation must satisfy:
ds / dr = Pr(r)      (formula 4-5)
Integrating both sides gives:
S = f(r) = ∫0^r Pr(u) du = (1 / A0) ∫0^r H(u) du      (formula 7)
This transformation is the cumulative distribution function of the image.
The derivation above is for the normalized case; for the non-normalized case, the gray-balance transformation only needs to be multiplied by the maximum gray value (DMax, which is 255 for a gray-scale image):
DB = f(DA) = (DMax / A0) ∫0^DA H(u) du      (formula 1)
For a discrete image the transformation becomes:
DB = f(DA) = (DMax / A0) × Σ(i=0..DA) Hi      (formula 8)
where Hi is the number of pixels at gray level i.
Its programming is as follows: the gray-balance operation likewise does not need to change the palette or file header of the DIB; it is sufficient to pass the pointer to the start of the DIB pixel data together with the height and width of the DIB to a subroutine to complete the gray-balance transformation. Its core code is as follows:
*(lpData + lOffset) = state;        // write the equalized gray value
*(lpData + lOffset + 1) = state;
*(lpData + lOffset + 2) = state;
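As an illustration of formula 8, the following minimal sketch builds the discrete equalization lookup table from a 256-bin gray-level histogram; the container types and the function name buildEqualizationTable are assumptions made for illustration, not the patent's code. The new gray value of a pixel is then table[old gray], which corresponds to the value written to state above.
#include <cstdint>
#include <vector>
// Sketch of the discrete gray-balance transformation of formula 8:
// DB = f(DA) = (DMax / A0) * sum of Hi for i = 0..DA, with DMax = 255.
std::vector<std::uint8_t> buildEqualizationTable(const std::vector<std::uint32_t>& hist)   // 256 bins
{
    std::uint64_t area = 0;                  // A0: total number of pixels
    for (int g = 0; g < 256; ++g)
        area += hist[g];
    std::vector<std::uint8_t> table(256, 0);
    if (area == 0)
        return table;
    std::uint64_t cumulative = 0;            // running sum of Hi
    for (int g = 0; g < 256; ++g) {
        cumulative += hist[g];
        table[g] = static_cast<std::uint8_t>(255.0 * cumulative / area + 0.5);
    }
    return table;                            // new gray value = table[old gray value]
}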
Realizing image contrast enhancement: after the histogram equalization of the image, contrast enhancement can be carried out to pull the contrast further apart. Statistics are gathered over the gray values of the image; values smaller than the lower setting are regarded as irrelevant information and treated as black, values larger than the upper setting are regarded as irrelevant information and removed, and values in between are contrast-stretched in proportion to their position within the total gray range and saved as the new pixel information.
The fundamental purpose of this step is to bring out the features of the image step by step.
(2) Coding:
1. Edit the menu IDR_MAINFRAM: add a submenu item named "Image contrast enhancement" under the "Preprocessing" menu and set its ID to ID_READY_ContrastEnhance.
2. Add the button-click handler for the "Image contrast enhancement" menu item in the class CFaceDetectView; its code is as follows:
lOffset = gDib.PixelOffset(i, j, gwBytesPerLine);
// obtain the contrast-enhanced gray value
int state = IncreaseContrast(ZFT[k][k1], 100);
// display the image after gray-level enhancement
*(lpData + lOffset) = state;
*(lpData + lOffset + 1) = state;
*(lpData + lOffset + 2) = state;
Here IncreaseContrast() is the key function realizing image contrast enhancement; it adjusts the contrast according to the parameter n, and the larger n is, the stronger the contrast. Its core is:
// very small values are set to 0 (black)
if (pByte <= Low)
    return 0;
// intermediate values are contrast-stretched
else if ((Low < pByte) && (pByte < High))
    return int((pByte - Low) / Grad);
// very large values are set to 255 (white)
else
    return 255;
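The patent does not show how Low, High and Grad are obtained. One reading consistent with the code above (an assumption, not the patent's definition) is that Low and High are cut-off gray values taken from the gray-level statistics and Grad is the step width that maps the interval (Low, High) onto 0-255, for example:
// assumed derivation of the stretch parameters used by IncreaseContrast();
// the numeric cut-off values are illustrative only
int Low = 30;                          // lower cut-off from the gray-level statistics (assumed)
int High = 220;                        // upper cut-off from the gray-level statistics (assumed)
double Grad = (High - Low) / 255.0;    // step width: (pByte - Low) / Grad then falls in 0..255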
C. face positioning module: the preprocessed face picture is positioned and the eyes, nose and mouth are marked so that features can be extracted;
D. feature extraction module: the feature values of the eyes, nose and mouth are extracted from the located face picture;
E. recognition module: the feature values extracted from the picture are compared with the values stored in the back-end database to realize the recognition function.
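The comparison in step E is not detailed in the patent; purely as an illustrative sketch, it could be realized as a nearest-neighbour search over the stored feature vectors, with the FaceFeature structure, the Euclidean distance and the acceptance threshold all being assumptions made for illustration.
#include <cmath>
#include <limits>
#include <string>
#include <vector>
// Assumed feature record: a person identifier plus a fixed-length vector of the
// geometric feature values described above (eye distance, eye tilt, centroids, ...).
struct FaceFeature {
    std::string personId;
    std::vector<double> values;
};
// Sketch of step E: compare the extracted feature vector with every record in the
// back-end database and return the closest identity, or "" if no record is close enough.
std::string recognize(const std::vector<double>& probe,
                      const std::vector<FaceFeature>& database,
                      double threshold = 10.0)     // acceptance threshold (assumed value)
{
    std::string bestId;
    double bestDist = std::numeric_limits<double>::max();
    for (const FaceFeature& entry : database) {
        if (entry.values.size() != probe.size())
            continue;                              // skip incompatible records
        double sum = 0.0;
        for (std::size_t k = 0; k < probe.size(); ++k) {
            double d = probe[k] - entry.values[k];
            sum += d * d;
        }
        double dist = std::sqrt(sum);              // Euclidean distance
        if (dist < bestDist) {
            bestDist = dist;
            bestId = entry.personId;
        }
    }
    return bestDist <= threshold ? bestId : std::string();
}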
Taking the ideal embodiments of the present invention described above as enlightenment, those skilled in the art can, on the basis of the above description, make various changes and modifications without departing from the scope of the technical idea of the invention. The technical scope of the invention is not limited to the content of the specification and must be determined according to the scope of the claims.

Claims (4)

1. A method for face recognition, characterized by the following concrete steps:
A. image acquisition module: a picture is obtained either by photographing with a camera or from a picture library; the obtained picture is displayed in the interface of the recognition software;
B. image preprocessing module: comprises image light compensation, image graying, Gaussian smoothing, histogram equalization, image contrast enhancement and binarization;
C. face positioning module: the preprocessed face picture is positioned and the eyes, nose and mouth are marked so that features can be extracted;
D. feature extraction module: the feature values of the eyes, nose and mouth are extracted from the located face picture;
E. recognition module: the feature values extracted from the picture are compared with the values stored in the back-end database to realize the recognition function.
2. The method for face recognition according to claim 1, characterized in that the concrete steps of said image preprocessing module are as follows: (1) selection of design principles; (2) selection of the image file format; (3) selection of the development tool; (4) algorithm selection and analysis.
3. The method for face recognition according to claim 1, characterized in that the concrete steps of said image graying module are as follows: (1) color-to-gray conversion; (2) gray-level scaling; (3) gray-level linear transformation; (4) gray-level linear clipping; (5) gray-level inversion.
4. The method for face recognition according to claim 1, characterized in that the concrete steps of said feature extraction module are as follows: (1) extract the distance between the two eyes; (2) the tilt of the eyes; (3) the centroids of the eyes and the mouth; (4) mark each feature with a rectangle.
CN201210578261.2A 2012-12-28 2012-12-28 Method for face recognition Pending CN103902958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210578261.2A CN103902958A (en) 2012-12-28 2012-12-28 Method for face recognition

Publications (1)

Publication Number Publication Date
CN103902958A true CN103902958A (en) 2014-07-02

Family

ID=50994271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210578261.2A Pending CN103902958A (en) 2012-12-28 2012-12-28 Method for face recognition

Country Status (1)

Country Link
CN (1) CN103902958A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101329722A (en) * 2007-06-21 2008-12-24 上海北控智能科技有限公司 Human face recognition method for performing recognition algorithm based on neural network
WO2009062945A1 (en) * 2007-11-16 2009-05-22 Seereal Technologies S.A. Method and device for finding and tracking pairs of eyes
CN102194131A (en) * 2011-06-01 2011-09-21 华南理工大学 Fast human face recognition method based on geometric proportion characteristic of five sense organs

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408780A (en) * 2014-11-28 2015-03-11 四川浩特通信有限公司 Face recognition attendance system
CN105447827A (en) * 2015-11-18 2016-03-30 广东欧珀移动通信有限公司 Image noise reduction method and system thereof
CN105447827B (en) * 2015-11-18 2018-01-16 广东欧珀移动通信有限公司 Image denoising method and system
CN105844727A (en) * 2016-03-18 2016-08-10 中兴智能视觉大数据技术(湖北)有限公司 Intelligent dynamic human face recognition attendance checking record management system
CN106203356A (en) * 2016-07-12 2016-12-07 中国计量大学 A kind of face identification method based on convolutional network feature extraction
CN106203356B (en) * 2016-07-12 2019-04-26 中国计量大学 A kind of face identification method based on convolutional network feature extraction
CN106446779A (en) * 2016-08-29 2017-02-22 深圳市软数科技有限公司 Method and apparatus for identifying identity
CN106346471A (en) * 2016-08-31 2017-01-25 梁云 Intelligent automatic checking robot for storage
CN106626381A (en) * 2016-08-31 2017-05-10 李军 Shooting device used for 3D printer
CN106626381B (en) * 2016-08-31 2017-11-21 东莞理工学院 The filming apparatus that 3D printer uses
CN106407906A (en) * 2016-08-31 2017-02-15 彭青 Human face identification method
CN106384086A (en) * 2016-08-31 2017-02-08 广州市贺氏办公设备有限公司 Dual-camera face identification method and system
CN106778621A (en) * 2016-12-19 2017-05-31 四川长虹电器股份有限公司 Facial expression recognizing method
CN106910434A (en) * 2017-02-13 2017-06-30 武汉随戈科技服务有限公司 A kind of exhibitions conference service electronics seat card
CN109118434A (en) * 2017-06-26 2019-01-01 南京东大智能化系统有限公司 A kind of image pre-processing method
CN108875331A (en) * 2017-08-01 2018-11-23 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
CN108492350A (en) * 2018-04-02 2018-09-04 吉林动画学院 Role's mouth shape cartoon production method based on lip-reading
CN108573230B (en) * 2018-04-10 2020-06-26 京东方科技集团股份有限公司 Face tracking method and face tracking device
CN108573230A (en) * 2018-04-10 2018-09-25 京东方科技集团股份有限公司 Face tracking method and face tracking device
CN108446691A (en) * 2018-06-09 2018-08-24 西北农林科技大学 A kind of face identification method based on SVM linear discriminants
CN110009639A (en) * 2018-08-02 2019-07-12 永康市柴迪贸易有限公司 Books automatic push platform
CN109299655A (en) * 2018-08-09 2019-02-01 大连海事大学 A kind of online method for quickly identifying of marine oil overflow based on unmanned plane
CN109344739A (en) * 2018-09-12 2019-02-15 安徽美心信息科技有限公司 Mood analysis system based on facial expression
CN111222407A (en) * 2019-11-18 2020-06-02 太原科技大学 Palate wrinkle identification method adopting uniform slicing and inflection point characteristic extraction
CN111292497A (en) * 2020-02-24 2020-06-16 青岛海尔多媒体有限公司 Control method and device for home monitoring system and electrical equipment
CN111445591A (en) * 2020-03-13 2020-07-24 平安科技(深圳)有限公司 Conference sign-in method, system, computer equipment and computer readable storage medium
CN111476175A (en) * 2020-04-09 2020-07-31 上海看看智能科技有限公司 Adaptive topological graph matching method and system suitable for old people face comparison
CN111738242A (en) * 2020-08-21 2020-10-02 浙江鹏信信息科技股份有限公司 Face recognition method and system based on self-adaption and color normalization
CN112562106A (en) * 2020-11-16 2021-03-26 浙江大学 Data acquisition and processing system for computer software development
CN112381073A (en) * 2021-01-12 2021-02-19 上海齐感电子信息科技有限公司 IQ (in-phase/quadrature) adjustment method and adjustment module based on AI (Artificial Intelligence) face detection


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140702