CN101089874B - Identify recognizing method for remote human face image - Google Patents

Identify recognizing method for remote human face image

Info

Publication number
CN101089874B
CN101089874B · CN2006100870359A · CN200610087035A
Authority
CN
China
Prior art keywords
face image
gabor
image
histogram
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2006100870359A
Other languages
Chinese (zh)
Other versions
CN101089874A (en)
Inventor
邵刚
庄镇泉
庄连生
李斌
王睿斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN2006100870359A priority Critical patent/CN101089874B/en
Publication of CN101089874A publication Critical patent/CN101089874A/en
Application granted granted Critical
Publication of CN101089874B publication Critical patent/CN101089874B/en

Landscapes

  • Image Analysis (AREA)

Abstract

A method of identity recognition using a remote human face image (HFI). The server processes standard HFIs to obtain standard face histogram features, builds weak classifiers from those features, combines the optimal weak classifiers into a strong classifier, and takes the histograms corresponding to the strong classifier as the optimal histogram features of the standard HFIs. The HFI to be identified is processed to obtain its own optimal histogram features, which are compared with the standard features to generate a sample to be identified; classifying this sample confirms the user identity of the HFI to be identified.

Description

Identity recognition method of remote face image
Technical Field
The invention relates to a remote identity recognition technology in a communication system, in particular to an identity recognition method of a remote face image.
Background
With the rapid development of communication networks and the growth of various services, how to perform remote identity recognition in communication networks is becoming more and more important; remote identity recognition has become a prerequisite and an important component of the services offered by network service providers. Traditional remote identity recognition adopts a text password: the terminal sends the text password to an authentication server of the communication network for authentication. This mode not only requires users to remember various complicated text passwords, but the passwords are also easily lost or stolen.
In order to make remote identification efficient, novel and convenient, the inherent physiological or behavioral characteristics of the human body can now be used, i.e., remote identification can adopt biometric recognition technology. Among the numerous biometric techniques, face recognition plays an important role and has already been applied in some specific fields. Face recognition performs remote identity recognition according to the face image, and includes face detection and face image recognition.
In a patent application with the application number of CN200310120624.9 entitled "step-by-step face detection and recognition method in mobile computing environment", a method for face detection and recognition is disclosed, which comprises: firstly, detecting and calibrating, namely obtaining an image from a camera on mobile equipment, simply and effectively correcting the light of the image, and detecting and calibrating a human face by adopting a rapid human face detection algorithm; secondly, encryption transmission, namely, encrypting the digital watermark of the calibrated face range and then sending the encrypted face range to a server through a wireless communication network, and verifying the digital watermark embedded in the face range by the server to judge the integrity and correctness of the face image; and finally, carrying out face image recognition by adopting a face recognition training algorithm based on an embedded Hidden Markov Model (HMM) and returning an authentication result to the mobile equipment.
Current research results show that the difference between face images of the same person under different illumination conditions may be larger than the difference between face images of different persons. When indoor illumination conditions change little, the recognition rate of the best face recognition systems can reach 95%, but under outdoor illumination it drops sharply to about 50%. Illumination conditions have therefore become an important factor affecting the recognition rate of remote face image authentication. Because mobile devices are used in complex settings, the illumination of the captured images is necessarily complex; the above face image authentication scheme only performs simple light correction on the captured image and does not apply a dedicated illumination preprocessing algorithm, so the success rate of remote face image recognition is greatly reduced.
In order to improve the success rate of recognizing the face image without being affected by illumination conditions during remote verification, a Self-Quotient Image (SQI) can be adopted to perform illumination normalization on the face image before transmitting it to the server. The method specifically comprises the following steps:
(1) Firstly, given a face image I, select n anisotropic Gaussian smoothing operators G_1, G_2, ..., G_n of different scales, assign them weights W_1, W_2, ..., W_n, and smooth I with each operator to obtain a series of smoothed images:

$\bar{I}_i = I \oplus \frac{1}{N} W_i G_i, \quad i = 1, 2, \ldots, n;$
(2) Calculate the self-quotient images Q_i, the calculation formula being: $Q_i = \frac{I}{\bar{I}_i}, \quad i = 1, 2, \ldots, n;$
(3) Adjust the values of the self-quotient images using a nonlinear function T so that the values of Q_i fall within [0, 255]; the adjusted images are recorded as $D_i = T(Q_i), \; i = 1, 2, \ldots, n;$
(4) Sum the adjusted images to obtain the final self-quotient image Q: $Q = \sum_{i=1}^{n} m_i D_i,$ where m_i is the weight of the self-quotient image computed by the filter of the corresponding scale and may be taken as 1;
(5) and performing subsequent processing, such as feature extraction and recognition, by using the self-quotient image Q as a result of illumination preprocessing of the original image I, and sending the face image subjected to the subsequent processing to a server side for recognition by adopting a face recognition training algorithm of an HMM.
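The SQI steps (1)-(5) above can be sketched as follows. This is a minimal illustration: isotropic Gaussians stand in for the anisotropic operators G_i, tanh is an arbitrary choice for the nonlinear function T, and the function and parameter names are illustrative rather than taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(img, sigmas=(1.0, 2.0, 4.0), weights=None):
    """Multi-scale self-quotient image (SQI) illumination normalization.

    Smooth the image at several Gaussian scales, divide the image by each
    smoothed version (Q_i = I / I_bar_i), squash the quotients into
    [0, 255) with a nonlinear function, and sum the weighted results.
    """
    img = img.astype(np.float64) + 1e-6              # avoid division by zero
    weights = weights or [1.0] * len(sigmas)
    result = np.zeros_like(img)
    for sigma, w in zip(sigmas, weights):
        smoothed = gaussian_filter(img, sigma) + 1e-6  # I_bar_i
        q = img / smoothed                             # Q_i = I / I_bar_i
        d = 255.0 * np.tanh(q)                         # nonlinear T, into [0, 255)
        result += w * d                                # Q = sum_i m_i D_i
    return result / sum(weights)
```

The final division keeps the summed result in the same [0, 255) range as each D_i.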
However, the SQI is adopted to perform illumination normalization on the face image and then transmit the normalized face image to the server, which also has the following disadvantages: firstly, the parameters involved in SQI are numerous, and particularly, the parameter selection of a Gaussian smoothing operator in the method is difficult; second, the SQI does not work well for shadow removal in facial images; third, the accuracy of feature extraction in face images by SQI needs to be further improved.
In order to improve the success rate of face image recognition without being affected by illumination conditions during remote verification of face images, the following method can be adopted: firstly, establishing a standard face image and a strong classifier at a server end, namely performing Gabor filtering on the standard face image in 5 scales and 8 directions by using a Gabor filter, describing and storing the characteristics of the image by using a magnitude image after the Gabor filtering, constructing a weak classifier on the basis of the magnitude after Gabor conversion of a single pixel point in the magnitude image, and performing weak classifier synthesis by AdaBoost to further construct a strong classifier; and then, geometrically normalizing the image shot by the mobile equipment to obtain a face image to be recognized, sending the face image to a server, and authenticating the face image to be recognized by the server according to the stored standard face image characteristics and the strong classifier of the face image. The method comprises the following specific steps:
(1) At the server side, Gabor filtering in 5 scales and 8 directions is performed on the standard face image, the Gabor filter being expressed as:

$\psi_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\sigma^2} \, e^{-\|k_{u,v}\|^2 \|z\|^2 / 2\sigma^2} \left[ e^{i k_{u,v} z} - e^{-\sigma^2/2} \right],$

where $k_{u,v} = k_v e^{i\phi_u}$; $k_v = k_{\max}/f^v$ specifies the frequency, $\phi_u = u\pi/8$ with $\phi_u \in [0, \pi)$ specifies the direction, and $z = (x, y)$.
In the above formula, v controls the scale of the Gabor filter and determines its center in the frequency domain; u controls the filtering direction. In the experiment, the parameters take the values $v \in \{0, 1, 2, 3, 4\}$, $u \in \{0, 1, 2, 3, 4, 5, 6, 7\}$, $\sigma = 2\pi$, $k_{\max} = \pi/2$ and $f = \sqrt{2}$. After Gabor filtering, a standard face image is represented by 40 images of the same size, called Gaborfaces, which are stored, as shown in FIG. 1: the left side is a standard face image without Gabor filtering, and the right side is the standard face image after Gabor filtering;
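The 5-scale, 8-direction bank just described can be transcribed directly. The kernel below follows the formula for ψ_{u,v}(z) with the stated parameter values; the spatial support `size` of the sampled kernel is an illustrative assumption not fixed by the text.

```python
import numpy as np

def gabor_kernel(u, v, size=15, sigma=2 * np.pi, k_max=np.pi / 2, f=np.sqrt(2)):
    """One kernel of the 5-scale (v), 8-direction (u) Gabor bank.

    Direct transcription of psi_{u,v}(z): a Gaussian envelope times a
    complex carrier with the DC term e^{-sigma^2/2} subtracted.
    """
    k_v = k_max / f**v                      # k_v = k_max / f^v (frequency)
    phi_u = u * np.pi / 8.0                 # phi_u = u*pi/8 (direction)
    kx, ky = k_v * np.cos(phi_u), k_v * np.sin(phi_u)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2, z2 = kx**2 + ky**2, x**2 + y**2
    envelope = (k2 / sigma**2) * np.exp(-k2 * z2 / (2 * sigma**2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)
    return envelope * carrier

# The full bank: 40 complex kernels (5 scales x 8 directions).
bank = [gabor_kernel(u, v) for v in range(5) for u in range(8)]
```

Convolving an image with each kernel yields the 40 Gaborfaces; the magnitude and angle of each complex response give the amplitude and phase images.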
(2) at the server side, after Gabor filtering is carried out on all standard face images, positive and negative samples of the standard faces, namely training samples, are generated. The positive samples represent the difference between the filtered Gaborfaces of different standard face images of the same person, and the negative samples represent the difference between the filtered Gaborfaces of the standard face images of different persons. As shown in fig. 2, the upper part of the graph shows a positive sample obtained by filtering different face images of the same person; the lower part of the graph shows a negative sample obtained after the filtering of the face images of different people;
(3) at a server side, according to the obtained training sample, selecting an optimal weak classifier combination by using each pixel point of the training sample as a weak classifier and using an AdaBoost algorithm to form a strong classifier, wherein a frame of the training process is shown in FIG. 3, and finally training to obtain the strong classifier;
(4) At the server end, recognition is performed using the Gaborfaces and the strong classifier of the standard face images, as follows: for a face image to be authenticated that has been geometrically normalized on the mobile device, Gabor filtering is first performed; its Gaborfaces are then compared one by one with the Gaborfaces of the standard face images stored at the server end, and the differences between them generate a sample to be authenticated. This sample is classified by the strong classifier obtained in the training stage: if it belongs to the positive class, the face to be authenticated and the standard face image belong to the same person; otherwise they are different persons. The identity of the face image to be authenticated is thereby finally determined.
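As a rough illustration of how AdaBoost combines per-feature weak classifiers into a strong classifier (steps (3)-(4)), here is a generic decision-stump AdaBoost sketch; it is not the patent's implementation, and the exhaustive threshold search is only practical for small toy data.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Minimal AdaBoost: each feature column with a threshold and sign acts
    as a weak classifier; boosting reweights samples each round and keeps
    the lowest-error stump. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)        # stump weight
        pred = s * np.where(X[:, j] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)               # upweight mistakes
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)
```

The greedy, round-by-round selection here is exactly what the critique below calls a gradient-descent-like procedure that may stop at a local optimum.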
The method for remote face image authentication also has the following defects. Firstly, only the Gabor-filtered amplitude image is used to describe image features while the phase image is discarded; in practice the phase image contains more texture information than the amplitude image and is more robust to illumination change. Secondly, the amplitude of a single pixel is used directly as a feature, which is less robust for classification than region-based statistical features. Thirdly, AdaBoost synthesis of the weak classifiers may not yield an optimal strong classifier: AdaBoost is essentially equivalent to gradient descent on a function and may fall into a local optimum, so the resulting strong classifier may not be globally optimal, which affects the recognition rate of the remote face image.
Disclosure of Invention
In view of the above, the present invention is directed to an identity recognition method for a remote face image, which can improve the recognition rate of the face image without being affected by illumination conditions when the face image is remotely recognized.
According to the above purpose, the technical scheme of the invention is realized as follows:
an identity recognition method of a remote face image comprises the following steps:
in the training stage of the standard face image, the server side carries out illumination preprocessing, Gabor filtering, Gabor coefficient normalization, sub-window analysis and Gabor histogram statistics on the standard face image to obtain Gabor histogram features of the standard face image, pairwise combination is carried out on the standard face image, a weak classifier is constructed according to the Gabor histogram features of the standard face image, the weak classifier is screened out by using a preferred algorithm to form a strong classifier, and the Gabor histogram features of the standard face image corresponding to the strong classifier are the optimal Gabor histogram features of the standard face image;
in the stage of identity recognition of the remote face image, the server side carries out illumination preprocessing, Gabor filtering, Gabor coefficient normalization, sub-window analysis and Gabor histogram statistics on the face image to be recognized received from the client side, then extracts the optimal Gabor histogram characteristics of the face image to be recognized, compares the optimal Gabor histogram characteristics with the optimal Gabor histogram characteristics of the standard face image one by one to generate a sample to be recognized, classifies the sample to be recognized according to a strong classifier obtained in the stage of training, determines the user identity of the face image to be recognized, and sends the recognition result to the client side.
The illumination pretreatment process comprises the following steps:
a. perform logarithmic transformation on the image I to obtain the transformed face image i = log I;
b. solve the extended total variation model for the transformed image i to obtain the illumination estimate l:

$\min_{l \ge i} \int_{\Omega} \left[ |\nabla l|^2 + \alpha (l - i)^2 + \beta |\nabla (l - i)|^2 \right] dx\,dy,$

where the formula is solved by an optimization algorithm;
c. after l is obtained, perform an inverse logarithmic transformation on l to obtain the image L = exp(l);
d. perform a nonlinear transformation on the image L so that its values fall into [0, 255];
e. take the quotient of image I and image L as the illumination invariant R, i.e. $R = \frac{I}{L}$;
f. perform a nonlinear transformation on the illumination invariant R so that pixel values fall within [0, 255], obtaining the illumination-preprocessed image.
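A minimal sketch of steps a-f above. The extended total variation minimization of step b is approximated here by Gaussian smoothing of the log image (an assumption; the patent solves the TV model with an optimization algorithm), with the constraint l ≥ i enforced by a pointwise maximum; steps d and f are folded into one final rescaling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_invariant(img, sigma=3.0):
    """Retinex-style illumination-invariant extraction, following the
    lettered steps: log transform, smooth illumination estimate with
    l >= i, exponentiate, quotient, rescale into [0, 255]."""
    img = img.astype(np.float64) + 1.0              # avoid log(0)
    i = np.log(img)                                  # a. i = log I
    l = np.maximum(gaussian_filter(i, sigma), i)     # b. estimate l, keep l >= i
    L = np.exp(l)                                    # c. back from log domain
    R = img / L                                      # e. R = I / L, so R <= 1
    return np.clip(255.0 * R, 0, 255)                # d/f. map into [0, 255]
```

Because l ≥ i guarantees L ≥ I, the quotient R stays in (0, 1] and the final scaling fills the full 8-bit range.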
The optimization algorithm in step b is an estimation of distribution algorithm (EDA).
The process of Gabor coefficient normalization is as follows: and carrying out normalization processing on the Gabor coefficient of each pixel point obtained after the Gabor filtering is carried out on the face image, so that the values of the amplitude and the phase of the face image subjected to the Gabor filtering are discretized.
The process of sub-window analysis and Gabor histogram statistics is as follows: and extracting a statistical histogram of the Gabor coefficient corresponding to the face image in the sub-window area as Gabor histogram features of the face image.
The optimal Gabor histogram features of the face image are screened out by an estimation of distribution algorithm (EDA).

It can be seen from the above solution that, in the method provided by the present invention, during the training stage the features of the standard face images are extracted: after illumination preprocessing, Gabor filtering and Gabor coefficient normalization (discretization of amplitude and phase values), the filtered images are analyzed with sub-windows and the Gabor coefficient histograms of the sub-window regions are counted, giving the Gabor histogram features of the standard face images. The standard face images are then paired, and the differences between them, computed from their Gabor histogram features, form the training samples, with the differences between the Gabor histograms serving as the sample features. Each feature of a training sample acts as a weak classifier, and EDA screens out the optimal weak classifier combination to obtain the strong classifier. The Gabor histogram features of the standard face images corresponding to the strong classifier are the final features of each standard face image that need to be extracted and participate in classification, called the optimal Gabor histogram features of the standard face image.
In the remote face image identity recognition stage, correspondingly, illumination preprocessing, Gabor filtering and Gabor coefficient normalization (discretization of amplitude and phase values) are performed on the face image to be authenticated; its optimal Gabor histogram features are extracted and compared one by one with the optimal Gabor histogram features of the standard face images to obtain samples to be recognized, and these samples are classified by the strong classifier obtained in the training stage to obtain the identity recognition result. Because the invention performs dedicated illumination preprocessing on the face image, it is not affected by illumination conditions; and because it adopts sub-window Gabor histograms as face image features, the recognition rate and robustness of the face recognition algorithm are improved.
Drawings
FIG. 1 is a schematic diagram of a standard face image before and after Gabor filtering in the prior art;
FIG. 2 is a schematic diagram of a positive sample obtained after filtering different face images of the same person and a negative sample obtained after filtering face images of different persons in the prior art;
FIG. 3 is a schematic diagram of a frame of a strong classifier trained to obtain a face image in the prior art;
FIG. 4 is a flowchart of a method for remote identification of a face image according to the present invention;
FIG. 5 is a flowchart of a method of the present invention in a standard face image training phase;
FIG. 6 is a schematic diagram of extracting the features of each illumination-preprocessed face image according to the present invention;
FIG. 7 is a schematic diagram of normalizing the extracted features of each illumination-preprocessed face image according to the present invention;
fig. 8 is a flowchart of a method for recognizing a face image by a server side according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The invention provides a method for carrying out identity recognition according to a face image under a variable illumination condition, which mainly solves the problem of recognizing identity information of a shot face image under a complex illumination condition. The method provided by the invention can be applied to a communication network, such as a wireless communication network. When the identity of the face image is identified, firstly, the face image can be collected by a client, such as a mobile device, and then sent to a server through a communication network; secondly, the server side processes the received face images to extract corresponding features, classifies the face images according to the strong classifiers obtained by training, and sends the recognition results to the client side through a communication network.
The following description will be given by taking the face image to be recognized, which is sent to the server by the client, as a shot face image after geometric normalization processing.
FIG. 4 is a flow chart of the method for remote identity recognition of a face image, which comprises a training stage of a standard face image and a remote identity recognition stage of the face image, and comprises the following specific steps:
training phase of standard face image
Step 400, after the standard face images are respectively subjected to illumination preprocessing by the server, Gabor filtering in 5 scales and 8 directions is respectively carried out, 40 Gaborface images are obtained from each standard face image subjected to illumination preprocessing, the pixel point values of the Gaborface images are complex numbers, and the values of the amplitude and the phase are real numbers.
Step 401, the server discretizes the obtained amplitude and phase values of each pixel point in the Gaborfaces, normalizing them to integers in [0, 255].
Step 402, the server scans the Gabor face by using the sub-windows with variable sizes, counts the Gabor histograms (including the amplitude histogram and the phase histogram) corresponding to each sub-window region, and takes the Gabor histograms as a feature of the face image, which is called the Gabor histogram feature, and all the Gabor histogram features are taken together as the feature of the standard face image and stored.
Step 403, the server combines all standard face images pairwise and calculates the difference between each pair according to their Gabor histogram features; each difference serves as a training sample, and the differences between the corresponding Gabor histogram features form the sample's features. Each feature of a training sample acts as a weak classifier, and an estimation of distribution algorithm (EDA) screens out the optimal weak classifier combination to form a strong classifier. The Gabor histogram features corresponding to the strong classifier are the most effective features for standard face image identity recognition, called the optimal Gabor histogram features; in the recognition stage only these optimal Gabor histogram features need to be extracted from a face image.
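The EDA screening in step 403 can be illustrated with the simplest univariate variant (UMDA): binary masks switch weak classifiers on or off, and the algorithm repeatedly samples masks, keeps the fittest, and re-estimates the per-feature selection probabilities. The population sizes and the `fitness` interface here are illustrative assumptions, not the patent's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def umda_select(fitness, n_features, pop=40, top=10, gens=30):
    """UMDA-style estimation of distribution algorithm over binary masks.

    `fitness(mask)` scores one candidate combination of weak classifiers;
    the marginal probabilities p drift toward the elite masks each
    generation. Unlike AdaBoost's greedy rounds, the whole combination
    is optimized at once."""
    p = np.full(n_features, 0.5)                  # marginal selection probabilities
    best_mask, best_fit = None, -np.inf
    for _ in range(gens):
        masks = rng.random((pop, n_features)) < p   # sample a population
        scores = np.array([fitness(m) for m in masks])
        elite = masks[np.argsort(scores)[-top:]]    # keep the fittest masks
        p = 0.9 * elite.mean(axis=0) + 0.05         # re-estimate, kept off 0/1
        i = int(np.argmax(scores))
        if scores[i] > best_fit:
            best_fit, best_mask = scores[i], masks[i]
    return best_mask, best_fit
```

Keeping p away from exactly 0 or 1 preserves some exploration, which is one reason a distribution-based search can escape local optima that a purely greedy combiner cannot.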
Remote identity recognition stage of face image
And step 404, the client sends the face image to be recognized to the server after geometric normalization processing.
Step 405, the server performs illumination preprocessing on the received face image to be recognized and then 5-scale, 8-direction Gabor filtering, obtaining 40 Gaborfaces; the pixel values of the Gaborfaces are complex numbers, and their amplitude and phase values are real numbers.
And step 406, discretizing the values of the amplitude and the phase of each pixel point in the obtained Gaborface by the server end, normalizing the values to integers between 0 and 255, and extracting the optimal Gabor histogram features of the face image to be recognized.
Step 407, the server compares the face image to be recognized with all the stored standard face images one by one, calculates the difference between the optimal Gabor histogram feature of the face image to be recognized and the optimal Gabor histogram feature of the standard face image to obtain a sample to be recognized, and classifies the sample to be recognized according to the strong classifier obtained in the training stage: if the sample to be recognized is a positive sample, the face image to be recognized and the standard face image belong to the same person; and if the sample to be recognized is a negative sample, the face image to be recognized and the standard face image belong to different people.
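The one-by-one comparison of step 407 can be sketched as follows. `classify` stands in for the trained strong classifier, and the absolute histogram difference is one plausible way to form the sample to be recognized (the patent only states that the feature differences form the sample); all names are illustrative.

```python
import numpy as np

def identify(probe_feats, gallery, classify):
    """Compare a probe's optimal Gabor histogram features against each
    stored standard image; the per-feature differences form the sample,
    and the strong classifier decides same person (+1) or not (-1)."""
    for user_id, feats in gallery.items():
        sample = np.abs(probe_feats - feats).ravel()  # sample to be recognized
        if classify(sample) > 0:                      # positive: same person
            return user_id
    return None                                       # no gallery identity matched
```

A small difference vector should be classified as positive (same person); returning the first matching identity mirrors the one-by-one comparison order in the text.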
And step 408, the server side sends the identification result to the client side.
In order to identify the face image received from the client, the server of the present invention needs to train the acquired standard face image, and how to train the standard face image is described in detail below.
Training phase of standard face image
Since face image recognition is a multi-class problem, for convenience the present invention converts it into a two-class problem, i.e., positive and negative samples. A positive sample is the difference between different face images of the same person; a negative sample is the difference between face images of different persons. Herein, positive and negative samples are collectively referred to as training samples.
The training stage of the standard face image aims to select the most effective feature combination for face image recognition from the standard face image, the most effective feature combination is utilized to form a strong classifier, and the features of the standard face image and the strong classifier are utilized to identify the identity of the face image to be recognized in the recognition stage of the server.
In order to perform normal training on a standard face image, the same person at least includes two different face images, and fig. 5 is a flowchart of a method of a training phase of the standard face image of the present invention, and the method specifically includes the steps of:
step 500, the server side performs illumination preprocessing on all standard face images respectively, and extracts illumination invariants of the face images respectively.
Step 501, the server performs 5-scale, 8-direction Gabor filtering on each illumination-preprocessed face image, obtaining 40 filtered images, i.e., Gaborfaces, per image. Each pixel value of a Gaborface is a complex number, and its amplitude and phase values are real numbers. FIG. 6 is a schematic diagram of the 40 Gaborfaces obtained by Gabor filtering each illumination-preprocessed face image.
Step 502, the server discretizes the values (amplitude and phase) of each pixel point of each Gaborface, normalizing them to integers in [0, 255].
Because each pixel value of the Gabor-filtered result (Gaborface) of an illumination-preprocessed face image is a complex number whose amplitude and phase are real numbers, in order to count the amplitude and phase histograms (collectively, Gabor histograms) of the sub-window regions, the invention discretizes the amplitude and phase values of each pixel point so that they are normalized to integers in [0, 255].
For illustration, the amplitude normalization proceeds as follows: compare the amplitude of the current feature point P with the amplitudes of its 8 neighboring feature points to construct an 8-bit binary number $(b_1 b_2 b_3 b_4 b_5 b_6 b_7 b_8)_2$; if the amplitude of the i-th neighboring feature point is larger than that of P, then $b_i = 1$, otherwise $b_i = 0$. Finally, the amplitude of P is replaced by $(b_1 b_2 b_3 b_4 b_5 b_6 b_7 b_8)_2$, normalizing the real-valued amplitude to an integer in [0, 255], as shown in FIG. 7.
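The 8-neighbor amplitude coding just described (an LBP-like scheme) can be sketched vectorized over a whole amplitude image. The neighbor ordering and the edge-padding border handling are assumptions not specified in the text.

```python
import numpy as np

def normalize_amplitude(amp):
    """Replace each real-valued amplitude by the 8-bit number whose i-th
    bit is 1 when the i-th neighbor's amplitude exceeds it, yielding an
    integer in [0, 255]."""
    a = np.pad(amp.astype(np.float64), 1, mode="edge")
    # Neighbor offsets, clockwise from top-left -> bits b1..b8.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = amp.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = a[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out |= (neighbor > amp).astype(np.uint8) << (7 - bit)
    return out
```

Because the coding depends only on the ordering of neighboring amplitudes, not their absolute values, it is insensitive to monotone illumination-induced scaling.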
Step 503, the server scans each Gaborface with sub-windows of variable size, counts the Gabor histograms (including an amplitude histogram and a phase histogram) in each sub-window, and combines the Gabor histograms of all sub-windows into the Gabor histogram feature of the face image.
In the invention, the process of extracting the Gabor histogram feature of each face image subjected to illumination preprocessing comprises the following steps:
Each Gaborface is scanned by a series of sub-windows of variable size, and the Gabor histograms (an amplitude histogram and a phase histogram) of the corresponding sub-window region in each Gaborface are extracted as the features of that region. Each sub-window region of an illumination-preprocessed face image is thus characterized by 80 Gabor histograms: 40 amplitude histograms and 40 phase histograms. In the present invention, assume each face image is 100 × 100 in size. The sub-window size varies from 10 × 10 to 100 × 100, with the length always equal to the width and a step size of 2 for the size change; each sub-window moves left to right and top to bottom with a step size of 3. Scanning each face image then yields 497025 sub-windows, each described by 80 Gabor histograms, so each illumination-preprocessed face image is finally characterized by 497025 × 80 = 39762000 Gabor histograms. Each Gabor histogram corresponds to three pieces of information: the position of the sub-window; the direction and scale of the filter; and the histogram type, which is either amplitude or phase. Each Gabor histogram is a column vector Hi, i = 1, 2, ..., N. All Gabor histograms of an illumination-preprocessed face image, arranged in a fixed order, form a matrix H = [H1 H2 ... HN], and this matrix H is the feature of that face image.
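The sub-window histogram extraction can be sketched as follows. For brevity this toy version uses a single fixed window size on one discretized Gaborface, whereas the patent sweeps window sizes from 10 × 10 to 100 × 100 in steps of 2; the function name and parameters are illustrative.

```python
import numpy as np

def subwindow_histograms(gaborface_codes, win=10, step=3, bins=256):
    """Scan one discretized Gaborface (integer values in [0, 255]) with a
    square sub-window moving in steps of `step`, and return the 256-bin
    histogram of each window position as one row of the result."""
    h, w = gaborface_codes.shape
    hists = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            patch = gaborface_codes[top:top + win, left:left + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            hists.append(hist)
    return np.array(hists)

codes = np.random.randint(0, 256, size=(30, 30))
H = subwindow_histograms(codes)
# each histogram sums to the window area (10 * 10 = 100)
```

Running this over all 40 amplitude Gaborfaces and all 40 phase Gaborfaces, over every window size, produces the full set of column vectors Hi that make up the feature matrix H.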
Step 504, the server side combines all the standard face images pairwise, calculates the difference between each pair according to their Gabor histogram features, and uses each difference as a training sample; the differences between corresponding Gabor histograms form the features of the training sample.
In the invention, the difference between two standard face images of the same person forms a positive sample, and the difference between standard face images of different persons forms a negative sample. With the concept of positive and negative samples, the multi-class problem of face image recognition is converted into a two-class problem: judging whether the difference between any two face images belongs to the positive class, i.e., the same person, or to the negative class, i.e., different persons.
The method for generating a training sample, i.e., calculating the difference between two face images, is as follows: for any pair of face images Im and In with feature matrices Hm and Hn, the difference between the face images is represented by the N-dimensional vector Dmn = [d1, d2, ..., dN], where N equals the number of Gabor histograms of a face image and di represents the difference between the i-th columns of Hm and Hn, i.e., the similarity between the two corresponding Gabor histograms. The training sample Dmn is thus defined in an N-dimensional space, and all similarity values constitute the entire feature space.
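The construction of the difference vector Dmn can be sketched as below. The patent says only that di is a "similarity" between corresponding histograms; the normalized histogram intersection used here is an assumed choice of measure, and the function name is illustrative.

```python
import numpy as np

def difference_vector(Hm, Hn):
    """Per-histogram similarity between two feature matrices Hm and Hn of
    shape (bins, N), one column per Gabor histogram. Histogram intersection
    (sum of bin-wise minima, normalized by Hm's bin count) is an assumed
    similarity measure; the patent does not name a specific one."""
    num = np.minimum(Hm, Hn).sum(axis=0).astype(float)
    den = np.maximum(Hm.sum(axis=0), 1)   # guard against empty histograms
    return num / den                      # D_mn: each d_i lies in [0, 1]

Hm = np.array([[2, 0],
               [0, 3]])
identical = difference_vector(Hm, Hm)          # -> [1.0, 1.0]
disjoint = difference_vector(Hm, Hm[::-1, :])  # -> [0.0, 0.0]
```

Identical histograms give di = 1 and disjoint histograms give di = 0, so a positive sample (same person) tends toward large components and a negative sample toward small ones.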
Step 505, the server takes each feature of the training sample as a weak classifier, screens out the optimal combination of weak classifiers using an estimation of distribution algorithm (EDA) to form a strong classifier, and thereby obtains the optimal Gabor histogram features of the face image.
In the invention, because each training sample is an N-dimensional vector and N is typically very large, the feature space of the training samples must be reduced in dimension: selecting the optimal features lowers the feature dimension of the training samples and improves the classification result.
The EDA is adopted to screen out the most effective feature combination to form the strong classifier; the specific steps are as follows.
A code string c1c2...cN of length N is used, where each ci is 0 or 1. All possible feature combinations constitute the search space of the EDA, and each code string is called an individual in that space. Each bit ci corresponds to a particular feature component di: ci = 1 indicates that the component is selected to participate in the classification, and ci = 0 that it does not participate. For example, 10100...000 means that only the first and third feature components participate, and the final strong classifier is composed of those two feature components.
The invention classifies the positive and negative samples with the strong classifier, calculates its recognition rate, and defines that rate as the fitness of the individual; that is, the fitness function is the recognition rate of the individual. The EDA searches the space for the optimal individual, i.e., the feature combination with the highest recognition rate and the fewest features; the feature components represented by the optimal individual form the final strong classifier.
Assume EDA screening finally yields a strong classifier consisting of K feature components, K < N. Each training sample can then be described by these K components, denoted Dmn' = [d1', d2', ..., dK'], where Dmn' is the new feature vector of the training sample and dj' (j = 1, 2, ..., K) corresponds to a selected optimal feature component. Each feature component corresponds to one Gabor histogram, so the features finally extracted from the standard face image are the Gabor histograms corresponding to these components. The new image feature matrix is H' = [H1' H2' ... HK'], where Hi' is the Gabor histogram vector corresponding to the selected component di'. These selected vectors are the optimal Gabor histogram features, which are used to represent the face image and to perform identity recognition.
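The EDA feature screening can be sketched with a minimal UMDA-style estimation of distribution algorithm, shown below. This is a stand-in illustration: the fitness here is the accuracy of a trivial threshold classifier over the selected similarity components, whereas the patent's actual weak classifiers and fitness definition are not detailed in this excerpt; all names and parameters are illustrative.

```python
import numpy as np

def umda_select(samples, labels, pop=40, keep=10, gens=30, seed=0):
    """UMDA-style EDA: maintain per-bit selection probabilities p, sample
    a population of 0/1 masks, keep the fittest individuals, and
    re-estimate p from them. Returns the best mask found and its fitness."""
    rng = np.random.default_rng(seed)
    n = samples.shape[1]
    p = np.full(n, 0.5)                        # initial selection probabilities

    def fitness(mask):
        if not mask.any():
            return 0.0
        score = samples[:, mask].mean(axis=1)  # mean similarity of chosen d_i
        pred = score > 0.5                     # positive = "same person"
        return float((pred == labels).mean())  # recognition rate

    best, best_fit = None, -1.0
    for _ in range(gens):
        popn = rng.random((pop, n)) < p        # sample individuals
        fits = np.array([fitness(ind) for ind in popn])
        order = np.argsort(fits)
        elite = popn[order[-keep:]]
        p = 0.9 * elite.mean(axis=0) + 0.05    # re-estimate, keep p in (0, 1)
        if fits[order[-1]] > best_fit:
            best_fit = fits[order[-1]]
            best = popn[order[-1]]
    return best, best_fit

# synthetic data: feature 0 separates the classes, the rest are noise
rng = np.random.default_rng(1)
y = rng.random(120) < 0.5
X = rng.random((120, 8))
X[:, 0] = np.where(y, 0.9, 0.1)
mask, fit = umda_select(X, y)
```

On this synthetic data the search concentrates on the single informative component, mirroring how the patent's EDA reduces N features to a small strong-classifier subset of K components.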
Once the training stage has produced the strong classifier and the server has stored the optimal Gabor histogram features of the standard face images, face images to be recognized sent by the client can be recognized. The remote identity recognition stage is explained in detail below.
Remote identity recognition stage of face image
In the remote identity recognition stage, the identity of a user is determined from the features of the face image to be recognized. At the server side, as in the training stage of the standard face images, the invention converts the multi-class face recognition problem into a two-class problem and determines the user's identity by judging which standard face image belongs to the same person as the image to be recognized.
Fig. 8 is a flowchart of a method for recognizing a face image by a server side, which specifically comprises the following steps:
Step 800, the server side performs illumination preprocessing on the face image to be recognized sent by the client side and extracts the illumination invariant of the face image to be recognized.
Step 801, the server performs 5-scale 8-direction Gabor filtering on the face image to be recognized after illumination preprocessing to obtain 40 filtered face images to be recognized, namely the Gaborface to be recognized.
In the invention, the values of the pixel points of 40 Gaborface images to be recognized are complex numbers, and the values of the amplitude and the phase are real numbers.
Step 802, the server discretizes the amplitude and phase values of each pixel point in each Gaborface obtained after filtering, normalizing them to integers in [0, 255].
Step 803, the server side scans the 40 Gaborfaces of the face image to be recognized with sub-windows of variable size and extracts the optimal Gabor histogram features of the face image to be recognized.
In the invention, the optimal Gabor histogram features of the face image to be recognized are extracted; these features form a feature matrix H = [H1 H2 ... HK], in which each Gabor histogram feature is a column vector Hi.
Step 804, the server matches the face image to be recognized against the stored standard face images one by one and finally determines the user identity information of the face image to be recognized.
The matching steps are as follows: calculate the difference between the optimal Gabor histogram features of the face image to be recognized and those of a standard face image to obtain a sample to be recognized, then classify that sample with the strong classifier obtained in the training stage. If the sample is classified as positive, the match succeeds and the face image to be recognized and the standard face image belong to the same user; if it is classified as negative, the match fails and the two images belong to different users.
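The one-by-one matching loop above can be sketched as follows. The strong classifier is modelled here as any callable mapping a difference vector to True (positive sample, same person) or False; the gallery layout, similarity measure (histogram intersection), and all names are illustrative assumptions.

```python
import numpy as np

def identify(probe_hists, gallery, classify):
    """Match the probe's optimal Gabor histograms (shape (bins, K), one
    column per selected histogram) against every stored standard image.
    `classify` stands in for the trained strong classifier. Returns the
    user id of the first positive match, or None if all matches fail."""
    for user_id, ref_hists in gallery.items():
        # difference vector: per-histogram intersection similarity d_i
        d = (np.minimum(probe_hists, ref_hists).sum(axis=0)
             / np.maximum(probe_hists.sum(axis=0), 1))
        if classify(d):          # positive sample -> same user
            return user_id
    return None                  # negative for every standard image

gallery = {'a': np.array([[4, 0], [0, 4]]),
           'b': np.array([[0, 4], [4, 0]])}
probe = np.array([[0, 4], [4, 0]])
who = identify(probe, gallery, classify=lambda d: d.mean() > 0.8)
```

Here the probe's histograms coincide with user 'b', so the toy classifier rejects 'a' (similarity 0) and accepts 'b' (similarity 1).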
Step 805, the server sends the identification result to the client.
The specific process of the illumination preprocessing used in fig. 5 and fig. 8 is as follows.
Illumination is a key factor affecting face image recognition: changes in illumination conditions cause drastic changes in the gray values of a face image, and the influence of the illumination conditions can be removed by extracting the illumination invariant of the image. The invention converts the illumination problem into an optimization problem, performs illumination preprocessing with EDA, and extracts the illumination invariant of the face image.
Let the face image be I; the illumination preprocessing algorithm is as follows:
1) perform a logarithmic transformation on the face image I to obtain the transformed image i, i.e. i = log I;
2) estimate l from the transformed image i by solving the extended total variation model: min_{l ≥ i} ∫_Ω [ |∇l|² + α(l − i)² + β|∇(l − i)|² ] dx dy. Solving this formula is in practice an optimization problem and many solution methods are possible; the invention selects EDA for its good optimization effect and high solution speed;
3) after l is obtained, perform the inverse logarithmic transformation on l to obtain the face image L, i.e. L = exp(l);
4) perform a nonlinear transformation on the face image L so that its values fall into [0, 255];
5) take the quotient of the face image I and the face image L as the illumination invariant R, i.e. R = I / L;
6) perform a nonlinear transformation on the illumination invariant R so that its pixel values lie in [0, 255], obtaining the face image subjected to illumination preprocessing.
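The six steps above can be sketched as the following pipeline. As an implementation shortcut, the extended total variation model is minimized here by plain gradient descent on its Euler-Lagrange gradient rather than by the patent's EDA solver; alpha, beta, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

def laplacian(a):
    """4-neighbour discrete Laplacian with edge replication."""
    p = np.pad(a, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a)

def illumination_invariant(img, alpha=0.1, beta=0.1, iters=50):
    """Sketch of the 6-step illumination preprocessing. The TV model
    min_{l>=i} \u222b |grad l|^2 + a(l-i)^2 + b|grad(l-i)|^2 is minimized by
    gradient descent (a stand-in for the patent's EDA solver)."""
    i = np.log(img.astype(float) + 1.0)        # 1) log transform (shifted by 1)
    l = i.copy()                               # initial luminance estimate
    for _ in range(iters):                     # 2) minimize the extended TV model
        grad = (-2 * laplacian(l) + 2 * alpha * (l - i)
                - 2 * beta * (laplacian(l) - laplacian(i)))
        l -= 0.1 * grad
        l = np.maximum(l, i)                   # enforce the constraint l >= i
    L = np.exp(l)                              # 3) inverse log transform
    R = (img.astype(float) + 1.0) / L          # 5) quotient image R = I / L
    # 6) rescale so pixel values lie in [0, 255]
    R = 255 * (R - R.min()) / max(R.max() - R.min(), 1e-9)
    return R

img = (np.arange(64).reshape(8, 8) * 3 % 256).astype(float)
R = illumination_invariant(img)
```

Because l is constrained to be at least i, the raw quotient satisfies 0 < R ≤ 1 before the final rescaling, matching the interpretation of R as a reflectance-like illumination invariant.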
The server side carries out this illumination preprocessing on all the standard face images according to the above method.
The method provided by the invention has been verified in principle experiments. The face database used is the FERET database. The training set contains 270 persons with 540 standard face images in total; the test set comprises two subsets: the fa subset, 1196 persons with 1196 images in total, used as standard face images; and the fb subset, 1195 persons with 1195 images in total, the same persons as in fa but with different expressions. Training takes about one week, the average recognition time per face image is less than 1.5 seconds, and the recognition rate exceeds 98%.
The remote face image identity recognition method provided by the invention performs illumination preprocessing on the image with the extended total variation model, realizing illumination normalization and improving the robustness and recognition rate of the image recognition algorithm, so a recognition system adopting the method can be applied under complex illumination conditions. When training the strong classifier, feature screening with the EDA algorithm reduces the number of features of the standard face image, which reduces processing time and feature storage space during recognition and improves the recognition result; moreover, owing to the global optimal search capability of EDA, the resulting strong classifier is globally optimal. In addition, the Gabor histogram feature proposed by the method is more stable than the prior-art features based on single pixel points, improving the robustness of a recognition system adopting the method: recognition algorithms based on single-pixel features require the face images to be accurately aligned, otherwise the recognition result suffers greatly.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (3)

1. An identity recognition method of a remote face image is characterized by comprising the following steps:
in the training stage of the standard face image, the server side carries out illumination preprocessing, Gabor filtering, Gabor coefficient normalization, sub-window analysis and Gabor histogram statistics on the standard face image to obtain Gabor histogram features of the standard face image, pairwise combination is carried out on the standard face image, a weak classifier is constructed according to the Gabor histogram features of the standard face image, the weak classifier is screened out by using a preferred algorithm to form a strong classifier, and the Gabor histogram features of the standard face image corresponding to the strong classifier are the optimal Gabor histogram features of the standard face image;
in the stage of identity recognition of the remote face image, the server side carries out illumination preprocessing, Gabor filtering, Gabor coefficient normalization, sub-window analysis and Gabor histogram statistics on the face image to be recognized received from the client side, then extracts the optimal Gabor histogram characteristics of the face image to be recognized, compares the optimal Gabor histogram characteristics with the optimal Gabor histogram characteristics of the standard face image one by one to generate a sample to be recognized, classifies the sample to be recognized according to a strong classifier obtained in the stage of training, determines the user identity of the face image to be recognized, and sends the recognition result to the client side.
2. The method of claim 1, wherein the illumination preprocessing comprises:
a. performing a logarithmic transformation on the image I to obtain the transformed image i, i.e. i = log I;
b. estimating l by solving the extended total variation model for the transformed image i:
min_{l ≥ i} ∫_Ω [ |∇l|² + α(l − i)² + β|∇(l − i)|² ] dx dy,
the formula being solved by adopting a preferred algorithm;
c. after l is obtained, performing the inverse logarithmic transformation on l to obtain the image L = exp(l);
d. performing a nonlinear transformation on the image L so that its values fall into [0, 255];
e. taking the quotient of image I and image L as the illumination invariant R, i.e. R = I / L;
f. performing a nonlinear transformation on the illumination invariant R so that the pixel values lie in [0, 255], obtaining the image subjected to illumination preprocessing.
3. The method of claim 2, wherein the preferred algorithm of step b is an estimation of distribution algorithm (EDA).
CN2006100870359A 2006-06-12 2006-06-12 Identify recognizing method for remote human face image Expired - Fee Related CN101089874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2006100870359A CN101089874B (en) 2006-06-12 2006-06-12 Identify recognizing method for remote human face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2006100870359A CN101089874B (en) 2006-06-12 2006-06-12 Identify recognizing method for remote human face image

Publications (2)

Publication Number Publication Date
CN101089874A CN101089874A (en) 2007-12-19
CN101089874B true CN101089874B (en) 2010-08-18

Family

ID=38943228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2006100870359A Expired - Fee Related CN101089874B (en) 2006-06-12 2006-06-12 Identify recognizing method for remote human face image

Country Status (1)

Country Link
CN (1) CN101089874B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4513898B2 (en) * 2008-06-09 2010-07-28 株式会社デンソー Image identification device
CN101771539B (en) * 2008-12-30 2012-07-04 北京大学 Face recognition based method for authenticating identity
KR20110047398A (en) * 2009-10-30 2011-05-09 삼성전자주식회사 Image providing system and image providing mehtod of the same
EP2529334A4 (en) * 2010-01-29 2017-07-19 Nokia Technologies Oy Methods and apparatuses for facilitating object recognition
CN103020589B (en) * 2012-11-19 2017-01-04 山东神思电子技术股份有限公司 A kind of single training image per person method
CN104778389A (en) * 2014-01-09 2015-07-15 腾讯科技(深圳)有限公司 Numerical value transferring method, terminal, server and system
CN106156568B (en) * 2015-03-24 2020-03-24 联想(北京)有限公司 Biological information identification module and electronic equipment
CN105528616B (en) * 2015-12-02 2019-03-12 深圳Tcl新技术有限公司 Face identification method and device
CN106934335B (en) * 2015-12-31 2021-02-02 南通东华软件有限公司 Image recognition method and device
CN106507199A (en) * 2016-12-20 2017-03-15 深圳Tcl数字技术有限公司 TV programme suggesting method and device
CN107256407B (en) * 2017-04-21 2020-11-10 深圳大学 Hyperspectral remote sensing image classification method and device
CN109492601A (en) * 2018-11-21 2019-03-19 泰康保险集团股份有限公司 Face comparison method and device, computer-readable medium and electronic equipment
US11651447B2 (en) 2019-10-31 2023-05-16 Kyndryl, Inc. Ledger-based image distribution permission and obfuscation
CN112149564B (en) * 2020-09-23 2023-01-10 上海交通大学烟台信息技术研究院 Face classification and recognition system based on small sample learning
CN112699355A (en) * 2020-12-22 2021-04-23 湖南麒麟信安科技股份有限公司 Dynamic face authentication method and system with user and host decoupled
CN114241534B (en) * 2021-12-01 2022-10-18 佛山市红狐物联网科技有限公司 Rapid matching method and system for full-palm venation data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301370B1 (en) * 1998-04-13 2001-10-09 Eyematic Interfaces, Inc. Face recognition from video images
CN1476589A (en) * 2001-08-23 2004-02-18 索尼公司 Robot apparatus, face recognition method and face recognition apparatus
CN1475961A (en) * 2003-07-14 2004-02-18 中国科学院计算技术研究所 Human eye location method based on GaborEge model
CN1540571A (en) * 2003-10-29 2004-10-27 中国科学院计算技术研究所 Method of sicriminating handwriting by computer based on analyzing local feature
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301370B1 (en) * 1998-04-13 2001-10-09 Eyematic Interfaces, Inc. Face recognition from video images
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification
CN1476589A (en) * 2001-08-23 2004-02-18 索尼公司 Robot apparatus, face recognition method and face recognition apparatus
CN1475961A (en) * 2003-07-14 2004-02-18 中国科学院计算技术研究所 Human eye location method based on GaborEge model
CN1540571A (en) * 2003-10-29 2004-10-27 中国科学院计算技术研究所 Method of sicriminating handwriting by computer based on analyzing local feature

Also Published As

Publication number Publication date
CN101089874A (en) 2007-12-19

Similar Documents

Publication Publication Date Title
CN101089874B (en) Identify recognizing method for remote human face image
Qin et al. Deep representation for finger-vein image-quality assessment
Xia et al. A novel weber local binary descriptor for fingerprint liveness detection
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
US7336806B2 (en) Iris-based biometric identification
US8275175B2 (en) Automatic biometric identification based on face recognition and support vector machines
Zois et al. A comprehensive study of sparse representation techniques for offline signature verification
Sarhan et al. Multimodal biometric systems: a comparative study
Kumar et al. An improved biometric fusion system of fingerprint and face using whale optimization
Sequeira et al. Iris liveness detection methods in mobile applications
Nguyen et al. Complex-valued iris recognition network
Deshpande et al. CNNAI: a convolution neural network-based latent fingerprint matching using the combination of nearest neighbor arrangement indexing
Gopal et al. Accurate human recognition by score-level and feature-level fusion using palm–phalanges print
Ghoualmi et al. An efficient feature selection scheme based on genetic algorithm for ear biometrics authentication
Borra et al. An efficient fingerprint identification using neural network and BAT algorithm
Ghulam Mohi-ud-Din et al. Personal identification using feature and score level fusion of palm-and fingerprints
El-Naggar et al. Which dataset is this iris image from?
CN112257688A (en) GWO-OSELM-based non-contact palm in-vivo detection method and device
Ramachandra et al. Feature level fusion based bimodal biometric using transformation domine techniques
Kundu et al. An efficient integrator based on template matching technique for person authentication using different biometrics
Makinde et al. Enhancing the accuracy of biometric feature extraction fusion using Gabor filter and Mahalanobis distance algorithm
Hussein et al. Human Recognition based on Multi-instance Ear Scheme
Nestorovic et al. Extracting unique personal identification number from iris
CN118470809B (en) Object recognition system and method for fusing human face and living palm vein
Jagtap et al. Biometric solution for person identification using iris recognition system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CI01 Publication of corrected invention patent application

Correction item: Claims

Correct: 6 items

False: 3 items

Number: 33

Volume: 26

CI03 Correction of invention patent

Correction item: Claims

Correct: 6 items

False: 3 items

Number: 33

Page: Description

Volume: 26

ERR Gazette correction

Free format text: CORRECT: CLAIM OF RIGHT; FROM: ITEM 3 TO: ITEM 6

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100818

Termination date: 20170612

CF01 Termination of patent right due to non-payment of annual fee