WO2017092272A1 - Face recognition method and device - Google Patents

Face recognition method and device

Info

Publication number
WO2017092272A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
sample
image
recognized
face
Prior art date
Application number
PCT/CN2016/084618
Other languages
English (en)
French (fr)
Inventor
王甜甜
Original Assignee
Shenzhen TCL New Technology Co., Ltd. (深圳Tcl新技术有限公司)
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co., Ltd. (深圳Tcl新技术有限公司)
Publication of WO2017092272A1 publication Critical patent/WO2017092272A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis

Definitions

  • the present invention relates to the field of multimedia, and in particular, to a face recognition method and apparatus.
  • The traditional face texture information extraction method superimposes a Gabor transform (GT) with a local binary pattern (LBP) transform, thereby extracting face texture information.
  • The specific process is as follows: the face image is first Gabor-filtered, and the filtered images are then represented by LBP histograms that describe the facial texture. The Gabor transform of the face image uses 5 scales and 8 orientations, generating 40 filtered images; each of the 40 filtered images is then subjected to the LBP transform, and finally face recognition is performed.
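The LBP step referred to above can be sketched in a few lines. The following is a minimal illustration (not the patent's implementation) of the basic 3x3, 8-neighbour LBP code:

```python
# Minimal sketch of the basic 3x3 local binary pattern (LBP): each pixel is
# compared with its 8 neighbours, and the comparison bits are packed into
# an 8-bit code.  Illustrative only; the patent does not fix the bit order.

def lbp_code(img, y, x):
    """Compute the 8-bit LBP code for the pixel at (y, x)."""
    center = img[y][x]
    # Neighbours in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_image(img):
    """Apply the LBP transform to all interior pixels of a grayscale image."""
    h, w = len(img), len(img[0])
    return [[lbp_code(img, y, x) for x in range(1, w - 1)]
            for y in range(1, h - 1)]

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
codes = lbp_image(img)  # one interior pixel -> one LBP code
```

In a full pipeline, the histogram of these codes over an image region is what serves as the texture descriptor.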
  • As a result, the dimensionality of the data to be processed is too high, the amount of calculation is large, face recognition takes a long time, and the efficiency is low.
  • The main object of the present invention is to provide a face recognition method and apparatus, which aim to solve the prior-art problems of heavy computation and long computation time in the face recognition process.
  • the present invention provides a face recognition method, the method comprising the steps of:
  • where a_max is the maximum pixel value in the a-th filtered image among the filtered images of the face image obtained by the circular symmetric Gabor transform; a is the pixel value of each pixel in the a-th filtered image; 255 denotes the maximum image pixel value; and uint8 converts the computed face image into a data format, a_temp, that can be output as an image.
  • the present invention further provides a method for recognizing a face, the method comprising the steps of:
  • the present invention also provides a face recognition device, the device comprising:
  • an acquiring module configured to acquire a sample face image and a face image to be recognized, wherein at least two face images exist in the sample face image;
  • a first transform module configured to perform a circular symmetric Gabor transform on the sample face image and the to-be-recognized face image, respectively, and correspondingly obtain a sample face image and a to-be-recognized face image after the circular symmetric Gabor transform;
  • a superimposing module configured to superimpose the sample face image and the to-be-recognized face image subjected to the circular symmetric Gabor transform respectively, and correspondingly obtain the superimposed sample face image and the to-be-recognized face image;
  • An extraction module configured to perform region energy extraction on the superimposed sample face image and the to-be-recognized face image, respectively, corresponding to obtaining a sample face image and a to-be-recognized face image after region energy extraction;
  • a second transform module configured to perform a local binary pattern transformation on the sample face image and the to-be-recognized face image obtained after region energy extraction, correspondingly obtaining a sample histogram containing the texture information of the sample face image and a to-be-identified histogram containing the texture information of the face image to be recognized;
  • a comparison module configured to compare the sample histogram with the to-be-identified histogram to obtain a face image in the sample face image that is the same as the to-be-identified face image.
  • The present invention obtains the sample histogram and the to-be-identified histogram by performing a circular symmetric Gabor transform, superposition, region energy extraction, and local binary pattern transformation on the sample face image and the face image to be recognized, and compares the sample histogram with the to-be-identified histogram to obtain the face image in the sample face image that is the same as the face image to be recognized.
  • This realizes extraction of the face-image histogram by combining the circular symmetric Gabor transform with the local binary pattern transform, thereby determining the face image in the sample face image that is the same as the face image to be recognized; it reduces the amount of calculation in the face recognition process, shortens the calculation time, and improves the efficiency of face recognition.
  • FIG. 1 is a schematic flow chart of a first embodiment of a face recognition method according to the present invention
  • FIG. 2 is a schematic flow chart of a second embodiment of a face recognition method according to the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a face recognition device according to the present invention.
  • FIG. 4 is a schematic diagram of functional modules of a second embodiment of a face recognition device according to the present invention.
  • FIG. 5 is a schematic diagram of a face image after pre-processing of a face image in a sample face image according to the present invention;
  • FIG. 6 is a schematic diagram of a face image of a face image subjected to circular symmetric Gabor transformation according to the present invention.
  • FIG. 7 is a schematic diagram of a face image after preprocessing of a face image according to the present invention.
  • FIG. 8 is a schematic diagram of an image of the face image of FIG. 7 after circular symmetric Gabor transformation;
  • FIG. 9 is a schematic diagram of a face image of FIG. 8 after a face image subjected to the circular symmetric Gabor transform is superimposed and region energy extraction is performed;
  • FIG. 10 is a schematic diagram comparing the fourth face image in FIG. 8 and FIG. 9;
  • FIG. 11 is a schematic diagram of a histogram in which the five face image histograms of FIG. 8 are superimposed together;
  • FIG. 12 is a schematic diagram of the histogram of the face image in the sample face image of FIG. 5 that is the same as the face image to be recognized;
  • FIG. 13 is a schematic diagram of performing region energy extraction on a superimposed face image according to an embodiment of the present invention.
  • the invention provides a face recognition method.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a face recognition method according to the present invention.
  • the face recognition method includes:
  • Step S10 acquiring a sample face image and a face image to be recognized, wherein at least two face images exist in the sample face image;
  • The terminal acquires face images through a camera; the acquired face images include a sample face image and a face image to be recognized. At least two face images exist in the sample face image, while the face image to be recognized contains only one face image.
  • the sample face image includes 11 face images.
  • The terminal acquires the sample face image and the face image to be recognized through a high-resolution camera.
  • The terminal includes, but is not limited to, smartphones and tablets.
  • Step S20 performing a circular symmetric Gabor transform on the sample face image and the to-be-recognized face image, respectively, and correspondingly obtaining a sample face image and a to-be-recognized face image after the circular symmetric Gabor transformation;
  • FIG. 6 is a schematic diagram of a face image of a face image subjected to circular symmetric Gabor transform according to the present invention.
  • Step S30 superimposing the sample face image and the to-be-recognized face image subjected to the circular symmetric Gabor transform respectively, and correspondingly obtaining the superimposed sample face image and the to-be-recognized face image;
  • Step S40 performing regional energy extraction on the superimposed sample face image and the to-be-recognized face image, respectively, correspondingly obtaining the sample face image and the to-be-recognized face image after the region energy extraction;
  • Step S50 performing local binary pattern transformation on the sample face image and the to-be-recognized face image respectively after the region energy extraction, correspondingly obtaining a sample histogram containing the texture information of the sample face image and including the a histogram of the texture information to be recognized of the face image to be recognized;
  • The terminal superimposes the sample face image subjected to the circular symmetric Gabor transform to obtain a superimposed sample face image, performs region energy extraction on the superimposed sample face image to obtain the region-energy-extracted sample face image, and then performs a local binary pattern transformation on the region-energy-extracted sample face image to obtain a sample histogram containing the texture information of the sample face image.
  • Similarly, the face image to be recognized subjected to the circular symmetric Gabor transform is superimposed to obtain the superimposed face image to be recognized; region energy extraction is then performed on the superimposed face image to be recognized, and the region-energy-extracted face image to be recognized is subjected to a local binary pattern transformation to obtain a to-be-identified histogram containing the texture information of the face image to be recognized.
  • Each face image in the sample face image and the face image to be recognized corresponds to five histograms, that is, each filtered image corresponds to one histogram. The terminal performs a local binary pattern transformation on the face image to be recognized that has undergone the circular symmetric Gabor transform to obtain five to-be-identified histograms containing the texture information of the face image to be recognized, and performs a local binary pattern transformation on each sample face image that has undergone the circular symmetric Gabor transform to obtain five sample histograms containing the texture information of the sample face image.
  • Step S60 Comparing the sample histogram with the to-be-identified histogram to obtain a face image of the sample face image that is the same as the to-be-identified face image.
  • The terminal compares the sample histogram with the to-be-identified histogram. When the terminal finds a histogram among the sample histograms that matches the to-be-identified histogram, the face image corresponding to that matching histogram is the face image in the sample face image that is the same as the face image to be recognized.
  • A conventional method for extracting the texture information of a face image combines the Gabor transform (GT) with the local binary pattern.
  • The process of filtering a face image and extracting its texture information with the GT is as follows: the GT is first applied to the face image to obtain filtered face images, where the GT transforms 8 orientations at 5 scales, generating 40 filtered images; the 40 filtered images are then subjected to the local binary pattern transformation, and finally the face image is recognized.
  • The computational complexity of this method is too high and the calculation takes too long, resulting in long video reading and analysis times and low efficiency.
  • In the present invention, the texture information of the face image is extracted by combining the circular symmetric Gabor transform with the local binary pattern transform. The circular symmetric Gabor transform of the face image generates five filtered images, which are superimposed and recombined into five new filtered images; region energy extraction is then performed on the recombined filtered images to extract the image that best describes the texture information of the face image, and the local binary pattern transform is applied to that texture information.
  • Compared with the face recognition algorithm that combines the GT with the local binary pattern, only 5 filtered images need to be processed instead of 40, which reduces the amount of calculation and shortens the calculation time.
  • In this embodiment, the sample histogram and the to-be-identified histogram are obtained by performing the circular symmetric Gabor transform, superposition, region energy extraction, and local binary pattern transformation on the sample face image and the face image to be recognized, and the sample histogram is compared with the to-be-identified histogram to obtain the face image in the sample face image that is the same as the face image to be recognized.
  • This realizes extraction of the face-image histogram by combining the circular symmetric Gabor transform with the local binary pattern transform, thereby determining the face image in the sample face image that is the same as the face image to be recognized; it reduces the amount of calculation in the face recognition process, shortens the calculation time, and improves the efficiency of face recognition.
  • FIG. 2 is a schematic flowchart diagram of a second embodiment of a face recognition method according to the present invention.
  • a second embodiment of the face recognition method of the present invention is proposed based on the first embodiment of the present invention.
  • the method for recognizing a face further includes:
  • Step S70 preprocessing the sample face image and the to-be-recognized face image, wherein the pre-processing includes grayscale processing and histogram equalization processing.
  • FIG. 5 is a schematic diagram of the face images after all face images in the sample face image are preprocessed according to the present invention, and FIG. 7 is a schematic diagram of a face image after preprocessing according to the present invention.
  • The sample face image in FIG. 5 contains a total of 11 face images, and FIG. 7 is a schematic diagram of the eighth face image in the sample face image after preprocessing.
  • FIG. 7 also serves as a schematic diagram of the face image to be recognized after preprocessing.
  • The terminal performs a gray-scale transformation on the sample face image and the face image to be recognized, correspondingly obtaining the gray-scale-transformed face images of each.
  • The gray-scale transformation, also called gray-scale stretching or contrast stretching, is the most basic point operation: according to the gray value of each pixel in the original image and some mapping rule, each pixel is transformed to another gray value. Assigning a new gray value to every pixel in the original image achieves the purpose of enhancing the image.
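The point operation described above can be illustrated with a simple linear stretch. The linear mapping rule here is an assumption for illustration; the text does not fix a specific rule:

```python
# Minimal sketch of gray-scale (contrast) stretching: each pixel's gray
# value is remapped by a linear rule that spreads the original range onto
# the full 0-255 range.  The linear rule is an illustrative assumption.

def stretch(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                 # flat image: nothing to stretch
        return [0 for _ in pixels]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

stretched = stretch([100, 150, 200])  # range [100, 200] -> [0, 255]
```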
  • The gray-scale-transformed face images are then subjected to histogram equalization.
  • The terminal performs histogram equalization on the gray-scale-transformed sample face image and face image to be recognized, correspondingly obtaining the equalized face images, which correspond to the preprocessed face images.
  • The steps of equalizing the face image histogram are: (1) compute the histogram of the gray-scale-transformed face image; (2) transform the statistical histogram using the cumulative distribution function to obtain the new gray levels; (3) replace the old gray levels with the new gray levels. This step is an approximation process and should, as far as is reasonable for the purpose, merge gray values that are equal or approximately equal.
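Steps (1)-(3) above can be sketched as follows; the rounding rule is an illustrative assumption:

```python
# Minimal sketch of histogram equalization via the cumulative distribution
# function (CDF), following steps (1)-(3) above, for a flat list of pixels.

def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:                      # step 1: histogram
        hist[p] += 1
    cdf = 0
    new_gray = [0] * levels
    for g in range(levels):               # step 2: CDF transform
        cdf += hist[g]
        new_gray[g] = round((levels - 1) * cdf / n)
    return [new_gray[p] for p in pixels]  # step 3: replace gray levels
```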
  • The terminal then performs median filtering on the sample face image and the face image to be recognized, respectively. Median filtering sorts the pixels of a local neighborhood by gray level and takes the median gray value of the neighborhood as the gray value of the current pixel.
  • The steps of median filtering are: (1) roam the filter template over the image, aligning the center of the template with a pixel position in the image; (2) read the gray value of each pixel covered by the template; (3) sort these gray values from smallest to largest; (4) assign the middle value of the sorted data to the pixel at the center of the template.
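The four steps above, for a 3x3 template, can be sketched as:

```python
# Minimal sketch of a 3x3 median filter: slide the template, read the
# covered gray values, sort them, and write back the middle value.
# Border pixels are left unchanged, an illustrative simplification.

def median_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],            # impulse noise at the centre
         [10, 10, 10]]
clean = median_filter(noisy)       # impulse is suppressed
```

This is why median filtering suits salt-and-pepper noise: an isolated outlier never reaches the median of its neighbourhood.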
  • The terminal also performs homomorphic filtering on the face images in the sample face image and the face image to be recognized.
  • Homomorphic filtering converts the (non-additive) product-form luminance model of an image into an additive form for filtering enhancement processing.
  • The steps of homomorphic filtering are: (1) take the logarithm of both sides of the luminance function and then the Fourier transform; (2) apply the filter in the frequency domain; (3) take the inverse Fourier transform of the filter output and then the exponential transformation.
  • The resulting sample face image and face image to be recognized are largely unaffected by illumination and skin color, which improves the accuracy of face recognition.
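As a rough illustration of the homomorphic idea (log, filter, exp), the following 1-D sketch substitutes a moving average for the Fourier-domain low-pass component; that substitution is an assumption for illustration only, not the patent's frequency-domain filter:

```python
# Simplified 1-D homomorphic filtering sketch: take the log of the
# luminance (turning the illumination*reflectance product into a sum),
# subtract the slowly varying trend (a moving average standing in for the
# Fourier-domain low-pass term), and exponentiate back.
# Input values must be positive (log is taken).
import math

def homomorphic(signal, k=1):
    logs = [math.log(s) for s in signal]          # product -> sum
    n = len(logs)
    smooth = [sum(logs[max(0, i - k):i + k + 1]) /
              len(logs[max(0, i - k):i + k + 1]) for i in range(n)]
    # keep the detail (high-frequency) part, drop the illumination trend
    detail = [l - s for l, s in zip(logs, smooth)]
    return [math.exp(d) for d in detail]

flat = homomorphic([1.0, 2.0, 4.0, 8.0])  # exponential ramp flattens
```

An exponential illumination ramp (pure multiplicative trend) is flattened toward 1 in the interior, which is the behaviour that makes the output insensitive to illumination.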
  • After the terminal performs the circular symmetric Gabor transform on the sample face image and the face image to be recognized, it obtains the filtered images of the circular-symmetric-Gabor-transformed sample face image and the filtered images of the circular-symmetric-Gabor-transformed face image to be recognized.
  • Each face image subjected to the circular symmetric Gabor transform generates five corresponding filtered images.
  • The terminal superimposes the filtered images of each face image in the sample face image and the filtered images of the face image to be recognized, respectively, to obtain superimposed filtered images; that is, the five filtered images of each face image in the sample face image are recombined into five new filtered images, and the five filtered images of the face image to be recognized are recombined into five new filtered images, yielding the superimposed sample face image and face image to be recognized.
  • the terminal superimposes the sample face image subjected to the circular symmetric Gabor transform, and superimposes the to-be-identified face image subjected to the circular symmetric Gabor transform, and the formula of the superposition process is:
  • where a_max is the maximum pixel value in the a-th filtered image among the filtered images of the face image obtained by the circular symmetric Gabor transform; a is the pixel value of each pixel in the a-th filtered image; 255 denotes the maximum image pixel value; and uint8 converts the computed face image into a data format, a_temp, that can be output as an image.
  • Let the face image to be superimposed by the terminal be test, with size w*h. After test passes through the circular symmetric Gabor transform, a given filtered image A also has size w*h.
  • Each face image undergoes the circular symmetric Gabor transform to obtain five filtered images, each of size w*h.
  • a is one of the five filtered images, and a_max is the maximum pixel value in that filtered image.
  • The superposition process is similar for each of the five filtered images that the terminal obtains from a face image after the circular symmetric Gabor transform; therefore, only the superposition of one filtered image is described in this embodiment.
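The superposition formula itself is not reproduced in the extracted text; only its terms (a, a_max, 255, uint8) are described. One plausible reading, shown here purely as an assumption, is that each filtered image is rescaled so that its maximum pixel maps to 255 and is then cast to an 8-bit value, i.e. a_temp = uint8(a / a_max * 255):

```python
# Hedged sketch of the per-filtered-image normalization suggested by the
# terms a, a_max, 255, and uint8 in the text: rescale one filtered image
# so that its maximum pixel maps to 255, then clamp/round to 8-bit range.
# This reconstruction is an assumption, not the patent's exact formula.

def rescale_uint8(filtered):
    """Rescale one filtered image so its maximum pixel maps to 255."""
    a_max = max(max(row) for row in filtered)
    return [[min(255, round(a / a_max * 255)) for a in row]
            for row in filtered]

a = [[0.2, 0.4],
     [0.8, 1.6]]              # a_max = 1.6
a_temp = rescale_uint8(a)     # values now span 0..255
```

Bringing every filtered image onto the same 0-255 scale is what makes the subsequent superposition of the five filtered images meaningful.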
  • FIG. 8 is a schematic diagram of the image of the face image of FIG. 7 after the circular symmetric Gabor transformation, that is, of the eighth face image in the sample face image.
  • step S40 includes:
  • Step b calculating a sample face image and a face image to be recognized after the region energy extraction according to the region energy extraction formula, wherein the region energy extraction formula is:
  • where i defines the region centered on the center point of the superimposed sample face image or face image to be recognized; the initial value of i is 0, d is a preset value greater than 0, and i is incremented from its initial value in units of d until the increment condition is no longer satisfied.
  • the increment condition is:
  • where sum is the superposed pixel value of all pixels in the region of width and height i centered on the center point, and 0.9 is a set value that may also be set to 0.8, 0.85, 0.95, etc. Energy extraction under this increment condition means that 90% of the energy is extracted, at which point the texture information that best describes the face in the a-th filtered image can be extracted; similarly, a setting of 0.95 extracts 95% of the energy, and so on.
  • Step c: The value of i when the increment condition is no longer satisfied is denoted I. An image with width and height I, centered on the center point, is cropped from the superimposed sample face image or face image to be recognized, and the cropped image is used as the output image after region energy extraction.
  • FIG. 13 is a schematic diagram of performing region energy extraction on a superimposed face image according to an embodiment of the present invention.
  • Point I is the center point of the b_temp image, and the initial value of i is 0. While sum/10 is less than or equal to 0.9, the region to be cropped, centered on the center point of the b_temp image, continues to expand; C is the region after i has been expanded.
  • When sum/10 is greater than 0.9, the image is cropped around the center point of the b_temp image, and the cropped image has width and height I, where I is the value of i when the increment condition is no longer satisfied. That is, C in FIG. 13 is the size of the image to be cropped, with width and height both I, and the cropped image is the output of region energy extraction.
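Steps b and c above can be sketched as follows. Reading the increment condition as "window energy / total energy <= 0.9" is an assumption for illustration (the extracted text writes "sum/10" without reproducing the formula):

```python
# Sketch of region energy extraction: grow a square window of side i
# around the image centre in steps of d until the window contains a set
# fraction (here 0.9) of the total pixel energy, then crop that window.
# The sum/total reading of the increment condition is an assumption.

def region_energy_crop(img, d=2, ratio=0.9):
    h, w = len(img), len(img[0])
    cy, cx = h // 2, w // 2
    total = sum(sum(row) for row in img)
    i = 0
    while True:
        half = i // 2
        window_sum = sum(img[y][x]
                         for y in range(max(0, cy - half), min(h, cy + half + 1))
                         for x in range(max(0, cx - half), min(w, cx + half + 1)))
        if window_sum / total > ratio or i >= min(h, w):
            break
        i += d                  # increment i in units of d
    half = i // 2
    return [row[max(0, cx - half):min(w, cx + half + 1)]
            for row in img[max(0, cy - half):min(h, cy + half + 1)]]
```

When the energy is concentrated at the centre the crop stays small; when the energy is spread out the window grows toward the full image, which is the behaviour FIG. 13 illustrates.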
  • FIG. 9 is a schematic diagram of the face image of FIG. 8 after the circular-symmetric-Gabor-transformed face image is superimposed and region energy extraction is performed; that is, FIG. 9 shows the eighth face image of the sample face image after the circular symmetric Gabor transform, superposition, and extraction, and likewise represents the corresponding processing of the face image to be recognized.
  • FIG. 10 is a schematic diagram comparing the fourth face image in FIG. 8 and FIG. 9.
  • FIG. 11 is a schematic diagram in which five face image histograms from FIG. 8 are superimposed together; that is, FIG. 11 shows the five histograms obtained by applying the local binary pattern transformation to the five filtered images produced by the circular symmetric Gabor transform, superimposed together.
  • step S60 includes:
  • Step d: calculating the distance between the sample histogram and the to-be-identified histogram using a Euclidean distance formula;
  • The terminal calculates the distance between the sample histogram and the to-be-identified histogram using a Euclidean distance formula.
  • The Euclidean distance formula, also known as the Euclidean metric, gives the true distance between two points in m-dimensional space. In two-dimensional space, the Euclidean distance formula is D_{i,j} = sqrt((x_i - x_j)^2 + (y_i - y_j)^2), where:
  • (x i , y i ) is the position coordinate of the face image in the histogram to be recognized;
  • (x j , y j ) is the position coordinate of the j-th face image in the sample histogram;
  • D i,j is the distance between the sample histogram and the histogram to be identified.
  • Step e comparing a distance between the sample histogram and the to-be-identified histogram
  • Step f: When the distance between a sample histogram and the to-be-identified histogram is the smallest, the face image corresponding to that smallest distance is determined to be the face image in the sample face image that is the same as the face image to be recognized.
  • The terminal compares the distance between the histogram of every face image in the sample face image and the to-be-identified histogram; when the histogram of a face image in the sample face image has the smallest distance to the to-be-identified histogram, the terminal determines that this face image is the face image in the sample face image that is the same as the face image to be recognized.
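Steps d-f can be sketched as follows; representing each histogram as a plain vector of bin counts is an illustrative assumption:

```python
# Sketch of steps d-f: compute a Euclidean distance between the
# to-be-identified histogram and each sample histogram, then pick the
# sample with the smallest distance as the matching face.
import math

def euclidean(h1, h2):
    """m-dimensional Euclidean distance between two histogram vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def best_match(sample_histograms, query):
    """Return the index of the sample histogram closest to the query."""
    distances = [euclidean(h, query) for h in sample_histograms]
    return distances.index(min(distances))

samples = [[4, 0, 1], [1, 2, 3], [0, 5, 0]]
query = [1, 2, 2]
idx = best_match(samples, query)  # index of the nearest sample histogram
```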
  • FIG. 12 is a schematic diagram of the histogram of the face image in the sample face image of FIG. 5 that is the same as the face image to be recognized.
  • the invention further provides a face recognition device.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a face recognition device according to the present invention.
  • the face recognition device includes:
  • the acquiring module 10 is configured to acquire a sample face image and a face image to be recognized, wherein at least two face images exist in the sample face image;
  • The terminal acquires face images through a camera; the acquired face images include a sample face image and a face image to be recognized. At least two face images exist in the sample face image, while the face image to be recognized contains only one face image.
  • the sample face image includes 11 face images.
  • The terminal acquires the sample face image and the face image to be recognized through a high-resolution camera.
  • The terminal includes, but is not limited to, smartphones and tablets.
  • the first transform module 20 is configured to perform a circular symmetric Gabor transform on the sample face image and the to-be-recognized face image respectively, and correspondingly obtain a sample face image and a to-be-recognized face image after the circular symmetric Gabor transform;
  • FIG. 6 is a schematic diagram of a face image of a face image subjected to circular symmetric Gabor transform according to the present invention.
  • the superimposing module 30 is configured to superimpose the sample face image and the to-be-recognized face image subjected to the circular symmetric Gabor transform respectively, and correspondingly obtain the superimposed sample face image and the to-be-recognized face image;
  • the extracting module 40 is configured to perform region energy extraction on the superimposed sample face image and the to-be-recognized face image respectively, and correspondingly obtain the sample face image and the to-be-recognized face image after the region energy extraction;
  • the second transform module 50 is configured to perform local binary pattern transformation on the sample face image and the to-be-recognized face image respectively after the region energy extraction, and correspondingly obtain a sample histogram including the texture information of the sample face image. And a histogram to be identified including texture information of the face image to be recognized;
  • The terminal superimposes the sample face image subjected to the circular symmetric Gabor transform to obtain a superimposed sample face image, performs region energy extraction on the superimposed sample face image to obtain the region-energy-extracted sample face image, and then performs a local binary pattern transformation on the region-energy-extracted sample face image to obtain a sample histogram containing the texture information of the sample face image.
  • Similarly, the face image to be recognized subjected to the circular symmetric Gabor transform is superimposed to obtain the superimposed face image to be recognized; region energy extraction is then performed on the superimposed face image to be recognized, and the region-energy-extracted face image to be recognized is subjected to a local binary pattern transformation to obtain a to-be-identified histogram containing the texture information of the face image to be recognized.
  • Each face image in the sample face image and the face image to be recognized corresponds to five histograms, that is, each filtered image corresponds to one histogram. The terminal performs a local binary pattern transformation on the face image to be recognized that has undergone the circular symmetric Gabor transform to obtain five to-be-identified histograms containing the texture information of the face image to be recognized, and performs a local binary pattern transformation on each sample face image that has undergone the circular symmetric Gabor transform to obtain five sample histograms containing the texture information of the sample face image.
  • the comparison module 60 is configured to compare the sample histogram with the to-be-identified histogram to obtain a face image in the sample face image that is the same as the to-be-identified face image.
  • The terminal compares the sample histogram with the to-be-identified histogram. When the terminal finds a histogram among the sample histograms that matches the to-be-identified histogram, the face image corresponding to that matching histogram is the face image in the sample face image that is the same as the face image to be recognized.
A conventional method for extracting the texture information of a face image combines GT (Gabor Transform) with the local binary pattern. In the GT-based filtering and extraction process, the face image is first transformed by GT to obtain filtered images; because GT transforms over 5 scales in 8 directions, 40 filtered images are generated, the local binary pattern transform is then applied to all 40, and the face is finally recognized. The computational complexity of that method is too high and its computation time too long, so reading and analyzing video is slow and inefficient.
In contrast, this embodiment extracts the texture information of the face image by combining the circular symmetric Gabor transform with the local binary pattern transform. The face image passes through the circular symmetric Gabor transform to generate five filtered images, which are superimposed and recombined into five new filtered images; region energy extraction is then performed on the recombined images to extract the image that best describes the facial texture, and the local binary pattern transform is applied to the result. Compared with the face recognition algorithm combining GT with the local binary pattern, only 5 filtered images need to be computed instead of 40, which reduces the amount of calculation and the computation time.

In this embodiment, the sample histograms and the histogram to be identified are obtained by applying the circular symmetric Gabor transform, superposition, region energy extraction, and the local binary pattern transform to the sample face images and the face image to be recognized; comparing the sample histograms with the histogram to be identified yields the face image among the sample face images that is the same as the face image to be recognized. Extracting face-image histograms in this way determines the matching sample face image while reducing the amount of calculation in the recognition process, shortening the computation time, and improving the efficiency of face recognition.
FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the face recognition apparatus according to the present invention, proposed based on the first embodiment.

In this embodiment, the face recognition apparatus further includes:

A pre-processing module 70, configured to pre-process the sample face images and the face image to be recognized, where the pre-processing includes gray-scale processing and histogram equalization.
FIG. 5 is a schematic diagram of all the face images in the sample face images after pre-processing, and FIG. 7 is a schematic diagram of one face image after pre-processing. The sample face images in FIG. 5 comprise 11 face images in total, and FIG. 7 shows the eighth sample face image after pre-processing; when the face image to be recognized is the same as the eighth sample face image, FIG. 7 also represents the pre-processed face image to be recognized.
The terminal performs a gray-scale transformation on the sample face images and the face image to be recognized, correspondingly obtaining the gray-transformed face images. The gray-scale transformation, also called gray stretching or contrast stretching, is the most basic point operation: according to some mapping rule, the gray value of each pixel in the original image is transformed into another gray value, and assigning each pixel a new gray value in this way enhances the image.

Histogram equalization is then applied to the gray-transformed face images. The terminal equalizes the histogram of each gray-transformed face image, correspondingly obtaining the equalized, i.e., fully pre-processed, sample face images and face image to be recognized. The steps of histogram equalization are: (1) compute the histogram of the gray-transformed face image; (2) transform it with the cumulative distribution function to obtain the new gray levels; (3) replace the old gray levels with the new ones. The last step is an approximation and should be made as reasonable as possible for the intended purpose, merging equal or similar gray values together.

This pre-processing makes the sample face images and the face image to be recognized insensitive to illumination and skin color, improving the accuracy of face recognition.
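The three equalization steps above can be sketched as follows; this is a minimal illustration assuming an 8-bit grayscale image as a NumPy array, not the terminal's actual implementation.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization following the three steps above:
    (1) histogram of the gray-transformed image, (2) cumulative
    distribution function mapped to new gray levels, (3) remap."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()            # normalised CDF
    mapping = np.round(cdf * 255).astype(np.uint8)
    return mapping[img]                         # replace old grays

img = np.array([[52, 55, 61], [59, 79, 61], [85, 61, 52]], dtype=np.uint8)
out = equalize_histogram(img)       # highest input gray maps to 255
```

Because equal input gray values index the same mapping entry, step (3)'s merging of equal gray values happens automatically.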
After the terminal performs the circular symmetric Gabor transform on the sample face images and the face image to be recognized, it obtains the filtered images of the sample face images and the filtered images of the face image to be recognized; each face image generates five corresponding filtered images through the transform. The terminal then superimposes the filtered images of each sample face image and of the face image to be recognized: the five filtered images of each face image are recombined into five new filtered images, yielding the superimposed sample face images and the superimposed face image to be recognized.
The terminal superimposes the sample face images that have passed through the circular symmetric Gabor transform and superimposes the transformed face image to be recognized; the formula of the superposition process is:

a_temp = uint8((a / a_max) * 255)

where a_max is the maximum pixel value in the a-th filtered image of the face image obtained after the circular symmetric Gabor transform; a is the pixel value of each pixel in the a-th filtered image; 255 is the maximum value of an image pixel; and uint8 converts the computed face image into a data format a_temp that can be output as an image.

For example, let the face image to be superimposed be test, of size w*h. After test passes the circular symmetric Gabor transform, a given filtered image A also has size w*h, and each face image yields five filtered images of size w*h. Here a is one of these five filtered images and a_max the maximum pixel value in it, so a_test = a/a_max divides the value of every pixel in the filtered image by that maximum; a_test is likewise of size w*h, and finally a_temp = uint8(a_test*255). The superposition of each of the five filtered images obtained after the CSGT transform proceeds in the same way, so only one filtered image is described in this embodiment.
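The formula a_temp = uint8((a / a_max) * 255) can be sketched directly; a MATLAB-style uint8 cast rounds and saturates, which the NumPy sketch below approximates with np.round before the cast (an assumption about the intended rounding).

```python
import numpy as np

def superpose(filtered):
    """Rescales one CSGT-filtered image to the 8-bit range,
    a_temp = uint8((a / a_max) * 255), as in the formula above."""
    a = filtered.astype(np.float64)
    a_test = a / a.max()            # divide every pixel by a_max
    a_temp = np.uint8(np.round(a_test * 255))
    return a_temp

# a small stand-in for one of the five w*h filtered images
A = np.array([[0.0, 0.5], [1.0, 2.0]])
a_temp = superpose(A)               # [[0, 64], [128, 255]]
```

The rescaled image keeps the same w*h size; only its gray range changes.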
FIG. 8 is a schematic diagram of the face image of FIG. 7 after the circular symmetric Gabor transform, i.e., of the eighth face image in the sample face images after the transform; when the face image to be recognized is the same as the eighth sample face image, FIG. 8 may also represent the transformed face image to be recognized.

The extraction module 40 includes:
A normalization unit, configured to normalize the a_temp images corresponding to the superimposed sample face images and face image to be recognized, obtaining the normalized b_temp images, where b_temp = a_temp/255;

An extraction unit, configured to compute the sample face images and face image to be recognized after region energy extraction according to the region energy extraction formula, where i is the center point of the superimposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, and i is incremented in units of d about the initial value until the increment condition no longer holds. In the increment condition, sum is the superposed value of all pixels in the region centered on the center point whose width and height both equal i, and 0.9 is a set value that may also be 0.8, 0.85, 0.95, etc.; energy extraction under this increment condition extracts 90% of the energy, at which point the texture information that best describes the face in the a-th filtered image can be extracted; likewise a setting of 0.95 extracts 95% of the energy, and so on;

An interception unit, configured to record as I the value of i at which the increment condition fails, intercept an image from the superimposed sample face image or face image to be recognized centered on the center point with width and height equal to I, and use the intercepted image as the output of region energy extraction;
FIG. 13 is a schematic diagram of region energy extraction on a superimposed face image according to an embodiment of the present invention. In FIG. 13, point I is the center point of the b_temp image and the initial value of i is 0. While sum/10 is less than or equal to 0.9, the width and height of the region to be intercepted are enlarged about the center point by 10 units, i.e., i = i + 10; C in FIG. 13 is the enlarged image region. When sum/10 exceeds 0.9, the image is intercepted about the center point of the b_temp image with width and height I, where I is the value of i at which the increment condition fails; that is, C in FIG. 13 is the size of the image to be intercepted, with width and height both I, and the intercepted image is the face image after region energy extraction.
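The incremental interception described above can be sketched as follows. The normalization in the patent's stopping rule (sum/10 compared with 0.9) is not fully specified in this text, so the sketch assumes the intended criterion is that the window captures 90% of the total pixel energy; the function name and step size are illustrative.

```python
import numpy as np

def region_energy_crop(b_temp, d=2, ratio=0.9):
    """Grows a square window about the image centre in steps of d
    until it holds `ratio` of the total pixel energy, then crops.
    Assumes the 90% criterion is taken against total energy; the
    exact normalisation in the patent's formula is not reproduced."""
    total = b_temp.sum()
    h, w = b_temp.shape
    cy, cx = h // 2, w // 2
    i = 0
    while True:
        i += d
        half = i // 2
        window = b_temp[max(cy - half, 0):cy + half + 1,
                        max(cx - half, 0):cx + half + 1]
        if window.sum() / total > ratio or i >= min(h, w):
            break
    return window

img = np.zeros((21, 21))
img[8:13, 8:13] = 1.0               # energy concentrated at the centre
crop = region_energy_crop(img)      # a 5x5 crop captures all of it
```

Growing from the center keeps the most energetic facial region and discards low-energy borders, which is the stated purpose of the interception unit.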
FIG. 9 is a schematic diagram of the face image of FIG. 8 after superposition and region energy extraction, i.e., of the eighth sample face image after the circular symmetric Gabor transform, superposition, and region energy extraction; it may equally represent the face image to be recognized after the same processing. FIG. 10 compares the fourth face image of FIG. 8 with that of FIG. 9; as the annotations in FIG. 10 show, the face image after superposition and region energy extraction is clearer, which facilitates extraction of its texture information. FIG. 11 is a schematic diagram of the five face image histograms of FIG. 8 superimposed together, i.e., the five histograms obtained by applying the local binary pattern transform to the five filtered images from the circular symmetric Gabor transform, overlaid in one plot.
The comparison module 60 includes:

A calculation unit, configured to calculate the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula;

The terminal calculates the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula, also called the Euclidean metric, i.e., the true distance between two points in m-dimensional space. In two-dimensional space, the Euclidean distance formula is:

D_{i,j} = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)

where (x_i, y_i) are the position coordinates of the face image in the histogram to be identified, (x_j, y_j) are the position coordinates of the j-th face image in the sample histograms, and D_{i,j} is the distance between the sample histogram and the histogram to be identified.
A comparison unit, configured to compare the distances between the sample histograms and the histogram to be identified;

A second determination unit, configured to determine, when the distance between a sample histogram and the histogram to be identified is the smallest, that the face image corresponding to the smallest distance is the face image among the sample face images that is the same as the face image to be recognized.

The terminal compares the distances between the histograms of all the face images in the sample face images and the histogram to be identified; when the distance for a certain sample face image is the smallest among those computed, the terminal determines that this face image is the same as the face image to be recognized.

FIG. 12 is a schematic diagram of the face image histogram showing that the eighth face image of the sample face images of FIG. 5 is the same as the face image to be recognized: the ordinate is largest at abscissa 8, meaning the histogram of the eighth sample face image has the smallest distance to the histogram to be identified.
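The calculation, comparison, and determination units amount to nearest-neighbour matching under Euclidean distance; a minimal sketch follows, with toy histograms standing in for the LBP histograms and match_face an illustrative name.

```python
import numpy as np

def match_face(sample_hists, query_hist):
    """Computes the Euclidean distance from the query histogram to
    each sample histogram, compares them, and returns the index of
    the sample at the smallest distance."""
    dists = [float(np.sqrt(np.sum((h - query_hist) ** 2)))
             for h in sample_hists]
    return int(np.argmin(dists))

# toy data standing in for the sample histograms and the
# histogram to be identified
samples = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.6, 0.4])]
query = np.array([0.5, 0.5])
idx = match_face(samples, query)    # sample 2 lies closest to the query
```

The index with the smallest distance plays the role of the matching sample face image in FIG. 12.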


Abstract

A face recognition method and apparatus. The method comprises: acquiring sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images (S10); performing a circular symmetric Gabor transform, superposition, region energy extraction, and a local binary pattern transform on the sample face images and the face image to be recognized, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized (S20-S50); and comparing the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized (S60). This reduces the amount of calculation in the face recognition process, shortens the calculation time, and improves the efficiency of face recognition.

Description

Face Recognition Method and Apparatus

Technical Field

The present invention relates to the field of multimedia, and in particular to a face recognition method and apparatus.

Background

In the face recognition process, facial texture information needs to be extracted from the face image. The traditional texture extraction method combines GT (Gabor Transform) with the LBP (Local Binary Pattern) transform to extract the facial texture. Specifically, the face image is first Gabor-filtered, and the filtered images are then represented as LBP histograms expressing the facial texture; the face image must be transformed over 5 scales and 8 directions, generating 40 filtered images, which are then LBP-transformed before the face is finally recognized. When texture information is extracted with this combined Gabor/LBP method, the dimensionality of the images to be processed is too high, the amount of calculation is large, and face recognition takes a long time and is inefficient.

Summary

The main object of the present invention is to provide a face recognition method and apparatus that address the prior art's large amount of calculation and long computation time in the face recognition process.

To achieve the above object, the present invention provides a face recognition method comprising the steps of:
acquiring sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

performing a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

superimposing the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

performing region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

performing a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

calculating the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula;

comparing the distances between the sample histograms and the histogram to be identified;

when the distance between a sample histogram and the histogram to be identified is the smallest, determining that the face image corresponding to the smallest distance is the face image among the sample face images that is the same as the face image to be recognized;
wherein the formula for superimposing the transformed sample face images and the transformed face image to be recognized is:

a_temp = uint8((a / a_max) * 255)

where a_max is the maximum pixel value in the a-th filtered image of the face image obtained after the circular symmetric Gabor transform, a is the pixel value of each pixel in the a-th filtered image, 255 is the maximum value of an image pixel, and uint8 converts the computed face image into a data format a_temp that can be output as an image.
In addition, to achieve the above object, the present invention further provides a face recognition method comprising the steps of:

acquiring sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

performing a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

superimposing the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

performing region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

performing a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

comparing the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized.
In addition, to achieve the above object, the present invention further provides a face recognition apparatus comprising:

an acquisition module configured to acquire sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

a first transform module configured to perform a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

a superposition module configured to superimpose the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

an extraction module configured to perform region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

a second transform module configured to perform a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

a comparison module configured to compare the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized.

Compared with the prior art, the present invention obtains the sample histograms and the histogram to be identified by applying the circular symmetric Gabor transform, superposition, region energy extraction, and the local binary pattern transform to the sample face images and the face image to be recognized, and compares the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized. Extracting face-image histograms by combining the circular symmetric Gabor transform with the local binary pattern transform, and thereby determining the matching sample face image, reduces the amount of calculation in the face recognition process, shortens the computation time, and improves the efficiency of face recognition.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a first embodiment of the face recognition method of the present invention;

FIG. 2 is a schematic flowchart of a second embodiment of the face recognition method of the present invention;

FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the face recognition apparatus of the present invention;

FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the face recognition apparatus of the present invention;

FIG. 5 is a schematic diagram of the face images in the sample face images after pre-processing;

FIG. 6 is a schematic diagram of a face image after the circular symmetric Gabor transform;

FIG. 7 is a schematic diagram of a face image after pre-processing;

FIG. 8 is a schematic diagram of the face image of FIG. 7 after the circular symmetric Gabor transform;

FIG. 9 is a schematic diagram of the face image obtained by superimposing and performing region energy extraction on the transformed face images of FIG. 8;

FIG. 10 is a schematic diagram comparing the fourth face image of FIG. 8 with that of FIG. 9;

FIG. 11 is a schematic diagram of the five face image histograms of FIG. 8 superimposed together;

FIG. 12 is a schematic diagram of the face image histogram showing that the eighth face image of the sample face images of FIG. 5 is the same as the face image to be recognized;

FIG. 13 is a schematic diagram of region energy extraction on a superimposed face image in an embodiment of the present invention.

The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description

It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.

The present invention provides a face recognition method.

Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the face recognition method of the present invention.

In this embodiment, the face recognition method comprises:

Step S10, acquiring sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

The terminal acquires face images through a camera; the acquired images include the sample face images and the face image to be recognized, wherein at least two face images are present in the sample face images and only one face image is present in the image to be recognized. In this embodiment the sample face images comprise 11 face images. To obtain relatively clear face images, the terminal acquires them with a high-resolution camera. The terminal includes, but is not limited to, a smartphone or a tablet computer.
Step S20, performing a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

The terminal performs the circular symmetric Gabor transform on the sample face images and on the face image to be recognized, respectively. The circular symmetric Gabor transform is a wavelet transform over 5 scales and multiple directions: it maps one image to images at 5 scales over multiple directions, so after the terminal applies it, each face image is transformed into 5 filtered images. Specifically, referring to FIG. 6, FIG. 6 is a schematic diagram of a face image after the circular symmetric Gabor transform.
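The text does not give the kernel of the circular symmetric Gabor transform; the sketch below builds one circularly symmetric Gabor form found in the literature (a radial complex carrier under a Gaussian envelope) at the 5 scales described. The kernel shape, the per-scale frequency schedule, and all parameter values are assumptions for illustration, not the patent's definition.

```python
import numpy as np

def csg_kernel(scale, size=31, sigma=np.pi):
    """One circularly symmetric Gabor form from the literature: a
    radial complex carrier exp(j*k*r) under a Gaussian envelope,
    with a small DC-compensation term. The exact kernel used in
    the patent is not specified in this text."""
    k = (np.pi / 2) / (np.sqrt(2) ** scale)   # assumed per-scale frequency
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.sqrt(x**2 + y**2)
    envelope = (k**2 / sigma**2) * np.exp(-(k**2) * r**2 / (2 * sigma**2))
    carrier = np.exp(1j * k * r) - np.exp(-sigma**2 / 2)  # roughly zero-mean
    return envelope * carrier

# the 5 scales described above: one kernel (and hence, after
# convolution with the face image, one filtered image) per scale
kernels = [csg_kernel(s) for s in range(5)]
```

Because the carrier depends only on the radius r, each kernel responds equally to all orientations, which is why one kernel per scale replaces the 8 oriented kernels of the conventional Gabor bank.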
Step S30, superimposing the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

Step S40, performing region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

Step S50, performing a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

The terminal superimposes the transformed sample face images to obtain superimposed sample face images, performs region energy extraction on them, and then applies the local binary pattern transform to the extracted images to obtain sample histograms containing the texture information of the sample face images; likewise, the terminal superimposes the transformed face image to be recognized, performs region energy extraction, and applies the local binary pattern transform to obtain a histogram to be identified containing the texture information of the face image to be recognized. Each face image among the sample face images and the face image to be recognized corresponds to five histograms, one per filtered image: the terminal obtains 5 histograms to be identified containing the texture information of the face image to be recognized, and 5 sample histograms containing the texture information of each transformed sample face image.
Step S60, comparing the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized.

The terminal compares the sample histograms with the histogram to be identified. When the terminal determines that the histogram of a certain face among the sample histograms matches the histogram to be identified, the face image corresponding to that matching histogram is the face image among the sample face images that is the same as the face image to be recognized.
The traditional method of extracting the texture information of a face image is an algorithm combining GT (Gabor Transform) with the local binary pattern. When GT is used to filter and extract the texture information, the face image is first transformed by GT to obtain filtered images; since GT transforms over 5 scales and 8 directions, 40 filtered images are generated, the local binary pattern transform is then applied to all 40, and recognition is performed last. The computational complexity of that method is too high and its computation time too long, so reading and analyzing video is slow and inefficient. The present embodiment instead extracts texture information with an algorithm combining the circular symmetric Gabor transform and the local binary pattern transform: the face image passes through the circular symmetric Gabor transform to generate 5 filtered images, which are superimposed and recombined into 5 new filtered images; region energy extraction is performed on the recombined images to extract the image that best describes the facial texture, and the local binary pattern transform is then applied. Compared with the combined GT/local-binary-pattern recognition algorithm, only 5 filtered images need to be computed instead of 40, which reduces the amount of calculation and the computation time.

In this embodiment, the sample histograms and the histogram to be identified are obtained by applying the circular symmetric Gabor transform, superposition, region energy extraction, and the local binary pattern transform to the sample face images and the face image to be recognized, and the sample histograms are compared with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized. Extracting face-image histograms by combining the circular symmetric Gabor transform with the local binary pattern transform, and thereby determining the matching sample face image, reduces the amount of calculation in the face recognition process, shortens the computation time, and improves the efficiency of face recognition.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the face recognition method of the present invention, proposed based on the first embodiment.

In this embodiment, the face recognition method further comprises:

Step S70, pre-processing the sample face images and the face image to be recognized, wherein the pre-processing includes gray-scale processing and histogram equalization.

When the terminal acquires the sample face images and the face image to be recognized, it pre-processes them, the pre-processing including gray-scale processing and histogram equalization. Specifically, referring to FIGS. 5 and 7, FIG. 5 is a schematic diagram of all the sample face images after pre-processing and FIG. 7 is a schematic diagram of one face image after pre-processing. The sample face images of FIG. 5 comprise 11 face images in total, and FIG. 7 shows the eighth sample face image after pre-processing. When the face image to be recognized is the same as the eighth sample face image, FIG. 7 also represents the pre-processed face image to be recognized.
A gray-scale transformation is performed on the sample face images and the face image to be recognized;

The terminal performs the gray-scale transformation on the sample face images and the face image to be recognized, correspondingly obtaining the gray-transformed face images. The gray-scale transformation, also called gray stretching or contrast stretching, is the most basic point operation: according to some mapping rule, the gray value of each pixel in the original image is transformed into another gray value, and the image is enhanced by assigning each pixel this new gray value.

Histogram equalization is then applied to the gray-transformed face images.

The terminal equalizes the histograms of the gray-transformed face images, correspondingly obtaining the equalized, i.e., pre-processed, sample face images and face image to be recognized. The steps of histogram equalization are: (1) compute the histogram of the gray-transformed face image; (2) transform it with the cumulative distribution function to obtain the new gray levels; (3) replace the old gray levels with the new ones. The last step is an approximation and should be made as reasonable as possible for the intended purpose, merging equal or similar gray values together.
Further, the terminal applies median filtering to the sample face images and the face image to be recognized. Median filtering sorts the pixels of a local region by gray level and takes the median gray value of the neighborhood as the gray value of the current pixel. Its steps are: (1) roam the filter template across the image, aligning the template center with a pixel position; (2) read the gray values of the pixels covered by the template; (3) sort these gray values from small to large; (4) assign the middle value of the sorted data to the pixel at the template center. The terminal also applies homomorphic filtering to the face images in the sample frames and the frames to be classified. Homomorphic filtering converts the multiplicative (non-additive) brightness model of the image into an additive form so that filter enhancement can be applied. Its steps are: (1) take the logarithm of both sides of the brightness function and then the Fourier transform; (2) pass the result through a unified filter; (3) take the inverse Fourier transform of the filter output and then the exponential transform. With a suitable filter, the dynamic range of the illumination component can be compressed and the reflectance component boosted, improving image contrast and emphasizing object contours.
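The four median-filtering steps above map directly to code; this is a minimal 3x3 sketch over a NumPy array, not the terminal's implementation.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter following the four steps above: slide the
    template over the image, read the 9 covered gray values, sort
    them, and assign the middle value to the centre pixel."""
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = np.sort(img[y-1:y+2, x-1:x+2], axis=None)
            out[y, x] = window[4]   # middle of the 9 sorted values
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255                     # isolated salt-noise spike
den = median_filter3(img)           # the spike is suppressed
```

Because a single outlier can never be the median of nine values, isolated noise pixels are removed while edges are preserved better than with mean filtering.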
By applying gray-scale transformation, histogram equalization, and similar processing to the acquired sample face images and face image to be recognized, this embodiment makes the acquired images insensitive to factors such as illumination and skin color, improving the accuracy of face recognition.
Specifically, in the embodiments shown in FIGS. 1 and 2, after the terminal performs the circular symmetric Gabor transform on the sample face images and the face image to be recognized, it obtains the filtered images of the sample face images and the filtered images of the face image to be recognized; each face image generates 5 corresponding filtered images through the transform. The terminal superimposes the filtered images of each sample face image and of the face image to be recognized: the 5 filtered images of each face image are recombined into 5 new filtered images, yielding the superimposed sample face images and the superimposed face image to be recognized. The formula of the superposition process is:

a_temp = uint8((a / a_max) * 255)

where a_max is the maximum pixel value in the a-th filtered image of the face image obtained after the circular symmetric Gabor transform, a is the pixel value of each pixel in the a-th filtered image, 255 is the maximum value of an image pixel, and uint8 converts the computed face image into a data format a_temp that can be output as an image. For example, let the face image to be superimposed be test, of size w*h; after test passes the circular symmetric Gabor transform, a given filtered image A also has size w*h, and each face image yields 5 filtered images of size w*h. Here a is one of these 5 filtered images and a_max the maximum pixel value in it, so a_test = a/a_max divides the value of every pixel in the filtered image by that maximum; a_test is likewise of size w*h, and finally a_temp = uint8(a_test*255). The superposition of each of the 5 filtered images obtained after the CSGT transform proceeds in the same way, so only one filtered image is described in this embodiment. Specifically, referring to FIG. 8, FIG. 8 is a schematic diagram of the face image of FIG. 7 after the circular symmetric Gabor transform, i.e., of the eighth sample face image after the transform; when the face image to be recognized is the same as the eighth sample face image, FIG. 8 may also represent the transformed face image to be recognized.
In this embodiment, step S40 comprises:

Step a, normalizing the a_temp images corresponding to the superimposed sample face images and face image to be recognized, obtaining the normalized b_temp images, where b_temp = a_temp/255;

Step b, computing the sample face images and face image to be recognized after region energy extraction according to the region energy extraction formula:

Figure PCTCN2016084618-appb-000003

where i is the center point of the superimposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, and i is incremented in units of d about the initial value until the increment condition no longer holds, the increment condition being:

Figure PCTCN2016084618-appb-000004

where sum is the superposed value of all pixels in the region centered on the center point whose width and height both equal i, and 0.9 is a set value that may also be 0.8, 0.85, 0.95, etc.; energy extraction under this increment condition extracts 90% of the energy, at which point the texture information that best describes the face in the a-th filtered image can be extracted; likewise a setting of 0.95 extracts 95% of the energy, and so on.

Step c, recording as I the value of i at which the increment condition fails, intercepting an image from the superimposed sample face image or face image to be recognized centered on the center point with width and height equal to I, and using the intercepted image as the output of region energy extraction.

The terminal's region energy extraction on the superimposed face image to be recognized is identical to that on the superimposed sample face images and is not repeated here. Specifically, referring to FIG. 13, FIG. 13 is a schematic diagram of region energy extraction on a superimposed face image in an embodiment of the present invention. As FIG. 13 shows, point I is the center point of the b_temp image and the initial value of i is 0; while sum/10 is less than or equal to 0.9, the width and height of the region to be intercepted are enlarged about the center point by 10 units, i.e., i = i + 10, where sum is the superposition of the pixel values of all pixels in the region centered on the center point with width and height i. In FIG. 13, C is the enlarged image region; when sum/10 exceeds 0.9, the image is intercepted about the center point of the b_temp image with width and height I, I being the value of i at which the increment condition fails, so C in FIG. 13 is the size of the image to be intercepted, with width and height both I, and the intercepted image is the face image after region energy extraction, finally obtained as test_end = uint8(I*255).
Specifically, referring to FIGS. 9 and 10, FIG. 9 is a schematic diagram of the face image of FIG. 8 after superposition and region energy extraction, i.e., of the eighth sample face image after the circular symmetric Gabor transform, superposition, and region energy extraction; it may equally represent the face image to be recognized after the same processing. FIG. 10 compares the fourth face image of FIG. 8 with that of FIG. 9; as the annotations in FIG. 10 show, the face image after superposition and region energy extraction is clearer, which facilitates the extraction of its texture information. Specifically, referring to FIG. 11, FIG. 11 is a schematic diagram of the five face image histograms of FIG. 8 superimposed together, i.e., the five histograms obtained by applying the local binary pattern transform to the five filtered images from the circular symmetric Gabor transform, overlaid in one plot.
In this embodiment, step S60 comprises:

Step d, calculating the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula;

The terminal calculates the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula, also called the Euclidean metric, i.e., the true distance between two points in m-dimensional space. In two-dimensional space, the Euclidean distance formula is:

D_{i,j} = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)

where (x_i, y_i) are the position coordinates of the face image in the histogram to be identified, (x_j, y_j) are the position coordinates of the j-th face image in the sample histograms, and D_{i,j} is the distance between the sample histogram and the histogram to be identified.

Step e, comparing the distances between the sample histograms and the histogram to be identified;

Step f, when the distance between a sample histogram and the histogram to be identified is the smallest, determining that the face image corresponding to the smallest distance is the face image among the sample face images that is the same as the face image to be recognized.

The terminal compares the distances between the histograms of all the face images in the sample face images and the histogram to be identified; when the distance for a certain sample face image is the smallest among those computed, the terminal determines that this face image is the same as the face image to be recognized. Specifically, referring to FIG. 12, FIG. 12 is a schematic diagram of the face image histogram showing that the eighth face image of the sample face images of FIG. 5 is the same as the face image to be recognized: the ordinate is largest at abscissa 8, indicating that the face image to be recognized is most similar to the eighth sample face image, i.e., the distance between the histogram of the eighth sample face image and the histogram to be identified is the smallest.
The present invention further provides a face recognition apparatus.

Referring to FIG. 3, FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the face recognition apparatus of the present invention.

In this embodiment, the face recognition apparatus comprises:

An acquisition module 10, configured to acquire sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

The terminal acquires face images through a camera; the acquired images include the sample face images and the face image to be recognized, wherein at least two face images are present in the sample face images and only one face image is present in the image to be recognized. In this embodiment the sample face images comprise 11 face images. To obtain relatively clear face images, the terminal acquires them with a high-resolution camera. The terminal includes, but is not limited to, a smartphone or a tablet computer.
A first transform module 20, configured to perform a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

The terminal performs the circular symmetric Gabor transform on the sample face images and on the face image to be recognized, respectively. The circular symmetric Gabor transform is a wavelet transform over 5 scales and multiple directions: it maps one image to images at 5 scales over multiple directions, so after the transform each face image becomes 5 filtered images. Specifically, referring to FIG. 6, FIG. 6 is a schematic diagram of a face image after the circular symmetric Gabor transform.

A superposition module 30, configured to superimpose the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

An extraction module 40, configured to perform region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

A second transform module 50, configured to perform a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

The terminal superimposes the transformed sample face images to obtain superimposed sample face images, performs region energy extraction on them, and applies the local binary pattern transform to the extracted images to obtain sample histograms containing the texture information of the sample face images; likewise, it superimposes the transformed face image to be recognized, performs region energy extraction, and applies the local binary pattern transform to obtain a histogram to be identified containing its texture information. Each face image among the sample face images and the face image to be recognized corresponds to five histograms, one per filtered image: the terminal obtains 5 histograms to be identified for the face image to be recognized and 5 sample histograms for each transformed sample face image.
A comparison module 60, configured to compare the sample histograms with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized.

The terminal compares the sample histograms with the histogram to be identified. When the terminal determines that the histogram of a certain face among the sample histograms matches the histogram to be identified, the face image corresponding to that matching histogram is the face image among the sample face images that is the same as the face image to be recognized.

The traditional method of extracting the texture information of a face image is an algorithm combining GT (Gabor Transform) with the local binary pattern: the face image is first transformed by GT over 5 scales and 8 directions, generating 40 filtered images, the local binary pattern transform is applied to all 40, and recognition is performed last. Its computational complexity is too high and its computation time too long, so reading and analyzing video is slow and inefficient. The present embodiment instead combines the circular symmetric Gabor transform with the local binary pattern transform: the face image is transformed into 5 filtered images, which are superimposed and recombined into 5 new filtered images; region energy extraction is performed on the recombined images to extract the image that best describes the facial texture, and the local binary pattern transform is then applied. Compared with the combined GT/local-binary-pattern algorithm, only 5 filtered images need to be computed instead of 40, which reduces the amount of calculation and the computation time.

In this embodiment, the sample histograms and the histogram to be identified are obtained by applying the circular symmetric Gabor transform, superposition, region energy extraction, and the local binary pattern transform to the sample face images and the face image to be recognized, and the sample histograms are compared with the histogram to be identified to obtain the face image among the sample face images that is the same as the face image to be recognized. This reduces the amount of calculation in the face recognition process, shortens the computation time, and improves the efficiency of face recognition.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the face recognition apparatus of the present invention, proposed based on the first embodiment.

In this embodiment, the face recognition apparatus further comprises:

A pre-processing module 70, configured to pre-process the sample face images and the face image to be recognized, wherein the pre-processing includes gray-scale processing and histogram equalization.

When the terminal acquires the sample face images and the face image to be recognized, it pre-processes them, the pre-processing including gray-scale processing and histogram equalization. Specifically, referring to FIGS. 5 and 7, FIG. 5 is a schematic diagram of all the sample face images after pre-processing and FIG. 7 is a schematic diagram of one face image after pre-processing. The sample face images of FIG. 5 comprise 11 face images in total, and FIG. 7 shows the eighth sample face image after pre-processing. When the face image to be recognized is the same as the eighth sample face image, FIG. 7 also represents the pre-processed face image to be recognized.

A gray-scale transformation is performed on the sample face images and the face image to be recognized;

The terminal performs the gray-scale transformation, correspondingly obtaining the gray-transformed face images. The gray-scale transformation, also called gray stretching or contrast stretching, is the most basic point operation: according to some mapping rule, the gray value of each pixel in the original image is transformed into another gray value, and the image is enhanced by assigning each pixel this new gray value.

Histogram equalization is then applied to the gray-transformed face images.

The terminal equalizes the histograms of the gray-transformed face images, correspondingly obtaining the equalized, i.e., pre-processed, sample face images and face image to be recognized. The steps of histogram equalization are: (1) compute the histogram of the gray-transformed face image; (2) transform it with the cumulative distribution function to obtain the new gray levels; (3) replace the old gray levels with the new ones. The last step is an approximation and should be made as reasonable as possible for the intended purpose, merging equal or similar gray values together.

By applying gray-scale transformation, histogram equalization, and similar processing to the acquired images, this embodiment makes them insensitive to factors such as illumination and skin color, improving the accuracy of face recognition.
Specifically, in the embodiments shown in FIGS. 1 and 2, after the terminal performs the circular symmetric Gabor transform on the sample face images and the face image to be recognized, it obtains the filtered images of the sample face images and the filtered images of the face image to be recognized; each face image generates five corresponding filtered images through the transform. The terminal superimposes the filtered images of each sample face image and of the face image to be recognized: the five filtered images of each face image are recombined into five new filtered images, yielding the superimposed sample face images and the superimposed face image to be recognized. The formula of the superposition process is:

a_temp = uint8((a / a_max) * 255)

where a_max is the maximum pixel value in the a-th filtered image of the face image obtained after the circular symmetric Gabor transform, a is the pixel value of each pixel in the a-th filtered image, 255 is the maximum value of an image pixel, and uint8 converts the computed face image into a data format a_temp that can be output as an image. For example, let the face image to be superimposed be test, of size w*h; after test passes the circular symmetric Gabor transform, a given filtered image A also has size w*h, and each face image yields five filtered images of size w*h. Here a is one of these five filtered images and a_max the maximum pixel value in it, so a_test = a/a_max divides the value of every pixel in the filtered image by that maximum, a_test is likewise of size w*h, and finally a_temp = uint8(a_test*255). The superposition of each of the five filtered images obtained after the CSGT transform proceeds in the same way, so only one filtered image is described in this embodiment. Specifically, referring to FIG. 8, FIG. 8 is a schematic diagram of the face image of FIG. 7 after the circular symmetric Gabor transform, i.e., of the eighth sample face image after the transform; when the face image to be recognized is the same as the eighth sample face image, FIG. 8 may also represent the transformed face image to be recognized.

In this embodiment, the extraction module 40 comprises:

A normalization unit, configured to normalize the a_temp images corresponding to the superimposed sample face images and face image to be recognized, obtaining the normalized b_temp images, where b_temp = a_temp/255;

An extraction unit, configured to compute the sample face images and face image to be recognized after region energy extraction according to the region energy extraction formula:

Figure PCTCN2016084618-appb-000007

where i is the center point of the superimposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, and i is incremented in units of d about the initial value until the increment condition no longer holds, the increment condition being:

Figure PCTCN2016084618-appb-000008

where sum is the superposed value of all pixels in the region centered on the center point whose width and height both equal i, and 0.9 is a set value that may also be 0.8, 0.85, 0.95, etc.; energy extraction under this increment condition extracts 90% of the energy, at which point the texture information that best describes the face in the a-th filtered image can be extracted; likewise a setting of 0.95 extracts 95% of the energy, and so on;

An interception unit, configured to record as I the value of i at which the increment condition fails, intercept an image from the superimposed sample face image or face image to be recognized centered on the center point with width and height equal to I, and use the intercepted image as the output of region energy extraction;

The terminal's region energy extraction on the superimposed face image to be recognized is identical to that on the superimposed sample face images and is not repeated here. Specifically, referring to FIG. 13, FIG. 13 is a schematic diagram of region energy extraction on a superimposed face image in an embodiment of the present invention. As FIG. 13 shows, point I is the center point of the b_temp image and the initial value of i is 0; while sum/10 is less than or equal to 0.9, the width and height of the region to be intercepted are enlarged about the center point by 10 units, i.e., i = i + 10, where sum is the superposition of the pixel values of all pixels in the region centered on the center point with width and height i. In FIG. 13, C is the enlarged image region; when sum/10 exceeds 0.9, the image is intercepted about the center point of the b_temp image with width and height I, I being the value of i at which the increment condition fails, so C in FIG. 13 is the size of the image to be intercepted, with width and height both I, and the intercepted image is the face image after region energy extraction, finally obtained as test_end = uint8(I*255).

Specifically, referring to FIGS. 9 and 10, FIG. 9 is a schematic diagram of the face image of FIG. 8 after superposition and region energy extraction, i.e., of the eighth sample face image after the circular symmetric Gabor transform, superposition, and region energy extraction; it may equally represent the face image to be recognized after the same processing. FIG. 10 compares the fourth face image of FIG. 8 with that of FIG. 9; as the annotations in FIG. 10 show, the face image after superposition and region energy extraction is clearer, which facilitates extraction of its texture information. Specifically, referring to FIG. 11, FIG. 11 is a schematic diagram of the five face image histograms of FIG. 8 superimposed together, i.e., the five histograms obtained by applying the local binary pattern transform to the five filtered images from the circular symmetric Gabor transform, overlaid in one plot.
In this embodiment, the comparison module 60 comprises:

A calculation unit, configured to calculate the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula;

The terminal calculates the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula, also called the Euclidean metric, i.e., the true distance between two points in m-dimensional space. In two-dimensional space, the Euclidean distance formula is:

D_{i,j} = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)

where (x_i, y_i) are the position coordinates of the face image in the histogram to be identified, (x_j, y_j) are the position coordinates of the j-th face image in the sample histograms, and D_{i,j} is the distance between the sample histogram and the histogram to be identified.

A comparison unit, configured to compare the distances between the sample histograms and the histogram to be identified;

A second determination unit, configured to determine, when the distance between a sample histogram and the histogram to be identified is the smallest, that the face image corresponding to the smallest distance is the face image among the sample face images that is the same as the face image to be recognized.

The terminal compares the distances between the histograms of all the face images in the sample face images and the histogram to be identified; when the distance for a certain sample face image is the smallest among those computed, the terminal determines that this face image is the same as the face image to be recognized. Specifically, referring to FIG. 12, FIG. 12 is a schematic diagram of the face image histogram showing that the eighth face image of the sample face images of FIG. 5 is the same as the face image to be recognized: the ordinate is largest at abscissa 8, indicating that the face image to be recognized is most similar to the eighth sample face image, i.e., the distance between the histogram of the eighth sample face image and the histogram to be identified is the smallest.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (16)

1. A face recognition method, characterized in that the face recognition method comprises the following steps:

acquiring sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images;

performing a circular symmetric Gabor transform on the sample face images and on the face image to be recognized, correspondingly obtaining transformed sample face images and a transformed face image to be recognized;

superimposing the transformed sample face images and the transformed face image to be recognized, correspondingly obtaining superimposed sample face images and a superimposed face image to be recognized;

performing region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction;

performing a local binary pattern transform on the sample face images and the face image to be recognized after region energy extraction, correspondingly obtaining sample histograms containing the texture information of the sample face images and a histogram to be identified containing the texture information of the face image to be recognized;

calculating the distances between the sample histograms and the histogram to be identified by the Euclidean distance formula;

comparing the distances between the sample histograms and the histogram to be identified;

when the distance between a sample histogram and the histogram to be identified is the smallest, determining that the face image corresponding to the smallest distance is the face image among the sample face images that is the same as the face image to be recognized;

wherein the formula for superimposing the transformed sample face images and the transformed face image to be recognized is:

a_temp = uint8((a / a_max) * 255)

where a_max is the maximum pixel value in the a-th filtered image of the face image obtained after the circular symmetric Gabor transform, a is the pixel value of each pixel in the a-th filtered image, 255 is the maximum value of an image pixel, and uint8 converts the computed face image into a data format a_temp that can be output as an image.
2. The face recognition method according to claim 1, characterized in that the step of performing region energy extraction on the superimposed sample face images and face image to be recognized, correspondingly obtaining sample face images and a face image to be recognized after region energy extraction, comprises:

normalizing the a_temp images corresponding to the superimposed sample face images and face image to be recognized, obtaining the normalized b_temp images, where b_temp = a_temp/255;

computing the sample face images and face image to be recognized after region energy extraction according to the region energy extraction formula:

Figure PCTCN2016084618-appb-100002

where i is the center point of the superimposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, i is incremented in units of d about the initial value until the increment condition no longer holds, and sum is the superposed value of all pixels in the region centered on the center point whose width and height both equal i;

recording as I the value of i at which the increment condition fails, intercepting an image from the superimposed sample face image or face image to be recognized centered on the center point with width and height equal to I, and using the intercepted image as the output of region energy extraction.
3. The face recognition method according to claim 1, characterized in that, after the step of acquiring face images to obtain sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images, the method further comprises:

pre-processing the sample face images and the face image to be recognized, wherein the pre-processing includes gray-scale processing and histogram equalization.

4. The face recognition method according to claim 2, characterized in that, after the step of acquiring face images to obtain sample face images and a face image to be recognized, wherein at least two face images are present in the sample face images, the method further comprises:

pre-processing the sample face images and the face image to be recognized, wherein the pre-processing includes gray-scale processing and histogram equalization.
  5. A face recognition method, characterized in that the face recognition method comprises the following steps:
    acquiring a sample face image and a face image to be recognized, wherein the sample face image contains at least two face images;
    performing a circularly symmetric Gabor transform on the sample face image and on the face image to be recognized respectively, to correspondingly obtain a circularly-symmetric-Gabor-transformed sample face image and face image to be recognized;
    superposing the circularly-symmetric-Gabor-transformed sample face image and face image to be recognized respectively, to correspondingly obtain a superposed sample face image and face image to be recognized;
    performing region energy extraction on the superposed sample face image and face image to be recognized respectively, to correspondingly obtain a region-energy-extracted sample face image and face image to be recognized;
    performing a local binary pattern transform on the region-energy-extracted sample face image and face image to be recognized respectively, to correspondingly obtain a sample histogram containing the texture information of the sample face image and a histogram to be recognized containing the texture information of the face image to be recognized;
    comparing the sample histogram with the histogram to be recognized, to obtain the face image among the sample face images that is identical to the face image to be recognized.
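The local binary pattern step that produces the texture histograms can be illustrated with the basic 8-neighbour LBP (a textbook variant; the patent may use a different neighbourhood or radius):

```python
import numpy as np

def lbp_histogram(image):
    # Each interior pixel is compared with its 8 neighbours; the
    # comparison bits form a code in [0, 255], and the normalized
    # histogram of codes is the texture descriptor.
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()   # normalize so histograms are comparable

img = np.random.randint(0, 256, size=(16, 16)).astype(np.uint8)
hist = lbp_histogram(img)
print(hist.shape)
```

Because the codes depend only on sign comparisons with the center pixel, the resulting histogram is largely insensitive to monotonic illumination changes, which is what makes it a useful texture signature here.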
  6. The face recognition method according to claim 5, characterized in that the formula for superposing the circularly-symmetric-Gabor-transformed sample face image and face image to be recognized respectively is:
    Figure PCTCN2016084618-appb-100003
    wherein a_max is the maximum pixel value in the a-th filtered image among the filtered images of the face image obtained by the circularly symmetric Gabor transform; a is the pixel value of each pixel in the a-th filtered image; 255 represents the maximum value of an image pixel; and uint8 converts the computed face image into a_temp, a data format that can be output as an image.
  7. The face recognition method according to claim 5, characterized in that the step of performing region energy extraction on the superposed sample face image and face image to be recognized respectively, to correspondingly obtain a region-energy-extracted sample face image and face image to be recognized, comprises:
    normalizing the a_temp images corresponding to the superposed sample face image and face image to be recognized respectively, to obtain the b_temp images corresponding to the normalized sample face image and face image to be recognized, wherein b_temp = a_temp/255;
    calculating the region-energy-extracted sample face image and face image to be recognized according to a region energy extraction formula, wherein the region energy extraction formula is:
    Figure PCTCN2016084618-appb-100004
    wherein i is the center point of the superposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, i is incremented in steps of d around the initial value until the increment condition is no longer satisfied, and sum is the accumulated pixel value of all pixels within a region centered on the center point whose width and height both equal i;
    denoting the value of i at which the increment condition is no longer satisfied as I, and cropping an image from the superposed sample face image or face image to be recognized, centered on the center point and with width and height equal to I, the cropped image serving as the output image of the region energy extraction.
  8. The face recognition method according to claim 5, characterized in that the step of comparing the sample histogram with the histogram to be recognized, to obtain the face image among the sample face images that is identical to the face image to be recognized, comprises:
    calculating the distance between the sample histogram and the histogram to be recognized by the Euclidean distance formula;
    comparing the distances between the sample histograms and the histogram to be recognized;
    when the distance between a sample histogram and the histogram to be recognized is the smallest, determining that the face image corresponding to the smallest distance is the face image among the sample face images that is identical to the face image to be recognized.
  9. The face recognition method according to claim 5, characterized in that, after the step of acquiring face images to obtain a sample face image and a face image to be recognized, wherein the sample face image contains at least two face images, the method further comprises:
    preprocessing the sample face image and the face image to be recognized, wherein the preprocessing includes grayscale processing and histogram equalization.
  10. The face recognition method according to claim 6, characterized in that, after the step of acquiring face images to obtain a sample face image and a face image to be recognized, wherein the sample face image contains at least two face images, the method further comprises:
    preprocessing the sample face image and the face image to be recognized, wherein the preprocessing includes grayscale processing and histogram equalization.
  11. A face recognition apparatus, characterized in that the face recognition apparatus comprises:
    an acquisition module, configured to acquire a sample face image and a face image to be recognized, wherein the sample face image contains at least two face images;
    a first transform module, configured to perform a circularly symmetric Gabor transform on the sample face image and on the face image to be recognized respectively, to correspondingly obtain a circularly-symmetric-Gabor-transformed sample face image and face image to be recognized;
    a superposition module, configured to superpose the circularly-symmetric-Gabor-transformed sample face image and face image to be recognized respectively, to correspondingly obtain a superposed sample face image and face image to be recognized;
    an extraction module, configured to perform region energy extraction on the superposed sample face image and face image to be recognized respectively, to correspondingly obtain a region-energy-extracted sample face image and face image to be recognized;
    a second transform module, configured to perform a local binary pattern transform on the region-energy-extracted sample face image and face image to be recognized respectively, to correspondingly obtain a sample histogram containing the texture information of the sample face image and a histogram to be recognized containing the texture information of the face image to be recognized;
    a comparison module, configured to compare the sample histogram with the histogram to be recognized, to obtain the face image among the sample face images that is identical to the face image to be recognized.
  12. The face recognition apparatus according to claim 11, characterized in that the formula for superposing the circularly-symmetric-Gabor-transformed sample face image and face image to be recognized respectively is:
    Figure PCTCN2016084618-appb-100005
    wherein a_max is the maximum pixel value in the a-th filtered image among the filtered images of the face image obtained by the circularly symmetric Gabor transform; a is the pixel value of each pixel in the a-th filtered image; 255 represents the maximum value of an image pixel; and uint8 converts the computed face image into a_temp, a data format that can be output as an image.
  13. The face recognition apparatus according to claim 11, characterized in that the extraction module comprises:
    a normalization unit, configured to normalize the a_temp images corresponding to the superposed sample face image and face image to be recognized respectively, to obtain the b_temp images corresponding to the normalized sample face image and face image to be recognized, wherein b_temp = a_temp/255;
    an extraction unit, configured to calculate the region-energy-extracted sample face image and face image to be recognized according to a region energy extraction formula, wherein the region energy extraction formula is:
    Figure PCTCN2016084618-appb-100006
    wherein i is the center point of the superposed sample face image or face image to be recognized, the initial value of i is 0, d is a preset value greater than 0, i is incremented in steps of d around the initial value until the increment condition is no longer satisfied, and sum is the accumulated pixel value of all pixels within a region centered on the center point whose width and height both equal i;
    a cropping unit, configured to denote the value of i at which the increment condition is no longer satisfied as I, and to crop an image from the superposed sample face image or face image to be recognized, centered on the center point and with width and height equal to I, the cropped image serving as the output image of the region energy extraction.
  14. The face recognition apparatus according to claim 11, characterized in that the comparison module comprises:
    a calculation unit, configured to calculate the distance between the sample histogram and the histogram to be recognized by the Euclidean distance formula;
    a comparison unit, configured to compare the distances between the sample histograms and the histogram to be recognized;
    a second determination unit, configured to determine, when the distance between a sample histogram and the histogram to be recognized is the smallest, that the face image corresponding to the smallest distance is the face image among the sample face images that is identical to the face image to be recognized.
  15. The face recognition apparatus according to claim 11, characterized in that the face recognition apparatus further comprises a preprocessing module, configured to preprocess the sample face image and the face image to be recognized, wherein the preprocessing includes grayscale processing and histogram equalization.
  16. The face recognition apparatus according to claim 12, characterized in that the face recognition apparatus further comprises a preprocessing module, configured to preprocess the sample face image and the face image to be recognized, wherein the preprocessing includes grayscale processing and histogram equalization.
PCT/CN2016/084618 2015-12-02 2016-06-03 Face recognition method and apparatus WO2017092272A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510875482.XA CN105528616B (zh) 2015-12-02 2015-12-02 Face recognition method and apparatus
CN201510875482.X 2015-12-02

Publications (1)

Publication Number Publication Date
WO2017092272A1 true WO2017092272A1 (zh) 2017-06-08

Family

ID=55770830

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/084618 WO2017092272A1 (zh) 2015-12-02 2016-06-03 人脸识别方法和装置

Country Status (2)

Country Link
CN (1) CN105528616B (zh)
WO (1) WO2017092272A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528616B (zh) * 2015-12-02 2019-03-12 深圳Tcl新技术有限公司 Face recognition method and apparatus
CN105956554A (zh) * 2016-04-29 2016-09-21 广西科技大学 A face recognition method
CN106384406A (zh) * 2016-08-26 2017-02-08 合肥若涵信智能工程有限公司 Internet security system with protection device
CN110309838B (zh) * 2019-07-08 2023-05-16 上海天诚比集科技有限公司 Preprocessing method for object contour detection in video detection areas based on exponential transformation
CN110782419B (zh) * 2019-10-18 2022-06-21 杭州小影创新科技股份有限公司 A GPU-based three-dimensional face fusion method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024141A (zh) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and optimized local binary patterns
CN102750523A (zh) * 2012-06-19 2012-10-24 Tcl集团股份有限公司 A face recognition method and device
CN102819731A (zh) * 2012-07-23 2012-12-12 常州蓝城信息科技有限公司 Face recognition based on Gabor features and Fisherface
US20130163829A1 (en) * 2011-12-21 2013-06-27 Electronics And Telecommunications Research Institute System for recognizing disguised face using gabor feature and svm classifier and method thereof
CN105426829A (zh) * 2015-11-10 2016-03-23 深圳Tcl新技术有限公司 Video classification method and device based on face images
CN105528616A (zh) * 2015-12-02 2016-04-27 深圳Tcl新技术有限公司 Face recognition method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101089874B (zh) * 2006-06-12 2010-08-18 华为技术有限公司 A remote face-image identity recognition method
CN102306290B (zh) * 2011-10-14 2013-10-30 刘伟华 A video-based face tracking and recognition method
CN103729625A (zh) * 2013-12-31 2014-04-16 青岛高校信息产业有限公司 A face recognition method


Also Published As

Publication number Publication date
CN105528616B (zh) 2019-03-12
CN105528616A (zh) 2016-04-27

Similar Documents

Publication Publication Date Title
WO2017080196A1 (zh) Video classification method and device based on face images
WO2017092272A1 (zh) Face recognition method and apparatus
KR101322168B1 Real-time face recognition apparatus
CN106815560B (zh) A face recognition method applied to an adaptive driver seat
Dharavath et al. Improving face recognition rate with image preprocessing
Bourlai et al. Restoring degraded face images: A case study in matching faxed, printed, and scanned photos
EP2728511A1 Apparatus and method for face recognition
CN110458792B (zh) Method and device for evaluating the quality of face images
JP2010108494A Method and system for determining characteristics of a face in an image
Asmuni et al. An improved multiscale retinex algorithm for motion-blurred iris images to minimize the intra-individual variations
KR100887183B1 Face recognition preprocessing apparatus and method, and face recognition system using the same
WO2017041552A1 (zh) Texture feature extraction method and device
CN108932492A An image fingerprint extraction method based on the non-subsampled shearlet transform
CN111709305A A face age recognition method based on local image patches
Jamil et al. Illumination-invariant ear authentication
CN111259792A Face liveness detection method based on DWT-LBP-DCT features
Sanpachai et al. A study of image enhancement for iris recognition
KR20080079798A Method for face detection and recognition
Forczmański et al. An algorithm of face recognition under difficult lighting conditions
CN111079689B (zh) A fingerprint image enhancement method
Lin et al. Face detection based on skin color segmentation and SVM classification
Dharavath et al. Impact of image preprocessing on face recognition: A comparative analysis
Niazi et al. Hybrid face detection in color images
Zhao et al. A Wavelet-Based Image Preprocessing Method or Illumination Insensitive Face Recognition.
CN112418085A A facial expression recognition method under partial occlusion conditions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869585

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869585

Country of ref document: EP

Kind code of ref document: A1