WO2016169219A1 - Method and apparatus for extracting face texture - Google Patents

Method and apparatus for extracting face texture Download PDF

Info

Publication number
WO2016169219A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
extracting
initial texture
group
Prior art date
Application number
PCT/CN2015/091503
Other languages
English (en)
French (fr)
Inventor
王甜甜 (WANG Tiantian)
江中央 (JIANG Zhongyang)
Original Assignee
深圳TCL数字技术有限公司 (Shenzhen TCL Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳TCL数字技术有限公司 (Shenzhen TCL Digital Technology Co., Ltd.)
Publication of WO2016169219A1 publication Critical patent/WO2016169219A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • the present invention relates to the field of image processing, and in particular, to a method and apparatus for extracting a face texture.
  • the 2D-Gabor filter is a windowed (short-time) Fourier transform that achieves local optimization in both the spatial and frequency domains.
  • when using a 2D-Gabor filter to extract face texture, it is often necessary to perform 2D-Gabor filtering in multiple directions and at multiple scales; the filter has good directional and frequency selectivity in the spatial and frequency domains, and the extracted face features are relatively comprehensive and can effectively express the information of a face image.
  • because the texture features are divided rather finely, the extracted texture features are not significant, and some similar texture features may interfere with face feature recognition.
  • the main object of the present invention is to provide a method for extracting facial texture, which aims to solve the problem that the extracted facial texture information is not significant in the process of extracting facial texture by 2D-Gabor filtering.
  • the present invention provides a method for extracting a face texture, and the method for extracting a face texture includes the following steps:
  • a feature image is extracted from each set of initial texture images, respectively, and the feature image is output.
  • the step of separately extracting feature images from each set of initial texture images and outputting the feature images comprises:
  • the feature image blocks of each group are combined into a feature image, and the feature image is output.
  • the step of determining an image block having the largest pixel value at the same position in each group as the feature image block includes:
  • the pixel values of the image blocks at the same position in each group are compared, and the image block having the largest pixel value is taken as the feature image block.
  • the step of evenly grouping the plurality of initial texture images comprises:
  • the two initial texture images of adjacent scales in the same direction are divided into one group, and the average grouping is sequentially performed.
  • the step of performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images includes:
  • the step of extracting a partial image containing the facial organ from the face image comprises:
  • a partial image containing the eyes, eyebrows and/or mouth is extracted from the image area.
  • the present invention further provides an apparatus for extracting a face texture
  • the apparatus for extracting a face texture includes:
  • a reading module for reading a face image
  • a filtering module configured to perform 2D-Gabor filtering processing on the face image to generate a plurality of initial texture images
  • a grouping module configured to average group the plurality of initial texture images
  • an extraction module configured to respectively extract a feature image from each set of initial texture images, and output the feature image.
  • the extraction module comprises a decomposition unit, a determination unit and a combination unit;
  • the decomposition unit is configured to evenly decompose each initial texture image in each group into image blocks;
  • the determining unit is configured to determine an image block having the largest pixel value in the same position in each group as the feature image block;
  • the combining unit is configured to combine feature image blocks of each group into a feature image, and output the feature image.
  • the determining unit includes a determining subunit and a comparing subunit;
  • the determining subunit is configured to determine a pixel value of each image block in each group
  • the comparison sub-unit is configured to compare the pixel values of the image blocks at the same position in each group, and take the image block with the largest pixel value as the feature image block.
  • the grouping module comprises a sorting unit and a dividing unit
  • the sorting unit is configured to sort the initial texture images in the same direction by scale
  • the dividing unit is configured to divide two initial texture images of adjacent scales in the same direction into one group, and perform average grouping in sequence.
  • the filtering module includes an acquiring unit and a filtering unit;
  • the acquiring unit is configured to extract a partial image containing a facial organ from the face image
  • the filtering unit is configured to perform 2D-Gabor filtering processing on the entire face image and the partial image to generate a plurality of initial texture images.
  • the acquiring unit is further configured to determine, according to a preset ratio, an image area that includes a facial organ in the face image;
  • the acquiring unit is further configured to extract a partial image containing an eye, an eyebrow, and/or a mouth from the image region.
  • the invention evenly groups a plurality of initial texture images generated by 2D-Gabor filtering and extracts a feature image from each group of initial texture images, removing some unimportant texture features in the initial texture images, so that the extracted face texture features are more prominent, which benefits the recognition process.
  • FIG. 1 is a schematic flow chart of a first embodiment of a method for extracting a face texture according to the present invention
  • FIG. 2 is a schematic diagram of a refinement process of step S40 in FIG. 1;
  • FIG. 3 is a schematic flow chart of a second embodiment of a method for extracting a face texture according to the present invention.
  • FIG. 4 is a schematic structural diagram of a preferred embodiment of dividing a face image according to a preset ratio according to the present invention
  • FIG. 5 is a schematic flow chart of a preferred embodiment of extracting a partial image containing a facial organ from a face image according to the present invention
  • FIG. 6 is a schematic diagram of functional modules of a first embodiment of an apparatus for extracting a human face texture according to the present invention
  • FIG. 7 is a schematic diagram of functional modules of a preferred embodiment of the grouping module of FIG. 6;
  • FIG. 8 is a schematic diagram of functional modules of the preferred embodiment of the extraction module of FIG. 6;
  • FIG. 9 is a schematic diagram of functional modules of a second embodiment of an apparatus for extracting facial texture according to the present invention.
  • the main solution of the embodiment of the present invention is: reading a face image; performing 2D-Gabor filtering processing on the face image to generate a plurality of initial texture images; and performing average grouping on the plurality of initial texture images; A feature image is extracted from each set of initial texture images, and the feature image is output.
  • in the existing process of extracting face texture by 2D-Gabor filtering, the extracted face texture features are not significant.
  • the present invention provides a method of extracting a face texture.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a method for extracting a face texture according to the present invention.
  • the method for extracting a face texture includes:
  • Step S10 reading a face image
  • Step S20 performing 2D-Gabor filtering processing on the face image to generate a plurality of initial texture images
  • the face image may be a face image to be recognized in the process of face recognition, a sample image used for comparison, or any other face image from which face texture is to be extracted.
  • the initial texture image may be one or more initial texture images, preferably represented in the form of an image matrix.
  • the two-dimensional filtering process includes performing 2D-Gabor filtering in N directions (N is a positive integer) at M scales (M is a positive integer) to generate N*M initial texture images. In this embodiment, it is preferable to perform 2D-Gabor filtering on the face image.
  • performing 2D-Gabor filtering on the face is not limited to filtering the entire face image; 2D-Gabor filtering may also be performed on partial images of the face, or on both the entire face image and partial images of the face image.
  • for example, 2D-Gabor filtering is performed on the entire face image in N directions at M scales to generate N*M initial texture images; or partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered in N directions at M scales to generate 2*N*M initial texture images; or the entire face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered in N directions at M scales to generate 3*N*M initial texture images.
  • Step S30 performing average grouping on the plurality of initial texture images
  • the initial texture images in the same direction are sorted by scale; two initial texture images of adjacent scales in the same direction are put into one group, and the grouping is performed in sequence.
  • two initial texture images of adjacent scales in the same direction may be put into one group, or the images may be evenly grouped with more than two initial texture images per group. For example, 2D-Gabor filtering is performed on the entire face image in N directions (N is a positive integer) at M scales (M is a positive integer), generating N*M initial texture images; the initial texture images in the same direction are sorted by scale, two initial texture images of adjacent scales in the same direction are put into one group, and the grouping proceeds in sequence, so that the first and second initial texture images in each direction form one group, the third and fourth form the next group, and so on, yielding N*(M/2) initial texture image groups.
  • alternatively, a partial image containing the eyes and eyebrows and a partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales to generate 2*N*M initial texture images; two initial texture images of adjacent scales in the same direction of the eye-and-eyebrow initial texture images are put into one group, and the grouping proceeds in sequence, giving N*(M/2) eye-and-eyebrow initial texture image groups; the mouth initial texture images are grouped in the same way, giving N*(M/2) mouth initial texture image groups.
  • alternatively, the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales, generating N*M initial texture images of the whole face, N*M of the eyes and eyebrows and N*M of the mouth; each set is grouped by pairing adjacent scales in the same direction, giving N*(M/2) whole-face groups, N*(M/2) eye-and-eyebrow groups and N*(M/2) mouth groups.
  • Step S40 extracting feature images from each set of initial texture images, respectively, and outputting the feature images.
  • a feature image that best represents the face texture features is extracted from each group of initial texture images, and the feature image is output. For example, 2D-Gabor filtering is performed on the entire face image, the partial image containing the eyes and eyebrows, and the partial image containing the mouth, respectively, in N directions (N is a positive integer) at M scales (M is a positive integer), and the feature image that best represents the face texture features is then extracted from each resulting group of initial texture images.
  • Step S401 averaging each initial texture image in each group into image blocks
  • the initial texture image in each group can be evenly divided into i parts (i is a positive integer) in the height direction and j parts (j is a positive integer) in the width direction, so that each initial texture image is decomposed into i*j image blocks; the height direction is the vertical direction of the initial texture image, and the width direction is the horizontal direction of the initial texture image. Preferably, the i*j image blocks obtained by decomposing each initial texture image are represented by an image matrix of i rows and j columns.
  • Step S402 determining an image block having the largest pixel value at the same position in each group as a feature image block
  • determining the pixel value of each image block may be done by querying the pixel values of the pixels of the image block one by one, and taking the pixel value of the pixel with the largest pixel value as the pixel value of the image block. The pixel values of the image blocks at the same position in each group are compared, and the image block having the largest pixel value is taken as the feature image block.
  • for example, an initial texture image group contains a first initial texture image and a second initial texture image, both of which are evenly decomposed into i*j image blocks (i and j are positive integers), that is, the height of the initial texture image is divided into i equal parts and its width into j equal parts. The pixel value of each image block is determined by querying the pixel values of its pixels one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the image block.
  • Step S403 combining the feature image blocks of each group into a feature image, and outputting the feature image.
  • the feature image blocks in each group are arranged into a feature image in a preset order, and the feature image is output.
  • for example, the feature image block extracted from row i (i is a positive integer), column j (j is a positive integer) of the initial texture images is placed at row i, column j of the feature image, and so on, to combine the feature image; the feature image is output as an image representing the face texture features.
  • the plurality of initial texture images generated by the 2D-Gabor filtering process are evenly grouped, and a feature image is extracted from each group of initial texture images; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
  • FIG. 3 is a schematic flowchart diagram of a second embodiment of a method for extracting a face texture according to the present invention. Based on the first embodiment of the method for extracting a face texture, the step S20 includes:
  • Step S21 extracting a partial image containing a facial organ from the face image
  • a face image is read, and a partial image containing a facial organ including an eye, an eyebrow, a nose or a mouth, and the like is extracted from the face image.
  • a partial image containing one of the facial organs such as the eyes, eyebrows, nose or mouth may be extracted from the face image for processing; alternatively, partial images containing more than one of the facial organs such as the eyes, eyebrows, mouth or nose may be extracted from the face image for processing.
  • preferably, a partial image containing the eyes and eyebrows and a partial image containing the mouth are extracted from the face image; during facial expression recognition, the texture features of the eyes, eyebrows and mouth are more useful for distinguishing the texture information of different faces with different expressions.
  • Step S211 determining an image area of the face image that includes a facial organ according to a preset ratio
  • the preset ratio is preferably a ratio that captures a relatively large image region containing the facial organs, for example:
  • the preset ratio is larger than the traditional "three sections, five eyes" facial proportions; with the upper-left corner of the face image as the coordinate origin, the image region of the eyes and eyebrows and the image region of the mouth are defined by preset expressions in terms of w and h, where w is the width of the face image and h is the height of the face image.
  • Step S212 extracting a partial image containing eyes, eyebrows and/or mouth from the image area.
  • a partial image of the eyes and eyebrows is extracted from the determined eye-and-eyebrow image region, and a partial image of the mouth is extracted from the determined mouth image region; the sizes of the eye-and-eyebrow partial image and of the mouth partial image are likewise expressed in terms of w and h, where w is the width of the face image and h is the height of the face image.
  • Step S22 performing 2D-Gabor filtering processing on the entire face image and the partial image to generate a plurality of initial texture images.
  • the two-dimensional filtering process includes performing 2D-Gabor filtering in N directions (N is a positive integer) at M scales (M is a positive integer) to generate N*M initial texture images.
  • the entire face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales, generating N*M initial texture images of the whole face, N*M initial texture images of the eyes and eyebrows and N*M initial texture images of the mouth.
  • the plurality of initial texture images generated by the 2D-Gabor filtering process are evenly grouped, feature images are extracted from each group of initial texture images, and 2D-Gabor filtering is applied to the extracted partial images of the facial organs that are most prominent and most useful for recognition; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
  • the execution bodies of the methods for extracting face textures of the above first to second embodiments may each be a face texture extraction device or a face recognition device that is connected to a face texture extraction device. Further, the method of extracting a face texture may be implemented by a client detection program installed on a face texture extraction device or a face recognition device.
  • the invention further provides an apparatus for extracting facial textures.
  • FIG. 6 is a schematic diagram of functional modules of a preferred embodiment of an apparatus for extracting facial texture according to the present invention.
  • the apparatus includes: a reading module 10, a filtering module 20, a grouping module 30, and an extraction module 40.
  • the reading module 10 is configured to read a face image
  • the filtering module 20 is configured to perform 2D-Gabor filtering processing on the face image to generate a plurality of initial texture images.
  • the face image may be a face image to be recognized in the process of face recognition, a sample image used for comparison, or any other face image from which face texture is to be extracted.
  • the initial texture image may be one or more initial texture images, preferably represented in the form of an image matrix.
  • the two-dimensional filtering process includes performing 2D-Gabor filtering in N directions (N is a positive integer) at M scales (M is a positive integer) to generate N*M initial texture images. In this embodiment, it is preferable to perform 2D-Gabor filtering on the face image.
  • performing 2D-Gabor filtering on the face is not limited to filtering the entire face image; 2D-Gabor filtering may also be performed on partial images of the face, or on both the entire face image and partial images of the face image.
  • for example, 2D-Gabor filtering is performed on the entire face image in N directions at M scales to generate N*M initial texture images; or partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered in N directions at M scales to generate 2*N*M initial texture images; or the entire face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered in N directions at M scales to generate 3*N*M initial texture images.
  • a grouping module 30 configured to average group the plurality of initial texture images
  • the grouping module 30 includes a sorting unit 31 and a dividing unit 32; the sorting unit 31 is configured to sort the initial texture images in the same direction by scale; the dividing unit 32 is configured to divide two initial texture images of adjacent scales in the same direction into one group, and perform the grouping in sequence.
  • the initial texture images in the same direction are sorted by scale; two initial texture images of adjacent scales in the same direction are put into one group, and the grouping is performed in sequence.
  • two initial texture images of adjacent scales in the same direction may be put into one group, or the images may be evenly grouped with more than two initial texture images per group. For example, 2D-Gabor filtering is performed on the entire face image in N directions (N is a positive integer) at M scales (M is a positive integer), generating N*M initial texture images; the initial texture images in the same direction are sorted by scale, two initial texture images of adjacent scales in the same direction are put into one group, and the grouping proceeds in sequence, so that the first and second initial texture images in each direction form one group, the third and fourth form the next group, and so on, yielding N*(M/2) initial texture image groups.
  • alternatively, a partial image containing the eyes and eyebrows and a partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales to generate 2*N*M initial texture images; two initial texture images of adjacent scales in the same direction of the eye-and-eyebrow initial texture images are put into one group, and the grouping proceeds in sequence, giving N*(M/2) eye-and-eyebrow initial texture image groups; the mouth initial texture images are grouped in the same way, giving N*(M/2) mouth initial texture image groups.
  • alternatively, the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales, generating N*M initial texture images of the whole face, N*M of the eyes and eyebrows and N*M of the mouth; each set is grouped by pairing adjacent scales in the same direction, giving N*(M/2) whole-face groups, N*(M/2) eye-and-eyebrow groups and N*(M/2) mouth groups.
  • the extraction module 40 is configured to respectively extract feature images from each set of initial texture images and output the feature images.
  • a feature image that best represents the face texture features is extracted from each group of initial texture images, and the feature image is output. For example, 2D-Gabor filtering is performed on the entire face image, the partial image containing the eyes and eyebrows, and the partial image containing the mouth, respectively, in N directions (N is a positive integer) at M scales (M is a positive integer), and the feature image that best represents the face texture features is then extracted from each resulting group of initial texture images.
  • FIG. 8 is a schematic diagram of functional modules of the preferred embodiment of the extraction module 40, the extraction module 40 includes a decomposition unit 41, a determination unit 42 and a combination unit 43;
  • a decomposition unit 41, configured to evenly decompose each initial texture image in each group into image blocks
  • the initial texture image in each group can be evenly divided into i parts (i is a positive integer) in the height direction and j parts (j is a positive integer) in the width direction, so that each initial texture image is decomposed into i*j image blocks; the height direction is the vertical direction of the initial texture image, and the width direction is the horizontal direction of the initial texture image. Preferably, the i*j image blocks obtained by decomposing each initial texture image are represented by an image matrix of i rows and j columns.
  • a determining unit 42 configured to determine an image block having the largest pixel value in the same position in each group as the feature image block
  • determining the pixel value of each image block may be done by querying the pixel values of the pixels of the image block one by one, and taking the pixel value of the pixel with the largest pixel value as the pixel value of the image block. The pixel values of the image blocks at the same position in each group are compared, and the image block having the largest pixel value is taken as the feature image block.
  • the determining unit may include a determining sub-unit and a comparing sub-unit; the determining sub-unit is configured to determine the pixel value of each image block in each group, and the comparing sub-unit is configured to compare the pixel values of the image blocks at the same position in each group and take the image block with the largest pixel value as the feature image block.
  • for example, an initial texture image group contains a first initial texture image and a second initial texture image, both of which are evenly decomposed into i*j image blocks (i and j are positive integers), that is, the height of the initial texture image is divided into i equal parts and its width into j equal parts.
  • the pixel value of each image block is determined by querying the pixel values of its pixels one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the block. The pixel value of the block at row i, column j of the first initial texture image is compared with that of the block at row i, column j of the second initial texture image, and the block with the larger pixel value is taken as the feature image block. Each group thus yields i*j feature image blocks.
  • the combining unit 43 is configured to combine the feature image blocks of each group into a feature image, and output the feature image.
  • the feature image blocks in each group are arranged into a feature image in a preset order, and the feature image is output.
  • for example, the feature image block extracted from row i (i is a positive integer), column j (j is a positive integer) of the initial texture images is placed at row i, column j of the feature image, and so on, to combine the feature image; the feature image is output as an image representing the face texture features.
  • the plurality of initial texture images generated by the 2D-Gabor filtering process are evenly grouped, and a feature image is extracted from each group of initial texture images; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
  • FIG. 9 is a schematic diagram of functional modules of a second embodiment of an apparatus for extracting facial texture according to the present invention.
  • the filtering module 20 includes an obtaining unit 21 and a filtering unit 22, based on the first embodiment of the apparatus for extracting a face texture.
  • the acquiring unit 21 is configured to extract a partial image containing a facial organ from the face image
  • a face image is read, and a partial image containing a facial organ including an eye, an eyebrow, a nose or a mouth, and the like is extracted from the face image.
  • a partial image containing one of the facial organs such as the eyes, eyebrows, nose or mouth may be extracted from the face image for processing; alternatively, partial images containing more than one of the facial organs such as the eyes, eyebrows, mouth or nose may be extracted from the face image for processing.
  • preferably, a partial image containing the eyes and eyebrows and a partial image containing the mouth are extracted from the face image; during facial expression recognition, the texture features of the eyes, eyebrows and mouth are more useful for distinguishing the texture information of different faces with different expressions.
  • the filtering unit 22 is configured to perform 2D-Gabor filtering processing on the entire face image and the partial image to generate a plurality of initial texture images.
  • the preset ratio is preferably a ratio that captures a relatively large image region containing the facial organs, for example:
  • the preset ratio is larger than the traditional "three sections, five eyes" facial proportions; with the upper-left corner of the face image as the coordinate origin, the image region of the eyes and eyebrows and the image region of the mouth are defined by preset expressions in terms of w and h, where w is the width of the face image and h is the height of the face image.
  • the acquiring unit 21 is further configured to determine, according to a preset ratio, an image area that includes a facial organ in the face image;
  • a partial image of the eyes and eyebrows is extracted from the determined eye-and-eyebrow image region, and a partial image of the mouth is extracted from the determined mouth image region; the sizes of the eye-and-eyebrow partial image and of the mouth partial image are likewise expressed in terms of w and h, where w is the width of the face image and h is the height of the face image.
  • the acquiring unit 21 is further configured to extract a partial image containing an eye, an eyebrow, and/or a mouth from the image region.
  • the two-dimensional filtering process includes performing 2D-Gabor filtering in N directions (N is a positive integer) at M scales (M is a positive integer) to generate N*M initial texture images.
  • the entire face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering in N directions at M scales, generating N*M initial texture images of the whole face, N*M initial texture images of the eyes and eyebrows and N*M initial texture images of the mouth.
  • the plurality of initial texture images generated by the 2D-Gabor filtering process are evenly grouped, feature images are extracted from each group of initial texture images, and 2D-Gabor filtering is applied to the extracted partial images of the facial organs that are most prominent and most useful for recognition; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
  • the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc), which includes a number of instructions for causing a terminal device (which may be a cell phone, a computer, a server, or a network device, etc.) to perform the methods described in various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for extracting face texture is disclosed, comprising the following steps: reading a face image; performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images; evenly grouping the plurality of initial texture images; and extracting a feature image from each group of initial texture images, respectively, and outputting the feature image. An apparatus for extracting face texture is also disclosed. Some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.

Description

Method and apparatus for extracting face texture
Technical field
The present invention relates to the field of image processing, and in particular to a method and apparatus for extracting face texture.
Background
A 2D-Gabor filter is a windowed (short-time) Fourier transform that achieves local optimization in both the spatial and frequency domains. When a 2D-Gabor filter is used to extract face texture, 2D-Gabor filtering in multiple directions and at multiple scales is usually required; the filter has good directional and frequency selectivity in the spatial and frequency domains, and the extracted face features are relatively comprehensive and can effectively express the information of a face image. In the current process of extracting face texture by 2D-Gabor filtering, because the texture features are divided rather finely, the extracted texture features are not significant and some similar texture features interfere with face feature recognition.
The above content is only intended to assist in understanding the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main object of the present invention is to provide a method for extracting face texture, which aims to solve the problem that, in the existing process of extracting face texture by 2D-Gabor filtering, the extracted face texture features are not significant.
To achieve the above object, the present invention provides a method for extracting face texture, the method comprising the following steps:
reading a face image;
performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
evenly grouping the plurality of initial texture images;
extracting a feature image from each group of initial texture images, respectively, and outputting the feature image.
Preferably, the step of extracting a feature image from each group of initial texture images, respectively, and outputting the feature image comprises:
evenly decomposing each initial texture image in each group into image blocks;
determining the image block with the largest pixel value at the same position in each group as a feature image block;
combining the feature image blocks of each group into a feature image, and outputting the feature image.
Preferably, the step of determining the image block with the largest pixel value at the same position in each group as a feature image block comprises:
determining the pixel value of each image block in each group;
comparing the pixel values of the image blocks at the same position in each group, and taking the image block with the largest pixel value as the feature image block.
Preferably, the step of evenly grouping the plurality of initial texture images comprises:
sorting the initial texture images in the same direction by scale;
dividing two initial texture images of adjacent scales in the same direction into one group, and performing the grouping in sequence.
Preferably, the step of performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images comprises:
extracting a partial image containing a facial organ from the face image;
performing 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
Preferably, the step of extracting a partial image containing a facial organ from the face image comprises:
determining, according to a preset ratio, an image region of the face image that contains a facial organ;
extracting a partial image containing the eyes, eyebrows and/or mouth from the image region.
In addition, to achieve the above object, the present invention further provides an apparatus for extracting face texture, the apparatus comprising:
a reading module, configured to read a face image;
a filtering module, configured to perform 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
a grouping module, configured to evenly group the plurality of initial texture images;
an extraction module, configured to extract a feature image from each group of initial texture images, respectively, and output the feature image.
Preferably, the extraction module comprises a decomposition unit, a determining unit and a combining unit;
the decomposition unit is configured to evenly decompose each initial texture image in each group into image blocks;
the determining unit is configured to determine the image block with the largest pixel value at the same position in each group as a feature image block;
the combining unit is configured to combine the feature image blocks of each group into a feature image and output the feature image.
Preferably, the determining unit comprises a determining sub-unit and a comparing sub-unit;
the determining sub-unit is configured to determine the pixel value of each image block in each group;
the comparing sub-unit is configured to compare the pixel values of the image blocks at the same position in each group and take the image block with the largest pixel value as the feature image block.
Preferably, the grouping module comprises a sorting unit and a dividing unit;
the sorting unit is configured to sort the initial texture images in the same direction by scale;
the dividing unit is configured to divide two initial texture images of adjacent scales in the same direction into one group and perform the grouping in sequence.
Preferably, the filtering module comprises an acquiring unit and a filtering unit;
the acquiring unit is configured to extract a partial image containing a facial organ from the face image;
the filtering unit is configured to perform 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
Preferably, the acquiring unit is further configured to determine, according to a preset ratio, an image region of the face image that contains a facial organ;
the acquiring unit is further configured to extract a partial image containing the eyes, eyebrows and/or mouth from the image region.
The present invention evenly groups the plurality of initial texture images generated by 2D-Gabor filtering and extracts a feature image from each group of initial texture images, removing some unimportant texture features in the initial texture images, so that the extracted face texture features are more prominent, which benefits the recognition process.
Brief description of the drawings
FIG. 1 is a schematic flowchart of the first embodiment of the method for extracting face texture according to the present invention;
FIG. 2 is a schematic flowchart of the refinement of step S40 in FIG. 1;
FIG. 3 is a schematic flowchart of the second embodiment of the method for extracting face texture according to the present invention;
FIG. 4 is a schematic structural diagram of a preferred embodiment of dividing the face image according to a preset ratio according to the present invention;
FIG. 5 is a schematic flowchart of a preferred embodiment of extracting a partial image containing a facial organ from the face image according to the present invention;
FIG. 6 is a schematic diagram of the functional modules of the first embodiment of the apparatus for extracting face texture according to the present invention;
FIG. 7 is a schematic diagram of the functional modules of a preferred embodiment of the grouping module in FIG. 6;
FIG. 8 is a schematic diagram of the functional modules of a preferred embodiment of the extraction module in FIG. 6;
FIG. 9 is a schematic diagram of the functional modules of the second embodiment of the apparatus for extracting face texture according to the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit the present invention.
The main solution of the embodiments of the present invention is: reading a face image; performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images; evenly grouping the plurality of initial texture images; and extracting a feature image from each group of initial texture images, respectively, and outputting the feature image.
In the existing process of extracting face texture by 2D-Gabor filtering, the extracted face texture features are not significant.
Based on the above problem, the present invention provides a method for extracting face texture.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the method for extracting face texture according to the present invention.
In this embodiment, the method for extracting face texture includes:
Step S10: reading a face image;
Step S20: performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
A face image from which face texture features are to be extracted is read. The face image may be a face image to be recognized during face recognition, a sample image used for comparison, or any other face image from which face texture is to be extracted. The initial texture images may be one or more initial texture images and are preferably represented as image matrices. The two-dimensional filtering includes performing 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer) to generate N*M initial texture images. In this embodiment, 2D-Gabor filtering is preferably performed on the face image. It should be understood that performing 2D-Gabor filtering on the face is not limited to filtering the whole face image; the filtering may also be applied to partial images of the face, or to both the whole face image and partial images of the face image. For example: 2D-Gabor filtering is performed on the whole face image at M scales in N directions to generate N*M initial texture images; or partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered at M scales in N directions to generate 2*N*M initial texture images; or the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered at M scales in N directions to generate 3*N*M initial texture images.
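As an illustration of step S20 only, the following Python sketch builds a bank of 2D-Gabor kernels over N orientations and M scales with OpenCV and filters a grayscale face image with each kernel. The function name, the kernel size and the wavelength/sigma schedule are assumptions chosen for the example; the patent does not prescribe particular filter parameters.

```python
import cv2
import numpy as np

def gabor_initial_textures(face_gray, n_dirs=8, n_scales=4, ksize=31):
    """Filter a grayscale face image with an N*M 2D-Gabor filter bank.

    Returns a dict mapping (direction_index, scale_index) to the filtered
    image (the "initial texture image"), stored as a float32 matrix.
    """
    textures = {}
    for d in range(n_dirs):
        theta = d * np.pi / n_dirs                    # orientation of this kernel
        for s in range(n_scales):
            lambd = 4.0 * (2 ** s)                    # wavelength grows with scale (assumed schedule)
            sigma = 0.56 * lambd                      # bandwidth tied to wavelength (assumed)
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                        lambd, gamma=0.5, psi=0,
                                        ktype=cv2.CV_32F)
            kernel /= np.abs(kernel).sum() + 1e-8     # normalize kernel energy
            response = cv2.filter2D(face_gray.astype(np.float32),
                                    cv2.CV_32F, kernel)
            textures[(d, s)] = np.abs(response)       # keep the magnitude response
    return textures

# usage: face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
#        initial_textures = gabor_initial_textures(face)   # 8*4 = 32 images
```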
Step S30: evenly grouping the plurality of initial texture images;
The initial texture images in the same direction are sorted by scale; two initial texture images of adjacent scales in the same direction are put into one group, and the grouping proceeds in sequence. Two initial texture images of adjacent scales in the same direction may form one group, or the images may be evenly grouped with more than two initial texture images per group. For example: 2D-Gabor filtering is performed on the whole face image at M scales (M is a positive integer) in N directions (N is a positive integer), generating N*M initial texture images; the initial texture images in the same direction are sorted by scale, two initial texture images of adjacent scales in the same direction form one group, and the grouping proceeds in sequence, so that the first and second initial texture images in each direction form one group, the third and fourth form the next group, and so on, yielding N*(M/2) initial texture image groups. Or, partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered at M scales in N directions to generate 2*N*M initial texture images; two initial texture images of adjacent scales in the same direction of the eye-and-eyebrow initial texture images form one group, and the grouping proceeds in sequence, giving N*(M/2) eye-and-eyebrow initial texture image groups; the mouth initial texture images are grouped in the same way, giving N*(M/2) mouth initial texture image groups. Or, the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered at M scales in N directions, generating N*M initial texture images of the whole face, N*M of the eyes and eyebrows and N*M of the mouth; each of the three sets is grouped by pairing adjacent scales in the same direction, yielding N*(M/2) whole-face initial texture image groups, N*(M/2) eye-and-eyebrow initial texture image groups and N*(M/2) mouth initial texture image groups.
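Continuing the earlier sketch (and reusing the hypothetical textures dictionary keyed by (direction, scale)), the grouping of step S30 can be written as pairing adjacent scales within each direction:

```python
def group_by_adjacent_scales(textures, n_dirs, n_scales, group_size=2):
    """Evenly group initial texture images: within each direction, consecutive
    scales are grouped group_size at a time (pairs by default), giving
    n_dirs * (n_scales // group_size) groups."""
    groups = []
    for d in range(n_dirs):
        ordered = [textures[(d, s)] for s in range(n_scales)]   # sorted by scale
        for start in range(0, n_scales - n_scales % group_size, group_size):
            groups.append(ordered[start:start + group_size])
    return groups

# usage: groups = group_by_adjacent_scales(initial_textures, 8, 4)  # 8*(4/2) = 16 groups
```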
Step S40: extracting a feature image from each group of initial texture images, respectively, and outputting the feature image.
From each group of initial texture images, the feature image that best represents the face texture features is extracted, and the feature image is output. For example: the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer), generating N*M initial texture images of the whole face, N*M initial texture images of the eyes and eyebrows and N*M initial texture images of the mouth; the initial texture images are evenly grouped by putting two initial texture images of adjacent scales in the same direction into one group, yielding N*(M/2) whole-face initial texture image groups, N*(M/2) eye-and-eyebrow initial texture image groups and N*(M/2) mouth initial texture image groups; from each group the feature image that best represents the face texture features is extracted, giving N*(M/2) whole-face feature images, N*(M/2) eye-and-eyebrow feature images and N*(M/2) mouth feature images. Referring to FIG. 2, this can be realized by the following steps:
Step S401: evenly decomposing each initial texture image in each group into image blocks;
The initial texture images in each group may be evenly divided into i parts (i is a positive integer) in the height direction and j parts (j is a positive integer) in the width direction, so that each initial texture image is decomposed into i*j image blocks; the height direction is the vertical direction of the initial texture image and the width direction is the horizontal direction of the initial texture image. Preferably, the i*j image blocks obtained by decomposing each initial texture image are represented as an image matrix of i rows and j columns.
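A minimal sketch of this even decomposition, assuming the image height and width are divisible by i and j respectively (otherwise the image would first be cropped or padded); the helper name is illustrative:

```python
import numpy as np

def split_into_blocks(image, i, j):
    """Evenly decompose an image into an i-by-j grid of blocks and return a
    list of lists so that blocks[r][c] is the block at row r, column c."""
    h, w = image.shape[:2]
    bh, bw = h // i, w // j                      # block height and width
    return [[image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
             for c in range(j)]
            for r in range(i)]
```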
Step S402: determining the image block with the largest pixel value at the same position in each group as a feature image block;
The pixel value of each image block is determined. Preferably, determining the pixel value of an image block may be done by querying the pixel values of the pixels of the image block one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the image block. Within each group, the pixel values of the image blocks at the same position are compared, and the image block with the largest pixel value is taken as the feature image block. For example: an initial texture image group contains a first initial texture image and a second initial texture image, and both are evenly decomposed into i*j image blocks (i and j are positive integers), that is, the height of each initial texture image is divided into i equal parts and its width into j equal parts. The pixel value of each image block is determined by querying the pixel values of its pixels one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the block. The pixel value of the image block in row i, column j of the first initial texture image is compared with that of the image block in row i, column j of the second initial texture image, and the block with the larger pixel value is taken as the feature image block. Each group thus yields i*j feature image blocks.
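Under the same assumptions, and reusing the split_into_blocks helper sketched above, the selection of step S402 amounts to comparing, position by position, the per-block maximum pixel value across the images of one group:

```python
def block_value(block):
    """Pixel value of a block: the largest pixel value it contains."""
    return float(block.max())

def select_feature_blocks(group, i, j):
    """For each grid position, keep the block whose pixel value (largest pixel)
    is greatest among the images of the group."""
    decomposed = [split_into_blocks(img, i, j) for img in group]
    feature_blocks = [[None] * j for _ in range(i)]
    for r in range(i):
        for c in range(j):
            candidates = [blocks[r][c] for blocks in decomposed]
            feature_blocks[r][c] = max(candidates, key=block_value)
    return feature_blocks
```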
Step S403: combining the feature image blocks of each group into a feature image, and outputting the feature image.
The feature image blocks in each group are arranged in a preset order and combined into a feature image, and the feature image is output. For example: the feature image block extracted from row i (i is a positive integer), column j (j is a positive integer) of the initial texture images is placed at row i, column j of the feature image, and so on, to combine the feature image; the feature image is output as an image representing the face texture features.
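Reassembling the selected blocks in their original grid order (step S403) can be done with simple stacking; this sketch assumes all images in a group have the same size, so the blocks tile cleanly:

```python
import numpy as np

def combine_blocks(feature_blocks):
    """Arrange the i*j feature image blocks back into a single feature image,
    keeping each block at the row/column it was extracted from."""
    rows = [np.hstack(row_blocks) for row_blocks in feature_blocks]
    return np.vstack(rows)

# usage, per group:
#   fb = select_feature_blocks(group, i=4, j=4)
#   feature_image = combine_blocks(fb)
```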
In this embodiment, the plurality of initial texture images generated by 2D-Gabor filtering are evenly grouped and a feature image is extracted from each group of initial texture images; some unimportant texture features in the initial texture images are thereby removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the second embodiment of the method for extracting face texture according to the present invention. Based on the first embodiment of the method for extracting face texture, step S20 includes:
Step S21: extracting a partial image containing a facial organ from the face image;
A face image is read, and a partial image containing a facial organ is extracted from the face image; the facial organs include the eyes, eyebrows, nose, mouth and the like. A partial image containing one of the facial organs such as the eyes, eyebrows, nose or mouth may be extracted from the face image for processing; alternatively, partial images containing more than one of the facial organs such as the eyes, eyebrows, mouth or nose may be extracted from the face image for processing. Preferably, partial images containing the eyes and eyebrows and containing the mouth are extracted from the face image; during facial expression recognition, the texture features of the eyes, eyebrows and mouth are more useful for distinguishing the texture information of different faces with different expressions.
Referring to FIG. 4 and FIG. 5, the process of extracting a partial image containing a facial organ from the face image can be realized by the following preferred embodiment:
Step S211: determining, according to a preset ratio, an image region of the face image that contains a facial organ;
Since the facial proportions differ among face images, in order to obtain the image regions containing facial organs in different face images, the preset ratio is preferably one that captures a relatively large image region containing the facial organs. For example: the preset ratio is larger than the traditional "three sections, five eyes" facial proportions; with the upper-left corner of the face image as the coordinate origin, the image region of the eyes and eyebrows and the image region of the mouth are defined by preset expressions in terms of w and h, where w is the width of the face image and h is the height of the face image.
Step S212: extracting a partial image containing the eyes, eyebrows and/or mouth from the image region.
A partial image of the eyes and eyebrows is extracted from the determined eye-and-eyebrow image region, and a partial image of the mouth is extracted from the determined mouth image region; the sizes of the eye-and-eyebrow partial image and of the mouth partial image are likewise expressed in terms of w and h, where w is the width of the face image and h is the height of the face image.
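The specific region expressions are not reproduced in this text, so the sketch below only illustrates the mechanism: the crop boxes are fixed fractions of the face width w and height h, with the upper-left corner as the origin. The fractions used here are placeholder assumptions chosen to cover generous eye-and-eyebrow and mouth bands, not the ratios defined by the patent.

```python
def crop_organ_regions(face_gray,
                       eye_box=(0.05, 0.15, 0.95, 0.55),    # placeholder fractions of (x0, y0, x1, y1)
                       mouth_box=(0.20, 0.60, 0.80, 0.95)): # placeholder fractions of (x0, y0, x1, y1)
    """Crop eye-and-eyebrow and mouth regions as fixed fractions of the image
    size, with the upper-left corner of the face image as the coordinate origin."""
    h, w = face_gray.shape[:2]

    def crop(box):
        x0, y0, x1, y1 = box
        return face_gray[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]

    return crop(eye_box), crop(mouth_box)
```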
Step S22: performing 2D-Gabor filtering on the whole face image and on the partial images, respectively, to generate a plurality of initial texture images.
The two-dimensional filtering includes performing 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer) to generate N*M initial texture images. In this embodiment, 2D-Gabor filtering is preferably performed on the face image. For example: the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering at M scales in N directions, generating N*M initial texture images of the whole face, N*M initial texture images of the eyes and eyebrows and N*M initial texture images of the mouth.
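Putting the pieces of the two embodiments together, a possible end-to-end sketch, reusing the hypothetical helpers defined in the previous sketches and illustrative parameter values, is:

```python
def extract_face_texture(face_gray, n_dirs=8, n_scales=4, i=4, j=4):
    """Whole-face plus partial-image pipeline: Gabor filter bank, pairwise
    grouping by adjacent scales, and per-group feature-image extraction."""
    eyes, mouth = crop_organ_regions(face_gray)
    feature_images = []
    for region in (face_gray, eyes, mouth):          # whole face and partial images
        textures = gabor_initial_textures(region, n_dirs, n_scales)
        for group in group_by_adjacent_scales(textures, n_dirs, n_scales):
            blocks = select_feature_blocks(group, i, j)
            feature_images.append(combine_blocks(blocks))
    return feature_images   # 3 * n_dirs * (n_scales // 2) feature images
```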
In this embodiment, the plurality of initial texture images generated by 2D-Gabor filtering are evenly grouped, feature images are extracted from each group of initial texture images, and 2D-Gabor filtering is applied to the extracted partial images of the facial organs that are most prominent and most useful for recognition; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
The methods for extracting face texture of the first and second embodiments above may each be executed by a face texture extraction device or by a face recognition device in signal connection with a face texture extraction device. Furthermore, the method for extracting face texture may be implemented by a client detection program installed on the face texture extraction device or the face recognition device.
The present invention further provides an apparatus for extracting face texture.
Referring to FIG. 6, FIG. 6 is a schematic diagram of the functional modules of a preferred embodiment of the apparatus for extracting face texture according to the present invention.
In this embodiment, the apparatus includes: a reading module 10, a filtering module 20, a grouping module 30 and an extraction module 40.
The reading module 10 is configured to read a face image;
The filtering module 20 is configured to perform 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
A face image from which face texture features are to be extracted is read. The face image may be a face image to be recognized during face recognition, a sample image used for comparison, or any other face image from which face texture is to be extracted. The initial texture images may be one or more initial texture images and are preferably represented as image matrices. The two-dimensional filtering includes performing 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer) to generate N*M initial texture images. In this embodiment, 2D-Gabor filtering is preferably performed on the face image. It should be understood that performing 2D-Gabor filtering on the face is not limited to filtering the whole face image; the filtering may also be applied to partial images of the face, or to both the whole face image and partial images of the face image. For example: 2D-Gabor filtering is performed on the whole face image at M scales in N directions to generate N*M initial texture images; or partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered at M scales in N directions to generate 2*N*M initial texture images; or the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered at M scales in N directions to generate 3*N*M initial texture images.
The grouping module 30 is configured to evenly group the plurality of initial texture images;
Preferably, referring to FIG. 7, the grouping module 30 includes a sorting unit 31 and a dividing unit 32; the sorting unit 31 is configured to sort the initial texture images in the same direction by scale; the dividing unit 32 is configured to divide two initial texture images of adjacent scales in the same direction into one group and perform the grouping in sequence. The initial texture images in the same direction are sorted by scale; two initial texture images of adjacent scales in the same direction are put into one group, and the grouping proceeds in sequence. Two initial texture images of adjacent scales in the same direction may form one group, or the images may be evenly grouped with more than two initial texture images per group. For example: 2D-Gabor filtering is performed on the whole face image at M scales (M is a positive integer) in N directions (N is a positive integer), generating N*M initial texture images; the initial texture images in the same direction are sorted by scale, two initial texture images of adjacent scales in the same direction form one group, and the grouping proceeds in sequence, so that the first and second initial texture images in each direction form one group, the third and fourth form the next group, and so on, yielding N*(M/2) initial texture image groups. Or, partial images of the face image, such as a partial image containing the eyes and eyebrows and a partial image containing the mouth, are each filtered at M scales in N directions to generate 2*N*M initial texture images; two initial texture images of adjacent scales in the same direction of the eye-and-eyebrow initial texture images form one group, and the grouping proceeds in sequence, giving N*(M/2) eye-and-eyebrow initial texture image groups; the mouth initial texture images are grouped in the same way, giving N*(M/2) mouth initial texture image groups. Or, the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each filtered at M scales in N directions, generating N*M initial texture images of the whole face, N*M of the eyes and eyebrows and N*M of the mouth; each of the three sets is grouped by pairing adjacent scales in the same direction, yielding N*(M/2) whole-face initial texture image groups, N*(M/2) eye-and-eyebrow initial texture image groups and N*(M/2) mouth initial texture image groups.
The extraction module 40 is configured to extract a feature image from each group of initial texture images, respectively, and output the feature image.
From each group of initial texture images, the feature image that best represents the face texture features is extracted, and the feature image is output. For example: the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer), generating N*M initial texture images of the whole face, N*M of the eyes and eyebrows and N*M of the mouth; the initial texture images are evenly grouped by putting two initial texture images of adjacent scales in the same direction into one group, yielding N*(M/2) whole-face initial texture image groups, N*(M/2) eye-and-eyebrow initial texture image groups and N*(M/2) mouth initial texture image groups; from each group the feature image that best represents the face texture features is extracted, giving N*(M/2) whole-face feature images, N*(M/2) eye-and-eyebrow feature images and N*(M/2) mouth feature images. Referring to FIG. 8, FIG. 8 is a schematic diagram of the functional modules of a preferred embodiment of the extraction module 40; the extraction module 40 includes a decomposition unit 41, a determining unit 42 and a combining unit 43;
The decomposition unit 41 is configured to evenly decompose each initial texture image in each group into image blocks;
The initial texture images in each group may be evenly divided into i parts (i is a positive integer) in the height direction and j parts (j is a positive integer) in the width direction, so that each initial texture image is decomposed into i*j image blocks; the height direction is the vertical direction of the initial texture image and the width direction is the horizontal direction of the initial texture image. Preferably, the i*j image blocks obtained by decomposing each initial texture image are represented as an image matrix of i rows and j columns.
The determining unit 42 is configured to determine the image block with the largest pixel value at the same position in each group as the feature image block;
The pixel value of each image block is determined. Preferably, determining the pixel value of an image block may be done by querying the pixel values of the pixels of the image block one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the image block. Within each group, the pixel values of the image blocks at the same position are compared, and the image block with the largest pixel value is taken as the feature image block. The determining unit may include a determining sub-unit and a comparing sub-unit; the determining sub-unit is configured to determine the pixel value of each image block in each group; the comparing sub-unit is configured to compare the pixel values of the image blocks at the same position in each group and take the image block with the largest pixel value as the feature image block. For example: an initial texture image group contains a first initial texture image and a second initial texture image, and both are evenly decomposed into i*j image blocks (i and j are positive integers), that is, the height of each initial texture image is divided into i equal parts and its width into j equal parts. The pixel value of each image block is determined by querying the pixel values of its pixels one by one and taking the pixel value of the pixel with the largest pixel value as the pixel value of the block. The pixel value of the image block in row i, column j of the first initial texture image is compared with that of the image block in row i, column j of the second initial texture image, and the block with the larger pixel value is taken as the feature image block. Each group thus yields i*j feature image blocks.
The combining unit 43 is configured to combine the feature image blocks of each group into a feature image and output the feature image.
The feature image blocks in each group are arranged in a preset order and combined into a feature image, and the feature image is output. For example: the feature image block extracted from row i (i is a positive integer), column j (j is a positive integer) of the initial texture images is placed at row i, column j of the feature image, and so on, to combine the feature image; the feature image is output as an image representing the face texture features.
In this embodiment, the plurality of initial texture images generated by 2D-Gabor filtering are evenly grouped and a feature image is extracted from each group of initial texture images; some unimportant texture features in the initial texture images are thereby removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
Referring to FIG. 9, FIG. 9 is a schematic diagram of the functional modules of the second embodiment of the apparatus for extracting face texture according to the present invention. Based on the first embodiment of the apparatus for extracting face texture, the filtering module 20 includes an acquiring unit 21 and a filtering unit 22;
The acquiring unit 21 is configured to extract a partial image containing a facial organ from the face image;
A face image is read, and a partial image containing a facial organ is extracted from the face image; the facial organs include the eyes, eyebrows, nose, mouth and the like. A partial image containing one of the facial organs such as the eyes, eyebrows, nose or mouth may be extracted from the face image for processing; alternatively, partial images containing more than one of the facial organs such as the eyes, eyebrows, mouth or nose may be extracted from the face image for processing. Preferably, partial images containing the eyes and eyebrows and containing the mouth are extracted from the face image; during facial expression recognition, the texture features of the eyes, eyebrows and mouth are more useful for distinguishing the texture information of different faces with different expressions.
The filtering unit 22 is configured to perform 2D-Gabor filtering on the whole face image and on the partial images, respectively, to generate a plurality of initial texture images.
Since the facial proportions differ among face images, in order to obtain the image regions containing facial organs in different face images, the preset ratio is preferably one that captures a relatively large image region containing the facial organs. For example: the preset ratio is larger than the traditional "three sections, five eyes" facial proportions; with the upper-left corner of the face image as the coordinate origin, the image region of the eyes and eyebrows and the image region of the mouth are defined by preset expressions in terms of w and h, where w is the width of the face image and h is the height of the face image.
The acquiring unit 21 is further configured to determine, according to a preset ratio, an image region of the face image that contains a facial organ;
A partial image of the eyes and eyebrows is extracted from the determined eye-and-eyebrow image region, and a partial image of the mouth is extracted from the determined mouth image region; the sizes of the eye-and-eyebrow partial image and of the mouth partial image are likewise expressed in terms of w and h, where w is the width of the face image and h is the height of the face image.
The acquiring unit 21 is further configured to extract a partial image containing the eyes, eyebrows and/or mouth from the image region.
The two-dimensional filtering includes performing 2D-Gabor filtering at M scales (M is a positive integer) in N directions (N is a positive integer) to generate N*M initial texture images. In this embodiment, 2D-Gabor filtering is preferably performed on the face image. For example: the whole face image, the partial image containing the eyes and eyebrows and the partial image containing the mouth are each subjected to 2D-Gabor filtering at M scales in N directions, generating N*M initial texture images of the whole face, N*M initial texture images of the eyes and eyebrows and N*M initial texture images of the mouth.
In this embodiment, the plurality of initial texture images generated by 2D-Gabor filtering are evenly grouped, feature images are extracted from each group of initial texture images, and 2D-Gabor filtering is applied to the extracted partial images of the facial organs that are most prominent and most useful for recognition; some unimportant texture features in the initial texture images are removed, so that the extracted face texture features are more prominent, which benefits the recognition process.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (20)

  1. A method for extracting face texture, characterized in that the method for extracting face texture comprises the following steps:
    reading a face image;
    performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
    evenly grouping the plurality of initial texture images;
    extracting a feature image from each group of initial texture images, respectively, and outputting the feature image.
  2. The method for extracting face texture according to claim 1, characterized in that the step of extracting a feature image from each group of initial texture images, respectively, and outputting the feature image comprises:
    evenly decomposing each initial texture image in each group into image blocks;
    determining the image block with the largest pixel value at the same position in each group as a feature image block;
    combining the feature image blocks of each group into a feature image, and outputting the feature image.
  3. The method for extracting face texture according to claim 2, characterized in that the step of determining the image block with the largest pixel value at the same position in each group as a feature image block comprises:
    determining the pixel value of each image block in each group;
    comparing the pixel values of the image blocks at the same position in each group, and taking the image block with the largest pixel value as the feature image block.
  4. The method for extracting face texture according to claim 3, characterized in that the step of evenly grouping the plurality of initial texture images comprises:
    sorting the initial texture images in the same direction by scale;
    dividing two initial texture images of adjacent scales in the same direction into one group, and performing the grouping in sequence.
  5. The method for extracting face texture according to claim 2, characterized in that the step of performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images comprises:
    extracting a partial image containing a facial organ from the face image;
    performing 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
  6. The method for extracting face texture according to claim 5, characterized in that the step of extracting a partial image containing a facial organ from the face image comprises:
    determining, according to a preset ratio, an image region of the face image that contains a facial organ;
    extracting a partial image containing the eyes, eyebrows and/or mouth from the image region.
  7. The method for extracting face texture according to claim 2, characterized in that the step of evenly grouping the plurality of initial texture images comprises:
    sorting the initial texture images in the same direction by scale;
    dividing two initial texture images of adjacent scales in the same direction into one group, and performing the grouping in sequence.
  8. The method for extracting face texture according to claim 1, characterized in that the step of performing 2D-Gabor filtering on the face image to generate a plurality of initial texture images comprises:
    extracting a partial image containing a facial organ from the face image;
    performing 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
  9. The method for extracting face texture according to claim 8, characterized in that the step of extracting a partial image containing a facial organ from the face image comprises:
    determining, according to a preset ratio, an image region of the face image that contains a facial organ;
    extracting a partial image containing the eyes, eyebrows and/or mouth from the image region.
  10. The method for extracting face texture according to claim 1, characterized in that the step of evenly grouping the plurality of initial texture images comprises:
    sorting the initial texture images in the same direction by scale;
    dividing two initial texture images of adjacent scales in the same direction into one group, and performing the grouping in sequence.
  11. An apparatus for extracting face texture, characterized in that the apparatus for extracting face texture comprises:
    a reading module, configured to read a face image;
    a filtering module, configured to perform 2D-Gabor filtering on the face image to generate a plurality of initial texture images;
    a grouping module, configured to evenly group the plurality of initial texture images;
    an extraction module, configured to extract a feature image from each group of initial texture images, respectively, and output the feature image.
  12. The apparatus for extracting face texture according to claim 11, characterized in that the extraction module comprises a decomposition unit, a determining unit and a combining unit;
    the decomposition unit is configured to evenly decompose each initial texture image in each group into image blocks;
    the determining unit is configured to determine the image block with the largest pixel value at the same position in each group as a feature image block;
    the combining unit is configured to combine the feature image blocks of each group into a feature image and output the feature image.
  13. The apparatus for extracting face texture according to claim 12, characterized in that the determining unit comprises a determining sub-unit and a comparing sub-unit;
    the determining sub-unit is configured to determine the pixel value of each image block in each group;
    the comparing sub-unit is configured to compare the pixel values of the image blocks at the same position in each group and take the image block with the largest pixel value as the feature image block.
  14. The apparatus for extracting face texture according to claim 13, characterized in that the grouping module comprises a sorting unit and a dividing unit;
    the sorting unit is configured to sort the initial texture images in the same direction by scale;
    the dividing unit is configured to divide two initial texture images of adjacent scales in the same direction into one group and perform the grouping in sequence.
  15. The apparatus for extracting face texture according to claim 12, characterized in that the filtering module comprises an acquiring unit and a filtering unit;
    the acquiring unit is configured to extract a partial image containing a facial organ from the face image;
    the filtering unit is configured to perform 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
  16. The apparatus for extracting face texture according to claim 15, characterized in that the acquiring unit is further configured to determine, according to a preset ratio, an image region of the face image that contains a facial organ;
    the acquiring unit is further configured to extract a partial image containing the eyes, eyebrows and/or mouth from the image region.
  17. The apparatus for extracting face texture according to claim 12, characterized in that the grouping module comprises a sorting unit and a dividing unit;
    the sorting unit is configured to sort the initial texture images in the same direction by scale;
    the dividing unit is configured to divide two initial texture images of adjacent scales in the same direction into one group and perform the grouping in sequence.
  18. The apparatus for extracting face texture according to claim 11, characterized in that the filtering module comprises an acquiring unit and a filtering unit;
    the acquiring unit is configured to extract a partial image containing a facial organ from the face image;
    the filtering unit is configured to perform 2D-Gabor filtering on the whole face image and on the partial image, respectively, to generate a plurality of initial texture images.
  19. The apparatus for extracting face texture according to claim 18, characterized in that the acquiring unit is further configured to determine, according to a preset ratio, an image region of the face image that contains a facial organ;
    the acquiring unit is further configured to extract a partial image containing the eyes, eyebrows and/or mouth from the image region.
  20. The apparatus for extracting face texture according to claim 11, characterized in that the grouping module comprises a sorting unit and a dividing unit;
    the sorting unit is configured to sort the initial texture images in the same direction by scale;
    the dividing unit is configured to divide two initial texture images of adjacent scales in the same direction into one group and perform the grouping in sequence.
PCT/CN2015/091503 2015-04-21 2015-10-09 Method and apparatus for extracting face texture WO2016169219A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510191874.4A CN105678208B (zh) 2015-04-21 2015-04-21 Method and apparatus for extracting face texture
CN201510191874.4 2015-04-21

Publications (1)

Publication Number Publication Date
WO2016169219A1 true WO2016169219A1 (zh) 2016-10-27

Family

ID=56946809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/091503 WO2016169219A1 (zh) 2015-04-21 2015-10-09 Method and apparatus for extracting face texture

Country Status (2)

Country Link
CN (1) CN105678208B (zh)
WO (1) WO2016169219A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046587A (zh) * 2019-04-22 2019-07-23 Anhui University of Science and Technology Facial expression feature extraction method based on Gabor differential weights
CN110569873A (zh) * 2019-08-02 2019-12-13 Ping An Technology (Shenzhen) Co., Ltd. Image recognition model training method and apparatus, and computer device
CN112733570A (zh) * 2019-10-14 2021-04-30 Beijing Eyecool Intelligent Technology Co., Ltd. Glasses detection method and apparatus, electronic device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392183B (zh) * 2017-08-22 2022-01-04 Shenzhen TCL New Technology Co., Ltd. Face classification and recognition method and apparatus, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090180671A1 (en) * 2007-10-19 2009-07-16 Samsung Electronics Co., Ltd. Multi-view face recognition method and system
CN101763507A (zh) * 2010-01-20 2010-06-30 北京智慧眼科技发展有限公司 Face recognition method and face recognition system
CN102254304A (zh) * 2011-06-17 2011-11-23 University of Electronic Science and Technology of China Target object contour detection method
CN102750523A (zh) * 2012-06-19 2012-10-24 TCL Corporation Face recognition method and apparatus
CN103902977A (zh) * 2014-03-31 2014-07-02 Huawei Technologies Co., Ltd. Face recognition method and apparatus based on Gabor binary patterns

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426314C (zh) * 2005-08-02 2008-10-15 Institute of Computing Technology, Chinese Academy of Sciences Multi-classifier combination face recognition method based on feature grouping

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090180671A1 (en) * 2007-10-19 2009-07-16 Samsung Electronics Co., Ltd. Multi-view face recognition method and system
CN101763507A (zh) * 2010-01-20 2010-06-30 北京智慧眼科技发展有限公司 Face recognition method and face recognition system
CN102254304A (zh) * 2011-06-17 2011-11-23 University of Electronic Science and Technology of China Target object contour detection method
CN102750523A (zh) * 2012-06-19 2012-10-24 TCL Corporation Face recognition method and apparatus
CN103902977A (zh) * 2014-03-31 2014-07-02 Huawei Technologies Co., Ltd. Face recognition method and apparatus based on Gabor binary patterns

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, TIANTIAN ET AL.: "Face Recognition Based on Circular Symmetrical Gabor Transformation and Block PCA", VIDEO ENGINEERING, vol. 37, no. 15, 31 December 2013 (2013-12-31) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046587A (zh) * 2019-04-22 2019-07-23 Anhui University of Science and Technology Facial expression feature extraction method based on Gabor differential weights
CN110046587B (zh) * 2019-04-22 2022-11-25 Anhui University of Science and Technology Facial expression feature extraction method based on Gabor differential weights
CN110569873A (zh) * 2019-08-02 2019-12-13 Ping An Technology (Shenzhen) Co., Ltd. Image recognition model training method and apparatus, and computer device
CN112733570A (zh) * 2019-10-14 2021-04-30 Beijing Eyecool Intelligent Technology Co., Ltd. Glasses detection method and apparatus, electronic device and storage medium
CN112733570B (zh) * 2019-10-14 2024-04-30 Beijing Eyecool Intelligent Technology Co., Ltd. Glasses detection method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN105678208B (zh) 2019-03-08
CN105678208A (zh) 2016-06-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15889705

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/04/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15889705

Country of ref document: EP

Kind code of ref document: A1