CN110837802A - Facial image feature extraction method based on gray level co-occurrence matrix

Facial image feature extraction method based on gray level co-occurrence matrix

Info

Publication number
CN110837802A
CN110837802A (application number CN201911076854.7A)
Authority
CN
China
Prior art keywords
gray level
gray
image
occurrence
formula
Prior art date
Legal status
Pending
Application number
CN201911076854.7A
Other languages
Chinese (zh)
Inventor
陈维洋
王梦杰
Current Assignee
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN201911076854.7A
Publication of CN110837802A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a facial image feature extraction method based on a gray level co-occurrence matrix, in which the facial image features comprise contrast, homogeneity, correlation and energy. The method comprises the following steps: (1) acquiring a face image; (2) preprocessing the face image, namely converting it into a gray image; (3) processing the gray image to obtain the texture features of the face image, including contrast, homogeneity, correlation and energy. The invention adopts the gray level co-occurrence matrix method and extracts features from the image of the facial texture through MATLAB simulation experiments, so that the facial features can be identified more easily, a better visual effect than the original image is obtained, and the effect on improving classification accuracy is significant.

Description

Facial image feature extraction method based on gray level co-occurrence matrix
Technical Field
The invention relates to a facial image feature extraction method based on a gray level co-occurrence matrix, and belongs to the technical field of image processing.
Background
Texture is an important visual cue: it is ubiquitous in images yet difficult to describe. As a basic property of object surfaces, texture is found widely in nature and is a very important feature for describing and identifying objects. The texture features of an image describe the local patterns that appear repeatedly in the image and their arrangement rules, and reflect, in a macroscopic sense, regularities in the variation of gray levels. An image can be regarded as a combination of different texture regions, and texture is a measure of the relationship between pixels in a local region. Texture features can be used to quantitatively describe the information in an image. Texture analysis has long been an active field of research in computer vision, image processing, image analysis, image retrieval, and related areas.
The moment method of the gray histogram is simple and was one of the earliest texture description methods proposed, but the texture features it describes are limited, so it is now rarely used. In 1976, Weszka et al. compared the gray level co-occurrence matrix method, the gray level difference statistics method and the gray level run-length statistics method, and concluded that the gray level co-occurrence matrix method performed best; in 1998, Carr and Miranda compared the semivariance method and the gray level co-occurrence matrix method for remote sensing image classification, and found that the gray level co-occurrence matrix method gave good results for optical image classification. It can thus be seen that the gray level co-occurrence matrix method is dominant among the statistical methods and is widely applied. The edge frequency method and the spatial autocorrelation function method are generally used only in certain specific fields because of their own limitations. The Gabor filter has a fixed transform window size, which makes it difficult to capture fine changes of texture in frequency and direction, so it is hard to meet the requirements of practical applications. In addition, texture feature extraction usually uses a filter bank consisting of many Gabor filters and requires many parameters to be determined; since Gabor filtering has no efficient fast implementation, it demands a large amount of computation.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a facial image feature extraction method based on a gray level co-occurrence matrix.
Features are extracted from the image of the facial texture through MATLAB simulation experiments, so that the facial features can be identified more easily, a better visual effect than the original image is obtained, and the method is markedly effective in improving classification accuracy.
Interpretation of terms:
1. Gray level co-occurrence matrix (GLCM): a common method for describing texture by studying the spatial correlation characteristics of gray levels. Haralick et al. proposed the gray level co-occurrence matrix for describing texture features in 1973. Since texture is formed by the repeated appearance of gray level distributions at spatial positions, a certain gray level relationship, i.e. a spatial correlation characteristic of gray levels, exists between two pixels separated by some distance in the image space. The method involves several parameters, such as the window size, and the step distance and direction of the gray level co-occurrence matrix.
2. Contrast, which is the contrast in relative intensity between a pixel and the pixel at relative position δ from it;
3. Homogeneity, which measures how closely the elements of the GLCM are distributed about its diagonal; the diagonal of the GLCM represents the frequency with which neighboring pixels have equal gray values, so homogeneity is a key spatial similarity moment describing whether two neighboring pixels in an image are close in value;
4. Correlation, which is an index of the correlation, over the whole image, between a pixel and the pixels at direction δ and the given distance from it;
5. Energy, which is a standard moment equal to the sum of the squares of the elements in the GLCM, with a value range of [0, 1].
The technical scheme of the invention is as follows:
a facial image feature extraction method based on a gray level co-occurrence matrix, wherein the facial image features comprise contrast, homogeneity, correlation and energy, and the method comprises the following steps:
(1) acquiring a facial image using a camera or other device; the pictures used can also be downloaded from a public database such as CACD.
(2) Carrying out preprocessing operation on the face image, namely converting the face image into a gray image;
(3) processing the gray image to obtain the texture features of the face image, including contrast, homogeneity, correlation and energy.
Preferably, step (3) includes:
A. mask processing, obtaining the region of interest:
For example, a facial image of 250 × 250 pixels is traversed with a 5 × 5 sliding window. The pixels at the outermost periphery cannot serve as the center of a full 5 × 5 window, so a mask is used to remove the part of the picture that the window cannot traverse; the remaining region is the region of interest.
Masking means occluding the image to be processed (wholly or partially) with a selected image, figure or object, so as to control the region or the process of image processing. The image, figure or object used for occlusion is called a mask or template. In optical image processing the mask may be a film, an optical filter, or the like; in digital image processing the mask is a two-dimensional matrix array, and sometimes a multi-valued image is used.
In the program, the mask is used to extract the region of interest: the pre-made region-of-interest mask is multiplied by the image to be processed to obtain the region-of-interest image, in which the image values inside the region of interest remain unchanged and the image values outside it are 0.
The mask filters out part of the information, retaining only the region it selects and highlighting the needed area. Using this property, a mask suited to the facial-feature data set is added on top of the gray level co-occurrence matrix features, so that the contrast between the facial features to be identified and the other regions of the face becomes more pronounced (a sketch of this masking and traversal is given after step E below).
B. Selecting the following parameters: a sliding window of size n × n, with n = 5; a step distance d = 1, which realizes the comparison between the central pixel and its neighboring pixels; and the directions of the gray level co-occurrence matrix, namely the four directions 0°, 45°, 90° and 135°;
C. calculating a gray level co-occurrence matrix and a texture characteristic value in a sliding window in the region of interest; the method comprises the following steps:
① Determining the gray level co-occurrence matrices G1, G2, G3 and G4 corresponding to the four cases: the pixel pair is horizontal, i.e. 0° scan, with a = 1 and b = 0; the pixel pair is a right diagonal, i.e. 45° scan, with a = 1 and b = 1; the pixel pair is vertical, i.e. 90° scan, with a = 0 and b = 1; and the pixel pair is a left diagonal, i.e. 135° scan, with a = -1 and b = 1. Each matrix is obtained by the following method:
Take any point (x, y) in the sliding window and another point (x + a, y + b) offset from it, and denote the gray values of this point pair by (g1, g2). Moving the point (x, y) over the whole sliding window yields the gray values (g1, g2) of many point pairs. If the number of gray levels is k, there are k² possible combinations of (g1, g2). For the whole sliding window, count the number of times each gray value pair (g1, g2) occurs, arrange the counts into a square matrix, and normalize them by the total number of occurrences into the probability P(g1, g2) of each pair; the result is the gray level co-occurrence matrix;
Suppose that d = 1, the input matrix is A, and the co-occurrence matrix of A in the 0° direction is to be found: then, for each pair of values, count how often the pair occurs in A in the 0° direction, i.e. horizontally, scanning both from left to right and from right to left.
Different combinations of the offset values (a, b) give joint probability matrices under different conditions. The values of (a, b) are chosen according to the characteristics of the periodic distribution of the texture; for finer textures, small offset values such as (1, 0), (1, 1) or (2, 0) are chosen.
② Calculating the feature values of the four directional matrices to obtain the final feature value matrices. Every calculated feature is pixel-based: the feature of each pixel is computed from the gray values of the pixels in the sliding window in which the pixel is located, and the feature is computed for every pixel.
Contrast, homogeneity, correlation and energy are obtained for each of the gray level co-occurrence matrices G1, G2, G3 and G4; each gray level co-occurrence matrix thus corresponds to four texture feature values, giving 16 texture feature values in total;
D. moving the sliding window by 1 pixel to form a new sliding window image, and repeating the calculation of step C on the new sliding window image to generate its gray level co-occurrence matrices and texture feature values, until the complete region of interest has been traversed;
E. after step D is finished, the texture feature values form a texture feature value matrix, which is converted into a texture feature image.
Every calculated feature is pixel-based: the feature of each pixel is computed from the gray values of the pixels in the sliding window in which the pixel is located, and the feature is computed for every pixel.
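As a concrete illustration of steps A to E, the following minimal sketch builds the directional co-occurrence matrix per window and maps each interior pixel to a texture feature value. It is written in Python/NumPy rather than the MATLAB used in the patent's experiments, and the names glcm, feature_image and OFFSETS, the symmetric pair counting, and the averaging over the four directions are illustrative assumptions, not taken from the source:

```python
import numpy as np

# Offsets (a, b) for the four scan directions described above
# (assumed convention: 0deg=(1,0), 45deg=(1,1), 90deg=(0,1), 135deg=(-1,1)).
OFFSETS = {0: (1, 0), 45: (1, 1), 90: (0, 1), 135: (-1, 1)}

def glcm(window, a, b, levels):
    """Normalized gray level co-occurrence matrix of one window.

    Pairs are counted in both directions (e.g. left-to-right and
    right-to-left for the 0-degree case), as described above, which
    makes the matrix symmetric before normalization.
    """
    h, w = window.shape
    counts = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            x2, y2 = x + a, y + b
            if 0 <= x2 < w and 0 <= y2 < h:
                g1, g2 = window[y, x], window[y2, x2]
                counts[g1, g2] += 1
                counts[g2, g1] += 1
    return counts / counts.sum()

def feature_image(gray, feature_fn, n=5, levels=256):
    """Steps A-E: slide an n x n window with step 1 over the region of
    interest and map each interior pixel to one texture feature value
    (here the average over the four directions, one plausible reading
    of the 'average' images in the figures)."""
    r = n // 2                  # border pixels removed by the mask
    h, w = gray.shape
    out = np.zeros((h, w))      # values outside the ROI stay 0
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = gray[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.mean([feature_fn(glcm(win, a, b, levels))
                                 for a, b in OFFSETS.values()])
    return out
```

A feature function such as the contrast of formula (II) below can then be passed as feature_fn.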
According to the present invention, the gray level co-occurrence matrix (GLCM) preferably counts the number of occurrences of each pair of gray levels obtained by applying a given offset to the image: for an n × m image I with the offset set to (Δa, Δb), the corresponding gray level co-occurrence matrix C(i, j) is obtained by formula (I):
$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1, & \text{if } I(p,q)=i \text{ and } I(p+\Delta a,\ q+\Delta b)=j\\ 0, & \text{otherwise}\end{cases}\qquad(\mathrm{I})$$
In formula (I), p and q are position indices with p = 1, ..., n and q = 1, ..., m, and i and j are the gray levels of the two pixels;
The parameters of the gray level co-occurrence matrix are selected as follows: for an image with 8-bit gray levels, the gray value of a pixel ranges from 0 to 255, and each component of the offset (Δa, Δb) of the gray level co-occurrence matrix is set to 0, 1 or -1; the method adopted by the present invention therefore considers only pixels at a distance of 1. The four angles 0°, 45°, 90° and 135° are selected, and formula (I) yields the four gray level co-occurrence matrices G1, G2, G3 and G4. In principle the direction may be anywhere from 0° to 360°, but typically the four directions 0°, 45°, 90° and 135° are used to represent texture.
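Under the assumptions of the sketch above (image is a hypothetical uint8 grayscale array; glcm is the helper defined earlier, which additionally normalizes the raw counts of formula (I) into probabilities), the four matrices would be obtained as:

```python
G1 = glcm(image, 1, 0, levels=256)   # 0 degrees
G2 = glcm(image, 1, 1, levels=256)   # 45 degrees
G3 = glcm(image, 0, 1, levels=256)   # 90 degrees
G4 = glcm(image, -1, 1, levels=256)  # 135 degrees
```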
According to the invention, the Contrast is preferably obtained by formula (II):
$$\mathrm{Contrast}=\sum_{i,j}|i-j|^{2}\,P_{\delta}(i,j)\qquad(\mathrm{II})$$
In formula (II), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The spatial positional relationship δ of the pixels is as follows: for finer texture analysis a pixel distance of 1 can be taken, where δ = (±1, 0) is a horizontal (0°) scan, δ = (0, ±1) is a vertical (90°) scan, δ = (1, 1) or (-1, -1) is a 45-degree scan, and δ = (-1, 1) or (1, -1) is a 135-degree scan. Once the positional relationship is determined, the gray level co-occurrence matrix can be generated.
The range of contrast is [0, (q-1)²], where q denotes the side length of the gray level co-occurrence matrix, i.e. the number of gray levels. For a constant image, whose GLCM has non-zero elements only on its diagonal, the contrast is 0.
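A direct transcription of formula (II), assuming P is a normalized GLCM as produced by the glcm sketch above:

```python
import numpy as np

def contrast(P):
    """Formula (II): sum over (i, j) of |i - j|^2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)
```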
According to the invention, the Homogeneity is preferably obtained by formula (III):
$$\mathrm{Homogeneity}=\sum_{i,j}\frac{P_{\delta}(i,j)}{1+|i-j|}\qquad(\mathrm{III})$$
In formula (III), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The value range of homogeneity is [0, 1]; the value 1 corresponds to a diagonal GLCM, i.e. to pixel pairs with equal values at the given direction and distance. Since the weight used for homogeneity depends on the distance to the GLCM diagonal, homogeneity is somewhat similar to contrast.
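Formula (III) transcribes the same way, again assuming P is a normalized GLCM:

```python
import numpy as np

def homogeneity(P):
    """Formula (III): sum of P(i, j) / (1 + |i - j|)."""
    i, j = np.indices(P.shape)
    return np.sum(P / (1.0 + np.abs(i - j)))
```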
According to the invention, the Correlation is preferably an index of the correlation, over the whole image, between a pixel and the pixels at direction δ and the given distance from it; the Correlation is obtained by formula (IV):
$$\mathrm{Correlation}=\frac{\sum_{i,j}(i-\mu_{i})(j-\mu_{j})\,P_{\delta}(i,j)}{\sigma_{i}\,\sigma_{j}}\qquad(\mathrm{IV})$$

In formula (IV),

$$\mu_{i}=\sum_{i,j} i\,P_{\delta}(i,j),\qquad \mu_{j}=\sum_{i,j} j\,P_{\delta}(i,j),$$

$$\sigma_{i}^{2}=\sum_{i,j}(i-\mu_{i})^{2}\,P_{\delta}(i,j),\qquad \sigma_{j}^{2}=\sum_{i,j}(j-\mu_{j})^{2}\,P_{\delta}(i,j);$$

i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The correlation is a descriptive statistic with a value range of [-1, 1].
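A sketch of formula (IV) under the same assumption (P a normalized GLCM); note that a constant window makes both standard deviations 0, so practical code needs a guard against division by zero:

```python
import numpy as np

def correlation(P):
    """Formula (IV): normalized covariance of the gray level pair (i, j)."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sigma_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sigma_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    return np.sum((i - mu_i) * (j - mu_j) * P) / (sigma_i * sigma_j)
```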
According to the invention, the Energy is preferably a standard moment, obtained by formula (V):
$$\mathrm{Energy}=\sum_{i,j}P_{\delta}(i,j)^{2}\qquad(\mathrm{V})$$
In formula (V), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The energy is in the range [0, 1], where the value 1 corresponds to a constant image. Energy describes the orderliness of the image.
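Formula (V) completes the set; the commented lines show a hypothetical end-to-end use with the earlier feature_image sketch:

```python
import numpy as np

def energy(P):
    """Formula (V): sum of the squared GLCM entries."""
    return np.sum(P ** 2)

# Hypothetical end-to-end use with the earlier sketch:
# contrast_map = feature_image(gray, contrast)
# energy_map = feature_image(gray, energy)
```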
According to a preferred embodiment of the present invention, in step (1) the face image is downloaded from the CACD database.
CACD contains 163,446 pictures of 2,000 celebrities, spanning ages 16 to 62.
The invention has the beneficial effects that:
the invention adopts a gray level co-occurrence matrix method, and performs feature extraction on the image of the face texture through MATLAB simulation experiment, so that the facial features can be more easily identified, thereby obtaining a better visual effect than the original image, and the effect is obvious in the aspect of improving the classification precision. It can be seen from these figures that the above textural features can distinguish different regions of the face. As can be clearly seen from fig. 2(c), the facial contour, eyes, mouth, and other five sense organ regions are identified.
Drawings
FIG. 1 is a schematic diagram of an acquired gray scale image;
FIG. 2(a) is a contrast histogram;
FIG. 2(b) is a contrast line graph;
FIG. 2(c) is a contrast average plot;
FIG. 3(a) is a histogram of homogeneity characteristics;
FIG. 3(b) is a homogeneity feature line graph;
FIG. 3(c) is a homogeneity map;
FIG. 4(a) is a correlation histogram;
FIG. 4(b) is a correlation line graph;
FIG. 4(c) is a correlation diagram;
FIG. 5(a) is an energy histogram;
FIG. 5(b) is an energy line graph;
fig. 5(c) is an energy diagram.
Detailed Description
The invention is further described below with reference to the figures and examples of the specification, but is not limited thereto.
Example 1
A facial image feature extraction method based on a gray level co-occurrence matrix, wherein the facial image features comprise contrast, homogeneity, correlation and energy, and the method comprises the following steps:
(1) acquiring a facial image using a camera or other device; the pictures used can also be downloaded from a public database such as CACD.
(2) preprocessing the face image, namely converting it into a gray image; the acquired gray image is shown in FIG. 1.
(3) processing the gray image to obtain the texture features of the face image, including contrast, homogeneity, correlation and energy.
Example 2
The facial image feature extraction method based on the gray level co-occurrence matrix according to embodiment 1, wherein:
step (3), processing the gray image to obtain the texture features of the face image, including contrast, homogeneity, correlation and energy, comprises:
A. mask processing, obtaining the region of interest:
For example, a facial image of 250 × 250 pixels is traversed with a 5 × 5 sliding window. The pixels at the outermost periphery cannot serve as the center of a full 5 × 5 window, so a mask is used to remove the part of the picture that the window cannot traverse; the remaining region is the region of interest.
Masking means occluding the image to be processed (wholly or partially) with a selected image, figure or object, so as to control the region or the process of image processing. The image, figure or object used for occlusion is called a mask or template. In optical image processing the mask may be a film, an optical filter, or the like; in digital image processing the mask is a two-dimensional matrix array, and sometimes a multi-valued image is used.
In the program, the mask is used to extract the region of interest: the pre-made region-of-interest mask is multiplied by the image to be processed to obtain the region-of-interest image, in which the image values inside the region of interest remain unchanged and the image values outside it are 0.
The mask filters out part of the information, retaining only the region it selects and highlighting the needed area. Using this property, a mask suited to the facial-feature data set is added on top of the gray level co-occurrence matrix features, so that the contrast between the facial features to be identified and the other regions of the face becomes more pronounced.
B. Selecting the following parameters: a sliding window of size n × n, with n = 5; a step distance d = 1, which realizes the comparison between the central pixel and its neighboring pixels; and the directions of the gray level co-occurrence matrix, namely the four directions 0°, 45°, 90° and 135°;
C. calculating a gray level co-occurrence matrix and a texture characteristic value in a sliding window in the region of interest; the method comprises the following steps:
① Determining the gray level co-occurrence matrices G1, G2, G3 and G4 corresponding to the four cases: the pixel pair is horizontal, i.e. 0° scan, with a = 1 and b = 0; the pixel pair is a right diagonal, i.e. 45° scan, with a = 1 and b = 1; the pixel pair is vertical, i.e. 90° scan, with a = 0 and b = 1; and the pixel pair is a left diagonal, i.e. 135° scan, with a = -1 and b = 1. Each matrix is obtained by the following method:
Take any point (x, y) in the sliding window and another point (x + a, y + b) offset from it, and denote the gray values of this point pair by (g1, g2). Moving the point (x, y) over the whole sliding window yields the gray values (g1, g2) of many point pairs. If the number of gray levels is k, there are k² possible combinations of (g1, g2). For the whole sliding window, count the number of times each gray value pair (g1, g2) occurs, arrange the counts into a square matrix, and normalize them by the total number of occurrences into the probability P(g1, g2) of each pair; the result is the gray level co-occurrence matrix;
Suppose that d = 1, the input matrix is A, and the co-occurrence matrix of A in the 0° direction is to be found: then, for each pair of values, count how often the pair occurs in A in the 0° direction, i.e. horizontally, scanning both from left to right and from right to left.
Different combinations of the offset values (a, b) give joint probability matrices under different conditions. The values of (a, b) are chosen according to the characteristics of the periodic distribution of the texture; for finer textures, small offset values such as (1, 0), (1, 1) or (2, 0) are chosen.
② Calculating the feature values of the four directional matrices to obtain the final feature value matrices. Every calculated feature is pixel-based: the feature of each pixel is computed from the gray values of the pixels in the sliding window in which the pixel is located, and the feature is computed for every pixel.
Contrast, homogeneity, correlation and energy are obtained for each of the gray level co-occurrence matrices G1, G2, G3 and G4; each gray level co-occurrence matrix thus corresponds to four texture feature values, giving 16 texture feature values in total;
D. moving the sliding window by 1 pixel to form a new sliding window image, and repeating the calculation of step C on the new sliding window image to generate its gray level co-occurrence matrices and texture feature values, until the complete region of interest has been traversed;
E. after step D is finished, the texture feature values form a texture feature value matrix, which is converted into a texture feature image.
Every calculated feature is pixel-based: the feature of each pixel is computed from the gray values of the pixels in the sliding window in which the pixel is located, and the feature is computed for every pixel.
For an n × m image I with the offset set to (Δa, Δb), the corresponding gray level co-occurrence matrix C(i, j) is obtained by formula (I); the gray level co-occurrence matrix (GLCM) counts the number of occurrences of each pair of gray levels obtained by applying the given offset to the image:

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1, & \text{if } I(p,q)=i \text{ and } I(p+\Delta a,\ q+\Delta b)=j\\ 0, & \text{otherwise}\end{cases}\qquad(\mathrm{I})$$

In formula (I), p and q are position indices with p = 1, ..., n and q = 1, ..., m, and i and j are the gray levels of the two pixels;
The parameters of the gray level co-occurrence matrix are selected as follows: for an image with 8-bit gray levels, the gray value of a pixel ranges from 0 to 255, and each component of the offset (Δa, Δb) of the gray level co-occurrence matrix is set to 0, 1 or -1; the method adopted by the present invention therefore considers only pixels at a distance of 1. The four angles 0°, 45°, 90° and 135° are selected, and formula (I) yields the four gray level co-occurrence matrices G1, G2, G3 and G4. In principle the direction may be anywhere from 0° to 360°, but typically the four directions 0°, 45°, 90° and 135° are used to represent texture.
The Contrast is obtained by formula (II):

$$\mathrm{Contrast}=\sum_{i,j}|i-j|^{2}\,P_{\delta}(i,j)\qquad(\mathrm{II})$$

In formula (II), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The contrast histogram is shown in FIG. 2(a), with the contrast feature value intervals on the abscissa and the number of values falling in each interval on the ordinate; the contrast line graph is shown in FIG. 2(b), with the same axes; the contrast average image is shown in FIG. 2(c). It can be seen from these figures that the above texture features can distinguish different regions of the face. As is clearly seen from FIG. 2(c), the facial contour, eyes, mouth and the other facial-organ regions are identified.
The spatial positional relationship δ of the pixels is as follows: for finer texture analysis a pixel distance of 1 can be taken, where δ = (±1, 0) is a horizontal (0°) scan, δ = (0, ±1) is a vertical (90°) scan, δ = (1, 1) or (-1, -1) is a 45-degree scan, and δ = (-1, 1) or (1, -1) is a 135-degree scan. Once the positional relationship is determined, the gray level co-occurrence matrix can be generated.
The range of contrast is [0, (q-1)²], where q denotes the side length of the gray level co-occurrence matrix, i.e. the number of gray levels. For a constant image, whose GLCM has non-zero elements only on its diagonal, the contrast is 0.
The Homogeneity is obtained by formula (III):

$$\mathrm{Homogeneity}=\sum_{i,j}\frac{P_{\delta}(i,j)}{1+|i-j|}\qquad(\mathrm{III})$$

In formula (III), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The value range of homogeneity is [0, 1]; the value 1 corresponds to a diagonal GLCM, i.e. to pixel pairs with equal values at the given direction and distance. Since the weight used for homogeneity depends on the distance to the GLCM diagonal, homogeneity is somewhat similar to contrast.
The homogeneity histogram is shown in FIG. 3(a), with the homogeneity feature value intervals on the abscissa and the number of values falling in each interval on the ordinate; the homogeneity line graph is shown in FIG. 3(b), with the same axes; the homogeneity map is shown in FIG. 3(c).
The Correlation is an index of the correlation, over the whole image, between a pixel and the pixels at direction δ and the given distance from it; the Correlation is obtained by formula (IV):

$$\mathrm{Correlation}=\frac{\sum_{i,j}(i-\mu_{i})(j-\mu_{j})\,P_{\delta}(i,j)}{\sigma_{i}\,\sigma_{j}}\qquad(\mathrm{IV})$$

In formula (IV),

$$\mu_{i}=\sum_{i,j} i\,P_{\delta}(i,j),\qquad \mu_{j}=\sum_{i,j} j\,P_{\delta}(i,j),$$

$$\sigma_{i}^{2}=\sum_{i,j}(i-\mu_{i})^{2}\,P_{\delta}(i,j),\qquad \sigma_{j}^{2}=\sum_{i,j}(j-\mu_{j})^{2}\,P_{\delta}(i,j);$$

i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
The correlation is a descriptive statistic with a value range of [-1, 1].
The correlation histogram is shown in FIG. 4(a), with the correlation feature value intervals on the abscissa and the number of values falling in each interval on the ordinate; the correlation line graph is shown in FIG. 4(b), with the same axes; the correlation map is shown in FIG. 4(c).
The Energy is a standard moment, obtained by formula (V):

$$\mathrm{Energy}=\sum_{i,j}P_{\delta}(i,j)^{2}\qquad(\mathrm{V})$$

In formula (V), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.

The energy is in the range [0, 1], where the value 1 corresponds to a constant image. Energy describes the orderliness of the image.
The energy histogram is shown in FIG. 5(a), with the energy feature value intervals on the abscissa and the number of values falling in each interval on the ordinate; the energy line graph is shown in FIG. 5(b), with the same axes; the energy map is shown in FIG. 5(c).

Claims (8)

1. A facial image feature extraction method based on a gray level co-occurrence matrix is characterized in that facial image features comprise contrast, homogeneity, correlation and energy, and the method comprises the following steps:
(1) acquiring a face image;
(2) carrying out preprocessing operation on the face image, namely converting the face image into a gray image;
(3) processing the gray image to obtain the texture features of the face image, including contrast, homogeneity, correlation and energy.
2. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 1, wherein the step (3) comprises:
A. mask processing, obtaining the region of interest:
B. selecting the following parameters: a sliding window of size n × n, with n = 5; a step distance d = 1; and the directions of the gray level co-occurrence matrix, namely the four directions 0°, 45°, 90° and 135°;
C. calculating a gray level co-occurrence matrix and a texture characteristic value in a sliding window in the region of interest; the method comprises the following steps:
①, determining the gray level co-occurrence matrices G1, G2, G3 and G4 for the four cases a = 1, b = 0 (0° direction); a = 1, b = 1 (45° direction); a = 0, b = 1 (90° direction); and a = -1, b = 1 (135° direction), respectively, using the following method:
take any point (x, y) in the sliding window and another point (x + a, y + b) offset from it, and denote the gray values of this point pair by (g1, g2); moving the point (x, y) over the whole sliding window yields the gray values (g1, g2) of many point pairs; if the number of gray levels is k, there are k² possible combinations of (g1, g2); for the whole sliding window, count the number of times each gray value pair (g1, g2) occurs, arrange the counts into a square matrix, and normalize them by the total number of occurrences into the probability P(g1, g2) of each pair, thereby obtaining a gray level co-occurrence matrix;
②, obtaining the contrast, homogeneity, correlation and energy corresponding to each of the gray level co-occurrence matrices G1, G2, G3 and G4, wherein each gray level co-occurrence matrix corresponds to four texture feature values, giving 16 texture feature values in total;
D. moving the sliding window by 1 pixel point to form a new sliding window image, repeating the calculation of the step C on the new sliding window image, and generating a gray level co-occurrence matrix and a texture characteristic value of the sliding window image until a complete region of interest is traversed;
E. and D, forming a texture characteristic value matrix consisting of texture characteristic values after the step D is finished, and converting the texture characteristic value matrix into a texture characteristic image.
3. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 2, wherein, for an n × m image I with the offset set to (Δa, Δb), the corresponding gray level co-occurrence matrix C(i, j) is obtained by formula (I):

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1, & \text{if } I(p,q)=i \text{ and } I(p+\Delta a,\ q+\Delta b)=j\\ 0, & \text{otherwise}\end{cases}\qquad(\mathrm{I})$$

In formula (I), p and q are position indices with p = 1, ..., n and q = 1, ..., m, and i and j are the gray levels of the two pixels;

the parameters of the gray level co-occurrence matrix are selected as follows: for an image with 8-bit gray levels, the gray value of a pixel ranges from 0 to 255, and each component of the offset (Δa, Δb) of the gray level co-occurrence matrix is set to 0, 1 or -1; the four angles 0°, 45°, 90° and 135° are selected, and formula (I) yields the four gray level co-occurrence matrices G1, G2, G3 and G4.
4. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 2, wherein the Contrast is obtained by formula (II):

$$\mathrm{Contrast}=\sum_{i,j}|i-j|^{2}\,P_{\delta}(i,j)\qquad(\mathrm{II})$$

In formula (II), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
5. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 2, wherein the Homogeneity is obtained by formula (III):

$$\mathrm{Homogeneity}=\sum_{i,j}\frac{P_{\delta}(i,j)}{1+|i-j|}\qquad(\mathrm{III})$$

In formula (III), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
6. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 2, wherein the Correlation is an index of the correlation, over the whole image, between a pixel and the pixels at direction δ and the given distance from it, and the Correlation is obtained by formula (IV):

$$\mathrm{Correlation}=\frac{\sum_{i,j}(i-\mu_{i})(j-\mu_{j})\,P_{\delta}(i,j)}{\sigma_{i}\,\sigma_{j}}\qquad(\mathrm{IV})$$

In formula (IV),

$$\mu_{i}=\sum_{i,j} i\,P_{\delta}(i,j),\qquad \mu_{j}=\sum_{i,j} j\,P_{\delta}(i,j),\qquad \sigma_{i}^{2}=\sum_{i,j}(i-\mu_{i})^{2}\,P_{\delta}(i,j),\qquad \sigma_{j}^{2}=\sum_{i,j}(j-\mu_{j})^{2}\,P_{\delta}(i,j);$$

i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
7. The method for extracting facial image features based on gray level co-occurrence matrix according to claim 2, wherein the Energy is a standard moment, obtained by formula (V):

$$\mathrm{Energy}=\sum_{i,j}P_{\delta}(i,j)^{2}\qquad(\mathrm{V})$$

In formula (V), i and j are the gray levels of two pixels; Pδ(i, j) is the number of times, or the frequency, with which two pixels in the spatial positional relationship δ have gray levels i and j respectively.
8. The facial image feature extraction method based on gray level co-occurrence matrix according to any one of claims 1 to 7, wherein in the step (1), the facial image is downloaded from a database CACD.
CN201911076854.7A 2019-11-06 2019-11-06 Facial image feature extraction method based on gray level co-occurrence matrix Pending CN110837802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911076854.7A CN110837802A (en) 2019-11-06 2019-11-06 Facial image feature extraction method based on gray level co-occurrence matrix

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911076854.7A CN110837802A (en) 2019-11-06 2019-11-06 Facial image feature extraction method based on gray level co-occurrence matrix

Publications (1)

Publication Number Publication Date
CN110837802A (en) 2020-02-25

Family

ID=69576140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911076854.7A Pending CN110837802A (en) 2019-11-06 2019-11-06 Facial image feature extraction method based on gray level co-occurrence matrix

Country Status (1)

Country Link
CN (1) CN110837802A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402244A (en) * 2020-03-20 2020-07-10 华侨大学 Automatic classification method for standard fetal heart tangent planes
CN111401280A (en) * 2020-03-23 2020-07-10 上海电力大学 Image identification method for adjusting learning rate based on gray level co-occurrence matrix
CN111558222A (en) * 2020-04-08 2020-08-21 完美世界(北京)软件科技发展有限公司 Illumination map compression method, device and equipment
CN112075922A (en) * 2020-10-14 2020-12-15 中国人民解放军空军军医大学 Method for measuring fundus image indexes of type 2 diabetes mellitus and analyzing correlation between fundus image indexes and diabetic nephropathy
CN112150386A (en) * 2020-09-29 2020-12-29 西安工程大学 SAR image speckle non-local average inhibition method based on contrast mean value
CN112149751A (en) * 2020-09-29 2020-12-29 北京邮电大学 Fused media information acquisition method based on 3D-CNN and CVV-GLCM
CN112488158A (en) * 2020-11-13 2021-03-12 东南大学 Asphalt pavement segregation detection method based on image texture feature extraction
CN112949657A (en) * 2021-03-09 2021-06-11 河南省现代农业大数据产业技术研究院有限公司 Forest land distribution extraction method and device based on remote sensing image texture features
CN113392854A (en) * 2021-07-06 2021-09-14 南京信息工程大学 Image texture feature extraction and classification method
CN113610839A (en) * 2021-08-26 2021-11-05 北京中星天视科技有限公司 Infrared target significance detection method and device, electronic equipment and medium
CN114757899A (en) * 2022-04-01 2022-07-15 南通阿牛家居科技有限公司 Cloud computing-based optimization method for paper quality detection
CN116624065A (en) * 2023-07-20 2023-08-22 山东智赢门窗科技有限公司 Automatic folding regulation and control method for intelligent doors and windows

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960088A (en) * 2018-06-20 2018-12-07 天津大学 The detection of facial living body characteristics, the recognition methods of specific environment
CN109934287A (en) * 2019-03-12 2019-06-25 上海宝尊电子商务有限公司 A kind of clothing texture method for identifying and classifying based on LBP and GLCM

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960088A (en) * 2018-06-20 2018-12-07 天津大学 The detection of facial living body characteristics, the recognition methods of specific environment
CN109934287A (en) * 2019-03-12 2019-06-25 上海宝尊电子商务有限公司 A kind of clothing texture method for identifying and classifying based on LBP and GLCM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
廖小兵: "Simulation of enhancing key targets in light and dark image regions under uneven illumination" (in Chinese) *
李素梅; 秦龙斌; 胡佳洁: "Facial liveness detection based on a specific environment" (in Chinese) *
杨中悦; 林伟; 延伟东; 温金环: "Texture target recognition in SAR images based on heat kernel co-occurrence matrices" (in Chinese) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402244B (en) * 2020-03-20 2023-04-07 华侨大学 Automatic classification method for standard fetal heart tangent planes
CN111402244A (en) * 2020-03-20 2020-07-10 华侨大学 Automatic classification method for standard fetal heart tangent planes
CN111401280A (en) * 2020-03-23 2020-07-10 上海电力大学 Image identification method for adjusting learning rate based on gray level co-occurrence matrix
CN111558222A (en) * 2020-04-08 2020-08-21 完美世界(北京)软件科技发展有限公司 Illumination map compression method, device and equipment
CN111558222B (en) * 2020-04-08 2023-11-24 完美世界(北京)软件科技发展有限公司 Method, device and equipment for compressing illumination graph
CN112150386A (en) * 2020-09-29 2020-12-29 西安工程大学 SAR image speckle non-local average inhibition method based on contrast mean value
CN112150386B (en) * 2020-09-29 2023-03-21 西安工程大学 SAR image speckle non-local average inhibition method based on contrast mean value
CN112149751A (en) * 2020-09-29 2020-12-29 北京邮电大学 Fused media information acquisition method based on 3D-CNN and CVV-GLCM
CN112075922A (en) * 2020-10-14 2020-12-15 中国人民解放军空军军医大学 Method for measuring fundus image indexes of type 2 diabetes mellitus and analyzing correlation between fundus image indexes and diabetic nephropathy
CN112488158A (en) * 2020-11-13 2021-03-12 东南大学 Asphalt pavement segregation detection method based on image texture feature extraction
CN112949657A (en) * 2021-03-09 2021-06-11 河南省现代农业大数据产业技术研究院有限公司 Forest land distribution extraction method and device based on remote sensing image texture features
CN113392854A (en) * 2021-07-06 2021-09-14 南京信息工程大学 Image texture feature extraction and classification method
CN113610839A (en) * 2021-08-26 2021-11-05 北京中星天视科技有限公司 Infrared target significance detection method and device, electronic equipment and medium
CN114757899A (en) * 2022-04-01 2022-07-15 南通阿牛家居科技有限公司 Cloud computing-based optimization method for paper quality detection
CN114757899B (en) * 2022-04-01 2023-11-21 南通阿牛家居科技有限公司 Optimization method for paper quality detection based on cloud computing
CN116624065A (en) * 2023-07-20 2023-08-22 山东智赢门窗科技有限公司 Automatic folding regulation and control method for intelligent doors and windows
CN116624065B (en) * 2023-07-20 2023-10-13 山东智赢门窗科技有限公司 Automatic folding regulation and control method for intelligent doors and windows

Similar Documents

Publication Publication Date Title
CN110837802A (en) Facial image feature extraction method based on gray level co-occurrence matrix
CN112435221A (en) Image anomaly detection method based on generative confrontation network model
CN113935992B (en) Image processing-based oil pollution interference resistant gear crack detection method and system
CN107358260B (en) Multispectral image classification method based on surface wave CNN
Senthilkumar et al. A novel region growing segmentation algorithm for the detection of breast cancer
CN101556600B (en) Method for retrieving images in DCT domain
He et al. Unsupervised textural classification of images using the texture spectrum
CN110348459B (en) Sonar image fractal feature extraction method based on multi-scale rapid carpet covering method
CN107169962A (en) The gray level image fast partition method of Kernel fuzzy clustering is constrained based on space density
CN115131325A (en) Breaker fault operation and maintenance monitoring method and system based on image recognition and analysis
CN115797813B (en) Water environment pollution detection method based on aerial image
CN116246174B (en) Sweet potato variety identification method based on image processing
CN111292256A (en) Texture enhancement algorithm based on microscopic hyperspectral imaging
Sanpachai et al. A study of image enhancement for iris recognition
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN111126185B (en) Deep learning vehicle target recognition method for road gate scene
CN106096650B (en) Based on the SAR image classification method for shrinking self-encoding encoder
Mosleh et al. Texture image retrieval using contourlet transform
CN108960285B (en) Classification model generation method, tongue image classification method and tongue image classification device
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
CN111461999A (en) SAR image speckle suppression method based on super-pixel similarity measurement
CN109447952A (en) A kind of half reference image quality evaluation method based on Gabor differential box weighting dimension
CN108304766B (en) A method of dangerous material stockyard is screened based on high-definition remote sensing
CN102968793A (en) Method for identifying natural image and computer generated image based on DCT (Discrete Cosine Transformation)-domain statistic characteristics
Gizatullin et al. Method for Constructing Texture Features based on an Image Weight Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination