CN104834909A - Image characteristic description method based on Gabor synthetic characteristic


Info

Publication number
CN104834909A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510231155.0A
Other languages
Chinese (zh)
Other versions
CN104834909B (en)
Inventor
高涛
冯兴乐
刘占文
谭魏萌
吴晓龙
翟娟红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201510231155.0A
Publication of CN104834909A
Application granted
Publication of CN104834909B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters


Abstract

The invention relates to an image feature description method based on Gabor comprehensive features, which comprises the following steps: first, acquiring and uploading a face image signal; second, adjusting the resolution of the face image and representing it as a matrix; third, extracting the image features; and fourth, synchronously outputting the processing result. The method uses both the amplitude part and the phase part of the Gabor filter transform; the phase part contains the direction information of the Gabor filtering result and therefore carries a certain feature-discriminating meaning. The filtering results of the Gabor filter bank are thus fully utilized and richer feature information is extracted, which facilitates later recognition. Furthermore, to overcome the drawback of treating image blocks as equally important, the different degrees of importance of the sub-image blocks to the whole image are considered, and the face features are better described in combination with the texture contribution degrees.

Description

Image feature description method based on Gabor comprehensive features
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image feature description method based on Gabor comprehensive features.
Background
In the prior art, face image feature description methods based on Gabor features have achieved excellent results. In practical applications such as video surveillance, human-computer interaction and public security systems, however, complex lighting and varying viewing angles are often encountered. Research on face recognition under complex illumination and from different angles has therefore become a hotspot in recent years, and various solutions have been proposed for describing the features of faces under complex illumination. In summary, these methods fall into two main categories:
1. Methods based on global Gabor features. Their advantage is that the global Gabor features can be obtained well; their defect is that the existing Gabor features use only the amplitude part of the Gabor filter transform and do not consider the phase information of the Gabor features, which plays an important role in face description;
2. Methods based on local features. Their defect is that the differing contribution of each local feature to the overall description of the image is not considered; all local feature descriptions are assumed to contribute equally to the whole.
In summary, the existing face feature description methods based on Gabor features do not fully utilize the phase information of the Gabor features, although this phase plays an important role in face feature description. In addition, the contribution of local features to the whole image is not sufficiently considered. These methods therefore suffer from poor classification and recognition performance and low stability, and cannot well meet the requirements of practical applications.
Disclosure of Invention
The technical problem to be solved by the present invention is to overcome the above defects in the prior art by providing an image feature description method based on Gabor comprehensive features, which fully utilizes the phase information of the Gabor features and improves the recognition rate.
In order to solve the technical problems, the invention adopts the technical scheme that:
the method comprises the following steps:
step one, acquiring and uploading a face image signal: the image acquisition equipment acquires a face image signal and uploads the face image signal acquired in real time to the processor through the image signal transmission device;
step two, adjusting the resolution of the face image and representing it as a matrix: firstly, the processor calls a resolution interpolation adjustment module to adjust the resolution of the received face image signal to m×n, obtaining the face image G; then the processor represents the face image G as an m×n-dimensional image matrix X;
step three, image feature extraction: the processor analyzes and processes the image matrix X obtained in the step two to obtain a feature vector C of the face image G, and the analyzing and processing process is as follows:
step 301, performing multi-scale image blocking on the image matrix X: dividing the image matrix X into p × q blocks, we get:
X = [ X_11  X_12  ⋯  X_1q
      X_21  X_22  ⋯  X_2q
      ⋮     ⋮         ⋮
      X_p1  X_p2  ⋯  X_pq ]
wherein p and q are natural numbers taking the value 2, 4, 8 or 16, and X_ij is an (m/p)×(n/q)-dimensional face sub-image matrix, with i = 1, 2, ..., p and j = 1, 2, ..., q;
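For illustration, the multi-scale blocking of step 301 can be sketched as follows in Python/NumPy (an illustrative sketch only, not part of the claimed method; the helper name partition_blocks is ours, and m and n are assumed to be divisible by p and q):

```python
import numpy as np

def partition_blocks(X, p, q):
    """Split an m-by-n image matrix X into a p-by-q grid of sub-image
    matrices X_ij, each of size (m/p) x (n/q).  Assumes m % p == 0 and
    n % q == 0, as with the 128 x 128 image and p, q in {2, 4, 8, 16}."""
    m, n = X.shape
    bh, bw = m // p, n // q
    return [[X[i*bh:(i+1)*bh, j*bw:(j+1)*bw] for j in range(q)]
            for i in range(p)]

# Example: a 128 x 128 image split into a 4 x 4 grid of 32 x 32 blocks.
X = np.random.rand(128, 128)
blocks = partition_blocks(X, 4, 4)
assert blocks[0][0].shape == (32, 32)
```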
step 302, filtering the image matrix X by using a two-dimensional Gabor filter bank, specifically including the following steps:
step 3021, constructing a two-dimensional Gabor filter bank in a time domain:
the multi-channel two-dimensional Gabor transform filter is defined by an odd-symmetric Gabor filter φ_o(x,y,f,θ,σ) and an even-symmetric Gabor filter φ_e(x,y,f,θ,σ), where X(x,y) are the pixel values in the face sub-image matrix X; the simplified calculation model of the wavelet transform is defined as follows:
φ_e(x,y,f,θ,σ) = g(x,y,σ)·cos[2πf(x·cosθ + y·sinθ)]
φ_o(x,y,f,θ,σ) = g(x,y,σ)·sin[2πf(x·cosθ + y·sinθ)]
wherein φ_e(x,y,f,θ,σ) is the even-symmetric two-dimensional Gabor filter, φ_o(x,y,f,θ,σ) is the odd-symmetric two-dimensional Gabor filter, f is the center frequency, x is the abscissa variable in the time domain, y is the ordinate variable in the time domain, θ is the spatial phase angle, σ is a spatial constant, and g(x,y,σ) is the Gaussian function g(x,y,σ) = 1/(2πσ²)·exp[−(x² + y²)/(2σ²)];
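A minimal NumPy sketch of step 3021 is given below (illustrative only; the filter window size and the centering of the x, y coordinate grid are assumptions not fixed by the text):

```python
import numpy as np

def gabor_pair(size, f, theta, sigma=1.0):
    """Even- and odd-symmetric 2-D Gabor filters
    phi_e = g(x,y,sigma) * cos[2*pi*f*(x*cos(theta) + y*sin(theta))]
    phi_o = g(x,y,sigma) * sin[2*pi*f*(x*cos(theta) + y*sin(theta))]
    with g(x,y,sigma) = 1/(2*pi*sigma^2) * exp(-(x^2+y^2)/(2*sigma^2))."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    arg = 2.0 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta))
    return g * np.cos(arg), g * np.sin(arg)

# Example: one even/odd pair for an assumed frequency and a 45-degree phase angle.
phi_e, phi_o = gabor_pair(size=15, f=0.1, theta=np.pi / 4, sigma=1.0)
```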
Step 3022, transforming the two-dimensional Gabor filter bank in the time domain into a two-dimensional Gabor filter bank in the frequency domain:
Φ_e(u,v,f,θ,σ) = [Φ_1(u,v,f,θ,σ) + Φ_2(u,v,f,θ,σ)] / 2
Φ_o(u,v,f,θ,σ) = [Φ_1(u,v,f,θ,σ) − Φ_2(u,v,f,θ,σ)] / (2j)
wherein Φ_1(u,v,f,θ,σ) = exp{−2π²σ²[(u − f·cosθ)² + (v − f·sinθ)²]}, Φ_2(u,v,f,θ,σ) = exp{−2π²σ²[(u + f·cosθ)² + (v + f·sinθ)²]}, Φ_e(u,v,f,θ,σ) is the Fourier transform of φ_e(x,y,f,θ,σ), Φ_o(u,v,f,θ,σ) is the Fourier transform of φ_o(x,y,f,θ,σ), j is the imaginary unit with j² = −1, and u and v are the spatial frequency variables in the frequency domain;
the filtering results of the even-symmetric and odd-symmetric filters can be obtained by the fast Fourier transform:
M_e(x,y) = IFFT{Z(u,v)·Φ_e(u,v,f,θ,σ)}
M_o(x,y) = IFFT{Z(u,v)·Φ_o(u,v,f,θ,σ)}
wherein Z(u,v) is the Fourier transform of Z(x,y), which represents the pixel values of image Z, and IFFT{·} denotes the inverse fast Fourier transform;
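Step 3022 and the FFT-based filtering can be illustrated with the following sketch (a hedged reading of the formulas above; building Φ_1 and Φ_2 directly on the FFT frequency grid, and interpreting f in cycles per pixel, are our assumptions):

```python
import numpy as np

def gabor_filter_fft(Xij, f, theta, sigma=1.0):
    """Filter a sub-image with the frequency-domain Gabor pair
    Phi_e = (Phi_1 + Phi_2)/2,  Phi_o = (Phi_1 - Phi_2)/(2j), where
    Phi_k(u,v) = exp{-2*pi^2*sigma^2*[(u -/+ f*cos(theta))^2 + (v -/+ f*sin(theta))^2]}."""
    h, w = Xij.shape
    u = np.fft.fftfreq(h)[:, None]   # frequency grid (assumption: cycles per pixel)
    v = np.fft.fftfreq(w)[None, :]
    phi1 = np.exp(-2 * np.pi**2 * sigma**2 *
                  ((u - f * np.cos(theta))**2 + (v - f * np.sin(theta))**2))
    phi2 = np.exp(-2 * np.pi**2 * sigma**2 *
                  ((u + f * np.cos(theta))**2 + (v + f * np.sin(theta))**2))
    Phi_e = (phi1 + phi2) / 2.0
    Phi_o = (phi1 - phi2) / 2.0j
    Z = np.fft.fft2(Xij)             # Z(u,v): Fourier transform of the sub-image
    M_e = np.fft.ifft2(Z * Phi_e)    # even-symmetric filtering result
    M_o = np.fft.ifft2(Z * Phi_o)    # odd-symmetric filtering result
    return M_e, M_o
```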
step 3023, first, each pixel value of the face sub-image matrix X_ij is denoted X_ij(x,y); then X_ij(x,y) is filtered with the two-dimensional Gabor filter bank in the frequency domain to obtain the filtering results:
M_e^(F_s,θ_d)(x,y) = X_ij(x,y) ⊗ φ_e^(F_s,θ_d)(x,y)
M_o^(F_s,θ_d)(x,y) = X_ij(x,y) ⊗ φ_o^(F_s,θ_d)(x,y)
wherein M_e^(F_s,θ_d)(x,y) is the result of filtering X_ij(x,y) with the even-symmetric two-dimensional Gabor filter, M_o^(F_s,θ_d)(x,y) is the result of filtering X_ij(x,y) with the odd-symmetric two-dimensional Gabor filter, F_s is the s-th center frequency, and θ_d is the d-th spatial phase angle;
step 3024, according to step 3023, selecting n_1 mutually different center frequencies F_s and, for each center frequency F_s, selecting n_2 different spatial phase angles θ_d, where s ≤ n_1 and d ≤ n_2, so as to form n_1×n_2 Gabor filtering channels; extracting the amplitude value and the phase value of the filtering result of each Gabor filtering channel as the features representing that channel. The amplitude of the even-symmetric filtering result M_e^(F_s,θ_d)(x,y) is |M_e^(F_s,θ_d)(x,y)| and its phase value is φ_(e,F_s,θ_d)(x,y); the amplitude of the odd-symmetric filtering result M_o^(F_s,θ_d)(x,y) is |M_o^(F_s,θ_d)(x,y)| and its phase value is φ_(o,F_s,θ_d)(x,y);
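A sketch of the n_1×n_2 channel loop of step 3024 is given below (illustrative only; it reuses the gabor_filter_fft helper sketched above, and mapping the phase into (0, 2π) via np.angle and a modulo operation is our reading of the text):

```python
import numpy as np

def channel_features(Xij, freqs, thetas, sigma=1.0):
    """For each (F_s, theta_d) channel, return the even/odd amplitude and
    phase maps of the filtering results M_e and M_o."""
    feats = []
    for f in freqs:                      # n1 center frequencies F_s
        for th in thetas:                # n2 spatial phase angles theta_d
            M_e, M_o = gabor_filter_fft(Xij, f, th, sigma)
            amp_e, amp_o = np.abs(M_e), np.abs(M_o)
            # phases mapped into (0, 2*pi) as in the text
            ph_e = np.mod(np.angle(M_e), 2 * np.pi)
            ph_o = np.mod(np.angle(M_o), 2 * np.pi)
            feats.append((amp_e, ph_e, amp_o, ph_o))
    return feats
```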
Step 3025, unfolding the amplitudes |M_e^(F_s,θ_d)| and |M_o^(F_s,θ_d)| of the filtering results of each Gabor filtering channel in the row direction to form a row vector A^(i,j) = [A_e^(i,j), A_o^(i,j)]; and unfolding the phase values φ_(e,F_s,θ_d) and φ_(o,F_s,θ_d) of the filtering results of each Gabor filtering channel in the row direction to form a row vector θ^(i,j) = [θ_e^(i,j), θ_o^(i,j)]; wherein
A_e^(i,j) = [|M_e^(F_1,θ_1)|, |M_e^(F_1,θ_2)|, ..., |M_e^(F_1,θ_8)|, |M_e^(F_2,θ_1)|, ..., |M_e^(F_4,θ_8)|]
A_o^(i,j) = [|M_o^(F_1,θ_1)|, |M_o^(F_1,θ_2)|, ..., |M_o^(F_1,θ_8)|, |M_o^(F_2,θ_1)|, ..., |M_o^(F_4,θ_8)|]
θ_e^(i,j) = [φ_(e,F_1,θ_1), φ_(e,F_1,θ_2), ..., φ_(e,F_1,θ_8), φ_(e,F_2,θ_1), ..., φ_(e,F_4,θ_8)]
θ_o^(i,j) = [φ_(o,F_1,θ_1), φ_(o,F_1,θ_2), ..., φ_(o,F_1,θ_8), φ_(o,F_2,θ_1), ..., φ_(o,F_4,θ_8)]
Step 3026, connecting the n_1×n_2×2 row vectors of the n_1×n_2 Gabor filtering channels in sequence to form the two-dimensional Gabor filter bank feature of X_ij(x,y): C^(i,j) = [A^(i,j), θ^(i,j)];
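Steps 3025 and 3026 can be illustrated by the following sketch (illustrative only; it consumes the per-channel tuples produced by the channel_features helper above and places all amplitudes before all phases, matching C^(i,j) = [A^(i,j), θ^(i,j)]):

```python
import numpy as np

def block_feature(feats):
    """Form C^(i,j) = [A^(i,j), theta^(i,j)] with
    A^(i,j) = [A_e^(i,j), A_o^(i,j)] and theta^(i,j) = [theta_e^(i,j), theta_o^(i,j)]."""
    A_e  = [amp_e.ravel() for amp_e, ph_e, amp_o, ph_o in feats]
    A_o  = [amp_o.ravel() for amp_e, ph_e, amp_o, ph_o in feats]
    th_e = [ph_e.ravel()  for amp_e, ph_e, amp_o, ph_o in feats]
    th_o = [ph_o.ravel()  for amp_e, ph_e, amp_o, ph_o in feats]
    return np.concatenate(A_e + A_o + th_e + th_o)
```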
Step 303, obtaining the texture contribution degree CM_ij(x',y') of each pixel value X_ij(x,y) in the face sub-image matrix X_ij;
Step 304, solving the feature vector C of the face image G according to the formula C = [C^(1,1)×CM_11, C^(1,2)×CM_12, ..., C^(p,q)×CM_pq];
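Step 304 can be sketched as follows (illustrative only; block_features is assumed to hold the p×q row vectors C^(i,j) in row-major order and contributions the matching scalar texture contribution degrees CM_ij obtained in step 303):

```python
import numpy as np

def image_feature(block_features, contributions):
    """C = [C^(1,1)*CM_11, C^(1,2)*CM_12, ..., C^(p,q)*CM_pq]."""
    return np.concatenate([c * cm for c, cm in zip(block_features, contributions)])
```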
step four, synchronously outputting the processing result: during the image feature extraction of step three, the processor synchronously displays the image signal processing process and the image feature extraction result through the display connected to the processor.
Further, m × n in step two is 128 × 128.
Further, the value of σ in step 3021 is 1.
Further, in step 3023, M_e^(F_s,θ_d)(x,y) and M_o^(F_s,θ_d)(x,y) can be expressed as complex numbers:
M_e^(F_s,θ_d)(x,y) = M_(e,F_s,θ_d)^R(x,y) + j·M_(e,F_s,θ_d)^I(x,y)
M_o^(F_s,θ_d)(x,y) = M_(o,F_s,θ_d)^R(x,y) + j·M_(o,F_s,θ_d)^I(x,y)
wherein M_(e,F_s,θ_d)^R(x,y) and M_(e,F_s,θ_d)^I(x,y) denote the real part and the imaginary part of M_e^(F_s,θ_d)(x,y), and M_(o,F_s,θ_d)^R(x,y) and M_(o,F_s,θ_d)^I(x,y) denote the real part and the imaginary part of M_o^(F_s,θ_d)(x,y); the filtering results are then expressed as:
M_e^(F_s,θ_d)(x,y) = |M_e^(F_s,θ_d)(x,y)|·e^(j·φ_(e,F_s,θ_d)(x,y))
M_o^(F_s,θ_d)(x,y) = |M_o^(F_s,θ_d)(x,y)|·e^(j·φ_(o,F_s,θ_d)(x,y))
wherein |M_e^(F_s,θ_d)(x,y)| and |M_o^(F_s,θ_d)(x,y)| are the amplitude values and φ_(e,F_s,θ_d)(x,y) and φ_(o,F_s,θ_d)(x,y) are the phase values, respectively expressed as:
|M_e^(F_s,θ_d)(x,y)| = √[(M_(e,F_s,θ_d)^R(x,y))² + (M_(e,F_s,θ_d)^I(x,y))²]
|M_o^(F_s,θ_d)(x,y)| = √[(M_(o,F_s,θ_d)^R(x,y))² + (M_(o,F_s,θ_d)^I(x,y))²]
φ_(e,F_s,θ_d)(x,y) = tan⁻¹[M_(e,F_s,θ_d)^I(x,y) / M_(e,F_s,θ_d)^R(x,y)], with φ_(e,F_s,θ_d) ∈ (0, 2π)
φ_(o,F_s,θ_d)(x,y) = tan⁻¹[M_(o,F_s,θ_d)^I(x,y) / M_(o,F_s,θ_d)^R(x,y)], with φ_(o,F_s,θ_d) ∈ (0, 2π).
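In code, the amplitude is the complex modulus and the phase the four-quadrant arctangent mapped into (0, 2π); a short sketch follows (using np.arctan2 in place of the two-argument tan⁻¹ is our choice):

```python
import numpy as np

def amplitude_phase(M):
    """Amplitude |M| = sqrt(Re^2 + Im^2) and phase in (0, 2*pi) of a
    complex-valued filtering result M (even- or odd-symmetric)."""
    amp = np.hypot(M.real, M.imag)
    phase = np.mod(np.arctan2(M.imag, M.real), 2 * np.pi)
    return amp, phase
```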
further, in step 3024, n_1 takes the value 4, and the 4 center frequencies f take the values 2 Hz, 4 Hz, 8 Hz and 16 Hz, respectively.
Further, in step 3024, n_2 takes the value 8, and the 8 spatial phase angles θ take the values 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively.
Further, in step 303, the specific process of calculating the texture contribution degree is as follows:
step 3031, defining the entropy function of the face image G as follows:
H(X(x',y')) = Σ_(a=1)^m p_a·log(1/p_a) = −Σ_(a=1)^m p_a·log(p_a)
where X(x',y') is the pixel value of the image matrix X, x' is the transverse coordinate and y' the longitudinal coordinate of X(x',y'), m is the total number of gray levels of the face image G, p_a is the probability of occurrence of the a-th gray level, and a is a natural number with 1 ≤ a ≤ m;
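The entropy of step 3031 is the Shannon entropy of the gray-level histogram; a sketch follows (assuming 8-bit gray levels, m = 256, and a natural logarithm, since the base of the logarithm is not fixed in the text):

```python
import numpy as np

def image_entropy(X, levels=256):
    """H(X) = -sum_a p_a * log(p_a) over the gray-level histogram of X."""
    hist, _ = np.histogram(X, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) terms contribute nothing
    return float(-(p * np.log(p)).sum())
```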
step 3032, defining the image entropy corresponding to the local information entropy map LH () as follows:
LH(i',j')=H(F(i',j')w)
where w is the size of the sliding variable window, H(F(i',j')_w) is the image entropy of the sub-image F(i',j')_w, i' is the transverse coordinate and j' the longitudinal coordinate of the window center pixel, and F(i',j')_w is the sliding sub-image within the variable window centered at (i',j'), with:
F(i',j')_w = {X(x',y') | x' ∈ [i'−w/2, i'+w/2−1], y' ∈ [j'−w/2, j'+w/2−1]};
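Step 3032 slides a w×w window over the image and records the entropy of each window; a direct, unoptimized sketch follows (it reuses the image_entropy helper above, and clipping the windows at the image border is our assumption):

```python
import numpy as np

def local_entropy_map(X, w, levels=256):
    """LH(i', j') = H(F(i', j')_w): entropy of the w x w window centered
    at (i', j'); windows are clipped at the image border."""
    m, n = X.shape
    LH = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            r0, r1 = max(0, i - w // 2), min(m, i + w // 2)
            c0, c1 = max(0, j - w // 2), min(n, j + w // 2)
            LH[i, j] = image_entropy(X[r0:r1, c0:c1], levels)
    return LH
```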
step 3033, defining the texture contribution degree of each pixel value X_ij(x',y') in the face sub-image matrix X_ij as:
CM_ij(x',y') = [1/((m/p)×(n/q))]·Σ_(x'=1)^(m/p) Σ_(y'=1)^(n/q) LH(X(x'+(i−1)×(m/p), y'+(j−1)×(n/q)))
wherein X(x'+(i−1)×m/p, y'+(j−1)×n/q) is the pixel value at position (x',y') of the ij-th block after the image matrix X has been partitioned into p×q blocks.
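Step 3033 amounts to averaging the local entropy map over each block to obtain one contribution weight per block; a sketch follows (illustrative only; LH is the map produced by local_entropy_map above, and m and n are again assumed divisible by p and q):

```python
import numpy as np

def texture_contributions(LH, p, q):
    """CM_ij = mean of the local entropy map LH over the ij-th block of a
    p x q partition; returns a p x q array of scalar weights."""
    m, n = LH.shape
    bh, bw = m // p, n // q
    CM = np.zeros((p, q))
    for i in range(p):
        for j in range(q):
            CM[i, j] = LH[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
    return CM
```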
Further, in step 3031, the value of m is 256.
Further, the processor is a computer.
Compared with the prior art, the invention has the following beneficial technical effects:
1. the invention uses both the amplitude part and the phase part of the Gabor filter transform. The phase part contains the direction information of the Gabor filtering result and therefore has a certain feature-discriminating significance, so the filtering results of the Gabor filter bank are fully utilized and richer feature information is extracted, which facilitates later recognition. In addition, to overcome the drawback of the current averaging treatment of image blocks, the different degrees of importance of each sub-image block to the whole image are considered, so that the face features can be described better. In terms of performance, the method is clearly superior to many common image feature extraction algorithms based on a single training sample, such as the Local Binary Pattern (LBP), the Local Gabor Binary Pattern (LGBP), the local Gabor pattern (LG), Local Principal Component Analysis (LPCA), the Local Ternary Pattern (LTP) and the local comprehensive Gabor histogram pattern (LCGH); as the number of blocks is increased reasonably, the recognition rate of the invention increases and even exceeds 90 percent;
2. the method extracts face features quickly, with high stability, good effect and strong practicability. It can be applied to face recognition, enabling its use in video surveillance, human-computer interaction, identity authentication and similar applications; it is suitable for the situations of complex illumination, different viewing angles and scarce training samples that are common in actual face recognition, and can well meet the requirements of practical applications;
3. the method has simple steps, a reasonable design, convenient implementation, low input cost and simple operation;
in conclusion, the invention has a reasonable design, is convenient to implement, has low input cost and simple operation, extracts face features quickly with good effect and strong practicability, and solves the problems of the prior art such as insufficient utilization of the Gabor features and insufficient consideration of the importance of the image sub-blocks; the experiments prove the effectiveness of the algorithm.
Furthermore, by adopting 4 different center frequencies and 8 different spatial phase angles, 32 Gabor filtering channels can be formed, which helps to improve the recognition rate of the algorithm.
Drawings
Fig. 1 is a schematic block diagram of a circuit of a face feature extraction device used in the present invention.
Fig. 2 is a flow chart of the face feature extraction method of the present invention.
FIG. 3 compares, on the YALE face library, the face recognition results obtained by the present invention and by various other image feature extraction methods.
FIG. 4 compares, on the ORL face library, the face recognition results obtained by the present invention and by various other image feature extraction methods.
In the drawings: 1 - image acquisition equipment; 2 - image signal transmission device; 3 - processor; 4 - display.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
As shown in fig. 1 and fig. 2, the image feature description method based on Gabor comprehensive features of the present invention includes the following steps:
step one, acquiring and uploading a face image signal: the image acquisition equipment 1 acquires a face image signal and uploads the face image signal acquired in real time to the processor 3 through the image signal transmission device 2;
step two, adjusting the resolution of the face image and representing it as a matrix: firstly, the processor 3 calls a resolution interpolation adjustment module to adjust the resolution of the received face image signal to m×n, obtaining the face image G; then the processor 3 represents the face image G as an m×n-dimensional image matrix X, where m×n can be set to 128×128;
step three, image feature extraction: the processor 3 analyzes and processes the image matrix X obtained in the step two to obtain a feature vector C of the face image G, and the analyzing and processing process is as follows:
step 301, performing multi-scale image blocking on the m × n dimensional image matrix X: dividing the image matrix X into p × q blocks, we get:
X = [ X_11  X_12  ⋯  X_1q
      X_21  X_22  ⋯  X_2q
      ⋮     ⋮         ⋮
      X_p1  X_p2  ⋯  X_pq ]
wherein p and q are natural numbers taking the value 2, 4, 8 or 16, and X_ij is an (m/p)×(n/q)-dimensional face sub-image matrix, with i = 1, 2, ..., p and j = 1, 2, ..., q;
step 302, filtering the image matrix X by using a two-dimensional Gabor filter bank, specifically including the following steps:
step 3021, constructing a two-dimensional Gabor filter bank in a time domain:
the multi-channel two-dimensional Gabor transform filter in the time domain is defined by an odd-symmetric Gabor filter φ_o(x,y,f,θ,σ) and an even-symmetric Gabor filter φ_e(x,y,f,θ,σ), where X(x,y) are the pixel values in the face sub-image matrix X; the simplified calculation model of the wavelet transform is defined as follows:
φ_e(x,y,f,θ,σ) = g(x,y,σ)·cos[2πf(x·cosθ + y·sinθ)]
φ_o(x,y,f,θ,σ) = g(x,y,σ)·sin[2πf(x·cosθ + y·sinθ)]
wherein φ_e(x,y,f,θ,σ) is the even-symmetric two-dimensional Gabor filter, φ_o(x,y,f,θ,σ) is the odd-symmetric two-dimensional Gabor filter, f is the center frequency, x is the abscissa variable in the time domain, y is the ordinate variable in the time domain, θ is the spatial phase angle, σ is a spatial constant, and g(x,y,σ) is the Gaussian function g(x,y,σ) = 1/(2πσ²)·exp[−(x² + y²)/(2σ²)];
In this embodiment, the value of σ in step 3021 is 1.
Step 3022, transforming the two-dimensional Gabor filter bank in the time domain into a two-dimensional Gabor filter bank in the frequency domain:
Φ_e(u,v,f,θ,σ) = [Φ_1(u,v,f,θ,σ) + Φ_2(u,v,f,θ,σ)] / 2
Φ_o(u,v,f,θ,σ) = [Φ_1(u,v,f,θ,σ) − Φ_2(u,v,f,θ,σ)] / (2j)
wherein Φ_1(u,v,f,θ,σ) = exp{−2π²σ²[(u − f·cosθ)² + (v − f·sinθ)²]}, Φ_2(u,v,f,θ,σ) = exp{−2π²σ²[(u + f·cosθ)² + (v + f·sinθ)²]}, Φ_e(u,v,f,θ,σ) is the Fourier transform of φ_e(x,y,f,θ,σ), Φ_o(u,v,f,θ,σ) is the Fourier transform of φ_o(x,y,f,θ,σ), j is the imaginary unit with j² = −1, and u and v are the spatial frequency variables in the frequency domain;
the filtering results of the even-symmetric and odd-symmetric filters can be obtained by the fast Fourier transform:
M_e(x,y) = IFFT{Z(u,v)·Φ_e(u,v,f,θ,σ)}
M_o(x,y) = IFFT{Z(u,v)·Φ_o(u,v,f,θ,σ)}
wherein Z(u,v) is the Fourier transform of Z(x,y), which represents the pixel values of image Z, and IFFT{·} denotes the inverse fast Fourier transform;
step 3023, first, each pixel value of the face sub-image matrix X_ij (i = 1, 2, ..., p; j = 1, 2, ..., q) is denoted X_ij(x,y); then X_ij(x,y) is filtered with the two-dimensional Gabor filter bank in the frequency domain, obtaining the filtering results:
M_e^(F_s,θ_d)(x,y) = X_ij(x,y) ⊗ φ_e^(F_s,θ_d)(x,y)   (i = 1, 2, ..., p; j = 1, 2, ..., q)
M_o^(F_s,θ_d)(x,y) = X_ij(x,y) ⊗ φ_o^(F_s,θ_d)(x,y)   (i = 1, 2, ..., p; j = 1, 2, ..., q)
wherein M_e^(F_s,θ_d)(x,y) is the result of filtering X_ij(x,y) (i = 1, 2, ..., p; j = 1, 2, ..., q) with the even-symmetric two-dimensional Gabor filter, M_o^(F_s,θ_d)(x,y) is the result of filtering X_ij(x,y) with the odd-symmetric two-dimensional Gabor filter, F_s is the s-th center frequency f, θ_d is the d-th spatial phase angle, and s and d denote the corresponding numbers of scales and phases.
M_e^(F_s,θ_d)(x,y) and M_o^(F_s,θ_d)(x,y) are complex numbers, which can be expressed as:
M_e^(F_s,θ_d)(x,y) = M_(e,F_s,θ_d)^R(x,y) + j·M_(e,F_s,θ_d)^I(x,y)
M_o^(F_s,θ_d)(x,y) = M_(o,F_s,θ_d)^R(x,y) + j·M_(o,F_s,θ_d)^I(x,y)
wherein M_(e,F_s,θ_d)^R(x,y) and M_(e,F_s,θ_d)^I(x,y) denote the real part and the imaginary part of M_e^(F_s,θ_d)(x,y), and M_(o,F_s,θ_d)^R(x,y) and M_(o,F_s,θ_d)^I(x,y) denote the real part and the imaginary part of M_o^(F_s,θ_d)(x,y). Thus, the filtering results can be expressed as:
M_e^(F_s,θ_d)(x,y) = |M_e^(F_s,θ_d)(x,y)|·e^(j·φ_(e,F_s,θ_d)(x,y))
M_o^(F_s,θ_d)(x,y) = |M_o^(F_s,θ_d)(x,y)|·e^(j·φ_(o,F_s,θ_d)(x,y))
The amplitude values |M_e^(F_s,θ_d)(x,y)|, |M_o^(F_s,θ_d)(x,y)| and the phase values φ_(e,F_s,θ_d)(x,y), φ_(o,F_s,θ_d)(x,y) are respectively expressed as:
|M_e^(F_s,θ_d)(x,y)| = √[(M_(e,F_s,θ_d)^R(x,y))² + (M_(e,F_s,θ_d)^I(x,y))²]
|M_o^(F_s,θ_d)(x,y)| = √[(M_(o,F_s,θ_d)^R(x,y))² + (M_(o,F_s,θ_d)^I(x,y))²]
φ_(e,F_s,θ_d)(x,y) = tan⁻¹[M_(e,F_s,θ_d)^I(x,y) / M_(e,F_s,θ_d)^R(x,y)], with φ_(e,F_s,θ_d) ∈ (0, 2π)
φ_(o,F_s,θ_d)(x,y) = tan⁻¹[M_(o,F_s,θ_d)^I(x,y) / M_(o,F_s,θ_d)^R(x,y)], with φ_(o,F_s,θ_d) ∈ (0, 2π)
step 3024, selecting n_1 different center frequencies F_s and, for each center frequency F_s, selecting n_2 different spatial phase angles θ_d, where s and d are natural numbers with s ≤ n_1 and d ≤ n_2, so as to form n_1×n_2 Gabor filtering channels; extracting the amplitude value and the phase value of the filtering result of each Gabor filtering channel as the features representing that channel. The amplitude of the even-symmetric filtering result M_e^(F_s,θ_d)(x,y) is |M_e^(F_s,θ_d)(x,y)| and its phase value is φ_(e,F_s,θ_d)(x,y); the amplitude of the odd-symmetric filtering result M_o^(F_s,θ_d)(x,y) is |M_o^(F_s,θ_d)(x,y)| and its phase value is φ_(o,F_s,θ_d)(x,y).
In this embodiment, n_1 in step 3024 takes the value 4, and the 4 different center frequencies F, namely F_1 to F_4, take the values 2 Hz, 4 Hz, 8 Hz and 16 Hz, respectively; n_2 in step 3024 takes the value 8, and the 8 different spatial phase angles θ_1 to θ_8 take the values 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively; therefore, 32 Gabor filtering channels can be formed in this embodiment;
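The 32-channel parameter grid of this embodiment can be written out directly, as in the following sketch (illustrative only; the unit of the center frequencies when the filters are applied to pixel data is left open by the text):

```python
import numpy as np

freqs = [2, 4, 8, 16]                                        # F_1 ... F_4 (Hz in the text)
thetas = np.deg2rad([0, 45, 90, 135, 180, 225, 270, 315])    # theta_1 ... theta_8
channels = [(f, th) for f in freqs for th in thetas]
assert len(channels) == 32                                   # n1 x n2 Gabor filtering channels
```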
step 3025, unfolding the amplitudes |M_e^(F_s,θ_d)| and |M_o^(F_s,θ_d)| of the filtering results of each Gabor filtering channel in the row direction to form a row vector A^(i,j) = [A_e^(i,j), A_o^(i,j)]; and unfolding the phase values φ_(e,F_s,θ_d) and φ_(o,F_s,θ_d) of the filtering results of each Gabor filtering channel in the row direction to form a row vector θ^(i,j) = [θ_e^(i,j), θ_o^(i,j)]; wherein
A_e^(i,j) = [|M_e^(F_1,θ_1)|, |M_e^(F_1,θ_2)|, ..., |M_e^(F_1,θ_8)|, |M_e^(F_2,θ_1)|, ..., |M_e^(F_4,θ_8)|]
A_o^(i,j) = [|M_o^(F_1,θ_1)|, |M_o^(F_1,θ_2)|, ..., |M_o^(F_1,θ_8)|, |M_o^(F_2,θ_1)|, ..., |M_o^(F_4,θ_8)|]
θ_e^(i,j) = [φ_(e,F_1,θ_1), φ_(e,F_1,θ_2), ..., φ_(e,F_1,θ_8), φ_(e,F_2,θ_1), ..., φ_(e,F_4,θ_8)]
θ_o^(i,j) = [φ_(o,F_1,θ_1), φ_(o,F_1,θ_2), ..., φ_(o,F_1,θ_8), φ_(o,F_2,θ_1), ..., φ_(o,F_4,θ_8)]
Step 3026, connecting the $n_1\times n_2\times 2$ row vectors of the $n_1\times n_2$ Gabor filtering channels in sequence to form the two-dimensional Gabor filter bank feature of $X_{ij}(x,y)$: $C^{(i,j)}=[A^{(i,j)},\theta^{(i,j)}]$ $(i=1,2,\ldots,p;\ j=1,2,\ldots,q)$;
In this embodiment, the number of row vectors is 64;
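As a concrete illustration of steps 3021 to 3026, the following NumPy sketch builds the even and odd frequency-domain Gabor filters of the form $\Phi_e=(\Phi_1+\Phi_2)/2$ and $\Phi_o=(\Phi_1-\Phi_2)/(2j)$, filters one sub-image by FFT, and assembles the per-pixel amplitude and phase feature $C^{(i,j)}$. It is only a sketch: the function names, the normalization of the frequency grid (cycles per pixel rather than Hz), and the channel ordering are assumptions made for illustration, not taken from the patent text.

import numpy as np

def gabor_freq_filters(rows, cols, f, theta, sigma=1.0):
    # Frequency-domain even/odd Gabor filters Phi_e, Phi_o on an illustrative
    # (u, v) grid given in cycles per pixel (an assumption of this sketch).
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    phi1 = np.exp(-2 * np.pi**2 * sigma**2 * ((u - f * np.cos(theta))**2 + (v - f * np.sin(theta))**2))
    phi2 = np.exp(-2 * np.pi**2 * sigma**2 * ((u + f * np.cos(theta))**2 + (v + f * np.sin(theta))**2))
    return (phi1 + phi2) / 2.0, (phi1 - phi2) / 2.0j   # Phi_e, Phi_o

def gabor_channel_features(X_ij, freqs, thetas, sigma=1.0):
    # Per-pixel Gabor feature of one sub-image: amplitudes first (even channels,
    # then odd channels), followed by the corresponding phases, stacked per pixel.
    Z = np.fft.fft2(X_ij)                 # Fourier transform of the sub-image
    A_e, A_o, T_e, T_o = [], [], [], []
    for f in freqs:                       # n1 center frequencies F_s
        for th in thetas:                 # n2 spatial phase angles theta_d
            Phi_e, Phi_o = gabor_freq_filters(*X_ij.shape, f, th, sigma)
            Me = np.fft.ifft2(Z * Phi_e)  # even-symmetric filtering result M_e
            Mo = np.fft.ifft2(Z * Phi_o)  # odd-symmetric filtering result M_o
            A_e.append(np.abs(Me))
            A_o.append(np.abs(Mo))
            T_e.append(np.angle(Me) % (2 * np.pi))   # phases mapped into [0, 2*pi)
            T_o.append(np.angle(Mo) % (2 * np.pi))
    # C^(i,j) per pixel: [A_e^(i,j), A_o^(i,j), theta_e^(i,j), theta_o^(i,j)]
    return np.stack(A_e + A_o + T_e + T_o, axis=-1)

With $n_1=4$ center frequencies and $n_2=8$ angles, as in the preferred embodiment, the stacked output covers all 4 x 8 Gabor channels with both even- and odd-symmetric responses; matching the claimed center frequencies of 2 Hz to 16 Hz would require rescaling the frequency grid to the sampling convention of the acquisition device.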
Step 303, obtaining the texture contribution degree of each pixel value $X_{ij}(x,y)$ $(i=1,2,\ldots,p;\ j=1,2,\ldots,q)$ in the face sub-image matrix $X_{ij}$ through the following process:
step 3031, defining the entropy function of the face image G as follows:
$H(X(x',y'))=\sum_{a=1}^{m} p_a \log\frac{1}{p_a}=-\sum_{a=1}^{m} p_a \log p_a$
where X(x', y') is a pixel value of the image matrix X, x' is the transverse coordinate and y' the longitudinal coordinate of X(x', y'), m is the total number of gray levels of the face image G, $p_a$ is the probability of occurrence of the a-th gray level, and a is a natural number taking values from 1 to m;
in this embodiment, m in step 3031 takes the value of 256.
Step 3032, defining the local information entropy map LH(·), each entry of which is an image entropy, as follows:

LH(i',j') = H(F(i',j')_w)

where w is the size of the sliding variable window, H(F(i',j')_w) is the image entropy of the sub-image F(i',j')_w, i' is the transverse coordinate and j' the longitudinal coordinate of the window center, and F(i',j')_w is the sliding sub-image inside the variable window centered on (i',j'), with:

$F(i',j')_w=\{\,X(x',y') \mid x'\in[i'-w/2,\ i'+w/2-1],\ y'\in[j'-w/2,\ j'+w/2-1]\,\}$;
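A minimal sketch of steps 3031 and 3032, reusing NumPy: it computes the entropy of a gray-level patch and slides a w x w window over the image to build LH. The window size w, the edge padding, and the logarithm base are not fixed by the text and are assumptions of this sketch.

def image_entropy(patch, m=256):
    # Entropy H(X) of a gray-level patch with m gray levels (8-bit assumed).
    hist, _ = np.histogram(patch, bins=m, range=(0, m))
    p = hist / hist.sum()
    p = p[p > 0]                              # empty gray levels contribute 0*log(0) := 0
    return -np.sum(p * np.log(p))

def local_entropy_map(X, w=8):
    # Local information entropy map LH(i', j') over a sliding w x w window.
    rows, cols = X.shape
    padded = np.pad(X, w // 2, mode='edge')   # simple border handling (an assumption)
    LH = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            LH[i, j] = image_entropy(padded[i:i + w, j:j + w])
    return LH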
Step 3033, defining the texture contribution degree of each pixel value $X_{ij}(x',y')$ $(i=1,2,\ldots,p;\ j=1,2,\ldots,q)$ in the face sub-image matrix $X_{ij}$ as:
$CM_{ij}(x',y')=\dfrac{1}{(m/p)\times(n/q)}\sum_{x'=1}^{m/p}\sum_{y'=1}^{n/q} LH\big(X(x'+(i-1)\times m/p,\ y'+(j-1)\times n/q)\big)$
where X(x'+(i-1)×m/p, y'+(j-1)×n/q) is the pixel value at position (x', y') of the ij-th block after the image matrix X is divided into p×q blocks;
Step 304, solving the feature vector C of the face image G according to the formula $C=[C^{(1,1)}\times CM_{11},\ C^{(1,2)}\times CM_{12},\ \ldots,\ C^{(p,q)}\times CM_{pq}]$;
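Steps 3033 and 304 might then be combined as in the following sketch, which reuses local_entropy_map and gabor_channel_features from the snippets above; assigning one contribution value per block and flattening the weighted block features into a single vector are assumptions about how the per-pixel quantities are serialized.

def texture_contribution(X, p, q, w=8):
    # CM_ij: mean local entropy over each of the p x q blocks of the m x n image X.
    m, n = X.shape
    LH = local_entropy_map(X, w)
    bh, bw = m // p, n // q
    return np.array([[LH[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
                      for j in range(q)] for i in range(p)])

def gabor_entropy_feature(X, p, q, freqs, thetas, sigma=1.0, w=8):
    # Feature vector C = [C^(1,1)*CM_11, ..., C^(p,q)*CM_pq] of the face image X.
    m, n = X.shape
    CM = texture_contribution(X, p, q, w)
    bh, bw = m // p, n // q
    parts = []
    for i in range(p):
        for j in range(q):
            X_ij = X[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            C_ij = gabor_channel_features(X_ij, freqs, thetas, sigma)
            parts.append(CM[i, j] * C_ij.ravel())   # weight the block feature by its contribution
    return np.concatenate(parts)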
Step four, synchronously outputting the processing result: during the image feature extraction of step three, the processor 3 synchronously displays the image signal processing procedure and the image feature extraction result of step three through the display 4 connected to it.
In this embodiment, the processor 3 is a computer.
For a face image, the image entropy of the whole image expresses the amount of information in the whole face, but by itself it is of little use for describing facial features. If the face image is partitioned into blocks, the information entropy of each sub-image expresses both the amount of information and the richness of the detail texture in that sub-image, and texture richness plays an important role in describing the overall facial features. The contribution degree of each sub-image's texture to the overall face information can therefore be constructed from the local image information entropy of the sub-image, which describes the facial features well.
To verify the effectiveness and universality of the present face feature extraction method, it is compared with common single-training-sample image feature extraction algorithms, namely the Local Binary Pattern (LBP), Local Gabor Binary Pattern (LGBP), local Gabor pattern (LG), Local Principal Component Analysis (LPCA), Local Ternary Pattern (LTP) and local comprehensive Gabor histogram pattern (LCGH), as follows:
(1) In the MATLAB simulation environment, the Yale face library is used as the experimental object. The Yale face library contains 165 face images of 15 persons, with variations such as eyes open or closed, mouth open or closed, and very rich facial expressions. One face image per person is selected as the training sample and the rest are used as test samples. Each of the face feature extraction algorithms to be compared is applied to extract facial features, the features extracted by each algorithm are classified and recognized with the prior-art RBF neural network classification method, and the classification and recognition comparison results are shown in Figure 3.
(2) In the MATLAB simulation environment, the ORL face library is used as the experimental object. The ORL face library contains face images of 40 different persons with variations in illumination, expression, hairstyle, glasses and the like, 10 images per person and 400 images in total. One face image per person is selected as the training sample and the rest are used as test samples. Each of the face feature extraction algorithms to be compared is applied to extract facial features, the features extracted by each algorithm are classified and recognized with the prior-art RBF neural network classification method, and the classification and recognition comparison results are shown in Figure 4.
As can be seen from Figures 3 and 4, the recognition rate of the present face feature extraction method is significantly higher than that of the other common single-training-sample image feature extraction algorithms, so the method can be applied to the many practical scenarios that lack training samples.
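The single-training-sample protocol of experiments (1) and (2) could be sketched as follows. The labels array, the feature parameters and the cosine nearest-neighbour classifier are stand-ins chosen for illustration; in particular the nearest-neighbour step replaces the RBF neural network classifier used in the experiments, whose configuration is not given in the text.

def evaluate_single_sample(images, labels, p=4, q=4,
                           freqs=(0.05, 0.1, 0.2, 0.4),
                           thetas=tuple(np.deg2rad(d) for d in range(0, 360, 45))):
    # images: list of 2-D gray-level arrays; labels: NumPy integer array of person IDs.
    # One image per person is used for training; the rest are test samples.
    feats = np.array([gabor_entropy_feature(img, p, q, freqs, thetas) for img in images])
    train_idx = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]
    test_idx = [k for k in range(len(labels)) if k not in train_idx]
    train, train_lab = feats[train_idx], labels[train_idx]
    correct = 0
    for k in test_idx:
        # Cosine similarity to each training feature; a stand-in for the RBF network.
        sims = train @ feats[k] / (np.linalg.norm(train, axis=1) * np.linalg.norm(feats[k]) + 1e-12)
        correct += int(labels[k] == train_lab[int(np.argmax(sims))])
    return correct / len(test_idx)   # recognition rate on the test samples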
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and all simple modifications, changes and equivalent structural changes made to the above embodiment according to the technical spirit of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (9)

1. An image feature description method based on Gabor comprehensive features is characterized in that: the method comprises the following steps:
step one, acquiring and uploading a face image signal: the image acquisition equipment (1) acquires a face image signal and uploads the face image signal acquired in real time to the processor (3) through the image signal transmission device (2);
step two, adjusting the resolution of the face image and representing the matrix: firstly, a processor (3) calls a resolution difference value adjusting module to adjust the resolution of a received face image signal to be m multiplied by n to obtain a face image G; then, the processor (3) represents the face image G as an m × n dimensional image matrix X;
step three, image feature extraction: the processor (3) analyzes and processes the image matrix X obtained in the step two to obtain a feature vector C of the face image G, and the analyzing and processing process is as follows:
step 301, performing multi-scale image blocking on the image matrix X: dividing the image matrix X into p × q blocks, we get:
$X=\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1q} \\ X_{21} & X_{22} & \cdots & X_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ X_{p1} & X_{p2} & \cdots & X_{pq} \end{bmatrix}$
wherein p and q are natural numbers taking the values 2, 4, 8 or 16, and $X_{ij}$ is a face sub-image matrix of dimension $(m/p)\times(n/q)$, where $i=1,2,\ldots,p$; $j=1,2,\ldots,q$;
step 302, filtering the image matrix X by using a two-dimensional Gabor filter bank, specifically including the following steps:
step 3021, constructing a two-dimensional Gabor filter bank in the time domain: the multi-channel two-dimensional Gabor transform is built from an odd-symmetric Gabor filter $\phi_o(x,y,f,\theta,\sigma)$ and an even-symmetric Gabor filter $\phi_e(x,y,f,\theta,\sigma)$ acting on the pixel values X(x,y) of the face sub-image matrix X; the simplified calculation model of the wavelet transform is defined as follows:

$\phi_e(x,y,f,\theta,\sigma)=g(x,y,\sigma)\cos\!\big(2\pi f(x\cos\theta+y\sin\theta)\big)$

$\phi_o(x,y,f,\theta,\sigma)=g(x,y,\sigma)\sin\!\big(2\pi f(x\cos\theta+y\sin\theta)\big)$

wherein $\phi_e(x,y,f,\theta,\sigma)$ is the even-symmetric two-dimensional Gabor filter, $\phi_o(x,y,f,\theta,\sigma)$ is the odd-symmetric two-dimensional Gabor filter, f is the central frequency, x is the abscissa variable in the time domain, y is the ordinate variable in the time domain, $\theta$ is the spatial phase angle, $\sigma$ is the spatial constant, and $g(x,y,\sigma)$ is the Gaussian function $g(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}\exp\!\left[-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right]$;
Step 3022, transforming the two-dimensional Gabor filter bank in the time domain into a two-dimensional Gabor filter bank in the frequency domain:
$\Phi_e(u,v,f,\theta,\sigma)=\dfrac{\Phi_1(u,v,f,\theta,\sigma)+\Phi_2(u,v,f,\theta,\sigma)}{2},\qquad \Phi_o(u,v,f,\theta,\sigma)=\dfrac{\Phi_1(u,v,f,\theta,\sigma)-\Phi_2(u,v,f,\theta,\sigma)}{2j}$
wherein $\Phi_1(u,v,f,\theta,\sigma)=\exp\{-2\pi^{2}\sigma^{2}[(u-f\cos\theta)^{2}+(v-f\sin\theta)^{2}]\}$, $\Phi_2(u,v,f,\theta,\sigma)=\exp\{-2\pi^{2}\sigma^{2}[(u+f\cos\theta)^{2}+(v+f\sin\theta)^{2}]\}$, $\Phi_e(u,v,f,\theta,\sigma)$ is the Fourier transform of $\phi_e(x,y,f,\theta,\sigma)$, $\Phi_o(u,v,f,\theta,\sigma)$ is the Fourier transform of $\phi_o(x,y,f,\theta,\sigma)$, j is the imaginary unit, and u and v are spatial frequency variables in the frequency domain;
the even- and odd-symmetric filtering results can be obtained by the fast Fourier transform, namely as the inverse Fourier transform of the product of Z(u,v) with $\Phi_e$ and with $\Phi_o$ respectively, wherein Z(u,v) is the Fourier transform of Z(x,y) and Z(x,y) denotes the pixel values of the image Z;
step 3023, first, representing each pixel value in the face sub-image matrix $X_{ij}$ as $X_{ij}(x,y)$; then filtering $X_{ij}(x,y)$ with the two-dimensional Gabor filter bank in the frequency domain to obtain the filtering results:
$M_e^{F_s,\theta_d}(x,y)=X_{ij}(x,y)\otimes\phi_e^{F_s,\theta_d}(x,y)$

$M_o^{F_s,\theta_d}(x,y)=X_{ij}(x,y)\otimes\phi_o^{F_s,\theta_d}(x,y)$
wherein $M_e^{F_s,\theta_d}(x,y)$ is the result of filtering $X_{ij}(x,y)$ with the even-symmetric two-dimensional Gabor filter, $M_o^{F_s,\theta_d}(x,y)$ is the result of filtering $X_{ij}(x,y)$ with the odd-symmetric two-dimensional Gabor filter, $F_s$ is the s-th center frequency, and $\theta_d$ is the d-th spatial phase angle;
step 3024, selecting, according to step 3023, $n_1$ different center frequencies $F_s$ and, for each center frequency $F_s$, $n_2$ different spatial phase angles $\theta_d$, where $s\le n_1$ and $d\le n_2$, so as to form $n_1\times n_2$ Gabor filtering channels, and extracting the amplitude and phase values of the filtering results of each Gabor filtering channel as the characteristics representing that channel; wherein the result $M_e^{F_s,\theta_d}(x,y)$ of filtering $X_{ij}(x,y)$ with the even-symmetric two-dimensional Gabor filter has amplitude $|M_e^{F_s,\theta_d}(x,y)|$ and phase value $\phi_{e,F_s,\theta_d}(x,y)$, and the result $M_o^{F_s,\theta_d}(x,y)$ of filtering $X_{ij}(x,y)$ with the odd-symmetric two-dimensional Gabor filter has amplitude $|M_o^{F_s,\theta_d}(x,y)|$ and phase value $\phi_{o,F_s,\theta_d}(x,y)$;
Step 3025, arranging the amplitudes of the filtering results $M_e^{F_s,\theta_d}$ and $M_o^{F_s,\theta_d}$ of each Gabor filtering channel row by row to form a row vector $A^{(i,j)}=[A_e^{(i,j)},A_o^{(i,j)}]$, and arranging the phase values of the filtering results of each Gabor filtering channel row by row to form a row vector $\theta^{(i,j)}=[\theta_e^{(i,j)},\theta_o^{(i,j)}]$; wherein

$A_e^{(i,j)}=[\,|M_e^{F_1,\theta_1}|,\ |M_e^{F_1,\theta_2}|,\ \ldots,\ |M_e^{F_1,\theta_8}|,\ |M_e^{F_2,\theta_1}|,\ \ldots,\ |M_e^{F_4,\theta_8}|\,]$

$A_o^{(i,j)}=[\,|M_o^{F_1,\theta_1}|,\ |M_o^{F_1,\theta_2}|,\ \ldots,\ |M_o^{F_1,\theta_8}|,\ |M_o^{F_2,\theta_1}|,\ \ldots,\ |M_o^{F_4,\theta_8}|\,]$

$\theta_e^{(i,j)}=[\,\phi_{e,F_1,\theta_1},\ \phi_{e,F_1,\theta_2},\ \ldots,\ \phi_{e,F_1,\theta_8},\ \phi_{e,F_2,\theta_1},\ \ldots,\ \phi_{e,F_4,\theta_8}\,]$

$\theta_o^{(i,j)}=[\,\phi_{o,F_1,\theta_1},\ \phi_{o,F_1,\theta_2},\ \ldots,\ \phi_{o,F_1,\theta_8},\ \phi_{o,F_2,\theta_1},\ \ldots,\ \phi_{o,F_4,\theta_8}\,]$
Step 3026, connecting the $n_1\times n_2\times 2$ row vectors of the $n_1\times n_2$ Gabor filtering channels in sequence to form the two-dimensional Gabor filter bank feature of $X_{ij}(x,y)$: $C^{(i,j)}=[A^{(i,j)},\theta^{(i,j)}]$;
Step 303, obtaining the texture contribution degree $CM_{ij}(x',y')$ of each pixel value $X_{ij}(x,y)$ in the face sub-image matrix $X_{ij}$;
Step 304, solving the feature vector C of the face image G according to the formula $C=[C^{(1,1)}\times CM_{11},\ C^{(1,2)}\times CM_{12},\ \ldots,\ C^{(p,q)}\times CM_{pq}]$;
step four, synchronously outputting the processing result: during the image feature extraction of step three, the processor (3) synchronously displays the image signal processing procedure and the image feature extraction result of step three through the display (4) connected to it.
2. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step two, mxn is 128 × 128.
3. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step 3021, σ is set to 1.
4. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step 3023, $M_e^{F_s,\theta_d}(x,y)$ and $M_o^{F_s,\theta_d}(x,y)$ can be expressed as complex numbers:
$M_e^{F_s,\theta_d}(x,y)=M_{e,F_s,\theta_d}^{R}(x,y)+jM_{e,F_s,\theta_d}^{I}(x,y)$

$M_o^{F_s,\theta_d}(x,y)=M_{o,F_s,\theta_d}^{R}(x,y)+jM_{o,F_s,\theta_d}^{I}(x,y)$
wherein $M_{e,F_s,\theta_d}^{R}(x,y)$ and $M_{e,F_s,\theta_d}^{I}(x,y)$ represent the real and imaginary parts of $M_e^{F_s,\theta_d}(x,y)$, and $M_{o,F_s,\theta_d}^{R}(x,y)$ and $M_{o,F_s,\theta_d}^{I}(x,y)$ represent the real and imaginary parts of $M_o^{F_s,\theta_d}(x,y)$; the filtering results are expressed as:
$M_e^{F_s,\theta_d}(x,y)=|M_e^{F_s,\theta_d}(x,y)|\,e^{j\phi_{e,F_s,\theta_d}(x,y)}$

$M_o^{F_s,\theta_d}(x,y)=|M_o^{F_s,\theta_d}(x,y)|\,e^{j\phi_{o,F_s,\theta_d}(x,y)}$
wherein $|M_e^{F_s,\theta_d}(x,y)|$ and $|M_o^{F_s,\theta_d}(x,y)|$ are the amplitude values and $\phi_{e,F_s,\theta_d}(x,y)$ and $\phi_{o,F_s,\theta_d}(x,y)$ are the phase values, respectively expressed as:
$|M_e^{F_s,\theta_d}(x,y)|=\sqrt{\big(M_{e,F_s,\theta_d}^{R}(x,y)\big)^{2}+\big(M_{e,F_s,\theta_d}^{I}(x,y)\big)^{2}}$

$|M_o^{F_s,\theta_d}(x,y)|=\sqrt{\big(M_{o,F_s,\theta_d}^{R}(x,y)\big)^{2}+\big(M_{o,F_s,\theta_d}^{I}(x,y)\big)^{2}}$

$\phi_{e,F_s,\theta_d}(x,y)=\tan^{-1}\!\left(\dfrac{M_{e,F_s,\theta_d}^{I}(x,y)}{M_{e,F_s,\theta_d}^{R}(x,y)}\right),\quad \phi_{e,F_s,\theta_d}\in(0,2\pi)$

$\phi_{o,F_s,\theta_d}(x,y)=\tan^{-1}\!\left(\dfrac{M_{o,F_s,\theta_d}^{I}(x,y)}{M_{o,F_s,\theta_d}^{R}(x,y)}\right),\quad \phi_{o,F_s,\theta_d}\in(0,2\pi).$
5. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step 3024, $n_1$ takes the value 4, and the 4 center frequencies f take the values 2 Hz, 4 Hz, 8 Hz and 16 Hz respectively.
6. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step 3024, $n_2$ takes the value 8, and the 8 spatial phase angles $\theta$ take the values 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315° respectively.
7. The image feature description method based on Gabor comprehensive features of claim 1, wherein: in step 303, the specific process of calculating the texture contribution degree is as follows:
step 3031, defining the entropy function of the face image G as follows:
$H(X(x',y'))=\sum_{a=1}^{m} p_a \log\frac{1}{p_a}=-\sum_{a=1}^{m} p_a \log p_a$
where X(x', y') is a pixel value of the image matrix X, x' is the transverse coordinate and y' the longitudinal coordinate of X(x', y'), m is the total number of gray levels of the face image G, $p_a$ is the probability of occurrence of the a-th gray level, and a is a natural number taking values from 1 to m;
step 3032, defining the local information entropy map LH(·), each entry of which is an image entropy, as follows:

LH(i',j') = H(F(i',j')_w)

where w is the size of the sliding variable window, H(F(i',j')_w) is the image entropy of the sub-image F(i',j')_w, i' is the transverse coordinate and j' the longitudinal coordinate of the window center, and F(i',j')_w is the sliding sub-image inside the variable window centered on (i',j'), with:

$F(i',j')_w=\{\,X(x',y') \mid x'\in[i'-w/2,\ i'+w/2-1],\ y'\in[j'-w/2,\ j'+w/2-1]\,\}$;
step 3033, defining the texture contribution degree of each pixel value $X_{ij}(x',y')$ in the face sub-image matrix $X_{ij}$ as:
$CM_{ij}(x',y')=\dfrac{1}{(m/p)\times(n/q)}\sum_{x'=1}^{m/p}\sum_{y'=1}^{n/q} LH\big(X(x'+(i-1)\times m/p,\ y'+(j-1)\times n/q)\big)$
where X(x'+(i-1)×m/p, y'+(j-1)×n/q) is the pixel value at position (x', y') of the ij-th block after the image matrix X is partitioned into p×q blocks.
8. The image feature description method based on Gabor comprehensive features of claim 7, wherein: in step 3031, the value of m is 256.
9. The image feature description method based on Gabor comprehensive features of claim 1, wherein: the processor is a computer.
CN201510231155.0A 2015-05-07 2015-05-07 A kind of new image representation method based on Gabor comprehensive characteristics Expired - Fee Related CN104834909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510231155.0A CN104834909B (en) 2015-05-07 2015-05-07 A kind of new image representation method based on Gabor comprehensive characteristics

Publications (2)

Publication Number Publication Date
CN104834909A true CN104834909A (en) 2015-08-12
CN104834909B CN104834909B (en) 2018-09-21

Family

ID=53812787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510231155.0A Expired - Fee Related CN104834909B (en) 2015-05-07 2015-05-07 A kind of new image representation method based on Gabor comprehensive characteristics

Country Status (1)

Country Link
CN (1) CN104834909B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090304272A1 (en) * 2008-06-06 2009-12-10 Google Inc. Annotating images
CN103927527A (en) * 2014-04-30 2014-07-16 长安大学 Human face feature extraction method based on single training sample

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Cong Geng, et al.: "Face recognition based on the multi-scale local image structures", Pattern Recognition *
Quan-Xue Gao, et al.: "Face recognition using FLDA with single training image per person", Applied Mathematics and Computation *
Timo Ahonen, et al.: "Face Description with Local Binary Patterns: Application to Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Wang Fang, et al.: "ISAR image target recognition based on fusion of Gabor amplitude and phase features", Journal of Electronics & Information Technology *
Gao Tao, et al.: "Face description and recognition combining local multi-channel Gabor filters and ICA", Application Research of Computers *
Gao Tao: "Face recognition based on multi-level Gabor transform sequence features", Computer Engineering *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228163A (en) * 2016-07-25 2016-12-14 长安大学 The local poor ternary sequential image feature that a kind of feature based selects describes method
CN106228163B (en) * 2016-07-25 2019-06-25 长安大学 A kind of poor ternary sequential image feature in part based on feature selecting describes method
CN107256407A (en) * 2017-04-21 2017-10-17 深圳大学 A kind of Classification of hyperspectral remote sensing image method and device
CN107256407B (en) * 2017-04-21 2020-11-10 深圳大学 Hyperspectral remote sensing image classification method and device
CN108230413A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 Image Description Methods and device, electronic equipment, computer storage media, program
CN108230413B (en) * 2018-01-23 2021-07-06 北京市商汤科技开发有限公司 Image description method and device, electronic equipment and computer storage medium
CN110377909A (en) * 2019-07-19 2019-10-25 中国联合网络通信集团有限公司 A kind of classification method and device of client feedback information
CN110377909B (en) * 2019-07-19 2022-09-23 中国联合网络通信集团有限公司 Classification method and device for client feedback information
CN111680549A (en) * 2020-04-28 2020-09-18 肯维捷斯(武汉)科技有限公司 Paper pattern recognition method
CN111680549B (en) * 2020-04-28 2023-12-05 肯维捷斯(武汉)科技有限公司 Paper grain identification method
CN116309578A (en) * 2023-05-19 2023-06-23 山东硅科新材料有限公司 Plastic wear resistance image auxiliary detection method using silane coupling agent
CN116309578B (en) * 2023-05-19 2023-08-04 山东硅科新材料有限公司 Plastic wear resistance image auxiliary detection method using silane coupling agent

Also Published As

Publication number Publication date
CN104834909B (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN104834909B (en) A kind of new image representation method based on Gabor comprehensive characteristics
CN103646244B (en) Extraction, authentication method and the device of face characteristic
CN103927527A (en) Human face feature extraction method based on single training sample
Jia et al. Inconsistency-aware wavelet dual-branch network for face forgery detection
Wu et al. Curvelet feature extraction for face recognition and facial expression recognition
CN103714326B (en) One-sample face identification method
Hariprasath et al. Multimodal biometric recognition using iris feature extraction and palmprint features
Mukhedkar et al. Fast face recognition based on Wavelet Transform on PCA
CN111797702A (en) Face counterfeit video detection method based on spatial local binary pattern and optical flow gradient
CN115829909A (en) Forgery detection method based on feature enhancement and spectrum analysis
CN106940904A (en) Attendance checking system based on recognition of face and speech recognition
Al-Ani et al. Face recognition approach based on wavelet-curvelet technique
CN106228163B (en) A kind of poor ternary sequential image feature in part based on feature selecting describes method
Xu et al. An efficient method for human face recognition using nonsubsampled contourlet transform and support vector machine
Al-Rawi et al. Feature Extraction of Human Facail Expressions Using Haar Wavelet and Neural network
Aguilar-Torres et al. Eigenface-gabor algorithm for feature extraction in face recognition
Waghmare et al. DCT pyramid based face recognition system
Qu et al. Facial Expression Recognition Based on Shearlet Transform
Rania et al. Sparse representation approach for variation-robust face recognition using discrete wavelet transform
CN107203967A (en) A kind of face super-resolution reconstruction method based on context image block
Jassim Wavelet–Based Face Recognition Schemes
Sanjay Gaikwad et al. A Novel Fingerprint-Based Age Group Classification Approach Using DWT and DFT Analysis
Xuebin et al. NSCTWavelet: An efficient method for multimodal biometric recognition based on pixel level fusion
Zhai et al. A novel Iris recognition method based on the contourlet transform and Biomimetic Pattern Recognition Algorithm
Emerich et al. Biometrics Recognition based on Image Local Features Ordinal Encoding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180921