CN101196983A - Image recognition method - Google Patents
- Publication number: CN101196983A
- Application number: CNA2006101620045A
- Authority
- CN
- China
- Prior art keywords
- matrix
- image
- gray level
- dimensional
- classification function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The present invention relates to an image processing method comprising the steps of: converting an image into a gray-level image; applying a two-dimensional Fourier transform to the gray-level image to obtain a two-dimensional complex matrix; taking the modulus of the two-dimensional complex matrix to obtain a spectrum matrix; and applying a K-L (Karhunen-Loeve) transform to the spectrum matrix to obtain a projection vector.
Description
Technical field
The invention belongs to the field of image processing and in particular relates to an image recognition method.
Background technology
In the prior art, face images are mainly recognized by means of the Karhunen-Loeve (K-L) transform. The transform yields a set of orthogonal basis vectors that reflect the differences among the samples, and a face is recognized by mapping the face image onto this basis. Specifically: first, N sample images are converted into gray-level images of a unified size; next, the K-L transform is applied to the converted gray-level images to obtain projection vectors; then, the margin of the projection vectors is maximized to obtain classification functions; finally, the image to be recognized is converted into a gray-level image of the same size as the converted sample images, the K-L transform is applied to project it into the K-L space and obtain a projection vector, and this vector is input into the classification functions, the person corresponding to the classification function with the maximum output value being taken as the result.
However, this method is either not very accurate for face images with large variations in expression and pose, or it is sensitive to offsets of the face position within the image and computationally expensive.
Summary of the invention
Because face images vary greatly with expression and pose, and because camera-lens distortion deforms them, the accuracy of face recognition is limited. Feature extraction from the image is therefore the key point. The present invention accordingly improves the accuracy of face recognition through feature extraction in image processing, without a significant increase in computation.
To this end, the invention provides an image processing method comprising the steps of: converting an image into a gray-level image; applying a two-dimensional Fourier transform to the gray-level image to obtain a two-dimensional complex matrix; taking the modulus of the two-dimensional complex matrix to obtain a spectrum matrix; and applying a Karhunen-Loeve transform to the spectrum matrix to obtain a projection vector.
Optionally, the method further comprises the step of maximizing the margin of the projection vectors to obtain classification functions.
Optionally, the number of images is N, where N = a × e, a is the number of people and e is the number of images per person.
Optionally, the converted gray-level images have a unified size.
Optionally, the two-dimensional Fourier transform is applied to the gray matrix of the gray-level image.
Optionally, the number of classification functions is a, where a is the number of people.
Further, the invention provides an image recognition method comprising the steps of: converting sample images into first gray-level images; applying a two-dimensional Fourier transform to the first gray-level images to obtain first two-dimensional complex matrices; taking the modulus of the first complex matrices to obtain first spectrum matrices; applying a Karhunen-Loeve transform to the first spectrum matrices to obtain first projection vectors; maximizing the margin of the first projection vectors to obtain classification functions; converting the image to be recognized into a second gray-level image; applying a two-dimensional Fourier transform to the second gray-level image to obtain a second two-dimensional complex matrix; taking the modulus of the second complex matrix to obtain a second spectrum matrix; applying a Karhunen-Loeve transform to the second spectrum matrix to obtain a second projection vector; and computing the classification function values of the second projection vector.
Optionally, the method further comprises the step of comparing the classification function values and selecting the maximum among them.
Optionally, the first gray-level images and the second gray-level image have the same size.
Optionally, the number of sample images is N, where N = a × e, a is the number of people and e is the number of sample images per person.
Optionally, the two-dimensional Fourier transform is applied to the gray matrix of the first gray-level images.
Optionally, the two-dimensional Fourier transform is applied to the gray matrix of the second gray-level image.
Optionally, the number of classification functions is a, where a is the number of people.
Optionally, the number of classification function values is a, where a is the number of people.
Optionally, in the step of computing the classification function values of the second projection vector, the values are computed according to the above classification functions.
Thus, by combining the respective advantages of the Karhunen-Loeve transform and the Fourier transform, the invention improves recognition accuracy for face images with large variations in expression and pose.
It should be appreciated that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the claimed invention.
Description of drawings
The accompanying drawings provide a further understanding of the invention; they are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the description serve to explain the principles of the invention. In the drawings:
Fig. 1 is a flowchart of image processing according to an embodiment of the present invention;
Fig. 2 is a flowchart of image recognition according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the correspondence relationships according to an embodiment of the present invention.
Embodiment
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the image recognition method provided by the present invention, known sample images serve as the reference for determining whether the image to be recognized is present among them, that is, for determining which person in the sample images the image to be recognized belongs to. The existing sample images must first be processed; the processing steps are shown in Fig. 1:
S101: First, the sample images are converted. Specifically, N sample images are converted into first gray-level images of a unified size. Here, a people serve as the reference, and e sample images are selected for each person, giving N = a × e sample images in total, where a is the number of people and e is the number of sample images per person; the correspondence between sample images and people is shown in Fig. 3. That is, the sample images include multiple face images corresponding respectively to multiple people. It should be understood, however, that the invention is not limited to this; a person skilled in the art could also select a different number of sample images for each person.
S102: A two-dimensional Fourier transform is applied to the gray matrix of each converted first gray-level image, yielding a first two-dimensional complex matrix.
S103: The modulus of each element of the first two-dimensional complex matrix is taken, yielding the first spectrum matrix corresponding to each of the N sample images. Each element of the first spectrum matrix is the modulus of the corresponding complex element, so the spectrum matrix records only magnitudes and carries no phase.
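The front end of the pipeline, steps S101 to S103, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation; the luminance weights and the function name are assumptions:

```python
import numpy as np

def spectrum_features(image_rgb):
    """Steps S101-S103: grayscale -> 2-D Fourier transform -> magnitude spectrum.

    image_rgb: H x W x 3 array. Returns an H x W real-valued spectrum matrix
    (magnitudes only; the phase is discarded, as the patent describes).
    """
    # S101: convert to a gray-level image (luminance weighting is a common choice,
    # assumed here; the patent does not specify the conversion).
    gray = image_rgb @ np.array([0.299, 0.587, 0.114])
    # S102: two-dimensional Fourier transform -> two-dimensional complex matrix.
    complex_matrix = np.fft.fft2(gray)
    # S103: modulus of each complex element -> spectrum matrix.
    return np.abs(complex_matrix)
```

Because a circular shift of the image only multiplies its Fourier transform by a unit-modulus phase factor, the magnitude spectrum is unchanged by such shifts, which is one way to see why this front end reduces sensitivity to offsets of the face position that the background section complains about.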
S104: A K-L (Karhunen-Loeve) transform is applied to each first spectrum matrix to obtain the first projection vectors. The transform proceeds as follows:
First, each first spectrum matrix is converted into a one-dimensional vector X_i, i = 1, 2, ..., N, where N is the number of sample images. Each vector has dimension d, where d = m × n is the pixel count of the first spectrum matrix, m is its number of rows, and n is its number of columns.
Next, the mean vector E(X) of all vectors X_i is computed:
E(X) = (1/N) Σ_{i=1}^{N} X_i
Subtracting E(X) from each X_i gives a new centered one-dimensional vector X_i, i = 1, 2, ..., N, where N is the number of sample images:
X_i = X_i − E(X)
Then these vectors X_i are arranged as columns into a matrix X of dimension d × N, where N is the number of sample images.
Next, the covariance matrix M of X is computed:
M = (1/N) Σ_{i=1}^{N} X_i X_i^T
where N is the number of sample images. The eigenvectors u_1, u_2, ..., u_d of the covariance matrix M form a set of orthogonal basis vectors, the K-L basis. The eigenvalues corresponding to u_1, u_2, ..., u_d are λ_1, λ_2, ..., λ_d. Arranging u_1, u_2, ..., u_d as the columns of a matrix U, the projection of the one-dimensional vector X_i in the feature space is Z_i = U^T X_i, i = 1, 2, ..., N, where N is the number of sample images.
The eigenvectors u_1, u_2, ..., u_d are sorted by eigenvalue from large to small, and the first p eigenvectors u_1, u_2, ..., u_p (p < d) are selected as the orthogonal basis of a reduced K-L space, arranged by columns into a matrix U_p. The first projection vector x_i of X_i in this K-L space is then computed as:
x_i = U_p^T X_i, i = 1, 2, ..., N
where N is the number of sample images.
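In modern terms, S104 is principal component analysis of the flattened spectrum vectors. A minimal NumPy sketch using the patent's symbols (X_i, E(X), M, U_p, p) follows; the function names are illustrative, not from the patent:

```python
import numpy as np

def kl_basis(spectra, p):
    """S104: K-L (Karhunen-Loeve) transform of N spectrum matrices.

    spectra: list of N equally-sized 2-D spectrum matrices.
    Returns (Up, mean): Up is d x p (top-p eigenvectors as columns),
    mean is the d-vector E(X).
    """
    # Flatten each m x n spectrum matrix to a one-dimensional vector X_i (d = m*n).
    X = np.stack([s.ravel() for s in spectra], axis=1)   # d x N
    mean = X.mean(axis=1)                                # E(X)
    Xc = X - mean[:, None]                               # X_i - E(X)
    M = (Xc @ Xc.T) / Xc.shape[1]                        # covariance matrix, d x d
    eigvals, eigvecs = np.linalg.eigh(M)                 # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                    # sort from large to small
    return eigvecs[:, order[:p]], mean                   # keep the first p basis vectors

def project(spectrum, Up, mean):
    """First projection vector x_i = Up^T (X_i - E(X))."""
    return Up.T @ (spectrum.ravel() - mean)
```

For realistic image sizes d = m × n is large, so a practical implementation would eigendecompose the N × N Gram matrix or use an SVD instead of the d × d covariance; the sketch keeps the patent's formulation for clarity.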
S105: The margin of the first projection vectors x_i is maximized to obtain the classification functions f(x). Specifically:
Taking the first projection vectors x_i as input vectors, a coefficient vector w and a parameter b are computed from the following formulas (1), (2) and (3):
y_i((w · x_i) + b) ≥ 1, i = 1, 2, ..., N (1)
(w · x) + b = 0 (2)
and 2/||w|| is made maximum (3)
where y_i = 1 when the first projection vector x_i belongs to a sample image of the given person, and y_i = −1 when x_i belongs to a sample image of any person other than that person. Since the N sample images cover a people, a coefficient vectors w and parameters b are obtained, and from them a classification functions f(x) are computed, thereby classifying the sample images. Each classification function f(x) is computed as:
f(x) = (w · x) + b
In this way, each classification function f(x) corresponds to one of the a people. As shown in Fig. 3, every person has a different classification function f(x), and this function distinguishes that person from all others.
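The margin maximization of S105 with labels y_i in {+1, −1} per person amounts to a one-vs-rest linear max-margin classifier. Below is a sketch using scikit-learn's LinearSVC as the margin-maximizing solver; the choice of solver is an assumption, since the patent only states the optimization problem:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_classifiers(projections, person_ids):
    """S105: one max-margin classification function f(x) = (w . x) + b per person.

    projections: N x p array of first projection vectors x_i.
    person_ids:  length-N array; person_ids[i] identifies the person in sample i.
    Returns a dict {person: (w, b)}.
    """
    classifiers = {}
    for person in np.unique(person_ids):
        # y_i = +1 for this person's samples, -1 for everyone else's.
        y = np.where(person_ids == person, 1, -1)
        # LinearSVC solves the max-margin problem (maximize 2/||w||).
        svc = LinearSVC(C=1.0).fit(projections, y)
        classifiers[person] = (svc.coef_.ravel(), float(svc.intercept_[0]))
    return classifiers
```

Training a separate (w, b) per person matches the patent's count of a classification functions for a people, one per identity.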
Next, the input image to be recognized is processed to obtain its correspondence to one of the a people, that is, to identify which of the a people the image belongs to. As shown in Fig. 2, the steps are as follows:
S201: The input image to be recognized is converted into a second gray-level image of the same size as the converted first gray-level images of the sample images.
S202: A two-dimensional Fourier transform is applied to the gray matrix of the second gray-level image, yielding a second two-dimensional complex matrix.
S203: The modulus of each element of the second two-dimensional complex matrix is taken, yielding the second spectrum matrix of the image to be recognized. Each element of the second spectrum matrix is the modulus of the corresponding complex element, so it records only magnitudes and carries no phase.
S204: The second spectrum matrix is converted into a one-dimensional vector V, and the second projection vector W of V in the K-L space is computed as W = U_p^T V.
S205: The classification function values f(W) of the second projection vector W are computed. Specifically, W is substituted as the argument into each of the a classification functions f(x):
f(W) = (w · W) + b
This yields a classification function values f(W) for the image to be recognized. As shown in Fig. 3, each classification function value f(W) corresponds to one of the a people; for a given image to be recognized, every person has a different value f(W), which distinguishes that person from all others.
S206: The a classification function values f(W) are compared, and the maximum among them, f(W)_max, is selected. This f(W)_max corresponds to the classification function f(x) of a particular one of the a people, and that person is obtained as the recognition result.
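The final scoring of S205 and S206 reduces to an argmax over the per-person classification function values. A self-contained sketch follows; the {person: (w, b)} mapping is an assumed representation of the a classification functions:

```python
import numpy as np

def recognize(second_projection, classifiers):
    """S205-S206: evaluate every classification function f(W) = (w . W) + b on the
    second projection vector W and return the person with the maximum value.

    classifiers: dict {person: (w, b)}, one max-margin function per person.
    Returns (best_person, scores).
    """
    scores = {person: float(np.dot(w, second_projection) + b)
              for person, (w, b) in classifiers.items()}
    return max(scores, key=scores.get), scores
```

With well-separated training people, the person whose classifier produces f(W)_max is taken as the identity of the image to be recognized.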
Those skilled in the art may make various modifications to the above without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is therefore not limited by the above description but is determined by the scope of the appended claims.
Claims (15)
1. An image processing method, comprising the steps of:
converting an image into a gray-level image;
applying a two-dimensional Fourier transform to the gray-level image to obtain a two-dimensional complex matrix;
taking the modulus of the two-dimensional complex matrix to obtain a spectrum matrix; and
applying a Karhunen-Loeve transform to the spectrum matrix to obtain a projection vector.
2. The method of claim 1, further comprising the step of:
maximizing the margin of the projection vectors to obtain classification functions.
3. The method of claim 1 or 2, wherein the number of images is N, N = a × e, where a is the number of people and e is the number of images per person.
4. The method of claim 1 or 2, wherein the converted gray-level images have a unified size.
5. The method of claim 1 or 2, wherein the two-dimensional Fourier transform is applied to the gray matrix of the gray-level image.
6. The method of claim 2, wherein the number of classification functions is a, where a is the number of people.
7. An image recognition method, comprising the steps of:
converting sample images into first gray-level images;
applying a two-dimensional Fourier transform to the first gray-level images to obtain first two-dimensional complex matrices;
taking the modulus of the first two-dimensional complex matrices to obtain first spectrum matrices;
applying a Karhunen-Loeve transform to the first spectrum matrices to obtain first projection vectors;
maximizing the margin of the first projection vectors to obtain classification functions;
converting an image to be recognized into a second gray-level image;
applying a two-dimensional Fourier transform to the second gray-level image to obtain a second two-dimensional complex matrix;
taking the modulus of the second two-dimensional complex matrix to obtain a second spectrum matrix;
applying a Karhunen-Loeve transform to the second spectrum matrix to obtain a second projection vector; and
computing the classification function values of the second projection vector.
8. The method of claim 7, further comprising the step of:
comparing the classification function values and selecting the maximum among them.
9. The method of claim 7 or 8, wherein the first gray-level images and the second gray-level image have the same size.
10. The method of claim 7 or 8, wherein the number of sample images is N, N = a × e, where a is the number of people and e is the number of sample images per person.
11. The method of claim 7 or 8, wherein the two-dimensional Fourier transform is applied to the gray matrix of the first gray-level images.
12. The method of claim 7 or 8, wherein the two-dimensional Fourier transform is applied to the gray matrix of the second gray-level image.
13. The method of claim 7 or 8, wherein the number of classification functions is a, where a is the number of people.
14. The method of claim 7 or 8, wherein the number of classification function values is a, where a is the number of people.
15. The method of claim 7 or 8, wherein in the step of computing the classification function values of the second projection vector, the classification function values are computed according to the above classification functions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2006101620045A CN101196983A (en) | 2006-12-08 | 2006-12-08 | Image recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101196983A true CN101196983A (en) | 2008-06-11 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866488A (en) * | 2010-06-21 | 2010-10-20 | 哈尔滨工程大学 | Target detection method based on image frequency domain direction template |
CN101630401B (en) * | 2009-07-31 | 2011-12-21 | 北京师范大学 | ATGP-VCA projection vector-obtaining method |
CN104700833A (en) * | 2014-12-29 | 2015-06-10 | 芜湖乐锐思信息咨询有限公司 | Big data speech classification method |
CN106204478A (en) * | 2016-07-06 | 2016-12-07 | 电子科技大学 | The magneto optic images based on background noise feature space strengthens algorithm |
CN111275111A (en) * | 2020-01-20 | 2020-06-12 | 陕西科技大学 | Classification method for pelts of same animal and same color |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C57 | Notification of unclear or unknown address | |
| DD01 | Delivery of document by public notice | Addressee: Han Zhitian. Document name: Written notice of preliminary examination of application for patent for invention |
| C06 | Publication | |
| PB01 | Publication | |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| WD01 | Invention patent application deemed withdrawn after publication | Open date: 20080611 |