CN104091174A - Portrait style classification method based on support vector machine - Google Patents
Publication number: CN104091174A
Application number: CN201410330945.XA
Authority: CN (China)
Legal status: Granted (status as listed by Google; no legal analysis performed)
Abstract
The invention discloses a portrait style classification method based on a support vector machine. The method mainly addresses two problems: famous-painting authentication requires specialized physical equipment, and the suspect photos retrieved by comparison in criminal investigation and case solving have poor accuracy. According to the technical scheme, (1) the database sample set is divided into a training set and a test set; (2) the training set is divided into five component training sets; (3) five groups of style sets and corresponding class labels are generated from each component training set; (4) support vector parameters are generated between the five groups of style sets and the corresponding class labels; (5) the test set is divided into five component test sets; (6) a component vector is generated from each component test set; (7) five component class labels are generated from the five component vectors; (8) the style class label of the tested portrait is generated from the five component class labels. With this method, the style of a portrait can be discriminated without specialized physical equipment, and suspect photos can be retrieved by comparison with high accuracy; the method can be applied to famous-painting authentication and to criminal investigation and case solving.
Description
Technical field
The invention belongs to the technical field of image processing, and further relates to a portrait style classification method in the fields of pattern recognition and computer vision. It can be used for famous-painting authentication and for criminal investigation and case solving.
Background art
The classification of portrait styles currently plays an important role in famous-painting authentication and in criminal investigation and case solving. In the field of painting authentication, an enormous number of famous paintings handed down through history are held publicly and privately, at home and abroad. Their situations are complex: some are genuine, some are forgeries, and some are anonymous, so they require strict scientific verification. In recent years, art historians have increasingly used scientific tools to authenticate artworks; commonly used techniques include isotope analysis and infrared reflectography. In infrared reflectography, the painting to be authenticated is first irradiated with infrared light, and a thermal infrared imager is then used to assist the examination. The infrared light penetrates the pigment layer and is absorbed by the material of the initial underdrawing; because the underdrawing reflects little heat, its black lines appear on the infrared image. Every artist has a personal style when drawing an underdrawing, so the style of the underdrawing can reveal whether a painting is genuine. However, these authentication techniques are all physical analysis methods that require specialized equipment, which makes painting authentication inconvenient.
In criminal investigation and case solving, an artist usually paints a portrait of the suspect from an eyewitness's description; the portrait is then matched against the photos in a sketch-photo database to retrieve the suspect's photo. Because the portraits in a sketch-photo database differ in style, the database is heterogeneous, the matching is inaccurate, and the retrieved suspect photos have poor accuracy.
Summary of the invention
The object of the present invention is to propose a portrait style classification method based on a support vector machine, to help experts and amateurs identify the author of a portrait and to help criminal investigators retrieve suspect photos with high accuracy.
To achieve the above object, the technical scheme of the present invention comprises the following steps:
(1) Partition the database sample set: divide the portrait set into a training set U = {U_p, p = 1, 2, ..., 150} and a test set of 50 portraits, where U_p denotes the p-th portrait in the training set;
(2) Divide the training set: according to the different components, divide the portrait training set U into five component training sets U_i, namely the face, left-eye, right-eye, nose and mouth component training sets, in which the i-th component of the p-th training portrait is collected;
(3) On each component training set U_i, i = 1, 2, ..., 5, generate five groups of style sets and the corresponding class labels;
(4) Generate support vector parameters between the five groups of style sets and the corresponding class labels: take the five groups of style sets of step (3) as the input of the support vector machine and the corresponding class labels as its output; then generate one group of component support vector parameters between the input and output, the five components corresponding to five groups of component support vector parameters;
(5) Divide the test set: according to the different components, divide the portrait test set into five component test sets, namely the face, left-eye, right-eye, nose and mouth component test sets, in which the i-th component of the q-th test portrait is collected;
(6) On each component test set, generate a test component vector set;
(7) Take the five test component vectors of each test portrait from step (6) and the corresponding five groups of component support vector parameters from step (4) as the input of the support vector machine, and obtain five component class labels;
(8) From the five component class labels of step (7), obtain the style class label of the test portrait by majority voting (the minority is subordinate to the majority).
The present invention has the following advantages:
First, since it considers five components (the face, left eye, right eye, nose and mouth) and extracts four features relevant to portrait style (the gray-level histogram, gray moments, SURF and LBP features), the present invention makes portrait style classification more accurate.
Second, since it classifies portrait styles with a support vector machine, the present invention is well suited to small-sample problems such as portrait style classification.
Third, the present invention is the first to classify portrait styles directly with a mathematical model, without relying on specialized physical equipment to distinguish portrait styles; it is not only convenient to operate, but the retrieved suspect photos are also more accurate.
Brief description of the drawings
Fig. 1 is the flow chart of the portrait style classification method based on a support vector machine of the present invention.
Embodiment
The core idea of the present invention is to classify portrait styles with a support vector machine, so as to help experts and amateurs identify the author of a portrait and to help criminal investigators retrieve suspect photos with high accuracy. An example is given below.
With reference to Fig. 1, the concrete implementation steps of the present invention are as follows:
Step 1: partition the database sample set.
Divide one group of 200 portraits by 5 artists from the VIPSL database into a training set U = {U_p, p = 1, 2, ..., 150} and a test set of 50 portraits, where U_p denotes the p-th portrait in the training set and the test portraits are indexed q = 1, 2, ..., 50.
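As a minimal illustration of this partition, assuming the 200 portraits are simply indexed in order (the file names below are placeholders, not the actual VIPSL layout):

```python
# Sketch of step 1: split 200 portraits into 150 training and 50 test items.
def split_dataset(portraits, n_train=150):
    """Return the first n_train items as the training set, the rest as test."""
    return portraits[:n_train], portraits[n_train:]

# Placeholder file names standing in for the VIPSL portrait images.
portraits = [f"portrait_{i:03d}.png" for i in range(200)]
train_set, test_set = split_dataset(portraits)
```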
Step 2: divide the training set.
According to the different components, divide the portrait training set U into five component training sets U_i, namely the face, left-eye, right-eye, nose and mouth component training sets, in which the i-th component of the p-th training portrait is collected:
(2a) Take the original image of each portrait in the training set U, resized to 176x239, as the face component, and take the face components of all portraits in U as the face component training set U_1;
(2b) Take a 30x22 square image centered on the left pupil of each portrait in U as the left-eye component, and take the left-eye components of all portraits in U as the left-eye component training set U_2;
(2c) Take a 30x22 square image centered on the right pupil of each portrait in U as the right-eye component, and take the right-eye components of all portraits in U as the right-eye component training set U_3;
(2d) Take a 30x22 square image centered on the midpoint between the center of the line joining the two pupils and the nose of each portrait in U as the nose component, and take the nose components of all portraits in U as the nose component training set U_4;
(2e) Take a 30x22 square image centered on the mouth center of each portrait in U as the mouth component, and take the mouth components of all portraits in U as the mouth component training set U_5.
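The five component crops of step 2 can be sketched as follows. This is an illustrative Python/NumPy version; the landmark coordinates (pupil centers, nose and mouth points) are assumed to come from a separate detector, since the patent does not specify how they are located.

```python
import numpy as np

def crop_patch(img, center, h=22, w=30):
    # Cut a 30-wide by 22-high patch centered at (row, col), as in step 2.
    r, c = center
    top, left = r - h // 2, c - w // 2
    return img[top:top + h, left:left + w]

def split_components(img, landmarks):
    # landmarks: (row, col) centers for eyes, nose and mouth (assumed given).
    return {
        "face": img,  # the whole resized portrait is the face component
        "left_eye": crop_patch(img, landmarks["left_eye"]),
        "right_eye": crop_patch(img, landmarks["right_eye"]),
        "nose": crop_patch(img, landmarks["nose"]),
        "mouth": crop_patch(img, landmarks["mouth"]),
    }

img = np.zeros((239, 176), dtype=np.uint8)  # 176x239 portrait (width x height)
marks = {"left_eye": (100, 60), "right_eye": (100, 116),  # hypothetical points
         "nose": (140, 88), "mouth": (180, 88)}
parts = split_components(img, marks)
```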
Step 3: on each component training set U_i, i = 1, 2, ..., 5, generate five groups of style sets and the corresponding class labels.
(3a) Divide each component in each component training set U_i into training square blocks:
(3a1) divide the face components of the face component training set U_1 into 32x32 training square blocks with a 16x16 overlap between adjacent blocks;
(3a2) divide the left-eye components of the left-eye component training set U_2 into 22x22 training square blocks with an 11x11 overlap between adjacent blocks;
(3a3) divide the right-eye components of the right-eye component training set U_3 into 22x22 training square blocks with an 11x11 overlap between adjacent blocks;
(3a4) divide the nose components of the nose component training set U_4 into 22x22 training square blocks with an 11x11 overlap between adjacent blocks;
(3a5) divide the mouth components of the mouth component training set U_5 into 22x22 training square blocks with an 11x11 overlap between adjacent blocks;
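The overlapping block division of step (3a) amounts to a sliding window whose step is half the block size. A sketch, assuming each component is a NumPy array:

```python
import numpy as np

def split_into_blocks(patch, block, step):
    # Divide a component into block x block squares; step = block // 2 gives
    # the half-block overlap of steps (3a1)-(3a5). Partial blocks at the
    # border are skipped.
    h, w = patch.shape
    blocks = []
    for top in range(0, h - block + 1, step):
        for left in range(0, w - block + 1, step):
            blocks.append(patch[top:top + block, left:left + block])
    return blocks

face = np.zeros((239, 176), dtype=np.uint8)          # face component
face_blocks = split_into_blocks(face, block=32, step=16)
eye = np.zeros((22, 30), dtype=np.uint8)             # 30x22 eye component
eye_blocks = split_into_blocks(eye, block=22, step=11)
# For the 30x22 eye patch, only one full 22x22 block fits.
```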
(3b) On each training square block, generate a training feature vector V_F:
(3b1) extract the gray-level histogram feature on each training square block: count the pixels at each of the 256 gray levels 0-255 of the block to obtain a 256-dimensional training gray-level histogram feature vector V_1, where the value of each dimension is the number of pixels at that gray level;
(3b2) extract the gray moment feature on each training square block: compute the first moment μ1 = (Σ_a t_a)/N, the second moment μ2 = [Σ_a (t_a - μ1)²/N]^(1/2) and the third moment μ3 = [Σ_a (t_a - μ1)³/N]^(1/3) of the gray levels to generate a 3-dimensional training gray moment feature vector V_2, where t_a denotes the gray value of the a-th pixel of the component block and N is the number of pixels in the block;
(3b3) extract the SURF feature on each training square block: construct a 20x20 square window centered on the block center and divide the window into 4x4 subregions of 25 pixels each; for each pixel of a subregion, compute the horizontal and vertical Haar wavelet responses, denoted d_x and d_y respectively; sum the responses d_x, d_y and their absolute values |d_x|, |d_y| over the 25 pixels of the subregion, so that each subregion yields the four values Σd_x, Σd_y, Σ|d_x| and Σ|d_y|; each square window thus generates a 4x(4x4) = 64-dimensional training SURF feature vector V_3;
(3b4) extract the LBP feature on each training square block: compare the pixel value at the block center with the pixel values of 8 points on a ring of radius 5 around it, one by one; if the center pixel value is larger than the value of a point on the ring, assign that point 1, otherwise 0, thereby generating an 8-bit binary number from the 8 points on the ring; convert this 8-bit number to one of 256 decimal values to generate a 256-dimensional training LBP feature vector V_4;
(3b5) arrange the training gray-level histogram feature vector V_1, the training gray moment feature vector V_2, the training SURF feature vector V_3 and the training LBP feature vector V_4 obtained in steps (3b1)-(3b4) in order in one column vector to obtain the training feature vector V_F;
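A sketch of the per-block features of step (3b) in Python/NumPy. The gray-moment formulas follow the standard moment definitions (the patent's own formulas are not legible in this text), the LBP codes are histogrammed over the block to obtain a 256-dimensional vector, and the 64-dimensional SURF vector V3 is omitted for brevity:

```python
import numpy as np

def gray_histogram(block):
    # V1: 256-bin gray-level histogram (step 3b1).
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    return hist.astype(float)

def gray_moments(block):
    # V2: first, second and third gray moments (step 3b2), standard definitions.
    t = block.astype(float).ravel()
    m1 = t.mean()
    m2 = np.sqrt(((t - m1) ** 2).mean())
    m3 = np.cbrt(((t - m1) ** 3).mean())
    return np.array([m1, m2, m3])

def lbp_histogram(block, radius=5):
    # V4: histogram of 8-point LBP codes on a ring of the given radius
    # (step 3b4); a bit is 1 when the center pixel exceeds the ring pixel.
    # Border pixels without a full ring are skipped.
    h, w = block.shape
    angles = 2 * np.pi * np.arange(8) / 8
    offs = [(int(round(radius * np.sin(a))), int(round(radius * np.cos(a))))
            for a in angles]
    hist = np.zeros(256)
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            code = 0
            for bit, (dr, dc) in enumerate(offs):
                if block[r, c] > block[r + dr, c + dc]:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def block_features(block):
    # V_F: V1, V2 and V4 concatenated into one column vector (step 3b5);
    # the SURF vector V3 is left out of this sketch.
    return np.concatenate([gray_histogram(block), gray_moments(block),
                           lbp_histogram(block)])

blk = np.full((22, 22), 128, dtype=np.uint8)  # a uniform 22x22 test block
feat = block_features(blk)
```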
(3c) For each component training set U_i, arrange the training feature vectors V_F of all square blocks of each component of every training portrait in order in one column vector to obtain a training component vector V_C, and then form the training component vector set of U_i from the training component vectors V_C of the 150 training portraits;
(3d) Since the 150 portraits underlying each training component vector set comprise the paintings of five artists, divide each training component vector set into five groups of style sets, each group of style sets corresponding to one artist, and set a corresponding class label for each group of style sets.
Step 4: generate support vector parameters between the five groups of style sets and the corresponding class labels.
Take the five groups of style sets of step 3 as the input of the support vector machine and the corresponding class labels as its output; then generate one group of component support vector parameters between the input and output, the five components corresponding to five groups of component support vector parameters.
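Step 4 trains one SVM per component. A minimal sketch with synthetic style sets: the patent uses the LIBSVM MATLAB toolkit, and here scikit-learn's SVC (which wraps the same libsvm library) stands in for it. The kernel choice and the toy data sizes are assumptions, not taken from the patent.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_styles, per_style, dim = 5, 30, 8   # toy sizes; the patent uses 150
                                      # portraits and much longer vectors

# Five synthetic "style sets": one cluster of component vectors per artist.
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(per_style, dim))
               for k in range(n_styles)])
y = np.repeat(np.arange(n_styles), per_style)  # class label = artist index

clf = SVC(kernel="rbf")  # one such model would be fit per component
clf.fit(X, y)            # the fitted model holds the support vector parameters
train_acc = clf.score(X, y)
```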
Step 5: divide the test set.
According to the different components, divide the portrait test set into five component test sets, namely the face, left-eye, right-eye, nose and mouth component test sets, in which the i-th component of the q-th test portrait is collected:
(5a) Take the original image of each portrait in the test set, resized to 176x239, as the face component, and take the face components of all portraits in the test set as the face component test set;
(5b) Take a 30x22 square image centered on the left pupil of each portrait in the test set as the left-eye component, and take the left-eye components of all portraits in the test set as the left-eye component test set;
(5c) Take a 30x22 square image centered on the right pupil of each portrait in the test set as the right-eye component, and take the right-eye components of all portraits in the test set as the right-eye component test set;
(5d) Take a 30x22 square image centered on the midpoint between the center of the line joining the two pupils and the nose of each portrait in the test set as the nose component, and take the nose components of all portraits in the test set as the nose component test set;
(5e) Take a 30x22 square image centered on the mouth center of each portrait in the test set as the mouth component, and take the mouth components of all portraits in the test set as the mouth component test set.
Step 6: on each component test set, generate a test component vector set.
(6a) Divide each component in each component test set into test square blocks:
(6a1) divide the face components of the face component test set into 32x32 test square blocks with a 16x16 overlap between adjacent blocks;
(6a2) divide the left-eye components of the left-eye component test set into 22x22 test square blocks with an 11x11 overlap between adjacent blocks;
(6a3) divide the right-eye components of the right-eye component test set into 22x22 test square blocks with an 11x11 overlap between adjacent blocks;
(6a4) divide the nose components of the nose component test set into 22x22 test square blocks with an 11x11 overlap between adjacent blocks;
(6a5) divide the mouth components of the mouth component test set into 22x22 test square blocks with an 11x11 overlap between adjacent blocks;
(6b) Generate a test feature vector on each test square block:
(6b1) extract the gray-level histogram feature on each test square block: count the pixels at each of the 256 gray levels 0-255 of the block to obtain a 256-dimensional test gray-level histogram feature vector, where the value of each dimension is the number of pixels at that gray level;
(6b2) extract the gray moment feature on each test square block: compute the first moment μ1 = (Σ_r t_r)/N, the second moment μ2 = [Σ_r (t_r - μ1)²/N]^(1/2) and the third moment μ3 = [Σ_r (t_r - μ1)³/N]^(1/3) of the gray levels to generate a 3-dimensional test gray moment feature vector, where t_r denotes the gray value of the r-th pixel of the component block and N is the number of pixels in the block;
(6b3) extract the SURF feature on each test square block: construct a 20x20 square window centered on the block center and divide the window into 4x4 subregions of 25 pixels each; for each pixel of a subregion, compute the horizontal and vertical Haar wavelet responses; sum the responses and their absolute values over the 25 pixels of the subregion, so that each subregion yields four values; each square window thus generates a 4x(4x4) = 64-dimensional test SURF feature vector;
(6b4) extract the LBP feature on each test square block: compare the pixel value at the block center with the pixel values of 8 points on a ring of radius 5 around it, one by one; if the center pixel value is larger than the value of a point on the ring, assign that point 1, otherwise 0, thereby generating an 8-bit binary number from the 8 points on the ring; convert this 8-bit number to one of 256 decimal values to generate a 256-dimensional test LBP feature vector;
(6b5) arrange the test gray-level histogram feature vector, the test gray moment feature vector, the test SURF feature vector and the test LBP feature vector obtained in steps (6b1)-(6b4) in order in one column vector to obtain the test feature vector;
(6c) For each component test set, arrange the test feature vectors of all test square blocks of each component in order in one column vector to obtain a test component vector, and then form the test component vector set from the test component vectors of the 50 test portraits.
Step 7: generate the component class labels.
Take the five test component vectors of each test portrait from step 6 and the corresponding five groups of component support vector parameters from step 4 as the input of the support vector machine, and obtain five component class labels.
Step 8: generate the style class label.
From the five component class labels of step 7, obtain the style class label of the test portrait by majority voting (the minority is subordinate to the majority).
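The majority vote of step 8 can be sketched as:

```python
from collections import Counter

def vote(component_labels):
    # Step 8: the style class label is the label that occurs most often
    # among the five component class labels ("the minority is subordinate
    # to the majority"); ties go to the label counted first.
    return Counter(component_labels).most_common(1)[0][0]

style_label = vote([2, 2, 4, 2, 1])  # three of five components say artist 2
```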
The effect of the present invention can be further illustrated by the following simulation experiment.
1. Simulation conditions
The simulation was run on an Intel(R) Core(TM) i3-2100 3.10 GHz CPU with 4 GB of memory under the Windows 7 operating system, using MATLAB 2010b developed by MathWorks (USA). The database used is the VIPSL database. The SVM is implemented with the LIBSVM MATLAB toolkit by Chih-Jen Lin of National Taiwan University (http://www.csie.ntu.edu.tw/~cjlin/libsvm/).
2. Simulation content
Two groups of data were taken from the VIPSL database, each group containing 40 portraits by each of the five artists, 200 portraits in total, of which 150 serve as training portraits and 50 as test portraits. Portrait style classification was performed on the VIPSL database with the method of the invention; the classification results of the two groups are shown in Table 1.
Table 1. Portrait style classification results of the support-vector-machine-based method on the VIPSL database
As can be seen from Table 1, the portrait style classification method based on a support vector machine of the present invention achieves high classification accuracy on both groups of data.
Claims (9)
1. A portrait style classification method based on a support vector machine, comprising the steps:
(1) partition the database sample set: divide the portrait set into a training set U = {U_p, p = 1, 2, ..., 150} and a test set of 50 portraits, where U_p denotes the p-th portrait in the training set;
(2) divide the training set: according to the different components, divide the portrait training set U into five component training sets U_i, namely the face, left-eye, right-eye, nose and mouth component training sets, in which the i-th component of the p-th training portrait is collected;
(3) on each component training set U_i, i = 1, 2, ..., 5, generate five groups of style sets and the corresponding class labels;
(4) generate support vector parameters between the five groups of style sets and the corresponding class labels: take the five groups of style sets of step (3) as the input of the support vector machine and the corresponding class labels as its output; then generate one group of component support vector parameters between the input and output, the five components corresponding to five groups of component support vector parameters;
(5) divide the test set: according to the different components, divide the portrait test set into five component test sets, namely the face, left-eye, right-eye, nose and mouth component test sets, in which the i-th component of the q-th test portrait is collected;
(6) on each component test set, generate a test component vector set;
(7) take the five test component vectors of each test portrait from step (6) and the corresponding five groups of component support vector parameters from step (4) as the input of the support vector machine, and obtain five component class labels;
(8) from the five component class labels of step (7), obtain the style class label of the test portrait by majority voting (the minority is subordinate to the majority).
2. the method for the portrait genre classification based on Support Vector Machine according to claim 1, is characterized in that: step (2) described by pending portrait training set U by the difference of parts, be divided into five training component collection
Carry out as follows:
(2a) using the former figure of every portrait of training set U as face's part, size is made as 176x239, using face's part of all portraits of training set U as the part training set U of face
1;
(2b) centered by the pupil of left eye of every portrait of training set U, the square chart picture that to get size be 30x22 is as left eye parts, using the left eye parts of all portraits of training set U as left eye parts training set U
2;
(2c) centered by the pupil of right eye of every portrait of training set U, the square chart picture that to get size be 30x22 is as right eye parts, using the right eye parts of all portraits of training set U as right eye parts training set U
3;
(2d) by the center of two interpupillary lines of every portrait of training set U centered by the intermediate point of nose, the square chart picture that to get size be 30x22 is as nose piece, using the nose piece of all portraits of training set U as nose piece training set U
4;
(2e) centered by the face center of every portrait of training set U, the square chart picture that to get size be 30x22 is as nozzle component, using the nozzle component of all portraits of training set U as nozzle component training set U
5.
3. the method for the portrait genre classification based on Support Vector Machine according to claim 1, is characterized in that: step (3) described at each parts training set U
ifive groups of style collection of upper generation and corresponding class mark, carry out as follows:
(3a) by each parts training set U
iin each parts be divided into training square block:
(3b) on each training square block, generate training feature vector V
f:
(3c) each parts training set U
ithe training feature vector V of all training square block L of each parts of every portrait
fbe arranged in order in a column vector, obtain training component vector V
c, and then with each parts training set U
ithe training component vector V of 150 training portraits
ccomposition training component vector set
(3d) according to each training component vector set
150 affiliated portraits comprise five artists' paint, by training component vector set
be divided into five groups of style collection, every group of style set pair answered an artist, and sets corresponding class mark for every group of style collection.
4. the method for the portrait genre classification based on Support Vector Machine according to claim 3, is characterized in that: step (3a) described by each parts training set U
i, i=1,2 ..., the each parts in 5 are divided into training square block, carry out as follows:
(3a1) by the part training set U of face
1face's part be divided into size for the training square block of 32x32, the square block that the lap between piece is 16x16;
(3a2) by left eye parts training set U
2left eye parts be divided into size for the training square block of 22x22, the square block that the lap between piece is 11x11;
(3a3) by right eye parts training set U
3right eye parts be divided into size for the training square block of 22x22, the square block that the lap between piece is 11x11;
(3a4) by nose piece training set U
4nose piece be divided into size for the training square block of 22x22, the square block that the lap between piece is 11x11;
(3a5) by nozzle component training set U
5nozzle component be divided into size for the training square block of 22x22, the lap size between piece is the square block of 11x11.
5. the method for the portrait genre classification based on Support Vector Machine according to claim 3, is characterized in that: what step (3b) was described generates training feature vector V on each training square block
f, carry out as follows:
(3b1) on each training square block, extract grey level histogram feature, the pixel of the 0-255 of an each training square block gray level is counted, obtain a dimension and be 256 training grey level histogram proper vector V
1, the numerical value of every dimension is the pixel quantity of this gray level;
(3b2) on each training square block, extract gray Moment Feature, each training square block is calculated the first moment of gray scale
, second moment
And third moment
Generate the training gray Moment Feature vector V of 3 dimensions
2, wherein, t
arepresent the gray-scale value of a pixel of component block, the pixel number that N is component block;
(3b3) on each training square block, extract SURF feature,, centered by training square block center, the square window of a 20x20 of structure, is divided into 4x4 sub regions by this window, and every sub regions has 25 pixels; Each pixel calculated level to subregion and the little wave response of Haar of vertical direction, note is d respectively
xand d
y; By the response d of 25 pixels of subregion
x, d
yand absolute value | d
x|, | d
y| add up, every sub regions obtains following 4 vectors: Σ d
x, Σ d
y, Σ | d
x|, Σ | d
y|, and then obtain the training SURF proper vector V that each square window generation 4x (4x4)=64 ties up
3;
(3b4) on each training square block, extract LBP feature: the pixel value Yu Yikuai center by piece center is the center of circle, radius is that the pixel value of 8 points in 5 annular compares one by one, if center pixel value is larger than the pixel value of the upper point of annular, be 1 by the some assignment in this annular, otherwise be 0, and then with 18 bit of 8 dot generation on annular field; Just this 8 bit is converted to the decimal number of 256 again, generates the training LBP proper vector V of 256 dimensions
4;
(3b5) by the training grey level histogram proper vector V of step (3b1)~(3b4) obtain
1, training gray Moment Feature vector V
2, training SURF proper vector V
3, training LBP proper vector V
4these four vectors are arranged in order in a column vector, obtain training feature vector V
f.
6. the method for the portrait genre classification based on Support Vector Machine according to claim 1, is characterized in that: step (5) described by pending portrait test set
press the difference of parts, be divided into five test component collection
Carry out as follows:
(5a) by test set
the former figure of every portrait as face's part, size is made as 176x239, by test set
face's part of all portraits as face's part test set
(5b) with test set
the pupil of left eye of every portrait centered by, the square chart picture that to get size be 30x22 is as left eye parts, with test set
the left eye parts of all portraits as left eye unit test collection
(5c) with test set
the pupil of right eye of every portrait centered by, the square chart picture that to get size be 30x22 is as right eye parts, with test set
the right eye parts of all portraits as right eye unit test collection
(5d) with test set
the center of two interpupillary lines of every portrait to centered by the intermediate point of nose, the square chart picture that to get size be 30x22 is as nose piece, with test set
the nose piece of all portraits as nose piece test set
(5e) with test set
the face center of every portrait centered by, the square chart picture that to get size be 30x22 is as nozzle component, with test set
the nozzle component of all portraits as nozzle component test set
7. the method for the portrait genre classification based on Support Vector Machine according to claim 1, is characterized in that: step (6) described at each unit test collection
upper generation test component vector set
carry out as follows:
(6a) by each unit test collection
in each parts be divided into test square block;
(6b) on each test square block, generate testing feature vector
(6c) each unit test collection
in the testing feature vector of all test square blocks of each parts
be arranged in order in a column vector, obtain test component vector
and then with each unit test collection
the test component vectors of 50 test portraits
composition test component vector set
8. The portrait style classification method based on support vector machine according to claim 7, characterized in that dividing each component in each component test set into test square blocks in step (6a) is carried out as follows:
(6a1) divide the face component of the face component test set into test square blocks of size 32x32, with an overlap of 16x16 between adjacent blocks;
(6a2) divide the left-eye component of the left-eye component test set into test square blocks of size 22x22, with an overlap of 11x11 between adjacent blocks;
(6a3) divide the right-eye component of the right-eye component test set into test square blocks of size 22x22, with an overlap of 11x11 between adjacent blocks;
(6a4) divide the nose component of the nose component test set into test square blocks of size 22x22, with an overlap of 11x11 between adjacent blocks;
(6a5) divide the mouth component of the mouth component test set into test square blocks of size 22x22, with an overlap of 11x11 between adjacent blocks.
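The overlapping partition of claim 8 can be sketched as follows, reading "32x32 blocks with 16x16 overlap" as a sliding window of the block size advanced by half the block size in each direction; that reading, and the silent dropping of edge remainders that do not fit a full block, are assumptions.

```python
import numpy as np

def split_blocks(component, size, step):
    """Sketch of steps (6a1)-(6a5): tile a component image into size x size
    blocks whose neighbours overlap by (size - step) pixels, e.g. size=32,
    step=16 for the face component and size=22, step=11 for the others."""
    h, w = component.shape[:2]
    blocks = []
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            blocks.append(component[r:r + size, c:c + size])
    return blocks
```

With a 64x64 input and size=32, step=16, this yields a 3x3 grid of nine half-overlapping blocks.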
9. The portrait style classification method based on support vector machine according to claim 7, characterized in that generating a test feature vector on each test square block in step (6b) is carried out as follows:
(6b1) extract the gray-level histogram feature on each test square block: count the pixels at each of the 256 gray levels 0-255 of the block, obtaining a 256-dimensional test gray-level histogram feature vector in which the value of each dimension is the number of pixels at that gray level;
(6b2) extract the gray moment feature on each test square block: compute the first moment μ1, the second moment μ2 and the third moment μ3 of the gray values of the block, generating the 3-dimensional test gray moment feature vector [μ1, μ2, μ3], where p_r denotes the gray value of the r-th pixel of the block and N is the number of pixels in the block;
(6b3) extract the SURF feature on each test square block: centered on the block center, construct a 20x20 square window and divide it into 4x4 subregions of 25 pixels each; for every pixel of a subregion, compute the horizontal and vertical Haar wavelet responses, denoted dx and dy respectively; sum the responses and their absolute values over the 25 pixels, so that every subregion yields the 4 values (Σdx, Σdy, Σ|dx|, Σ|dy|); the 4x4 subregions of each window then generate the 4x(4x4)=64-dimensional test SURF feature vector;
(6b4) extract the LBP feature on each test square block: compare the pixel value at the block center, one by one, with the pixel values of the 8 points lying on a ring of radius 5 centered on the block center; if the center pixel value is larger than that of a ring point, assign that point the value 1, otherwise 0, so that the 8 ring points generate an 8-bit binary number; then convert this 8-bit binary number to a decimal number in the range 0-255, generating the 256-dimensional test LBP feature vector;
(6b5) arrange the test gray-level histogram feature vector, the test gray moment feature vector, the test SURF feature vector and the test LBP feature vector obtained in steps (6b1)-(6b4) in order in one column vector, obtaining the test feature vector.
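Steps (6b1), (6b2) and the concatenation of (6b5) can be sketched as below for a single test block. The exact moment formulas did not survive extraction, so the standard color-moment definitions (mean, standard deviation, cube root of the third central moment) are used here as an assumption; the SURF and LBP vectors of steps (6b3)-(6b4) would be appended to the same column vector in the same way.

```python
import numpy as np

def block_features(block):
    """Sketch of steps (6b1)-(6b2) and (6b5) for one test block: the 256-bin
    gray-level histogram plus three gray moments, stacked into one vector."""
    p = block.astype(np.float64).ravel()
    n = p.size
    # (6b1): pixel count per gray level 0-255
    hist = np.bincount(block.astype(np.uint8).ravel(), minlength=256)
    # (6b2): assumed standard color-moment definitions
    mu1 = p.sum() / n                          # first moment: mean gray value
    mu2 = (((p - mu1) ** 2).sum() / n) ** 0.5  # second moment: std deviation
    mu3 = np.cbrt(((p - mu1) ** 3).sum() / n)  # third moment: skewness root
    moments = np.array([mu1, mu2, mu3])
    # (6b5)-style stacking of the per-block feature vectors
    return np.concatenate([hist, moments])
```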
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410330945.XA CN104091174B (en) | 2014-07-13 | 2014-07-13 | portrait style classification method based on support vector machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104091174A true CN104091174A (en) | 2014-10-08 |
CN104091174B CN104091174B (en) | 2017-04-19 |
Family
ID=51638889
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410330945.XA Expired - Fee Related CN104091174B (en) | 2014-07-13 | 2014-07-13 | portrait style classification method based on support vector machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104091174B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107967667A (en) * | 2017-12-21 | 2018-04-27 | 广东欧珀移动通信有限公司 | Generation method, device, terminal device and the storage medium of sketch |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090132530A1 (en) * | 2007-11-19 | 2009-05-21 | Microsoft Corporation | Web content mining of pair-based data |
CN103198303A (en) * | 2013-04-12 | 2013-07-10 | 南京邮电大学 | Gender identification method based on facial image |
- 2014-07-13: CN 201410330945.XA filed, granted as CN104091174B (en); status: not active, expired due to non-payment of the annual fee
Non-Patent Citations (2)
Title |
---|
LIU Hong et al.: "Handwriting Identification Method Based on SVM and Texture", Journal of Computer-Aided Design & Computer Graphics * |
LI Jinpo: "Research on Face Gender Recognition Based on Facial Features and Their Combinations", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN104091174B (en) | 2017-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650740B (en) | A kind of licence plate recognition method and terminal | |
CN103413145B (en) | Intra-articular irrigation method based on depth image | |
CN106446754A (en) | Image identification method, metric learning method, image source identification method and devices | |
CN112016605B (en) | Target detection method based on corner alignment and boundary matching of bounding box | |
Türkyılmaz et al. | License plate recognition system using artificial neural networks | |
CN105913093A (en) | Template matching method for character recognizing and processing | |
CN105404886A (en) | Feature model generating method and feature model generating device | |
CN106203454B (en) | The method and device of certificate format analysis | |
CN104008401A (en) | Method and device for image character recognition | |
CN106203329B (en) | A method of identity template is established based on eyebrow and carries out identification | |
CN105893952A (en) | Hand-written signature identifying method based on PCA method | |
CN105893947A (en) | Bi-visual-angle face identification method based on multi-local correlation characteristic learning | |
CN110163567A (en) | Classroom roll calling system based on multitask concatenated convolutional neural network | |
CN105005798A (en) | Target recognition method based on collecting and matching local similar structure | |
Lv et al. | Chinese character CAPTCHA recognition based on convolution neural network | |
CN116152870A (en) | Face recognition method, device, electronic equipment and computer readable storage medium | |
CN104318224A (en) | Face recognition method and monitoring equipment | |
CN103714340A (en) | Self-adaptation feature extracting method based on image partitioning | |
CN107369086A (en) | A kind of identity card stamp system and method | |
CN104598898A (en) | Aerially photographed image quick recognizing system and aerially photographed image quick recognizing method based on multi-task topology learning | |
Gupta et al. | Number Plate extraction using Template matching technique | |
CN109741351A (en) | A kind of classification responsive type edge detection method based on deep learning | |
CN105469099A (en) | Sparse-representation-classification-based pavement crack detection and identification method | |
CN103136536A (en) | System and method for detecting target and method for exacting image features | |
CN103761538A (en) | Traffic sign recognition method based on shape feature invariant subspace |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170419 |