CN108564111A - An image classification method based on neighborhood rough set feature selection
- Publication number
- CN108564111A CN108564111A CN201810254854.0A CN201810254854A CN108564111A CN 108564111 A CN108564111 A CN 108564111A CN 201810254854 A CN201810254854 A CN 201810254854A CN 108564111 A CN108564111 A CN 108564111A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- neighborhood
- rough set
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
An image classification method based on neighborhood rough set feature selection. Built on the spatial pyramid model, the method first extracts image features with SURF and HOG, which together provide scale invariance and describe both the appearance and the shape of local image targets, and then removes the redundant features in the combined SURF and HOG features with an image feature selection algorithm based on the neighborhood rough set. Next, a visual dictionary is generated by applying the k-means clustering algorithm to the reduced image feature set. Then, the occurrences of each visual word are counted at every scale of the spatial pyramid model, the resulting histograms are concatenated, and the features of different scales are assigned corresponding weights. Finally, the weighted histogram is fed into a linear SVM classifier for training and prediction. The invention overcomes two defects of conventional image classification methods: single-feature extraction easily loses image information, while multi-feature fusion produces a large number of redundant features that lower the classification accuracy.
Description
Technical field
The invention belongs to the field of computer vision and relates to image classification methods.
Background technology
Image classification is an important research subject in computer vision. Its purpose is to give computers the ability, like humans, to quickly and accurately recognize complex visual patterns. With the rapid development of artificial intelligence and pattern recognition, image classification is widely used in image understanding, target recognition, image retrieval and other fields.
The spatial pyramid model (Spatial Pyramid Matching, SPM) is one of the main current image classification methods. Building on the visual bag-of-words model (bag of words, BOW), it adds spatial position and shape information by dividing the image at multiple levels. An image classification method based on the spatial pyramid model mainly comprises four parts: feature extraction, visual dictionary generation, spatial pyramid construction, and generation of the image's visual description by counting and merging histograms. Feature extraction and feature selection are key preconditions of image classification. Although the traditional spatial pyramid matching model (SPM) has achieved great breakthroughs in image classification, its classification performance is still limited because the extracted features cannot effectively express the image information. Feature extraction methods based on local descriptors are currently widely applied in constructing visual dictionaries. The SIFT descriptor has good stability under translation, rotation and uneven illumination. The HOG descriptor expresses the shape of targets in an image very well. SURF has scale and rotation invariance similar to SIFT, but its computation cost and computation time are greatly reduced compared with SIFT while it remains robust.
In image feature extraction, a single feature describes the image one-sidedly and cannot express the image content well. Multiple features can describe image information more comprehensively, and many current methods express image content by combining several feature descriptors. However, while describing the image comprehensively, multiple features also introduce a large amount of redundancy. Removing unimportant or even redundant features from the many extracted features without affecting the expression of the image content is of great importance in image classification. Neighborhood rough set feature selection works well for removing redundant features from continuous-valued knowledge representation systems. Applying feature selection after feature extraction yields a more effective feature subset: the image information is simplified while the essential information expressed by the image is not lost.
At present, image classification still faces the problem that a single feature cannot fully describe the image, whereas multi-feature extraction describes the image with a large amount of redundancy; clustering then has to generate a very large visual dictionary, which not only lowers the classification accuracy but also greatly increases the classification time. Therefore, feature selection after image feature extraction is also vital in image classification.
Invention content
The technical problem to be solved by the present invention is to provide an image classification method based on neighborhood rough set feature selection. It extracts image features with the complementary HOG and SURF descriptors and removes the redundant features in the image with a neighborhood rough set feature selection algorithm, overcoming the defects of conventional image classification methods in which multi-feature extraction describes the image with a large amount of redundant information, resulting in low classification accuracy and long classification time.
To solve the above technical problems, the present invention provides an image classification method based on neighborhood rough set feature selection, comprising the following steps:
(1) extract the features of the training sample images and the test set sample images respectively;
(2) build the image feature representation system;
(3) remove the redundant features in the image knowledge representation system with the feature selection algorithm based on the neighborhood rough set to obtain a new image feature set;
(4) cluster to generate the visual feature dictionary;
(5) build the spatial pyramid model and, according to the generated visual feature dictionary, count and merge the weighted visual feature histogram of the spatial pyramid of every training and test image;
(6) train a linear SVM classifier and classify the test images.
The features extracted from the training and test set sample images specifically include the SURF features and HOG features of each image.
SURF is a local feature descriptor. The SURF algorithm is similar to SIFT and is likewise scale invariant, but its computation speed and robustness are better than SIFT.
The SURF feature extraction steps are:
Step 1. Construct the pyramid scale space with box filters.
To find feature points at different scales, SURF introduces box filters to build the scale space of the image. The scale space of the SURF algorithm is divided into octaves and layers. Each octave contains several layers, which are the responses of box filters of different sizes to the original image. The box filter of the first layer of the lowest octave is 9 × 9, corresponding to a Gaussian scale of σ = 1.2; the filter size then increases successively to 15 × 15, 21 × 21 and 27 × 27.
Step 2. Build a fast feature point detector with the Hessian matrix and obtain stable extreme points.
The detector of the SURF algorithm is based on the Hessian matrix. For a point P = (x, y) in image I, the Hessian matrix of that point at scale σ is
$H(P,\sigma)=\begin{pmatrix} L_{xx}(P,\sigma) & L_{xy}(P,\sigma) \\ L_{yx}(P,\sigma) & L_{yy}(P,\sigma) \end{pmatrix}$
where L_xx(P, σ) denotes the convolution of the second-order Gaussian partial derivative ∂²g(σ)/∂x² with image I at point P, L_xy(P, σ) denotes the convolution of ∂²g(σ)/∂x∂y with image I at point P, L_yx(P, σ) denotes the convolution of ∂²g(σ)/∂y∂x with image I at point P, and L_yy(P, σ) denotes the convolution of ∂²g(σ)/∂y² with image I at point P.
The determinant of the Hessian matrix is computed as
$\det(H_{approx}) = D_{xx}D_{yy} - (\omega D_{xy})^2$
where D_xx, D_yy and D_xy are the box-filter approximations of L_xx, L_yy and L_xy respectively, and ω is a weight, usually set to 0.9.
The value of the determinant represents the blob response at X = (x, y, σ); feature points in the image are found through this function over space and scale. To obtain the feature points, non-maximum suppression is applied: each candidate point at a given scale is compared with its 26 neighbors in the surrounding scales, and it is kept only if it is a local maximum.
Step 3. Determine the principal orientation of each feature point.
To make the detected feature points rotation invariant, each feature point is assigned a principal orientation. Centered on the feature point, a circular region of radius 6s is considered (s is the scale of the layer where the feature point lies). A 60° sector is selected and the Haar wavelet responses in the x and y directions within the sector are summed into a response vector; the sector is rotated over the whole circle, and the direction of the longest vector is the principal orientation of the feature point.
Step 4. Construct the feature vector. A square window with side length 20s is taken around the feature point, oriented along the principal orientation. The window is divided into 4 × 4 sub-regions, and the Haar wavelet responses in the x and y directions of the 25 sample points in each sub-region are accumulated, yielding a 64-dimensional feature descriptor vector.
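As a minimal sketch of this SURF extraction step (an illustration only, assuming an opencv-contrib-python build that still ships cv2.xfeatures2d; the Hessian threshold is an arbitrary example value):

```python
import cv2

def extract_surf_descriptors(image_path, hessian_threshold=400):
    """Detect SURF keypoints and return them with their 64-dimensional descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # extended=False keeps the standard 64-dimensional SURF descriptor described above
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold, extended=False)
    keypoints, descriptors = surf.detectAndCompute(img, None)
    return keypoints, descriptors
```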
The HOG feature uses overlapping local contrast normalization to characterize the appearance and shape of local image targets, and is one of the best features for describing edge and shape information.
HOG feature extraction includes the following steps:
Step 1. Normalize the color image to eliminate the influence of illumination.
Step 2. Divide the image into cells of equal size and compute the horizontal and vertical gradients of each pixel (x, y) in every cell:
$G_x(x, y) = G(x+1, y) - G(x-1, y)$
$G_y(x, y) = G(x, y+1) - G(x, y-1)$
The magnitude and orientation of the gradient at the pixel are then
$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}, \quad \theta(x, y) = \arctan\frac{G_y(x, y)}{G_x(x, y)}$
Step 3. Concatenate the features of the whole image to obtain its HOG features. The HOG feature descriptor dimension is 36 (in the present invention the HOG parameters are set as follows: the cell size is 8 × 8 pixels, 2 × 2 cells form one block, and the gradient information of each cell is counted with a 9-bin histogram).
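A minimal sketch of this HOG configuration, assuming OpenCV's built-in cv2.HOGDescriptor; the 64 × 128 detection window is an illustrative choice, not a parameter stated in the original text:

```python
import cv2

def extract_hog_descriptor(image_path, win_size=(64, 128)):
    """Compute a HOG descriptor with 8x8 cells, 2x2-cell blocks and 9 orientation bins."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, win_size)
    hog = cv2.HOGDescriptor(win_size,   # detection window
                            (16, 16),   # block size: 2 x 2 cells
                            (8, 8),     # block stride
                            (8, 8),     # cell size: 8 x 8 pixels
                            9)          # 9 histogram bins per cell
    return hog.compute(img).ravel()     # concatenated 36-dimensional block histograms
```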
The image feature representation system is built as follows:
Build the image feature representation system NTD = <U, C, D>, where U = {u_1, u_2, …, u_m}, u_i = [X_1, X_2, …, X_n] denotes the set of feature vectors of the i-th image, C = {c_1, c_2, …, c_n} is the set of conditional attributes of the image knowledge representation system, and c_l denotes the l-th feature of the image feature vectors, whose dimension is the length of the image feature descriptor. The image class label D serves as the decision attribute.
The related definitions of the neighborhood rough set feature selection algorithm are as follows:
Definition 1. The δ neighborhood of an image u_i in the image feature representation system is δ(u_i) = {u | Δ(u, u_i) ≤ δ}, where Δ is a distance function. The distance function used here is the Chebyshev distance (infinity norm):
$\Delta(u, u_i) = \max_{k} |X_k(u) - X_k(u_i)|$
Under the same neighborhood radius, the Chebyshev distance (infinity norm) covers the largest neighborhood range and is simple to compute.
Definition 2. The consistent neighborhood of an image sample u consists of the images in the neighborhood of u that have the same class, i.e., for any u ∈ U, δ_C(u) ∩ δ_D(u); conversely, the inconsistent neighborhood of an image sample u consists of the images in the neighborhood of u that have different classes, i.e., for any u ∈ U, δ_C(u) − δ_D(u).
Definition 3. The information entropy and conditional entropy of the image feature representation system NTD = <U, C, D> are defined as follows; information entropy and conditional entropy express the degree of uncertainty of the image information:
Information entropy:
Conditional entropy:
The conditional entropy of the image feature representation system is related to the inconsistent neighborhoods of its image samples. The larger the number of inconsistent neighborhood members under a given feature, the larger the conditional entropy and the less relevant that feature is to the image classes; conversely, a feature with smaller conditional entropy is more relevant to the image classes. The conditional entropy therefore reflects the degree of correlation between features and image classes.
The neighborhood rough set feature selection includes the following steps (a code sketch follows):
Step 1: According to the conditional entropy formula, calculate the conditional entropy E(D | C) of the image knowledge representation system NTD = <U, C, D>; initialize the reduct feature set red = ∅, with candidate features X_i ∈ C − red;
Step 2: According to the same formula, calculate the conditional entropy E(D | red ∪ {X_i}) for each candidate feature, find the feature X_i with the minimum conditional entropy, and add X_i to red;
Step 3: Check whether E(D | red ∪ {X_i}) is equal to E(D | C); if equal, output the feature reduct set red; if not, return to Step 2.
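A minimal sketch of this greedy forward selection, assuming the conditional entropy form sketched above, a NumPy feature table C (one row per image sample, one column per feature) and a NumPy label vector D; the radius delta and the helper names are illustrative:

```python
import numpy as np

def conditional_entropy(C, D, feats, delta=0.15):
    """Neighborhood conditional entropy E(D | feats) over the decision table (assumed form)."""
    X = C[:, feats]
    total = 0.0
    for i in range(len(D)):
        # Chebyshev (infinity-norm) delta-neighborhood of sample i under the chosen features
        neighborhood = np.max(np.abs(X - X[i]), axis=1) <= delta
        same_class = neighborhood & (D == D[i])
        total += np.log(same_class.sum() / neighborhood.sum())
    return -total / len(D)

def neighborhood_reduct(C, D, delta=0.15):
    """Greedy forward selection: repeatedly add the feature minimizing E(D | red ∪ {Xi})."""
    target = conditional_entropy(C, D, list(range(C.shape[1])), delta)  # E(D | C)
    red, remaining = [], set(range(C.shape[1]))
    while remaining:
        best = min(remaining, key=lambda f: conditional_entropy(C, D, red + [f], delta))
        red.append(best)
        remaining.remove(best)
        if np.isclose(conditional_entropy(C, D, red, delta), target):
            break  # the reduct preserves the conditional entropy of the full feature set
    return red
```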
Clustering to generate the visual feature dictionary:
The features obtained by feature selection are treated as "visual words" and clustered with the k-means clustering algorithm to obtain a visual vocabulary ("bag of visual words") of size M.
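A minimal sketch of this dictionary-building step, assuming scikit-learn; the local descriptors kept after feature selection are stacked over all training images and clustered, and the vocabulary size M is an illustrative value:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_dictionary(descriptor_sets, M=200, seed=0):
    """Cluster the reduced local descriptors of all training images into M visual words."""
    all_descriptors = np.vstack(descriptor_sets)   # one row per local descriptor
    kmeans = KMeans(n_clusters=M, random_state=seed, n_init=10)
    kmeans.fit(all_descriptors)
    return kmeans   # kmeans.cluster_centers_ is the visual dictionary
```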
Steps for building the spatial pyramid model (a code sketch follows the list):
1) The image is divided into three levels: level 0 takes the whole image as one region, level 1 divides the image uniformly into 4 regions, and level 2 divides the image uniformly into 16 regions. Different levels are assigned different weights, namely [1/4, 1/4, 1/2];
2) For each region of each level, counting from left to right and top to bottom, count the frequency with which each visual word of the visual vocabulary appears, obtaining a histogram representation for every region at every level of the image;
3) Multiply the histogram of each region at each level by the corresponding weight from 1) and concatenate them to obtain the final image representation.
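A minimal sketch of this weighted pyramid histogram, assuming a fitted KMeans dictionary as above and the (x, y) coordinates of the keypoints from which the descriptors were computed; the [1/4, 1/4, 1/2] level weights follow the scheme in 1):

```python
import numpy as np

def spatial_pyramid_histogram(descriptors, keypoints_xy, img_w, img_h, kmeans,
                              weights=(0.25, 0.25, 0.5)):
    """Concatenate weighted visual-word histograms over a 3-level pyramid (1, 4 and 16 regions)."""
    words = kmeans.predict(descriptors)          # visual word index of each local descriptor
    M = kmeans.n_clusters
    parts = []
    for level, weight in enumerate(weights):     # levels 0, 1, 2 -> 1x1, 2x2, 4x4 grids
        cells = 2 ** level
        for row in range(cells):
            for col in range(cells):
                in_cell = ((keypoints_xy[:, 0] // (img_w / cells) == col) &
                           (keypoints_xy[:, 1] // (img_h / cells) == row))
                hist = np.bincount(words[in_cell], minlength=M).astype(float)
                parts.append(weight * hist)      # weight the histogram of this region
    return np.concatenate(parts)                 # final concatenated image representation
```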
The linear SVM classifier is trained and classifies the test images as follows: training set and test set images are randomly selected; each training image goes through feature extraction, feature selection, visual dictionary generation and spatial pyramid construction to obtain its visual feature histogram, which is fed into the linear SVM to obtain the trained classifier; feature extraction is then performed on each test set image, its spatial pyramid model is built with the visual dictionary of the training set images, and its histogram representation is input into the trained linear SVM classifier, which outputs the class of the test image.
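A minimal sketch of this final step, assuming scikit-learn's LinearSVC and the weighted pyramid histograms sketched above as input features:

```python
from sklearn.svm import LinearSVC

def train_and_predict(train_histograms, train_labels, test_histograms):
    """Train a linear SVM on the weighted pyramid histograms and classify the test images."""
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(train_histograms, train_labels)
    return clf.predict(test_histograms)
```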
Compared with the prior art, the outstanding substantive features and notable advantages of the present invention are as follows:
(1) The present invention combines the complementary SURF and HOG features; the extracted features not only describe the appearance and shape of local image targets, but also offer scale invariance, fast computation and good robustness.
(2) The present invention builds an image feature representation system and uses the neighborhood rough set feature selection algorithm to remove the large amount of redundant information produced by combining SURF and HOG, which improves the image classification accuracy while reducing the classification time.
Description of the drawings
Fig. 1 is the flow chart of the present invention.
Specific implementation mode
The specific implementation of the present invention is described in further detail below with reference to the accompanying drawings and embodiments:
The present invention provides an image classification method based on neighborhood rough set feature selection, comprising the following steps (an end-to-end code sketch follows the list):
1) extract the features of the training sample images and the test set sample images respectively;
2) build the image feature representation system;
3) remove the redundant features in the image knowledge representation system with the feature selection algorithm based on the neighborhood rough set to obtain a new image feature set;
4) cluster to generate the visual feature dictionary;
5) build the spatial pyramid model and, according to the generated visual feature dictionary, count and merge the weighted visual feature histogram of the spatial pyramid of every training and test image;
6) train a linear SVM classifier and classify the test images.
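An end-to-end sketch tying these six steps together, reusing the hypothetical helpers sketched in the preceding sections (extract_surf_descriptors, build_visual_dictionary, spatial_pyramid_histogram, train_and_predict); the SURF/HOG fusion and the neighborhood rough set reduction are simplified away here and would be inserted before the clustering step:

```python
import cv2
import numpy as np

def image_histogram(path, kmeans):
    """Weighted spatial pyramid histogram of one image from its SURF descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = extract_surf_descriptors(path)
    coords = np.array([kp.pt for kp in keypoints])
    return spatial_pyramid_histogram(descriptors, coords,
                                     img.shape[1], img.shape[0], kmeans)

def run_pipeline(train_paths, train_labels, test_paths, M=200):
    """Steps 1)-6) in order, with feature fusion and the rough set reduct omitted for brevity."""
    train_descriptors = [extract_surf_descriptors(p)[1] for p in train_paths]  # steps 1)-3)
    kmeans = build_visual_dictionary(train_descriptors, M=M)                   # step 4)
    X_train = np.array([image_histogram(p, kmeans) for p in train_paths])      # step 5)
    X_test = np.array([image_histogram(p, kmeans) for p in test_paths])
    return train_and_predict(X_train, train_labels, X_test)                    # step 6)
```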
The features extracted in step 1) specifically include the SURF features and HOG features of each image.
SURF is a local feature descriptor. The SURF algorithm is similar to SIFT and is likewise scale invariant, but its computation speed and robustness are better than SIFT. The four basic steps of the SURF feature extraction algorithm are:
Step 1. Construct the pyramid scale space with box filters.
To find feature points at different scales, SURF introduces box filters to build the scale space of the image. The scale space of the SURF algorithm is divided into octaves and layers. Each octave contains several layers, which are the responses of box filters of different sizes to the original image. The box filter of the first layer of the lowest octave is 9 × 9, corresponding to a Gaussian scale of σ = 1.2; the filter size then increases successively to 15 × 15, 21 × 21 and 27 × 27.
Step 2. Build a fast feature point detector with the Hessian matrix and obtain stable extreme points.
The detector of the SURF algorithm is based on the Hessian matrix. For a point P = (x, y) in image I, the Hessian matrix of that point at scale σ is
$H(P,\sigma)=\begin{pmatrix} L_{xx}(P,\sigma) & L_{xy}(P,\sigma) \\ L_{yx}(P,\sigma) & L_{yy}(P,\sigma) \end{pmatrix}$
where L_xx(P, σ) denotes the convolution of the second-order Gaussian partial derivative ∂²g(σ)/∂x² with image I at point P, L_xy(P, σ) denotes the convolution of ∂²g(σ)/∂x∂y with image I at point P, L_yx(P, σ) denotes the convolution of ∂²g(σ)/∂y∂x with image I at point P, and L_yy(P, σ) denotes the convolution of ∂²g(σ)/∂y² with image I at point P.
The determinant of the Hessian matrix is computed as
$\det(H_{approx}) = D_{xx}D_{yy} - (\omega D_{xy})^2$
where D_xx, D_yy and D_xy are the box-filter approximations of L_xx, L_yy and L_xy respectively, and ω is a weight, usually set to 0.9.
The value of the determinant represents the blob response at X = (x, y, σ); feature points in the image are found through this function over space and scale. To obtain the feature points, non-maximum suppression is applied: each candidate point at a given scale is compared with its 26 neighbors in the surrounding scales, and it is kept only if it is a local maximum.
Step 3. Determine the principal orientation of each feature point.
To make the detected feature points rotation invariant, each feature point is assigned a principal orientation. Centered on the feature point, a circular region of radius 6s is considered (s is the scale of the layer where the feature point lies). A 60° sector is selected and the Haar wavelet responses in the x and y directions within the sector are summed into a response vector; the sector is rotated over the whole circle, and the direction of the longest vector is the principal orientation of the feature point.
Step 4. Construct the feature vector. A square window with side length 20s is taken around the feature point, oriented along the principal orientation. The window is divided into 4 × 4 sub-regions, and the Haar wavelet responses in the x and y directions of the 25 sample points in each sub-region are accumulated, yielding a 64-dimensional feature descriptor vector.
The HOG feature uses overlapping local contrast normalization to characterize the appearance and shape of local image targets, and is one of the best features for describing edge and shape information. HOG feature extraction can be divided into the following steps:
Step 1. Normalize the color image to eliminate the influence of illumination.
Step 2. Divide the image into cells of equal size and compute the horizontal and vertical gradients of each pixel (x, y) in every cell:
$G_x(x, y) = G(x+1, y) - G(x-1, y)$
$G_y(x, y) = G(x, y+1) - G(x, y-1)$
The magnitude and orientation of the gradient at the pixel are then
$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}, \quad \theta(x, y) = \arctan\frac{G_y(x, y)}{G_x(x, y)}$
Step 3. Concatenate the features of the whole image to obtain its HOG features. The HOG feature descriptor dimension is 36 (in the present invention the HOG parameters are set as follows: the cell size is 8 × 8 pixels, 2 × 2 cells form one block, and the gradient information of each cell is counted with a 9-bin histogram).
The image feature representation system built in step 2) is:
Build the image feature representation system NTD = <U, C, D>, where U = {u_1, u_2, …, u_m}, u_i = [X_1, X_2, …, X_n] denotes the set of feature vectors of the i-th image, C = {c_1, c_2, …, c_n} is the set of conditional attributes of the image knowledge representation system, and c_l denotes the l-th feature of the image feature vectors, whose dimension is the length of the image feature descriptor. The image class label D serves as the decision attribute.
The related definitions of the neighborhood rough set feature selection algorithm in step 3) are as follows:
Definition 1. The δ neighborhood of an image u_i in the image feature representation system is δ(u_i) = {u | Δ(u, u_i) ≤ δ}, where Δ is a distance function. The distance function used here is the Chebyshev distance (infinity norm):
$\Delta(u, u_i) = \max_{k} |X_k(u) - X_k(u_i)|$
Under the same neighborhood radius, the Chebyshev distance (infinity norm) covers the largest neighborhood range and is simple to compute.
Definition 2. The consistent neighborhood of an image sample u consists of the images in the neighborhood of u that have the same class, i.e., for any u ∈ U, δ_C(u) ∩ δ_D(u); conversely, the inconsistent neighborhood of an image sample u consists of the images in the neighborhood of u that have different classes, i.e., for any u ∈ U, δ_C(u) − δ_D(u).
Definition 3. Document [11] defines the information entropy and conditional entropy of NTD = <U, C, D>; information entropy and conditional entropy express the degree of uncertainty of the image information:
Information entropy:
Conditional entropy:
The conditional entropy of the image feature representation system is related to the inconsistent neighborhoods of its image samples. The larger the number of inconsistent neighborhood members under a given feature, the larger the conditional entropy and the less relevant that feature is to the image classes; conversely, a feature with smaller conditional entropy is more relevant to the image classes. The conditional entropy therefore reflects the degree of correlation between features and image classes.
The neighborhood rough set feature selection algorithm in step 3) includes the following steps:
Step 1: According to the conditional entropy formula, calculate the conditional entropy E(D | C) of the image knowledge representation system NTD = <U, C, D>; initialize the reduct feature set red = ∅, with candidate features X_i ∈ C − red;
Step 2: According to the same formula, calculate the conditional entropy E(D | red ∪ {X_i}) for each candidate feature, find the feature X_i with the minimum conditional entropy, and add X_i to red;
Step 3: Check whether E(D | red ∪ {X_i}) is equal to E(D | C); if equal, output the feature reduct set red; if not, return to Step 2.
Step 4) specifically includes:
The features obtained by feature selection are treated as "visual words" and clustered with the k-means clustering algorithm to obtain a visual dictionary of size M.
Step 5) is based on the spatial pyramid model SPM and specifically includes:
1) The image is divided into three levels: level 0 takes the whole image as one region, level 1 divides the image uniformly into 4 regions, and level 2 divides the image uniformly into 16 regions. Different levels are assigned different weights, namely [1/4, 1/4, 1/2];
2) For each region of each level, counting from left to right and top to bottom, count the frequency with which each visual word of the visual vocabulary appears, obtaining a histogram representation for every region at every level of the image;
3) Multiply the histogram of each region at each level by the corresponding weight from 1) and concatenate them to obtain the final image representation.
Step 6) specifically includes:
1) Randomly select the training set and test set, process the training set images with the first five steps above to obtain the histogram representations of the training images, and input them into the linear SVM to obtain the trained classifier;
2) Perform feature extraction on the test set images, build their spatial pyramid models with the visual dictionary of the training set images to obtain the histogram representations of the test images, input these into the trained linear SVM classifier, and output the classes of the test images.
Claims (8)
1. An image classification method based on neighborhood rough set feature selection, characterized by comprising the following steps:
(1) extracting the features of the training sample images and the test sample images;
(2) building the training sample image feature representation system;
(3) removing the redundant features in the image knowledge representation system with the feature selection algorithm based on the neighborhood rough set to obtain a new image feature set;
(4) clustering to generate a visual dictionary;
(5) building the spatial pyramid model and, according to the generated visual dictionary, counting and merging the weighted visual feature histogram of the spatial pyramid of each image;
(6) training a linear SVM classifier and classifying the test images.
2. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that extracting the features of the training and test sample images means extracting the SURF features and the HOG features of each image respectively;
the SURF feature extraction steps are:
Step 1. construct the pyramid scale space with box filters;
Step 2. build a fast feature point detector with the Hessian matrix and obtain stable extreme points;
the detector of the SURF algorithm is based on the Hessian matrix; for a point P = (x, y) in image I, the Hessian matrix of that point at scale σ is
$H(P,\sigma)=\begin{pmatrix} L_{xx}(P,\sigma) & L_{xy}(P,\sigma) \\ L_{yx}(P,\sigma) & L_{yy}(P,\sigma) \end{pmatrix}$
where L_xx(P, σ) denotes the convolution of the second-order Gaussian partial derivative ∂²g(σ)/∂x² with image I at point P, L_xy(P, σ) denotes the convolution of ∂²g(σ)/∂x∂y with image I at point P, L_yx(P, σ) denotes the convolution of ∂²g(σ)/∂y∂x with image I at point P, and L_yy(P, σ) denotes the convolution of ∂²g(σ)/∂y² with image I at point P;
the determinant of the Hessian matrix is computed as $\det(H_{approx}) = D_{xx}D_{yy} - (\omega D_{xy})^2$, where D_xx, D_yy and D_xy are the box-filter approximations of L_xx, L_yy and L_xy respectively, and ω is a weight with value 0.9;
Step 3. determine the principal orientation of each feature point;
namely, centered on the feature point, consider a circular region of radius 6s, where s is the scale of the layer where the feature point lies; select a 60° sector, sum the Haar wavelet responses in the x and y directions within the sector into a response vector, rotate the sector over the whole circle, and take the direction of the longest vector as the principal orientation of the feature point;
Step 4. construct the feature vector: take a square window with side length 20s around the feature point, oriented along the principal orientation of the feature point; divide the window into 4 × 4 sub-regions, and accumulate the Haar wavelet responses in the x and y directions of the 25 sample points in each sub-region, obtaining a 64-dimensional feature descriptor vector;
the HOG feature extraction steps are:
Step 1. normalize the color image to eliminate the influence of illumination;
Step 2. divide the image into cells of equal size and compute the horizontal and vertical gradients of each pixel (x, y) in every cell:
$G_x(x, y) = G(x+1, y) - G(x-1, y)$
$G_y(x, y) = G(x, y+1) - G(x, y-1)$
the magnitude and orientation of the gradient at the pixel are then
$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}, \quad \theta(x, y) = \arctan\frac{G_y(x, y)}{G_x(x, y)}$
Step 3. concatenate all the features of the whole image to obtain the HOG features of the image.
3. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that the training sample image feature representation system is built as follows:
build the image feature representation system NTD = <U, C, D>, where U = {u_1, u_2, …, u_m}, u_i = [X_1, X_2, …, X_n] denotes the set of feature vectors of the i-th image, C = {c_1, c_2, …, c_n} is the set of conditional attributes of the image knowledge representation system, and c_l denotes the l-th feature of the image feature vectors, whose dimension is the length of the image feature descriptor.
4. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that the image feature selection based on the neighborhood rough set comprises:
(1) the δ neighborhood of an image u_i in the image feature representation system is δ(u_i) = {u | Δ(u, u_i) ≤ δ}, where Δ is a distance function;
(2) the consistent neighborhood of an image sample u is defined as the images in the neighborhood of u that have the same class, i.e., for any u ∈ U, δ_C(u) ∩ δ_D(u); conversely, the inconsistent neighborhood of an image sample u is defined as the images in the neighborhood of u that have different classes, i.e., for any u ∈ U, δ_C(u) − δ_D(u);
(3) document [11] defines the information entropy and conditional entropy of NTD = <U, C, D>:
Information entropy:
Conditional entropy:
5. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that the image feature selection based on the neighborhood rough set comprises the steps of:
Step 1: according to the conditional entropy formula, calculate the conditional entropy E(D | C) of the image knowledge representation system NTD = <U, C, D>; initialize the reduct feature set red = ∅, with candidate features X_i ∈ C − red;
Step 2: according to the same formula, calculate the conditional entropy E(D | red ∪ {X_i}) for each candidate feature, find the feature X_i with the minimum conditional entropy, and add X_i to red;
Step 3: check whether E(D | red ∪ {X_i}) is equal to E(D | C); if E(D | red ∪ {X_i}) = E(D | C), output the feature reduct set red; if E(D | red ∪ {X_i}) ≠ E(D | C), return to Step 2.
6. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that clustering to generate the visual dictionary comprises:
treating the features obtained by feature selection as "visual words" and clustering them with the k-means clustering algorithm to obtain a visual vocabulary ("bag of visual words") of size M.
7. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that the specific steps based on the spatial pyramid model SPM are:
(1) divide the image into three levels: level 0 takes the whole image as one region, level 1 divides the image uniformly into 4 regions, and level 2 divides the image uniformly into 16 regions; different levels are assigned different weights, namely [1/4, 1/4, 1/2];
(2) for each region of each level, counting from left to right and top to bottom, count the frequency with which each visual word of the visual vocabulary appears, obtaining a histogram representation for every region at every level of the image;
(3) multiply the histogram of each region at each level by the corresponding weight from (1) and concatenate them to obtain the final image representation.
8. The image classification method based on neighborhood rough set feature selection according to claim 1, characterized in that training the linear SVM classifier and classifying the test images comprises:
(1) randomly selecting the training set and test set, processing the training set images with the first five steps of claim 1 to obtain the histogram representations of the training images, and inputting them into the linear SVM to obtain the trained classifier;
(2) performing feature extraction on the test set images, building their spatial pyramid models with the visual dictionary of the training set images to obtain the histogram representations of the test images, inputting these into the trained linear SVM classifier, and outputting the classes of the test images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810254854.0A CN108564111A (en) | 2018-03-26 | 2018-03-26 | A kind of image classification method based on neighborhood rough set feature selecting |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810254854.0A CN108564111A (en) | 2018-03-26 | 2018-03-26 | A kind of image classification method based on neighborhood rough set feature selecting |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108564111A true CN108564111A (en) | 2018-09-21 |
Family
ID=63533316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810254854.0A Pending CN108564111A (en) | 2018-03-26 | 2018-03-26 | A kind of image classification method based on neighborhood rough set feature selecting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108564111A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150131899A1 (en) * | 2013-11-13 | 2015-05-14 | Canon Kabushiki Kaisha | Devices, systems, and methods for learning a discriminant image representation |
CN105389593A (en) * | 2015-11-16 | 2016-03-09 | 上海交通大学 | Image object recognition method based on SURF |
CN105550708A (en) * | 2015-12-14 | 2016-05-04 | 北京工业大学 | Visual word bag model constructing model based on improved SURF characteristic |
CN105654035A (en) * | 2015-12-21 | 2016-06-08 | 湖南拓视觉信息技术有限公司 | Three-dimensional face recognition method and data processing device applying three-dimensional face recognition method |
CN106250919A (en) * | 2016-07-25 | 2016-12-21 | 河海大学 | The scene image classification method that combination of multiple features based on spatial pyramid model is expressed |
CN106644484A (en) * | 2016-09-14 | 2017-05-10 | 西安工业大学 | Turboprop Engine rotor system fault diagnosis method through combination of EEMD and neighborhood rough set |
CN107368807A (en) * | 2017-07-20 | 2017-11-21 | 东南大学 | A kind of monitor video vehicle type classification method of view-based access control model bag of words |
Non-Patent Citations (3)
Title |
---|
AYSEGÜL UÇAR et al.: "Moving towards in object recognition with deep learning for autonomous driving applications", 2016 International Symposium on Innovations in Intelligent Systems and Applications (INISTA) *
WU XIUHAO: "Design and implementation of a vehicle type recognition system based on video images", China Master's Theses Full-text Database, Information Science and Technology *
XU XINYING et al.: "Attribute reduction based on the inconsistent neighborhood matrix from the information view", Control and Decision *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109448038A (en) * | 2018-11-06 | 2019-03-08 | 哈尔滨工程大学 | Sediment sonar image feature extracting method based on DRLBP and random forest |
WO2020150897A1 (en) * | 2019-01-22 | 2020-07-30 | 深圳大学 | Multi-target tracking method and apparatus for video target, and storage medium |
CN110738265A (en) * | 2019-10-18 | 2020-01-31 | 太原理工大学 | improved ORB algorithm based on fusion of improved LBP feature and LNDP feature |
CN112163133A (en) * | 2020-09-25 | 2021-01-01 | 南通大学 | Breast cancer data classification method based on multi-granularity evidence neighborhood rough set |
CN112163133B (en) * | 2020-09-25 | 2021-10-08 | 南通大学 | Breast cancer data classification method based on multi-granularity evidence neighborhood rough set |
CN112580659A (en) * | 2020-11-10 | 2021-03-30 | 湘潭大学 | Ore identification method based on machine vision |
CN112598661A (en) * | 2020-12-29 | 2021-04-02 | 河北工业大学 | Ankle fracture and ligament injury diagnosis method based on machine learning |
CN112598661B (en) * | 2020-12-29 | 2022-07-22 | 河北工业大学 | Ankle fracture and ligament injury diagnosis method based on machine learning |
CN113112471A (en) * | 2021-04-09 | 2021-07-13 | 南京大学 | Target detection method based on RI-HOG characteristics and quick pyramid |
CN113112471B (en) * | 2021-04-09 | 2023-12-29 | 南京大学 | Target detection method based on RI-HOG characteristics and rapid pyramid |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108564111A (en) | A kind of image classification method based on neighborhood rough set feature selecting | |
Le Goff et al. | Deep learning for cloud detection | |
CN107506740A (en) | A kind of Human bodys' response method based on Three dimensional convolution neutral net and transfer learning model | |
CN105184309B (en) | Classification of Polarimetric SAR Image based on CNN and SVM | |
CN107368807B (en) | Monitoring video vehicle type classification method based on visual word bag model | |
CN104680173B (en) | A kind of remote sensing images scene classification method | |
CN108491849A (en) | Hyperspectral image classification method based on three-dimensional dense connection convolutional neural networks | |
CN111080678B (en) | Multi-temporal SAR image change detection method based on deep learning | |
CN106650830A (en) | Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method | |
CN103996047B (en) | Hyperspectral image classification method based on squeezed spectra clustering ensemble | |
CN105069478B (en) | High-spectrum remote-sensing terrain classification method based on super-pixel tensor sparse coding | |
CN104881671B (en) | A kind of high score remote sensing image Local Feature Extraction based on 2D Gabor | |
Zou et al. | Chronological classification of ancient paintings using appearance and shape features | |
CN104778476B (en) | A kind of image classification method | |
CN109190643A (en) | Based on the recognition methods of convolutional neural networks Chinese medicine and electronic equipment | |
CN102054178A (en) | Chinese painting image identifying method based on local semantic concept | |
CN108932455B (en) | Remote sensing image scene recognition method and device | |
CN107292336A (en) | A kind of Classification of Polarimetric SAR Image method based on DCGAN | |
CN105303195A (en) | Bag-of-word image classification method | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN107092884A (en) | Rapid coarse-fine cascade pedestrian detection method | |
CN105654122B (en) | Based on the matched spatial pyramid object identification method of kernel function | |
CN105574545B (en) | The semantic cutting method of street environment image various visual angles and device | |
CN109635811A (en) | The image analysis method of spatial plant | |
Banerji et al. | A new bag of words LBP (BoWL) descriptor for scene image classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20180921 |