CN108564111A - Image classification method based on neighborhood rough set feature selection - Google Patents
- Publication number: CN108564111A
- Application number: CN201810254854.0A
- Authority: CN (China)
- Prior art keywords: image, feature, neighborhood, features, point
- Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Classifications
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/232—Non-hierarchical techniques
- G06V10/40—Extraction of image or video features
Abstract
An image classification method based on neighborhood rough set feature selection. Within a spatial pyramid model, image features are first extracted with SURF and HOG, which together provide scale invariance and describe the appearance and shape of local objects in the image, and a feature selection algorithm based on neighborhood rough sets removes the redundant features from the combined SURF and HOG feature set. Second, a visual dictionary is generated from the reduced feature set with the k-means clustering algorithm. Then, for each scale of the spatial pyramid, the occurrences of each visual word are counted, the resulting histograms are concatenated, and the features of different scales are given corresponding weights. Finally, the weighted histogram is fed into a linear SVM classifier for training and prediction. The invention overcomes the defects of existing image classification methods in which single-feature extraction easily loses image information, while multi-feature fusion generates a large number of redundant features and lowers classification accuracy.
Description
Technical Field
The invention relates to an image classification method in the field of computer vision.
Background Art
Image classification is an important research topic in computer vision. Its goal is to give computers the ability, comparable to a human's, to recognize complex visual images quickly and accurately. With the rapid development of artificial intelligence and pattern recognition, image classification is widely applied in image understanding, object recognition, and image retrieval.
The spatial pyramid model (Spatial Pyramid Matching, SPM) is currently one of the main image classification methods. Building on the bag-of-words model (BOW), it partitions the image at multiple levels to add spatial position and shape information. An SPM-based image classification method comprises four main parts: feature extraction, visual dictionary generation, spatial pyramid construction, and generation of the image's visual description by statistically merging histograms. Feature extraction and feature selection are key prerequisites for image classification. Although the traditional SPM has achieved great breakthroughs in image classification, its classification performance remains limited because the extracted features cannot effectively express the information in the image. Feature extraction methods based on local descriptors are currently widely used to construct visual dictionaries. The SIFT descriptor is stable under translation, rotation, and uneven illumination. The HOG descriptor represents the shape of objects in an image well. SURF features, like SIFT, are scale- and rotation-invariant, yet require far less computation and computing time than SIFT while remaining robust.
Among image feature extraction methods, a single feature describes an image one-sidedly and cannot express its content well. Multiple features can describe image information more comprehensively, and many existing methods express image content by combining several feature descriptors. However, while multiple features describe an image more fully, they also introduce a large amount of redundant information; removing unimportant or even redundant features without harming the expression of the image content is therefore crucial in image classification. Neighborhood rough set feature selection is effective at eliminating redundant features from continuous-valued knowledge representation systems. Applying feature selection after feature extraction yields a more effective feature subset: the image information is simplified while the essential information the image expresses is preserved.

In current image classification, feature extraction still faces a dilemma: a single feature cannot fully describe an image, yet multi-feature extraction introduces a large amount of redundant information and requires generating a very large visual dictionary during clustering, which both lowers classification accuracy and makes classification time-consuming. Feature selection after image feature extraction is therefore also crucial in image classification.
Summary of the Invention
The technical problem to be solved by the invention is to provide an image classification method based on neighborhood rough set feature selection that exploits the complementary strengths of HOG and SURF to extract image features and uses a neighborhood rough set feature selection algorithm to remove redundant features from the image, overcoming the defect of existing image classification techniques in which multi-feature extraction introduces a large amount of redundant information, lowering classification accuracy and increasing classification time.

To solve the above technical problem, the invention provides an image classification method based on neighborhood rough set feature selection, which comprises the following steps:
(1) Extract features from the training-set and test-set sample images separately;
(2) Construct an image feature expression system;
(3) Remove the redundant features in the image knowledge expression system with a feature selection algorithm based on neighborhood rough sets, obtaining a new image feature set;
(4) Cluster the features to generate a visual feature dictionary;
(5) Build a spatial pyramid model and, from the generated visual feature dictionary, compute and merge the weighted visual-feature histogram of the spatial pyramid of each training-set and test-set image;
(6) Train a linear SVM classifier to classify the test images.
The features extracted from the training-set and test-set sample images are specifically the SURF and HOG features of each image.

SURF is a local feature descriptor. The SURF algorithm is similar to SIFT and likewise scale-invariant, but it is faster to compute and more robust than SIFT.

The SURF feature extraction steps are:
Step 1. Construct a pyramid scale space with box filters.

To find feature points across different scales, SURF introduces the concept of box filters to build the image's scale space, which is divided into octaves and layers. Each octave contains several layers, and each layer is the response of the original image to a box filter of a different size. The box filter of the first layer of the lowest octave is 9×9, corresponding to a Gaussian scale of σ = 1.2; the filter size then grows to 15×15, 21×21, and 27×27.
Step 2. Build a fast feature point detector from the Hessian matrix to obtain stable extremum points.

The SURF detector is based on the Hessian matrix. For a point $P=(x,y)$ in image $I$, the Hessian matrix of that point at scale $\sigma$ is

$$H(P,\sigma)=\begin{bmatrix}L_{xx}(P,\sigma)&L_{xy}(P,\sigma)\\L_{yx}(P,\sigma)&L_{yy}(P,\sigma)\end{bmatrix}$$

where $L_{xx}(P,\sigma)$ is the convolution of the second-order Gaussian derivative $\partial^2 g(\sigma)/\partial x^2$ with image $I$ at point $P$, and $L_{xy}(P,\sigma)$, $L_{yx}(P,\sigma)$, and $L_{yy}(P,\sigma)$ are the corresponding convolutions of $\partial^2 g(\sigma)/\partial x\,\partial y$, $\partial^2 g(\sigma)/\partial y\,\partial x$, and $\partial^2 g(\sigma)/\partial y^2$.
The determinant of the approximated matrix is computed as $\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(\omega D_{xy})^2$, where $D_{xx}$, $D_{yy}$, and $D_{xy}$ are the box-filter approximations of $L_{xx}$, $L_{yy}$, and $L_{xy}$, and $\omega$ is a weight, typically 0.9.

The determinant value represents the blob response at $X=(x,y,\sigma)$, and feature points are located in space and scale through this function. To obtain them, non-maximum suppression is applied: each candidate point is compared with its 26 neighbors in the current and adjacent scales to check whether it is a local maximum.
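For illustration only, the blob response of Step 2 can be computed directly from the determinant formula; the following sketch assumes Dxx, Dyy, and Dxy are precomputed box-filter response arrays, an assumption about the surrounding pipeline rather than part of the invention:

```python
import numpy as np

def hessian_response(Dxx, Dyy, Dxy, w=0.9):
    # det(H_approx) = Dxx*Dyy - (w*Dxy)^2, with the weight w = 0.9 given above
    return Dxx * Dyy - (w * Dxy) ** 2
```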
Step 3. Assign each feature point a dominant orientation.

To make the feature points rotation-invariant, each is assigned a dominant orientation. Centered on the feature point, the Haar wavelet responses in the x and y directions are computed within a circular region of radius 6s (s is the scale of the layer containing the feature point). A 60° sector is swept around the whole circle; at each position the wavelet response vectors inside the sector are summed, and the direction of the largest summed vector is taken as the dominant orientation.

Step 4. To construct the feature vector, take a square window of side 20s around the feature point, oriented along the point's dominant orientation. Divide the square into 4×4 sub-regions and accumulate the x- and y-direction Haar wavelet responses of 25 sample points in each sub-region, yielding a 64-dimensional feature description vector.
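As a concrete illustration of these four steps, the following is a minimal sketch of SURF extraction using OpenCV's contrib module (SURF is patented, so opencv-contrib-python must be built with nonfree support); the file path is a placeholder and the Hessian threshold of 400 is an assumed working value, not a parameter specified by the invention:

```python
import cv2

def extract_surf_descriptors(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # hessianThreshold prunes weak det(H_approx) blob responses; 400 is assumed
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: (num_points, 64) float32

keypoints, surf_desc = extract_surf_descriptors("example.jpg")  # placeholder path
```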
The HOG feature characterizes the appearance and shape of local objects in an image through overlapping local contrast normalization and is among the best features for describing edge and shape information.

HOG feature extraction comprises the following steps:

Step 1. Normalize the color image to eliminate the influence of illumination.

Step 2. Divide the image into cells of equal size and compute the horizontal and vertical gradients of each pixel (x, y) in each cell:
$$G_x(x,y)=G(x+1,y)-G(x-1,y)$$
$$G_y(x,y)=G(x,y+1)-G(x,y-1)$$

The gradient magnitude and orientation at that pixel are then

$$G(x,y)=\sqrt{G_x(x,y)^2+G_y(x,y)^2},\qquad\theta(x,y)=\arctan\frac{G_y(x,y)}{G_x(x,y)}$$

Step 3. Concatenate all the features of the whole image to obtain its HOG feature. The HOG descriptor dimension is 36 (the HOG parameters in this invention are: cell size 8×8 pixels, 2×2 cells per block, and a 9-bin histogram of the gradient information of each cell).
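A minimal sketch of this HOG computation with scikit-image, using the parameters stated above (8×8-pixel cells, 2×2-cell blocks, 9 orientation bins); the block normalization scheme and file path are assumptions:

```python
from skimage import color, io
from skimage.feature import hog

image = color.rgb2gray(io.imread("example.jpg"))  # placeholder path
# 2x2 cells per block with 9 bins yields the 36-dimensional block descriptor
hog_vector = hog(image,
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2),
                 block_norm="L2-Hys",  # assumed normalization choice
                 feature_vector=True)
```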
The constructed image feature expression system is:

Construct the image feature expression system NTD = <U, C, D>, where $U=\{u_1,u_2,\ldots,u_m\}$ and $u_i=[X_1,X_2,\ldots,X_n]$ is the set of feature vectors of the i-th image; $C=\{c_1,c_2,\ldots,c_n\}$ is the set of condition attributes of the image knowledge expression system, where $c_l$ denotes the l-th feature vector of the image, whose dimension is the length of the image feature descriptor. The image category D serves as the decision attribute.
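A hypothetical sketch of how NTD = <U, C, D> could be laid out as arrays; the helper name and the use of NumPy are illustrative assumptions, not part of the patent:

```python
import numpy as np

def build_feature_table(per_image_features, labels):
    """per_image_features: one fixed-length (SURF + HOG) vector per image."""
    U = np.vstack(per_image_features)  # m images x n condition attributes
    C = np.arange(U.shape[1])          # indices c_1 ... c_n of the condition attributes
    D = np.asarray(labels)             # decision attribute: the image categories
    return U, C, D
```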
The relevant definitions of the neighborhood rough set feature selection algorithm are:

Definition 1. In the image feature expression system, the δ-neighborhood of image $u_i$ is $\delta(u_i)=\{u\mid\Delta(u,u_i)\le\delta\}$, where $\Delta$ is a distance function; the distance used here is the Chebyshev distance (infinity norm):

$$\Delta(u,u_i)=\max_{1\le k\le n}\lvert f(u,c_k)-f(u_i,c_k)\rvert$$

where $f(u,c_k)$ is the value of image $u$ on feature $c_k$.

For the same neighborhood radius, the Chebyshev distance (infinity norm) yields the largest neighborhood range and is simple to compute.
Definition 2. The consistent neighborhood of an image sample $u$ is the set of images in its neighborhood that belong to the same category, i.e. $\delta_C(u)\cap\delta_D(u)$; conversely, the inconsistent neighborhood of $u$ is the set of images in its neighborhood that belong to a different category, i.e. $\delta_C(u)-\delta_D(u)$.

Definition 3. The information entropy and conditional entropy of the image feature expression system NTD = <U, C, D> are defined as follows; they express the degree of uncertainty of the image information:
Information entropy:
$$E(C)=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{|\delta_C(u_i)|}{m}\qquad(2)$$
Conditional entropy:
$$E(D\mid C)=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{|\delta_C(u_i)\cap\delta_D(u_i)|}{|\delta_C(u_i)|}\qquad(3)$$
The conditional entropy of the image feature expression system is related to the inconsistent neighborhoods of its image samples. The more inconsistent neighbors an image has under a given feature, the larger the conditional entropy and the less relevant that feature is to the image category; conversely, a feature with smaller conditional entropy is more relevant to the image category. Conditional entropy thus reflects the degree of correlation between features and image categories.

The neighborhood rough set feature selection comprises the following steps (a code sketch follows the steps):
Step 1: Compute the conditional entropy $E(D\mid C)$ of the image knowledge expression system NTD = <U, C, D> from formula (3), and initialize the reduction set red = ∅, so that initially every candidate feature $X_i\in C-\mathrm{red}$;

Step 2: Compute the conditional entropy $E(D\mid\mathrm{red}\cup\{X_i\})$ for each candidate from formula (3), find the feature $X_i$ with the smallest conditional entropy, and add it to red;

Step 3: Check whether $E(D\mid\mathrm{red})$ equals $E(D\mid C)$; if so, output the feature reduction set red; otherwise return to Step 2.
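The following sketch implements this greedy reduction loop under the Chebyshev distance of Definition 1 and the conditional entropy form assumed in Definition 3; the radius δ = 0.15 and all function names are illustrative assumptions:

```python
import numpy as np

def chebyshev_neighbors(U, cols, delta):
    # nbr[i, j] is True when u_j lies in the delta-neighborhood of u_i
    # under the infinity norm restricted to the feature subset `cols`
    X = U[:, cols]
    dist = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
    return dist <= delta

def conditional_entropy(U, D, cols, delta):
    # E(D|cols): minus the mean log of the consistent fraction
    # |delta_C(u) ∩ delta_D(u)| / |delta_C(u)| of each neighborhood
    nbr = chebyshev_neighbors(U, cols, delta)
    same_class = D[:, None] == D[None, :]
    consistent = (nbr & same_class).sum(axis=1)
    return -np.mean(np.log(consistent / nbr.sum(axis=1)))

def neighborhood_reduct(U, D, delta=0.15):
    all_cols = list(range(U.shape[1]))
    target = conditional_entropy(U, D, all_cols, delta)  # Step 1: E(D|C)
    red = []
    while True:
        rest = [c for c in all_cols if c not in red]
        # Step 2: add the feature whose inclusion minimizes conditional entropy
        best = min(rest, key=lambda c: conditional_entropy(U, D, red + [c], delta))
        red.append(best)
        # Step 3: stop once E(D|red) reaches E(D|C)
        if np.isclose(conditional_entropy(U, D, red, delta), target):
            return red
```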
Clustering to generate the visual feature dictionary comprises:

The features obtained through feature selection are treated as "visual words" and clustered with the k-means clustering algorithm to obtain a "bag of visual words" of size M.
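A minimal sketch of this dictionary step with scikit-learn; the dictionary size M = 200 and the fixed seed are assumed values:

```python
from sklearn.cluster import KMeans

def build_visual_dictionary(reduced_descriptors, M=200, seed=0):
    # the M cluster centres are the visual words of the dictionary
    km = KMeans(n_clusters=M, random_state=seed, n_init=10)
    km.fit(reduced_descriptors)
    return km  # km.cluster_centers_ is the codebook; km.predict() assigns words
```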
Steps to build the spatial pyramid model (see the sketch after this list):

1) Divide the image into three levels: level 0 treats the whole image as one region, level 1 divides the image evenly into 4 regions, and level 2 divides it evenly into 16 regions. The levels are given different weights, namely [1/4, 1/4, 1/2];

2) For each level of the pyramid, count, in left-to-right and top-to-bottom order, the frequency with which the "visual words" of each region occur in the "bag of visual words", obtaining a histogram representation for every region of every level;

3) Weight the region histograms of the different levels by the weights in 1) and concatenate them to obtain the final image representation.
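A sketch of the three-level weighted pyramid histogram described above; the argument layout (keypoint coordinates plus the visual-word index already assigned to each keypoint) is an assumption about how the earlier steps hand over their results:

```python
import numpy as np

def spm_histogram(xy, word_ids, width, height, M):
    """xy: (num_points, 2) float keypoint coordinates; word_ids: int array,
    one dictionary index per keypoint."""
    weights = [0.25, 0.25, 0.5]  # levels 0, 1, 2
    parts = []
    for level, w in enumerate(weights):
        cells = 2 ** level       # 1x1, 2x2, 4x4 regions per level
        col = np.minimum((xy[:, 0] * cells / width).astype(int), cells - 1)
        row = np.minimum((xy[:, 1] * cells / height).astype(int), cells - 1)
        for r in range(cells):      # top to bottom
            for c in range(cells):  # left to right
                mask = (row == r) & (col == c)
                parts.append(w * np.bincount(word_ids[mask], minlength=M))
    return np.concatenate(parts)    # length M * (1 + 4 + 16)
```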
Training the linear SVM classifier to classify the test images: randomly select training-set and test-set images; process the training-set images through feature extraction, feature selection, visual dictionary generation, and image pyramid construction to obtain their visual feature histograms, and feed these into a linear SVM to obtain the trained classifier; then extract features from the test-set images, match them against the visual dictionary of the training set to build the image spatial pyramid model, obtain the histogram representation of each test image, input it into the trained linear SVM classifier, and output the category of the test image.
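A hypothetical end-to-end training and prediction step with scikit-learn, assuming train_hists and test_hists hold the stacked weighted pyramid histograms produced above and train_labels the known training categories:

```python
from sklearn.svm import LinearSVC

clf = LinearSVC(C=1.0)               # C = 1.0 is an assumed regularization value
clf.fit(train_hists, train_labels)   # rows: one weighted pyramid histogram per image
predicted = clf.predict(test_hists)  # categories of the test-set images
```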
Compared with the prior art, the outstanding substantive features and advantages of the invention are as follows:

(1) The invention combines the complementary SURF and HOG features, so the extracted features not only describe the appearance and shape of local objects in the image but are also scale-invariant, fast to compute, and robust.

(2) The invention constructs an image feature expression system and uses a neighborhood rough set feature selection algorithm to remove the large amount of redundant information introduced by combining SURF and HOG, improving the accuracy of image classification while reducing classification time.
Brief Description of the Drawings

Fig. 1 is a flow chart of the invention.

Detailed Description of Embodiments

The specific embodiments of the invention are described in further detail below with reference to the accompanying drawing and examples:
The invention provides an image classification method based on neighborhood rough set feature selection, comprising the following steps:

1) Extract features from the training-set and test-set sample images separately;
2) Construct an image feature expression system;
3) Remove the redundant features in the image knowledge expression system with a feature selection algorithm based on neighborhood rough sets, obtaining a new image feature set;
4) Cluster the features to generate a visual feature dictionary;
5) Build a spatial pyramid model and, from the generated visual feature dictionary, compute and merge the weighted visual-feature histogram of the spatial pyramid of each training-set and test-set image;
6) Train a linear SVM classifier to classify the test images.
The features extracted in step 1) are specifically the SURF and HOG features of the images.

SURF is a local feature descriptor. The SURF algorithm is similar to SIFT and likewise scale-invariant, but it is faster to compute and more robust than SIFT. The four basic steps of SURF feature extraction are:

Step 1. Construct a pyramid scale space with box filters.

To find feature points across different scales, SURF introduces the concept of box filters to build the image's scale space, which is divided into octaves and layers. Each octave contains several layers, and each layer is the response of the original image to a box filter of a different size. The box filter of the first layer of the lowest octave is 9×9, corresponding to a Gaussian scale of σ = 1.2; the filter size then grows to 15×15, 21×21, and 27×27.
Step 2. Build a fast feature point detector from the Hessian matrix to obtain stable extremum points.

The SURF detector is based on the Hessian matrix. For a point $P=(x,y)$ in image $I$, the Hessian matrix of that point at scale $\sigma$ is

$$H(P,\sigma)=\begin{bmatrix}L_{xx}(P,\sigma)&L_{xy}(P,\sigma)\\L_{yx}(P,\sigma)&L_{yy}(P,\sigma)\end{bmatrix}$$

where $L_{xx}(P,\sigma)$ is the convolution of the second-order Gaussian derivative $\partial^2 g(\sigma)/\partial x^2$ with image $I$ at point $P$, and $L_{xy}(P,\sigma)$, $L_{yx}(P,\sigma)$, and $L_{yy}(P,\sigma)$ are the corresponding convolutions of $\partial^2 g(\sigma)/\partial x\,\partial y$, $\partial^2 g(\sigma)/\partial y\,\partial x$, and $\partial^2 g(\sigma)/\partial y^2$.
The determinant of the approximated matrix is computed as $\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(\omega D_{xy})^2$, where $D_{xx}$, $D_{yy}$, and $D_{xy}$ are the box-filter approximations of $L_{xx}$, $L_{yy}$, and $L_{xy}$, and $\omega$ is a weight, typically 0.9.

The determinant value represents the blob response at $X=(x,y,\sigma)$, and feature points are located in space and scale through this function. To obtain them, non-maximum suppression is applied: each candidate point is compared with its 26 neighbors in the current and adjacent scales to check whether it is a local maximum.
Step 3. Assign each feature point a dominant orientation.

To make the feature points rotation-invariant, each is assigned a dominant orientation. Centered on the feature point, the Haar wavelet responses in the x and y directions are computed within a circular region of radius 6s (s is the scale of the layer containing the feature point). A 60° sector is swept around the whole circle; at each position the wavelet response vectors inside the sector are summed, and the direction of the largest summed vector is taken as the dominant orientation.

Step 4. To construct the feature vector, take a square window of side 20s around the feature point, oriented along the point's dominant orientation. Divide the square into 4×4 sub-regions and accumulate the x- and y-direction Haar wavelet responses of 25 sample points in each sub-region, yielding a 64-dimensional feature description vector.
The HOG feature characterizes the appearance and shape of local objects in an image through overlapping local contrast normalization and is among the best features for describing edge and shape information. HOG feature extraction can be divided into the following steps:

Step 1. Normalize the color image to eliminate the influence of illumination.

Step 2. Divide the image into cells of equal size and compute the horizontal and vertical gradients of each pixel (x, y) in each cell:
$$G_x(x,y)=G(x+1,y)-G(x-1,y)$$
$$G_y(x,y)=G(x,y+1)-G(x,y-1)$$

The gradient magnitude and orientation at that pixel are then

$$G(x,y)=\sqrt{G_x(x,y)^2+G_y(x,y)^2},\qquad\theta(x,y)=\arctan\frac{G_y(x,y)}{G_x(x,y)}$$

Step 3. Concatenate all the features of the whole image to obtain its HOG feature. The HOG descriptor dimension is 36 (the HOG parameters in this invention are: cell size 8×8 pixels, 2×2 cells per block, and a 9-bin histogram of the gradient information of each cell).
The image feature expression system constructed in step 2) is:

Construct the image feature expression system NTD = <U, C, D>, where $U=\{u_1,u_2,\ldots,u_m\}$ and $u_i=[X_1,X_2,\ldots,X_n]$ is the set of feature vectors of the i-th image; $C=\{c_1,c_2,\ldots,c_n\}$ is the set of condition attributes of the image knowledge expression system, where $c_l$ denotes the l-th feature vector of the image, whose dimension is the length of the image feature descriptor. The image category D serves as the decision attribute.
The relevant definitions of the neighborhood rough set feature selection algorithm in step 3) are:

Definition 1. In the image feature expression system, the δ-neighborhood of image $u_i$ is $\delta(u_i)=\{u\mid\Delta(u,u_i)\le\delta\}$, where $\Delta$ is a distance function; the distance used here is the Chebyshev distance (infinity norm):

$$\Delta(u,u_i)=\max_{1\le k\le n}\lvert f(u,c_k)-f(u_i,c_k)\rvert$$

For the same neighborhood radius, the Chebyshev distance (infinity norm) yields the largest neighborhood range and is simple to compute.

Definition 2. The consistent neighborhood of an image sample $u$ is the set of images in its neighborhood that belong to the same category, i.e. $\delta_C(u)\cap\delta_D(u)$; conversely, the inconsistent neighborhood of $u$ is the set of images in its neighborhood that belong to a different category, i.e. $\delta_C(u)-\delta_D(u)$.
Definition 3. Reference [11] defines the information entropy and conditional entropy of NTD = <U, C, D>, which express the degree of uncertainty of the image information:
Information entropy:
$$E(C)=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{|\delta_C(u_i)|}{m}\qquad(2)$$
Conditional entropy:
$$E(D\mid C)=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{|\delta_C(u_i)\cap\delta_D(u_i)|}{|\delta_C(u_i)|}\qquad(3)$$
The conditional entropy of the image feature expression system is related to the inconsistent neighborhoods of its image samples. The more inconsistent neighbors an image has under a given feature, the larger the conditional entropy and the less relevant that feature is to the image category; conversely, a feature with smaller conditional entropy is more relevant to the image category. Conditional entropy thus reflects the degree of correlation between features and image categories.

The neighborhood rough set feature selection algorithm of step 3) comprises the following steps:

Step 1: Compute the conditional entropy $E(D\mid C)$ of the image knowledge expression system NTD = <U, C, D> from the conditional entropy formula (3), and initialize the reduction set red = ∅, so that initially every candidate feature $X_i\in C-\mathrm{red}$;

Step 2: Likewise compute the conditional entropy $E(D\mid\mathrm{red}\cup\{X_i\})$ for each candidate from formula (3), find the feature $X_i$ with the smallest conditional entropy, and add it to red;

Step 3: Check whether $E(D\mid\mathrm{red})$ equals $E(D\mid C)$; if so, output the feature reduction set red; otherwise return to Step 2.
Step 4) specifically comprises:

The features obtained through feature selection are treated as "visual words" and clustered with the k-means clustering algorithm to obtain a "visual dictionary" of size M.
The specific steps of step 5), based on the spatial pyramid model (SPM), are:

1) Divide the image into three levels: level 0 treats the whole image as one region, level 1 divides the image evenly into 4 regions, and level 2 divides it evenly into 16 regions. The levels are given different weights, namely [1/4, 1/4, 1/2];

2) For each level of the pyramid, count, in left-to-right and top-to-bottom order, the frequency with which the "visual words" of each region occur in the "bag of visual words", obtaining a histogram representation for every region of every level;

3) Weight the region histograms of the different levels by the weights in 1) and concatenate them to obtain the final image representation.
Step 6) specifically comprises:

1) Randomly select the training set and the test set, process the training-set images through the first five steps of claim 1 to obtain their histogram representations, and input these into a linear SVM to obtain the trained classifier;

2) Extract features from the test-set images, match them against the visual dictionary of the training-set images to build the image spatial pyramid model, obtain the histogram representation of each test-set image, input it into the trained linear SVM classifier, and output the category of the test-set image.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810254854.0A | 2018-03-26 | 2018-03-26 | Image classification method based on neighborhood rough set feature selection |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN108564111A | 2018-09-21 |
Family ID: 63533316

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810254854.0A (CN108564111A, Pending) | Image classification method based on neighborhood rough set feature selection | 2018-03-26 | 2018-03-26 |

Country: CN
Patent Citations (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150131899A1 * | 2013-11-13 | 2015-05-14 | Canon Kabushiki Kaisha | Devices, systems, and methods for learning a discriminant image representation |
| CN105389593A * | 2015-11-16 | 2016-03-09 | 上海交通大学 | Image object recognition method based on SURF |
| CN105550708A * | 2015-12-14 | 2016-05-04 | 北京工业大学 | Construction method of a visual bag-of-words model based on improved SURF features |
| CN105654035A * | 2015-12-21 | 2016-06-08 | 湖南拓视觉信息技术有限公司 | Three-dimensional face recognition method and data processing device applying it |
| CN106250919A * | 2016-07-25 | 2016-12-21 | 河海大学 | Scene image classification method based on multi-feature combined expression with a spatial pyramid model |
| CN106644484A * | 2016-09-14 | 2017-05-10 | 西安工业大学 | Turboprop engine rotor system fault diagnosis method combining EEMD and neighborhood rough sets |
| CN107368807A * | 2017-07-20 | 2017-11-21 | 东南大学 | Vehicle type classification method for surveillance video based on a visual bag-of-words model |
Non-Patent Citations (3)

| Title |
|---|
| Ayşegül Uçar et al., "Moving towards in object recognition with deep learning for autonomous driving applications", 2016 International Symposium on Innovations in Intelligent Systems and Applications (INISTA) |
| 吴修浩, "基于视频图像的车型识别系统设计与实现" (Design and Implementation of a Vehicle Type Recognition System Based on Video Images), China Master's Theses Full-text Database, Information Science and Technology |
| 续欣莹 et al., "信息观下基于不一致邻域矩阵的属性约简" (Attribute Reduction Based on the Inconsistent Neighborhood Matrix from the Information View), 控制与决策 (Control and Decision) |
Cited By (11)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109448038A | 2018-11-06 | 2019-03-08 | 哈尔滨工程大学 | Sediment sonar image feature extraction method based on DRLBP and random forest |
| WO2020150897A1 | 2019-01-22 | 2020-07-30 | 深圳大学 | Multi-target tracking method and apparatus for video targets, and storage medium |
| CN110738265A | 2019-10-18 | 2020-01-31 | 太原理工大学 | Improved ORB algorithm based on fusion of improved LBP and LNDP features |
| CN112163133A | 2020-09-25 | 2021-01-01 | 南通大学 | Breast cancer data classification method based on multi-granularity evidence neighborhood rough sets |
| CN112163133B | 2020-09-25 | 2021-10-08 | 南通大学 | Breast cancer data classification method based on multi-granularity evidence neighborhood rough sets |
| CN114387634A | 2020-10-20 | 2022-04-22 | 南京理工大学 | Elderly care demand identification method based on the SURF algorithm |
| CN112580659A | 2020-11-10 | 2021-03-30 | 湘潭大学 | Ore identification method based on machine vision |
| CN112598661A | 2020-12-29 | 2021-04-02 | 河北工业大学 | Ankle fracture and ligament injury diagnosis method based on machine learning |
| CN112598661B | 2020-12-29 | 2022-07-22 | 河北工业大学 | Ankle fracture and ligament injury diagnosis method based on machine learning |
| CN113112471A | 2021-04-09 | 2021-07-13 | 南京大学 | Target detection method based on RI-HOG features and a fast pyramid |
| CN113112471B | 2021-04-09 | 2023-12-29 | 南京大学 | Target detection method based on RI-HOG features and a fast pyramid |
Similar Documents

| Publication | Publication Date | Title |
|---|---|---|
| CN108564111A | | Image classification method based on neighborhood rough set feature selection |
| CN102622607A | 2012-08-01 | Remote sensing image classification method based on multi-feature fusion |
| CN107368807B | | Vehicle type classification method for surveillance video based on a visual bag-of-words model |
| CN110321963A | | Hyperspectral image classification method based on fused multi-scale, multi-dimensional spatial-spectral features |
| CN110348399B | | Hyperspectral intelligent classification method based on a prototype learning mechanism and a multi-dimensional residual network |
| CN108122008B | | SAR image recognition method based on sparse representation and multi-feature decision-level fusion |
| CN107480620B | | Automatic remote sensing image target recognition method based on heterogeneous feature fusion |
| CN111898621B | | Contour shape recognition method |
| Yuan et al. | | ACM: Adaptive cross-modal graph convolutional neural networks for RGB-D scene recognition |
| CN111080678B | | Multi-temporal SAR image change detection method based on deep learning |
| CN105528595A | | Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images |
| CN105488536A | | Agricultural pest image recognition method based on multi-feature deep learning technology |
| Zou et al. | | Chronological classification of ancient paintings using appearance and shape features |
| Xie et al. | | Combination of dominant color descriptor and Hu moments in consistent zone for content-based image retrieval |
| CN103440508B | | Remote sensing image target recognition method based on a visual bag-of-words model |
| Liao et al. | | Triplet-based deep similarity learning for person re-identification |
| CN103679192A | | Image scene type discrimination method based on covariance features |
| Varish | | A modified similarity measurement for image retrieval scheme using fusion of color, texture and shape moments |
| Salhi et al. | | Fast and efficient face recognition system using random forest and histograms of oriented gradients |
| CN105205135A | | 3D model retrieval method based on a topic model, and retrieval device thereof |
| CN108932518A | | Feature extraction and retrieval method for shoe-print images based on a visual bag-of-words model |
| Wu et al. | | Typical target detection in satellite images based on convolutional neural networks |
| Qian et al. | | Classification of rice seed variety using point cloud data combined with deep learning |
| Yuan et al. | | Few-shot scene classification with multi-attention DeepEMD network in remote sensing |
| CN109558803B | | SAR target recognition method based on convolutional neural networks and the NP criterion |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2018-09-21 |