CN114821181A - Image classification method - Google Patents

Image classification method

Info

Publication number
CN114821181A
CN114821181A (application CN202210493913.6A)
Authority
CN
China
Prior art keywords
matrix
points
data
similarity
anchor
Prior art date
Legal status
Pending
Application number
CN202210493913.6A
Other languages
Chinese (zh)
Inventor
王靖宇 (Wang Jingyu)
谢方园 (Xie Fangyuan)
聂飞平 (Nie Feiping)
李学龙 (Li Xuelong)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210493913.6A priority Critical patent/CN114821181A/en
Publication of CN114821181A publication Critical patent/CN114821181A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06F18/24 - Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image classification method. First, features are extracted from the original image data to obtain feature data, and representative anchor points are generated in the feature data with a K-means algorithm. Second, a correlation matrix between the anchor points and the data points is built with a probabilistic-neighbor method, from which a similarity matrix between the data points is obtained. Finally, a discrete class indicator matrix is obtained with a coordinate ascent method, according to the correspondence between the neighbor graph and the class indicator matrix. By selecting representative anchor points and building a bipartite graph between the anchor points and the sample points, the method constructs a neighbor graph between the sample points, which reduces the computation needed to mine neighbor relations between images. The label matrix of the clustering algorithm is solved directly by coordinate ascent, so the image classification result is obtained directly, which speeds up image classification.

Description

Image classification method
Technical Field
The invention belongs to the field of image classification and machine learning, and particularly relates to an image classification method.
Background
Clustering is one of the important research topics in machine learning and has been widely applied in pattern recognition, data analysis, image processing, and other fields. In image classification, features are first extracted from the image data; common image features include color, shape, texture, and edge features, as well as features obtained from neural networks. As acquisition devices develop, more and more data are collected, which makes data annotation difficult and costly. Clustering algorithms reveal the intrinsic properties and structure of data by learning from unlabeled data and dividing it into clusters, so that samples within a cluster are as similar as possible and samples in different clusters are as different as possible. Because no labels are needed, clustering is widely used for image classification. Spectral clustering is a typical clustering algorithm that consists mainly of two steps: first, a neighbor graph is built from the distances between data points, and a low-dimensional indicator matrix is obtained by eigendecomposition of the graph Laplacian; since this low-dimensional indicator matrix may contain negative values and cannot directly represent the sample classes, K-means clustering or spectral rotation must then be applied to it to obtain the final discrete class indicator matrix. However, the eigendecomposition has high computational complexity, and the extra K-means step on the low-dimensional indicator matrix may distort the true classes of the sample points while adding further computation, making the method hard to scale to large data.
To address these two problems, Ni Zhongyuan and Liu Jinglei (Fast spectral clustering based on graph filtering [J/OL]. Journal of Shanxi University (Natural Science Edition): 1-14 [2022-01-22]. DOI: 10.13451/j.sxu.ns.2021058) proposed a fast spectral clustering algorithm based on graph filtering, which avoids the eigendecomposition of the Laplacian matrix by generating pseudo eigenvectors and reduces the data scale by sampling to accelerate the subsequent K-means step. Although this improves the time complexity of spectral clustering and the optimization reduces errors, the main computational pipeline of spectral clustering is retained and only individual steps are optimized, and reducing the data size by sampling may change the original structure of the data.
Disclosure of Invention
Technical problem to be solved
The invention provides an image classification method that addresses the high computational complexity of existing unsupervised learning methods when applied to image classification tasks. It aims to solve two problems of the existing spectral clustering algorithm: the eigendecomposition is too expensive, and the class indicator matrix cannot be obtained directly, so the algorithm cannot be well applied to image classification.
Technical scheme
An image classification method is characterized by comprising the following steps:
s1: inputting the images to be classified;
s2: extracting image features to obtain feature data X = [x_1, x_2, ..., x_n]^T ∈ R^{n×d}, where x_i is the ith data point, d is the feature dimension, and n is the number of image data points;
s3: selecting m anchor points with a balanced K-means method with a variable number of anchor points, obtaining the anchor point set U;
s4: constructing a similarity matrix A between data points according to the anchor points;
s5: constructing a model according to the similarity matrix A and the label relation;
s6: solving the model with a coordinate ascent method to obtain the labels of the feature data, completing the image classification.
The further technical scheme of the invention is as follows: the step S4 is divided into the following two steps:
s41: constructing a correlation relation matrix B between the data points and the anchor points by adopting a similarity self-learning method;
s42: and calculating to obtain a sample neighbor graph A and a similarity matrix A according to the correlation matrix B.
The further technical scheme of the invention is as follows: the method for constructing the correlation matrix between the data points and the anchor points by adopting the similarity self-learning method comprises the following specific steps:
obtaining a correlation matrix B between the data point X and the anchor point U by a similarity self-learning method,
Figure BDA0003622641490000022
a correlation relation matrix between the sample point and the anchor point is obtained; for the ith sample point, the model is as follows:
Figure BDA0003622641490000023
wherein, b ij Is the ith row and jth column element in B,
Figure BDA0003622641490000031
in row i of B, γ is a regularization coefficient. Is provided with
Figure BDA0003622641490000032
Is the euclidean distance between the ith sample point and the jth anchor point,
Figure BDA0003622641490000033
is a vector whose j-th element is k ij Then the above optimization problem can be written in the form of a vector as follows
Figure BDA0003622641490000034
Let b be i In which there are l (l is less than or equal to m) non-zero elements, for k i Reordering is performed, assuming after the ordering
Figure BDA0003622641490000035
Corresponding to b i Is composed of
Figure BDA0003622641490000036
Then
Figure BDA0003622641490000037
Is expressed as
Figure BDA00036226414900000313
In a further technical scheme of the invention, the sample neighbor graph, i.e. the similarity matrix A, is computed from the correlation matrix B as follows:

After B is obtained, the similarity between the sample points is constructed from the similarities between the sample points and the anchor points:

    A = B Δ^{-1} B^T

where Δ ∈ R^{m×m} is a diagonal matrix whose jth diagonal element is Δ_jj = Σ_{i=1}^{n} b_ij. Let B̂ = B Δ^{-1/2}; then A = B̂ B̂^T.
A computer system, comprising one or more processors and a computer-readable storage medium storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above method.
A computer-readable storage medium having stored thereon computer-executable instructions for performing the above-described method when executed.
A computer program comprising computer executable instructions which when executed perform the method described above.
Advantageous effects
First, features are extracted from the original image data to obtain the feature data of the original images, and representative anchor points are generated in the feature data with the balanced K-means method with a variable number of anchor points. Second, a correlation matrix between the anchor points and the data points is built with a probabilistic-neighbor method, and a similarity matrix between the data points is obtained from it. Finally, a discrete class indicator matrix is obtained with a coordinate ascent method according to the correspondence between the neighbor graph and the class indicator matrix.
Compared with the prior art, the invention has the following beneficial effects:
(1) By selecting representative anchor points and building a bipartite graph between the anchor points and the sample points, a neighbor graph between the sample points is constructed, which reduces the computation needed to mine neighbor relations between images.
(2) The label matrix of the clustering algorithm is solved directly with a coordinate ascent method, so the image classification result is obtained directly, which speeds up image classification.
(3) The computational complexity of the method is linear in the number of samples, so it is suitable for classifying large-scale image data.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention relates to an image classification method, a basic flow chart of which is shown in figure 1, and the method comprises the following specific steps:
step 1: generating representative anchor points
Assume that the feature data corresponding to the images are X = [x_1, x_2, ..., x_n]^T ∈ R^{n×d}, where x_i is the ith data point, d is the feature dimension, and n is the number of image data points. The balanced K-means method with a variable number of anchor points generates m representative anchor points from the original n data points, giving the anchor matrix U = [u_1, u_2, ..., u_m]^T ∈ R^{m×d}, where u_i is the ith anchor point and m is the number of anchor points. The traditional balanced K-means algorithm keeps bisecting the data with 2-means; after q splits the number of anchor points is 2^q, so the anchor number can only be a power of 2. The balanced K-means method with a variable number of anchor points can instead select any number of anchors, with the following specific steps. Suppose m anchor points are needed and m cannot be written in the form 2^q. First compute q_min and q_max, which satisfy 2^{q_min} < m < 2^{q_max}. Then select 2^{q_min} clusters, sort the clusters by variance, and bisect the m − 2^{q_min} clusters with the largest variance, yielding m anchor points.
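As an illustration, the anchor-generation step above can be sketched as follows. This is a minimal sketch, not the patent's implementation: `two_means` is a small Lloyd-style 2-means written for the example, and using the flattened per-cluster variance as the splitting criterion is an assumption (the patent only says clusters are sorted by variance).

```python
import numpy as np

def two_means(X, iters=20, seed=0):
    # Minimal Lloyd's algorithm with k=2; falls back to an alternating
    # split if the two centers collapse onto each other.
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), 2, replace=False)].astype(float)
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
        if lab.min() == lab.max():          # degenerate: one side empty
            lab = np.arange(len(X)) % 2
            break
        for j in (0, 1):
            c[j] = X[lab == j].mean(0)
    return lab

def balanced_anchors(X, m):
    # Bisect until 2^q_min clusters exist, then split the m - 2^q_min
    # highest-variance clusters once more, giving exactly m clusters.
    clusters = [X]
    q_min = int(np.floor(np.log2(m)))
    while len(clusters) < 2 ** q_min:
        clusters = [part for C in clusters for lab in [two_means(C)]
                    for part in (C[lab == 0], C[lab == 1])]
    extra = m - len(clusters)
    # sort by flattened variance, largest first (an illustrative proxy)
    order = sorted(range(len(clusters)), key=lambda i: -clusters[i].var())
    final = []
    for rank, i in enumerate(order):
        if rank < extra and len(clusters[i]) >= 2:
            lab = two_means(clusters[i])
            final += [clusters[i][lab == 0], clusters[i][lab == 1]]
        else:
            final.append(clusters[i])
    return np.array([C.mean(0) for C in final])   # anchors = cluster means
```

With m = 6 this first builds 2^2 = 4 clusters and then bisects the 2 highest-variance ones, matching the counting argument above.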
Step 2: constructing an adaptive neighbor graph from representative anchor points
Step 2.1: constructing a correlation relation matrix between representative anchor points and data points
The correlation matrix B ∈ R^{n×m} between the data points X and the anchor points U is obtained by a similarity self-learning method; B is the correlation matrix between the sample points and the anchor points. For the ith sample point, the model is

    min_{b_i^T 1 = 1, b_i ≥ 0} Σ_{j=1}^{m} ( ||x_i − u_j||_2^2 b_ij + γ b_ij^2 )

where b_ij is the element in row i and column j of B, b_i ∈ R^{m} is the ith row of B, and γ is a regularization coefficient. Let k_ij = ||x_i − u_j||_2^2 be the squared Euclidean distance between the ith sample point and the jth anchor point, and let k_i ∈ R^{m} be the vector whose jth element is k_ij. The above optimization problem can then be written in vector form as

    min_{b_i^T 1 = 1, b_i ≥ 0} || b_i + k_i / (2γ) ||_2^2

Suppose b_i has l (l ≤ m) non-zero elements. Reorder k_i in ascending order, so that after ordering k_{i1} ≤ k_{i2} ≤ ... ≤ k_{im}, with b_i reordered correspondingly. The optimal solution is then expressed as

    b_ij = ( k_{i,l+1} − k_{ij} ) / ( l·k_{i,l+1} − Σ_{h=1}^{l} k_{ih} )   for j ≤ l,   and   b_ij = 0 otherwise.
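A minimal sketch of this construction, assuming the closed-form l-nearest-anchor solution above (the function name and the choice l = 5 are illustrative, not from the patent):

```python
import numpy as np

def anchor_graph(X, U, l=5):
    # Row b_i keeps the l nearest anchors of x_i with the closed-form
    # weights b_ij = (k_{i,l+1} - k_ij) / (l*k_{i,l+1} - sum_{h<=l} k_ih).
    n, m = X.shape[0], U.shape[0]
    K = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)  # squared distances
    idx = np.argsort(K, axis=1)                         # nearest anchors first
    B = np.zeros((n, m))
    for i in range(n):
        nn = idx[i, :l + 1]            # l neighbours plus the (l+1)-th
        k = K[i, nn]
        denom = l * k[l] - k[:l].sum()
        B[i, nn[:l]] = (k[l] - k[:l]) / max(denom, 1e-12)
    return B
```

Each row of the returned B is non-negative, sums to 1, and has exactly l non-zero entries, matching the constraints b_i^T 1 = 1, b_i ≥ 0.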
Step 2.2: constructing similarity matrix between samples
After B is obtained, the similarity between the sample points is constructed from the similarities between the sample points and the anchor points:

    A = B Δ^{-1} B^T

where Δ ∈ R^{m×m} is a diagonal matrix whose jth diagonal element is Δ_jj = Σ_{i=1}^{n} b_ij. For a more concise expression of A, let B̂ = B Δ^{-1/2}; then A = B̂ B̂^T.
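A direct sketch of this step (the function name is illustrative):

```python
import numpy as np

def similarity_from_anchors(B):
    # A = B @ diag(1/delta) @ B.T with delta_j = sum_i b_ij, computed
    # via B_hat = B @ diag(1/sqrt(delta)) so that A = B_hat @ B_hat.T.
    delta = B.sum(axis=0)
    B_hat = B / np.sqrt(np.maximum(delta, 1e-12))
    return B_hat @ B_hat.T
```

Because each row of B sums to 1, the resulting A is symmetric and each of its rows also sums to 1.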
Step 3: solving the optimization problem
The invention solves for the label matrix by minimizing the distance between A and F F^T. Considering that the two matrices may differ in scale, a matrix S is added to balance the difference, so the general optimization problem is

    min_{F ∈ Ind, S} || A − F S F^T ||_F^2

where F ∈ Ind means that F is a discrete class indicator matrix (each row is one-hot), and S can be a scalar, a diagonal matrix, or a symmetric matrix. The solution of the problem is discussed below in three cases.
Case one: when S = s is a scalar
The optimization problem can be transformed as follows:

    min_{F ∈ Ind, s} || A − s F F^T ||_F^2

The objective contains the two variables F and s. Since s is unconstrained, differentiate with respect to s directly: dropping the constant term ||A||_F^2 and maximizing J_1 = 2 s Tr(F^T A F) − s^2 Tr(F F^T F F^T) gives

    s = Tr(F^T A F) / Tr(F F^T F F^T)

Substituting this expression for s back into the original optimization problem, it becomes

    max_{F ∈ Ind} Tr(F^T A F)^2 / Tr(F F^T F F^T)

Let F be the matrix obtained in the previous iteration, with lth column f_l and element f_il in row i and column l, and let G be the gain matrix whose element g_il is the change of the objective when the 1 of the ith row is placed in column l. When the ith row of F is updated, the optimal row is

    f_i = < l = argmax_l g_il >

where < > indicates that the value is 1 if the condition inside is true and 0 if it is false.
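A brute-force sketch of this coordinate-ascent update for the scalar case. For clarity it recomputes the full objective for every candidate class instead of using the incremental gains g_il; the function and parameter names are illustrative:

```python
import numpy as np

def coordinate_ascent_labels(A, c, y0, sweeps=20):
    # Maximise J(F) = Tr(F^T A F)^2 / Tr(F F^T F F^T) over one-hot F
    # by moving one sample at a time to its best class.
    y = np.array(y0, dtype=int)
    n = A.shape[0]

    def objective(y):
        F = np.eye(c)[y]
        num = np.trace(F.T @ A @ F)
        den = np.trace(F.T @ F @ F.T @ F)   # = sum of squared class sizes
        return num * num / den

    for _ in range(sweeps):
        changed = False
        for i in range(n):
            old = y[i]
            vals = []
            for j in range(c):              # try every class for sample i
                y[i] = j
                vals.append(objective(y))
            y[i] = int(np.argmax(vals))
            changed |= (y[i] != old)
        if not changed:                     # a full sweep with no moves
            break
    return y
```

On a similarity matrix with two clear blocks, a single mislabelled sample is moved back to its block in one sweep, since that move maximises the objective.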
Case two: when S is a diagonal matrix
When S is a diagonal matrix, the original problem is

    min_{F ∈ Ind, S diagonal} || A − F S F^T ||_F^2

Since the columns of F have disjoint supports, the optimization problem can be written as a sum over classes:

    min Σ_j ( −2 s_jj f_j^T A f_j + s_jj^2 (f_j^T f_j)^2 ) + const

The problem is independent for each s_jj, so each s_jj can be solved separately. Setting the partial derivative with respect to s_jj to 0 gives

    s_jj = f_j^T A f_j / (f_j^T f_j)^2

Substituting this expression for s_jj into the original optimization problem, it becomes

    max_{F ∈ Ind} Σ_j ( f_j^T A f_j )^2 / ( f_j^T f_j )^2

F is updated row by row with the coordinate ascent method: when one row is updated, all other rows are fixed. Let F be the current optimum. When the ith row is updated, the increment of the objective obtained by changing the lth element of the ith row from 0 to 1 is computed for each l, and the optimal row places its 1 in the column with the largest increment:

    f_i = < l = argmax_l ΔJ_il >

where the < > operation means the value inside is 1 if true and 0 if not, and ΔJ_il denotes the computed increment.
Case three: when S is a symmetric matrix
The objective function is

    min_{F ∈ Ind, S symmetric} || A − F S F^T ||_F^2

Let J = 2 Tr(F^T A F S) − Tr(F S F^T F S F^T). Since S is unconstrained, setting the derivative with respect to S to 0 gives

    F^T A F = F^T F S F^T F

from which

    S = (F^T F)^{-1} F^T A F (F^T F)^{-1}

Substituting this back into the expression of J gives

    J = Tr(F^T A F S) = Tr( F^T A F (F^T F)^{-1} F^T A F (F^T F)^{-1} )

Let P = F^T A F (F^T F)^{-1}; the objective function becomes

    max_{F ∈ Ind} Tr(P P)

Since F^T F = diag(n_1, ..., n_c), where n_j is the size of the jth class, the element of P in row j and column l is p_jl = f_j^T A f_l / n_l, and substitution gives

    J = Σ_j Σ_l ( f_j^T A f_l )^2 / ( n_j n_l )

F is updated with the coordinate ascent method: when the element in row i and column l of F is updated, f_il changes, and the part J_1 of J related to f_il is computed. Let A = [a_1, a_2, ..., a_n]^T, where a_i is the ith row of A. Then, when the ith row of F is updated with the coordinate ascent method, the optimal row is

    f_i = < l = argmax_l q_il >

where the < > operation means the value inside is 1 if true and 0 if not, and q_il is the increment of the objective obtained by placing the 1 of the ith row in column l.
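The closed-form S for the symmetric case can be checked numerically. The sketch below (variable names are illustrative) verifies that the stationarity condition F^T A F = F^T F S F^T F holds exactly for that S:

```python
import numpy as np

# One-hot indicator F for 12 samples in 3 classes, and a random symmetric A.
rng = np.random.default_rng(0)
n, c = 12, 3
y = np.arange(n) % c                      # every class non-empty
F = np.eye(c)[y]
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                         # symmetrise

FtF_inv = np.linalg.inv(F.T @ F)          # diag(1/n_j)
S = FtF_inv @ F.T @ A @ F @ FtF_inv       # S = (F^T F)^{-1} F^T A F (F^T F)^{-1}

# Residual of the condition obtained from setting dJ/dS = 0:
residual = F.T @ A @ F - F.T @ F @ S @ F.T @ F
print(np.abs(residual).max())             # ~0 up to floating-point error
```

S is also symmetric here, since F^T A F is symmetric whenever A is.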
Example 1:
Take clustering on the Yale face recognition image data set as an example. The Yale data set contains 165 images, each 32×32 pixels, in 15 classes with 11 sample points per class; 64 anchor points are selected.
Step 1: generating representative anchor points
The gray-scale features of the images are used as the data features: the pixel gray values of each image are flattened into a vector of dimension 1024, so the feature data corresponding to the images are X = [x_1, ..., x_165]^T ∈ R^{165×1024}, where x_i is the ith data point, d = 1024 is the feature dimension, and n = 165 is the number of image data points. The balanced K-means method with a variable number of anchor points generates 64 representative anchor points from the 165 data points, giving the anchor matrix U = [u_1, ..., u_64]^T ∈ R^{64×1024}, where u_i is the ith anchor point and m = 64 is the number of anchor points.
Step 2: constructing an adaptive neighbor graph from representative anchor points
Step 2.1: constructing a correlation relation matrix between the representative anchor point and the data point
The correlation matrix B ∈ R^{n×m} between the data points X and the anchor points U is obtained by a similarity self-learning method; B is the correlation matrix between the sample points and the anchor points. For the ith sample point, the model is

    min_{b_i^T 1 = 1, b_i ≥ 0} Σ_{j=1}^{m} ( ||x_i − u_j||_2^2 b_ij + γ b_ij^2 )

where b_ij is the element in row i and column j of B, b_i ∈ R^{m} is the ith row of B, and γ is a regularization coefficient. Let k_ij = ||x_i − u_j||_2^2 be the squared Euclidean distance between the ith sample point and the jth anchor point, and let k_i ∈ R^{m} be the vector whose jth element is k_ij. The above optimization problem can then be written in vector form as

    min_{b_i^T 1 = 1, b_i ≥ 0} || b_i + k_i / (2γ) ||_2^2

Suppose b_i has l (l ≤ m) non-zero elements. Reorder k_i in ascending order, so that after ordering k_{i1} ≤ k_{i2} ≤ ... ≤ k_{im}, with b_i reordered correspondingly. The optimal solution is then expressed as

    b_ij = ( k_{i,l+1} − k_{ij} ) / ( l·k_{i,l+1} − Σ_{h=1}^{l} k_{ih} )   for j ≤ l,   and   b_ij = 0 otherwise.
Step 2.2: constructing similarity matrix between samples
After B is obtained, the similarity between the sample points is constructed from the similarities between the sample points and the anchor points:

    A = B Δ^{-1} B^T

where Δ ∈ R^{m×m} is a diagonal matrix whose jth diagonal element is Δ_jj = Σ_{i=1}^{n} b_ij. For a more concise expression of A, let B̂ = B Δ^{-1/2}; then A = B̂ B̂^T.
And 3, step 3: solution of optimization problem
The method proposed by the invention is illustrated here with the scalar case S = s. The optimization problem is then

    min_{F ∈ Ind, s} || A − s F F^T ||_F^2

The objective contains the two variables F and s. Since s is unconstrained, differentiate with respect to s directly: maximizing J_1 = 2 s Tr(F^T A F) − s^2 Tr(F F^T F F^T) gives

    s = Tr(F^T A F) / Tr(F F^T F F^T)

Substituting this expression for s into the original optimization problem, it becomes

    max_{F ∈ Ind} Tr(F^T A F)^2 / Tr(F F^T F F^T)

When the ith row of F is updated, the change of the objective for each candidate column l is computed; with g_il the element in row i and column l of the gain matrix G, the optimal ith row is

    f_i = < l = argmax_l g_il >

where < > indicates that the value is 1 if the condition inside is true and 0 if it is false.
The optimal discrete label matrix is obtained through iterative computation. Comparing the computed label matrix with the ground-truth sample labels gives a clustering accuracy of 43.64%, an improvement of 11.52 percentage points over spectral clustering, while the running time is 0.0026 s less than that of spectral clustering, showing the advantage of the invention on image classification tasks.
While the invention has been described with reference to specific embodiments, it is not limited thereto, and various equivalent modifications or substitutions can easily be made by those skilled in the art within the technical scope of the present disclosure.

Claims (7)

1. An image classification method, characterized by comprising the following steps:
s1: inputting the images to be classified;
s2: extracting image features to obtain feature data X = [x_1, x_2, ..., x_n]^T ∈ R^{n×d}, where x_i is the ith data point, d is the feature dimension, and n is the number of image data points;
s3: selecting m anchor points with a balanced K-means method with a variable number of anchor points, obtaining the anchor point set U;
s4: constructing the similarity matrix A between the data points according to the anchor points;
s5: constructing a model according to the similarity matrix A and the label relation;
s6: solving the model with a coordinate ascent method to obtain the labels of the feature data, completing the image classification.
2. The image classification method according to claim 1, characterized in that S4 is divided into the following two steps:
s41: constructing the correlation matrix B between the data points and the anchor points with a similarity self-learning method;
s42: computing the sample neighbor graph, i.e. the similarity matrix A, from the correlation matrix B.
3. The image classification method according to claim 2, characterized in that the correlation matrix between the data points and the anchor points is constructed with the similarity self-learning method as follows:
The correlation matrix B ∈ R^{n×m} between the data points X and the anchor points U is obtained by similarity self-learning; B is the correlation matrix between the sample points and the anchor points. For the ith sample point, the model is

    min_{b_i^T 1 = 1, b_i ≥ 0} Σ_{j=1}^{m} ( ||x_i − u_j||_2^2 b_ij + γ b_ij^2 )

where b_ij is the element in row i and column j of B, b_i ∈ R^{m} is the ith row of B, and γ is a regularization coefficient. Let k_ij = ||x_i − u_j||_2^2 be the squared Euclidean distance between the ith sample point and the jth anchor point, and let k_i ∈ R^{m} be the vector whose jth element is k_ij. The above optimization problem can then be written in vector form as

    min_{b_i^T 1 = 1, b_i ≥ 0} || b_i + k_i / (2γ) ||_2^2

Suppose b_i has l (l ≤ m) non-zero elements. Reorder k_i in ascending order, so that after ordering k_{i1} ≤ k_{i2} ≤ ... ≤ k_{im}, with b_i reordered correspondingly. The optimal solution is then expressed as

    b_ij = ( k_{i,l+1} − k_{ij} ) / ( l·k_{i,l+1} − Σ_{h=1}^{l} k_{ih} )   for j ≤ l,   and   b_ij = 0 otherwise.
4. The image classification method according to claim 3, characterized in that the sample neighbor graph, i.e. the similarity matrix A, is computed from the correlation matrix B as follows:
After B is obtained, the similarity between the sample points is constructed from the similarities between the sample points and the anchor points:

    A = B Δ^{-1} B^T

where Δ ∈ R^{m×m} is a diagonal matrix whose jth diagonal element is Δ_jj = Σ_{i=1}^{n} b_ij. Let B̂ = B Δ^{-1/2}; then A = B̂ B̂^T.
5. A computer system, comprising one or more processors and a computer-readable storage medium storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
6. A computer-readable storage medium having stored thereon computer-executable instructions for performing the method of any of claims 1-4 when executed.
7. A computer program comprising computer executable instructions which when executed perform the method of any one of claims 1 to 4.
CN202210493913.6A 2022-04-28 2022-04-28 Image classification method Pending CN114821181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210493913.6A CN114821181A (en) 2022-04-28 2022-04-28 Image classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210493913.6A CN114821181A (en) 2022-04-28 2022-04-28 Image classification method

Publications (1)

Publication Number Publication Date
CN114821181A true CN114821181A (en) 2022-07-29

Family

ID=82511783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210493913.6A Pending CN114821181A (en) 2022-04-28 2022-04-28 Image classification method

Country Status (1)

Country Link
CN (1) CN114821181A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310452A (en) * 2023-02-16 2023-06-23 广东能哥知识科技有限公司 Multi-view clustering method and system
CN116310452B (en) * 2023-02-16 2024-03-19 广东能哥知识科技有限公司 Multi-view clustering method and system

Similar Documents

Publication Publication Date Title
CN111881714B (en) Unsupervised cross-domain pedestrian re-identification method
CN112862811B (en) Material microscopic image defect identification method, equipment and device based on deep learning
CN109543723B (en) Robust image clustering method
CN109376787B (en) Manifold learning network and computer vision image set classification method based on manifold learning network
CN111008650B (en) Metallographic structure automatic grading method based on deep convolution antagonistic neural network
CN110263855B (en) Method for classifying images by utilizing common-basis capsule projection
CN110909643A (en) Remote sensing ship image small sample classification method based on nearest neighbor prototype representation
CN109726725A (en) The oil painting writer identification method of heterogeneite Multiple Kernel Learning between a kind of class based on large-spacing
CN111709443B (en) Calligraphy character style classification method based on rotation invariant convolution neural network
CN108564116A (en) A kind of ingredient intelligent analysis method of camera scene image
CN114821181A (en) Image classification method
Hong et al. Improved yolov7 model for insulator surface defect detection
CN113420173A (en) Minority dress image retrieval method based on quadruple deep learning
Cui et al. Applying Radam method to improve treatment of convolutional neural network on banknote identification
CN116842210B (en) Textile printing texture intelligent retrieval method based on texture features
WO2024060839A1 (en) Object operation method and apparatus, computer device, and computer storage medium
CN108460412A (en) A kind of image classification method based on subspace joint sparse low-rank Structure learning
CN109299295B (en) Blue printing layout database searching method
CN112270404A (en) Detection structure and method for bulge defect of fastener product based on ResNet64 network
CN112381108A (en) Bullet trace similarity recognition method and system based on graph convolution neural network deep learning
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN113435480B (en) Method for improving long tail distribution visual recognition capability through channel sequential switching and self-supervision
CN114926691A (en) Insect pest intelligent identification method and system based on convolutional neural network
CN107392225A (en) Plants identification method based on ellipse Fourier descriptor and weighting rarefaction representation
CN112632313A (en) Bud thread lace retrieval method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination