CN103971129B - Classification method and device for image content recognition based on learning a cross-data-domain subspace - Google Patents

Classification method and device for image content recognition based on learning a cross-data-domain subspace

Info

Publication number
CN103971129B
CN103971129B (application CN201410228632.3A)
Authority
CN
China
Prior art keywords
data
domain
subspace
target
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410228632.3A
Other languages
Chinese (zh)
Other versions
CN103971129A (en)
Inventor
Fang Zheng (方正)
Zhang Zhongfei (张仲非)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201410228632.3A
Publication of CN103971129A
Application granted
Publication of CN103971129B
Legal status: Expired - Fee Related (current)
Anticipated expiration


Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a classification method and device for image content recognition based on learning a cross-data-domain subspace. The method comprises: performing feature extraction on each target image to be classified and converting the image data into numerical data usable for classification; inputting the data features of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data, and building a spectral-graph adjacency graph; establishing a mathematical model over the image data, the label information and the adjacency graph; according to the mathematical model, updating the construction coefficients of the subspace basis vectors, the new representation features of the data in the subspace, the classifier coefficients and the quasi-class labels predicted on the target-domain image data; and predicting the classification labels of the target-domain image data and classifying the target image data with these labels. Compared with existing transfer learning classification methods, the proposed classification method is greatly improved in both accuracy and stability.

Description

Classification method and device for image content recognition based on learning cross-data-domain subspace
Technical Field
The invention belongs to the technical field of transfer learning and classification, and particularly relates to a classification method and device for image content recognition based on learning a cross-data-domain subspace.
Background
With the advent of the big data era, data of various forms has grown geometrically. How to extract useful knowledge and information from these massive data has become a focus of attention for data mining and machine learning researchers in both industry and academia. Among the many data mining and machine learning techniques, classification has important application value: anomaly detection in video data, spam filtering and object recognition in images all require well-performing classifiers. Training a classifier requires labeled data, yet in practice labeled data is often insufficient. For example, when learning a classifier for content recognition of pictures on the Internet, researchers typically label offline pictures as training data. However, the manually labeled data is far from enough compared with the massive data on the Internet, and the feature distribution of the Internet data differs from that of the offline data sets, i.e., the data characteristics are different. For another example, in military reconnaissance, a classifier for detecting targets is trained with image data acquired under the conditions prevailing at the time. As time goes on, however, the same region may change due to climate, landform evolution (such as advancing desertification), man-made construction and other factors, so that the image data collected by the sensors differs greatly from the data of several years ago, and the previously available training annotations can no longer meet the new needs. In spam filtering, previously defined spam-sensitive words gradually fall out of use as society develops while new words appear in spam, so the previously defined labels cannot meet the training requirements, and re-labeling the data consumes a great deal of manpower and material resources. Therefore, in practical applications, the lack of label information during model learning is a common problem.
On the other hand, researchers have also noted that, although the actual problem under study lacks label information, label information in other related data sets can be used to assist learning. In the examples above, the previous training data is not entirely useless: there is still much information worth mining in the related data sets, and the useful supervision information contained in their labels can continue to serve as a reference for model training. Based on these considerations, researchers have proposed transfer learning classification methods that use related data sets to help train a classifier for the target problem. In transfer learning, the data set of the target problem is called the target data domain, and the related data set used for assistance is called the auxiliary data domain. The goal of transfer learning is to extract useful label information from the auxiliary data domain to assist the training of the model in the target data domain.
In conventional techniques, classifier models operate only on the raw features of the data. In the problem of transferring knowledge from the auxiliary domain to the target domain, the data of the auxiliary domain and the data of the target domain differ in feature distribution, so conventional classification techniques are often ineffective. Therefore, unlike conventional methods, the present invention learns new feature representations of the data by learning a subspace shared by the different data domains, thereby reducing the feature discrepancy between domains. In addition, the technique of the present invention applies a classifier to the new feature representations of the data in the subspace to perform model training and prediction across data domains. Unlike existing transfer learning techniques, the proposed technique may require no labeled data in the target domain at all, and can simultaneously learn the new representation features of the data in the subspace and a cross-data-domain classification model. Compared with existing transfer learning classification techniques, the proposed classification method is greatly improved in both the accuracy and the stability of cross-data-domain classification performance.
Disclosure of Invention
The invention provides a transfer learning classification technique based on learning a cross-data-domain subspace, which can be applied to fields such as image content recognition. When the target data domain lacks training data, the required classifier is learned with the help of the training data of a related auxiliary data domain, achieving satisfactory classification performance.
In order to achieve the purpose, the technical scheme of the invention is as follows:
A classification method for image content recognition based on learning a cross-data-domain subspace comprises the following steps:
S10: extracting features from each target image to be classified, and converting the image data into numerical data usable for classification;
S20: inputting the data features of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data used for training, and building an adjacency graph (spectral graph) used for geometric regularization of the data;
S30: building a mathematical model for the image data, the label information and the adjacency graph input in step S20;
S40: according to the mathematical model established in S30 and the derived update formula of each variable, updating, in an alternating iterative manner, the construction coefficients of the basis vectors of the subspace, the new representation features of the data in the subspace, the classifier coefficients, and the quasi-class labels predicted on the target-domain image data;
S50: predicting the classification labels of the target-domain image data using the quasi-class label matrix obtained in step S40, and classifying the target image data with these labels.
Further, step S20 includes:
S201: inputting training sample data represented by image features of the auxiliary data domain and the target data domain, comprising: the image data represented by SIFT (scale-invariant feature transform) features of the auxiliary data domain, the corresponding label information matrix, the quasi-class label matrix, and the image data represented by SIFT features of the target domain;
S202: constructing an adjacency graph of the image data of the target data domain.
Further, step S30 includes:
s301: establishing a cross-data-domain subspace learning model;
s302: establishing a classification model in the subspace obtained by learning;
S303: imposing a local-manifold structure constraint on the predicted labels of the target-domain image data by adding a regularization term on the predicted labels;
S304: integrating the cross-data-domain subspace learning model obtained in step S301, the classification model obtained in step S302 and the regularization term obtained in step S303 into a unified mathematical model.
Further, step S40 includes:
s401: updating the construction coefficients of the basis vectors across the data domain subspace;
s402: updating the new representation characteristics of the data in the subspace;
s403: updating the classifier coefficients;
s404: and updating the quasi-class label matrix of the target domain data.
Another object of the present invention is to provide a classification apparatus for image content recognition based on learning a cross-data-domain subspace, comprising:
an image preprocessing module: extracting features from each target image to be classified, and converting the image data into numerical data usable for classification;
a data input processing module: receiving from the image preprocessing module the data features of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data used for training, and building an adjacency graph (spectral graph) used for geometric regularization of the data;
a modeling module: establishing a mathematical model from the data output by the data input processing module, the label information and the built adjacency graph, i.e., combining the cross-data-domain subspace learning model, the classification model acting on the subspace data representations, and the geometric regularization of the local manifold of the target-domain data into a unified mathematical model, and outputting it;
a parameter iteration updating module: deriving the update formula of each variable from the mathematical model output by the modeling module, and updating, in an alternating iterative manner, the construction coefficients of the cross-data-domain subspace basis vectors, the new representation features of the data in the subspace, the classifier coefficients, and the quasi-class labels predicted on the target-domain image data;
an image data classification module: predicting the classification labels of the target-domain image data using the quasi-class labels obtained by the parameter iteration updating module, and classifying the target image data with these labels.
Further, the data input processing module is configured for:
inputting training sample data represented by image features of the auxiliary data domain and the target data domain, comprising: the data represented by image features of the auxiliary data domain, the corresponding label information matrix, the quasi-class label matrix, and the data represented by image features of the target domain;
and constructing a data adjacency graph of the target data domain.
Further, the modeling module is configured for:
establishing a model for learning a cross-data-domain subspace;
establishing a classification model in the learned subspace;
imposing a local-manifold structure constraint on the predicted labels of the target-domain image data by adding a regularization term on the predicted labels;
and integrating the cross-data-domain subspace learning model, the classification model in the subspace and the regularization term on the predicted labels into a unified mathematical model.
The invention has the following conception and advantages: in the algorithm, a low-dimensional subspace basis is constructed from representative data with stable characteristics in each data domain; the original data can be linearly reconstructed from the subspace basis vectors and thus obtains a new representation in the subspace. To reduce inter-domain differences in the new data representations, the invention imposes a regularization constraint that minimizes the inter-domain difference of the new feature representations, thereby further optimizing the learned subspace. To increase the accuracy and stability of the labels predicted on the target domain, the algorithm introduces an orthogonality constraint on the predicted labels and applies a graph regularization term on the data's spectral graph, so that the predicted labels preserve the manifold structure inherent in the target-domain data set. Compared with existing transfer learning classification techniques, the proposed classification method is greatly improved in accuracy and stability.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to FIG. 1, a flow chart of a classification method for image content recognition based on learning a cross-data-domain subspace according to an embodiment of the present invention is shown, comprising the following steps:
S10: extract features from each target image to be classified (SIFT features of the extracted images are taken as an example here), converting the image data into numerical data usable for classification.
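As a concrete illustration of this preprocessing step, the sketch below converts images into fixed-length numerical feature vectors with OpenCV's SIFT implementation; the pooling of local descriptors into one vector by averaging and the file names are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of step S10: SIFT-based feature extraction (assumes OpenCV >= 4.4).
import cv2
import numpy as np

def image_to_feature(path, dim=128):
    """Read a grayscale image and pool its SIFT descriptors into one vector."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None:              # no keypoints detected
        return np.zeros(dim)
    return descriptors.mean(axis=0)      # simple average pooling (illustrative choice)

# Columns of X form the numerical data matrix used in step S20.
paths = ["target_0001.jpg", "target_0002.jpg"]   # hypothetical file names
X = np.stack([image_to_feature(p) for p in paths], axis=1)
```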
S20: input the SIFT features of the image data of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data used for training, and build an adjacency graph (spectral graph) used for geometric regularization of the data. Specifically, the method comprises steps S201 to S202:
S201: input the training sample data, represented by SIFT features of the image data, of the auxiliary data domain $D_s$ and the target data domain $D_t$, comprising:
a data matrix $X^s \in \mathbb{R}^{d \times n_s}$ of the SIFT feature descriptions of the auxiliary data domain, where the superscript $s$ denotes the auxiliary data domain, $n_s$ is the number of auxiliary-domain samples, $d$ is the dimension of the SIFT feature data, and $\mathbb{R}^{d \times n_s}$ denotes the space of real matrices of size $d \times n_s$;
a label information matrix $Y$ corresponding to the auxiliary-domain data, whose entries take only the values 0 and 1, with one label vector $f_i \in \{0,1\}^{c \times 1}$ per sample, where $c$ is the number of classes; if sample $x_i^s$ belongs to the $k$-th class, the $k$-th element of the label vector $f_i$ is 1, otherwise it is 0;
a quasi-class label matrix $Z$ for the target-domain data;
and a data matrix $X^t \in \mathbb{R}^{d \times n_t}$ of the SIFT feature descriptions of the target data domain, where the superscript $t$ denotes the target data domain, $n_t$ is the number of target-domain samples, and $\mathbb{R}^{d \times n_t}$ denotes the space of real matrices of size $d \times n_t$.
s202, constructing a data adjacency graph S of a target domain for the image data represented by the SIFT features. The edge weights between the points of the adjacency graph are as follows:
$$S_{ij} = \begin{cases} 1, & x_j \in N_k(x_i) \ \text{or} \ x_i \in N_k(x_j), \\ 0, & \text{otherwise}, \end{cases}$$
where $N_k(x_i)$ denotes the $k$ nearest neighbours of data point $x_i$; $k$ can generally be set to 5.
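A small sketch of the graph construction in S202, using scikit-learn's k-nearest-neighbour graph and symmetrizing it so that an edge exists whenever either point is among the other's k nearest neighbours (k = 5 as suggested above); the function and variable names are illustrative assumptions.

```python
# Sketch of step S202: binary k-NN adjacency graph over the target-domain data.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_adjacency(Xt, k=5):
    """Xt: d x n_t data matrix (one column per target image)."""
    knn = kneighbors_graph(Xt.T, n_neighbors=k, mode='connectivity')  # n_t x n_t
    S = knn.toarray()
    S = np.maximum(S, S.T)   # symmetrize: edge if either point is a k-NN of the other
    return S
```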
S30: establish a mathematical model for the image data, the label information and the adjacency graph built at S20, specifically including steps S301 to S304:
S301: establish a cross-data-domain subspace learning model,
where $V$ is the matrix of combination coefficients used to construct the subspace basis vectors; the column elements $u_i^s$ of the matrix $U^s$ are the new representation features of the auxiliary-domain data, and the column elements $u_i^t$ of the matrix $U^t$ are the new representation features of the target-domain data. In order for the data of the different data domains (target domain and auxiliary domain) to have the same statistical distribution properties, a constraint is imposed on the data factors of the two domains in the objective function of the model; under this constraint, the data of the different domains have the same mean statistics.
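The objective of this subspace learning model is shown only as an image in the source text. The following is a plausible reconstruction, consistent with the variables defined above and with the quantities $A$, $K_{aa}$, $K_{as}$, $K_{at}$ used later in step S401; the exact form of the objective and of the mean-matching constraint are assumptions:

$$\min_{V,\,U^s,\,U^t \ge 0}\ \|X^s - AVU^s\|_F^2 + \|X^t - AVU^t\|_F^2 \qquad \text{s.t.}\ \ \frac{1}{n_s}\sum_{i=1}^{n_s} u_i^s = \frac{1}{n_t}\sum_{j=1}^{n_t} u_j^t,$$

where $A$ collects the representative data points of both domains, so that $AV$ spans the shared subspace and each original sample is linearly reconstructed from the subspace basis vectors.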
S302: establish a classification model in the learned subspace:
$$\min_{W,\,Z}\ \|Y - W^\top U^s\|_F^2 + \|Z - W^\top U^t\|_F^2 + \beta\,\|W\|_F^2 \qquad \text{s.t.}\ \ ZZ^\top = I,$$
where $Z$ is the quasi-class label matrix predicted for the target-domain data, $\|Y - W^\top U^s\|_F^2$ is the classification loss in the auxiliary domain, $\|Z - W^\top U^t\|_F^2$ is the classification loss in the target domain, $\|W\|_F^2$ is a regularization term on the classifier coefficients that prevents the classification model from overfitting, $\beta$ is a weight coefficient, and the orthogonality constraint $ZZ^\top = I$ on the variable $Z$ reduces noise in the prediction result and makes the predicted labels more robust.
S303: impose a local-manifold structure constraint on the predicted labels of the target-domain image data using the adjacency graph $S$ from S202, adding the following regularization term on the predicted labels to be minimized:
$$J(Z) = \frac{1}{2}\sum_{i,j}\|z_i - z_j\|^2\,S_{ij} = \mathrm{Tr}\!\left(ZLZ^\top\right),$$
where $D$ is a diagonal matrix with diagonal elements $D_{ii} = \sum_j S_{ij}$, and $L = D - S$ is the graph Laplacian.
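A tiny numpy sketch of this regularizer follows; S is the symmetric adjacency matrix from S202 and Z a c x n_t quasi-class label matrix, both assumed already available.

```python
# Graph-Laplacian regularizer Tr(Z L Z^T) from step S303.
import numpy as np

def laplacian_regularizer(Z, S):
    D = np.diag(S.sum(axis=1))       # degree matrix, D_ii = sum_j S_ij
    L = D - S                        # graph Laplacian
    return np.trace(Z @ L @ Z.T)     # smoothness of predicted labels over the graph
```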
S304: integrate the subspace learning model obtained in step S301, the classification model on the subspace data representations obtained in step S302, and the regularization term obtained in step S303 into a unified mathematical model, subject to
$$ZZ^\top = I, \qquad V,\ U^s,\ U^t,\ Z \ge 0.$$
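The unified objective itself is likewise shown only as an image in the source. A sketch of its likely form, obtained by summing the three components above with trade-off weights, is given below; the weights $\alpha$ and $\gamma$ and the exact grouping of terms are assumptions, while $\beta$ and $\xi$ are the weights named in steps S403 and S404:

$$\min_{V,\,U^s,\,U^t,\,W,\,Z}\ \|X^s - AVU^s\|_F^2 + \|X^t - AVU^t\|_F^2 + \alpha\big(\|Y - W^\top U^s\|_F^2 + \|Z - W^\top U^t\|_F^2 + \beta\|W\|_F^2\big) + \gamma\,\mathrm{Tr}\!\left(ZLZ^\top\right)$$
$$\text{s.t.}\quad ZZ^\top = I, \qquad V,\ U^s,\ U^t,\ Z \ge 0.$$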
S40: according to the mathematical model established in S30, derive the update formula of each variable and update, in an alternating iterative manner, the coefficient matrix $V$ used to construct the subspace basis vectors, the new representation features $U^s$ and $U^t$ of the data in the subspace, the classifier coefficients $W$, and the quasi-class label matrix $Z$ predicted on the target-domain image data. Each iteration specifically includes steps S401 to S404:
S401: update the coefficient matrix $V$ used to construct the subspace basis vectors.
For brevity of expression, write $K_{aa} = A^\top A$, $K_{as} = A^\top X^s$ and $K_{at} = A^\top X^t$, where $A$ denotes the matrix of representative data used to construct the subspace basis.
The coefficient matrix $V$ used to construct the subspace basis vectors is then updated with the corresponding update formula derived from the model.
S402: update the new representation features of the data of each domain in the subspace.
For brevity of expression, let $R^s = U^s M$, $R^t = U^t M^\top$, $R^{s+} = \frac{1}{2}(|R^s| + R^s)$, $R^{t+} = \frac{1}{2}(|R^t| + R^t)$, $R^{s-} = \frac{1}{2}(|R^s| - R^s)$, $R^{t-} = \frac{1}{2}(|R^t| - R^t)$; $Q^s = WY$, $P^s = WW^\top U^s$, $Q^t = WZ$, $P^t = WW^\top U^t$.
The representation features of the auxiliary-domain data and of the target-domain data are then updated with the corresponding update formulas derived from the model.
S403: update the regression (classifier) coefficients $W$ of the classification model in the subspace,
where $\beta$ is the weight coefficient of the regularization term; it guarantees that the matrix inverted in the update formula is nonsingular.
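The update formula for $W$ is not reproduced in the source text. Minimizing the classification model of S302 over $W$ with $U^s$, $U^t$, $Y$ and $Z$ held fixed yields a ridge-regression-style closed form; the expression below is a reconstruction under that assumption and is consistent with the remark that $\beta$ keeps the inverted matrix nonsingular:

$$W = \left(U^s {U^s}^\top + U^t {U^t}^\top + \beta I\right)^{-1}\left(U^s Y^\top + U^t Z^\top\right).$$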
S404: update the quasi-class label matrix predicted for the target-domain data.
For brevity of expression, let $R^w = W^\top U^t$, $R^{w+} = \frac{1}{2}(|R^w| + R^w)$, $R^{w-} = \frac{1}{2}(|R^w| - R^w)$, $L^+ = \frac{1}{2}(|L| + L)$ and $L^- = \frac{1}{2}(|L| - L)$. The quasi-class label matrix of the target-domain data is then updated with the corresponding update formula derived from the model,
where $\xi$ is the weight coefficient of the orthogonality constraint on the quasi-class label matrix.
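Because the concrete update formulas for S401, S402 and S404 are only given as images in the original document, the sketch below shows just the alternating-iteration skeleton of step S40; the update functions are unimplemented stubs standing in for the patent's formulas, and all names and shapes are assumptions based on the text above.

```python
# Alternating-iteration skeleton for step S40 (stubs only; not the patent's formulas).
import numpy as np

def update_V(V, Kaa, Kas, Kat, Us, Ut):
    raise NotImplementedError("update of subspace construction coefficients, S401")

def update_Us(Us, Kaa, Kas, V, W, Y):
    raise NotImplementedError("update of auxiliary-domain representations, S402")

def update_Ut(Ut, Kaa, Kat, V, W, Z):
    raise NotImplementedError("update of target-domain representations, S402")

def update_W(Us, Ut, Y, Z, beta):
    raise NotImplementedError("update of classifier coefficients, S403")

def update_Z(Z, W, Ut, L, xi):
    raise NotImplementedError("update of quasi-class labels, S404")

def alternate_optimize(Xs, Xt, Y, A, L, r, c, n_iter=100, beta=0.1, xi=1.0):
    rng = np.random.default_rng(0)
    Kaa, Kas, Kat = A.T @ A, A.T @ Xs, A.T @ Xt        # notation from S401
    V  = rng.random((A.shape[1], r))                   # subspace construction coefficients
    Us = rng.random((r, Xs.shape[1]))                  # new auxiliary-domain representations
    Ut = rng.random((r, Xt.shape[1]))                  # new target-domain representations
    W  = rng.random((r, c))                            # classifier coefficients
    Z  = rng.random((c, Xt.shape[1]))                  # quasi-class labels of target data
    for _ in range(n_iter):                            # alternate over S401-S404
        V  = update_V(V, Kaa, Kas, Kat, Us, Ut)
        Us = update_Us(Us, Kaa, Kas, V, W, Y)
        Ut = update_Ut(Ut, Kaa, Kat, V, W, Z)
        W  = update_W(Us, Ut, Y, Z, beta)
        Z  = update_Z(Z, W, Ut, L, xi)
    return V, Us, Ut, W, Z
```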
S50: predict the classification labels of the target-domain image data using the quasi-class label matrix $Z$ obtained in S40, and classify the target image data with these labels.
For a target-domain image $x_i^t$, the index of the maximum element of the vector $z_i$ (the corresponding column of $Z$) is its class label; specifically, the class label is predicted by $k = \arg\max_{1 \le j \le c} z_i(j)$, where $c$ is the number of classes and $k$ is the predicted class label of the target image $x_i^t$.
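A short illustration of this prediction rule, assuming (as in the sketches above) that Z stores one column per target image; the numerical values are toy data.

```python
# Step S50: class label k = argmax_j z_i(j) for each target image i.
import numpy as np

Z = np.array([[0.1, 0.7, 0.2],     # toy 3-class quasi-class label matrix (c x n_t)
              [0.8, 0.2, 0.1],
              [0.1, 0.1, 0.7]])
labels = Z.argmax(axis=0)          # predicted class index per column
print(labels)                      # -> [1 0 2]
```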
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (2)

1. A classification method for image content recognition based on learning a cross-data-domain subspace, comprising the following steps:
S10: extracting features from each target image to be classified, and converting the image data into numerical data usable for classification;
S20: inputting the data features of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data used for training, and building an adjacency graph (spectral graph) used for geometric regularization of the data, specifically comprising:
S201: inputting training sample data represented by image features of the auxiliary data domain and the target data domain, comprising: the image data represented by SIFT features of the auxiliary data domain, the corresponding label information matrix, the quasi-class label matrix, and the image data represented by SIFT features of the target domain;
S202: constructing an adjacency graph of the image data of the target data domain;
S30: building a mathematical model for the image data, the label information and the adjacency graph input in step S20, specifically comprising:
S301: establishing a cross-data-domain subspace learning model;
S302: establishing a classification model in the learned subspace;
S303: imposing a local-manifold structure constraint on the predicted labels of the target-domain image data by adding a regularization term on the predicted labels;
S304: integrating the cross-data-domain subspace learning model obtained in step S301, the classification model obtained in step S302 and the regularization term obtained in step S303 into a unified mathematical model;
S40: according to the mathematical model established in S30 and the derived update formula of each variable, updating, in an alternating iterative manner, the construction coefficients of the basis vectors of the subspace, the new representation features of the data in the subspace, the classifier coefficients, and the quasi-class labels predicted on the target-domain image data;
S50: predicting the classification labels of the target-domain image data using the quasi-class label matrix obtained in step S40, and classifying the target image data with these labels.
2. A classification apparatus for image content recognition based on learning a cross-data-domain subspace, comprising:
an image preprocessing module: extracting features from each target image to be classified, and converting the image data into numerical data usable for classification;
a data input processing module: receiving from the image preprocessing module the data features of the target image data domain and the auxiliary data domain together with the class label data of the auxiliary-domain data used for training, and building an adjacency graph (spectral graph) used for geometric regularization of the data;
a modeling module: establishing a mathematical model from the data output by the data input processing module, the label information and the built adjacency graph, i.e., combining the cross-data-domain subspace learning model, the classification model acting on the subspace data representations, and the geometric regularization of the local manifold of the target-domain data into a unified mathematical model, and outputting it;
a parameter iteration updating module: deriving the update formula of each variable from the mathematical model output by the modeling module, and updating, in an alternating iterative manner, the construction coefficients of the cross-data-domain subspace basis vectors, the new representation features of the data in the subspace, the classifier coefficients, and the quasi-class labels predicted on the target-domain image data;
an image data classification module: predicting the classification labels of the target-domain image data using the quasi-class labels obtained by the parameter iteration updating module, and classifying the target image data with these labels;
wherein the data input processing module is configured for:
inputting training sample data represented by image features of the auxiliary data domain and the target data domain, comprising: the data represented by image features of the auxiliary data domain, the corresponding label information matrix, the quasi-class label matrix, and the data represented by image features of the target domain;
and constructing a data adjacency graph of the target data domain;
and the modeling module is configured for:
establishing a model for learning a cross-data-domain subspace;
establishing a classification model in the learned subspace;
imposing a local-manifold structure constraint on the predicted labels of the target-domain image data by adding a regularization term on the predicted labels;
and integrating the cross-data-domain subspace learning model, the classification model in the subspace and the regularization term on the predicted labels into a unified mathematical model.
CN201410228632.3A 2014-05-27 2014-05-27 Classification method and device for image content recognition based on learning a cross-data-domain subspace Expired - Fee Related CN103971129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410228632.3A CN103971129B (en) 2014-05-27 2014-05-27 Classification method and device for image content recognition based on learning a cross-data-domain subspace

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410228632.3A CN103971129B (en) 2014-05-27 2014-05-27 Classification method and device for image content recognition based on learning a cross-data-domain subspace

Publications (2)

Publication Number Publication Date
CN103971129A CN103971129A (en) 2014-08-06
CN103971129B true CN103971129B (en) 2017-07-07

Family

ID=51240600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410228632.3A Expired - Fee Related CN103971129B (en) 2014-05-27 2014-05-27 Classification method and device for image content recognition based on learning a cross-data-domain subspace

Country Status (1)

Country Link
CN (1) CN103971129B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376120B (en) * 2014-12-04 2018-01-23 浙江大学 A kind of information retrieval method and system
CN105701509B (en) * 2016-01-13 2019-03-12 清华大学 A kind of image classification method based on across classification migration Active Learning
CN106815643B (en) * 2017-01-18 2019-04-02 中北大学 Infrared spectroscopy Model Transfer method based on random forest transfer learning
CN108898157B (en) * 2018-05-28 2021-12-24 浙江理工大学 Classification method for radar chart representation of numerical data based on convolutional neural network
CN110378366B (en) * 2019-06-04 2023-01-17 广东工业大学 Cross-domain image classification method based on coupling knowledge migration
CN110930367B (en) * 2019-10-31 2022-12-20 上海交通大学 Multi-modal ultrasound image classification method and breast cancer diagnosis device
CN114783072B (en) * 2022-03-17 2022-12-30 哈尔滨工业大学(威海) Image identification method based on remote domain transfer learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164167A (en) * 2011-12-15 2013-06-19 深圳市腾讯计算机系统有限公司 Data migration method and data migration device
CN103473366A (en) * 2013-09-27 2013-12-25 浙江大学 Classification method and device for content identification of multi-view cross data field image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100011025A1 (en) * 2008-07-09 2010-01-14 Yahoo! Inc. Transfer learning methods and apparatuses for establishing additive models for related-task ranking
US20110320387A1 (en) * 2010-06-28 2011-12-29 International Business Machines Corporation Graph-based transfer learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164167A (en) * 2011-12-15 2013-06-19 深圳市腾讯计算机系统有限公司 Data migration method and data migration device
CN103473366A (en) * 2013-09-27 2013-12-25 浙江大学 Classification method and device for content identification of multi-view cross data field image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dan Zhang et al., "Multi-view transfer learning with a large margin approach," Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011-08-24, pp. 1208-1216 *

Also Published As

Publication number Publication date
CN103971129A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN103971129B (en) Classification method and device for image content recognition based on learning a cross-data-domain subspace
CN110097075B (en) Deep learning-based marine mesoscale vortex classification identification method
Konovalov et al. Underwater fish detection with weak multi-domain supervision
CN110956185A (en) Method for detecting image salient object
CN106446890B (en) Candidate region extraction method based on window marking and super-pixel segmentation
CN110705591A (en) Heterogeneous transfer learning method based on optimal subspace learning
CN103473366B (en) Classification method and device for content identification of multi-view cross-data-domain images
CN103853724A (en) Multimedia data classification method and device
CN111401426A (en) Small sample hyperspectral image classification method based on pseudo label learning
CN110008365B (en) Image processing method, device and equipment and readable storage medium
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN109472733A (en) Image latent writing analysis method based on convolutional neural networks
CN114998602A (en) Domain adaptive learning method and system based on low confidence sample contrast loss
JPWO2015146113A1 (en) Identification dictionary learning system, identification dictionary learning method, and identification dictionary learning program
CN113052017A (en) Unsupervised pedestrian re-identification method based on multi-granularity feature representation and domain adaptive learning
CN114863125A (en) Intelligent scoring method and system for calligraphy/fine art works
Jayanth et al. Land-use/land-cover classification using elephant herding algorithm
JP2007115245A (en) Learning machine considering global structure of data
CN113901924A (en) Document table detection method and device
CN110378384B (en) Image classification method combining privilege information and ordering support vector machine
CN108460406B (en) Scene image attribute identification method based on minimum simplex fusion feature learning
CN106997601B (en) Video sequence classification method based on viscous fluid particle motion model
Zou et al. An automatic recognition approach for traffic congestion states based on traffic video
CN111651433B (en) Sample data cleaning method and system
CN110414595B (en) Method for estimating direction field of texture image with direction consistency

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170707

Termination date: 20200527

CF01 Termination of patent right due to non-payment of annual fee