CN109117859A - Multi-tag clustering method for computer vision - Google Patents

Multi-tag clustering method for computer vision

Info

Publication number
CN109117859A
Authority
CN
China
Prior art keywords
label
function
instance
probability
computer vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810622492.6A
Other languages
Chinese (zh)
Inventor
邱兰馨
姚杨
姚一杨
江樱
曾仕途
王彦波
王剑
樊华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.) 2018-06-15
Filing date 2018-06-15
Publication date 2019-01-01
Application filed by State Grid Zhejiang Electric Power Co Ltd and Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority to CN201810622492.6A
Publication of CN109117859A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a multi-label clustering method for computer vision. The method determines the matching probability between a target instance and the sample instances in a code library and discriminates the label of the target instance based on the matching probability; computes, for an instance x0, a probability estimate for each class and the classification margin of the instance x0 whose label is y0; and constructs the WSRF objective function, minimizes the objective function to obtain a minimization function, and processes the obtained minimization function under weak supervision to complete the clustering. By turning weakly supervised clustering into a constrained objective-function minimization problem, a discriminative code library can be realized without violating the bag-level labels. Applied to three popular computer vision tasks, namely image clustering, semantic image segmentation, and multi-target localization, the method effectively improves the performance of the related applications.

Description

Multi-tag clustering method for computer vision
Technical field
The invention belongs to the field of image processing, and in particular relates to a multi-label clustering method for computer vision.
Background technique
Clustering is a useful statistical tool in computer vision and machine learning. Many image classification models cluster local descriptors into a codebook. When the amount of training data is huge, assigning accurate labels is expensive. Existing techniques handle this problem by transferring bag-level labels to instance-level descriptors. However, the assumption that each bag carries only one label severely restricts the range of application.
F. Zhang et al. propose unsupervised feature learning for scene classification: a set of representative patches extracted from salient image regions is fed into an unsupervised framework to learn a set of feature extractors. Q. Shi proposes a semi-supervised dimensionality reduction algorithm based on a local alignment enhancement technique, which preserves the intrinsic geometry of the data by using both labeled and unlabeled samples. For large data-driven applications the annotation workload of such methods is huge, and they are very sensitive to the labeled data, which makes the clustering unstable. Existing clustering schemes also have the following shortcomings:
1. The covering-label idea proposed in existing methods is still limited: under visual feature clustering, if multiple semantic concepts appear in an image, the single-label assumption cannot be satisfied.
2. The covering labels proposed so far therefore have a clear limitation; for the multiple semantic concepts that appear in an image, a single label cannot satisfy the assumption.
3. When the amount of training data is huge, assigning accurate labels is expensive, and it is difficult to complete the clustering efficiently and stably.
Summary of the invention
In order to overcome the shortcomings and defects of the prior art, the present invention provides a multi-label clustering method for computer vision that can assign multiple labels to a bag.
To achieve the above technical purpose, the present invention provides a multi-label clustering method for computer vision, the method comprising:
Step 1: determine the matching probability between a target instance and the sample instances in the code library, and discriminate the label of the target instance based on the matching probability;
Step 2: for an instance x0, compute a probability estimate for each class, and compute the classification margin of the instance x0 whose label is y0;
Step 3: construct the WSRF objective function, minimize the objective function to obtain a minimization function, and, under weak supervision, process the obtained minimization function to complete the clustering.
Optionally, determining the matching probability between the target instance and the sample instances in the code library and discriminating the label of the target instance based on the matching probability comprises:
obtaining the matching probability F(x0, k) according to formula one;
where C(·) is the clustering function whose return value is the cluster to which an instance belongs, C(x0) denotes the cluster of the instance x0, and yij is the instance-level label;
performing label discrimination according to the obtained matching probability F(x0, k) to obtain L(x0, y0);
where the maximizing term in the formula refers to the value of the class k at which the probability that the instance x0 belongs to class k is maximal.
Optionally, computing a probability estimate for each class for the instance x0 and computing the classification margin of the instance x0 whose label is y0 comprises:
performing probability estimation based on formula three;
where pi(k|x0) denotes, for the instance x0, the probability estimate of the i-th tree for the k-th class;
computing the classification margin of the instance x0 whose label is y0.
Optionally, minimizing the objective function to obtain the minimization function and processing the obtained minimization function under weak supervision to complete the clustering comprises:
re-formulating WSRF learning as minimizing the function Q(G, p);
where T denotes the cooling parameter, p(k|xij) denotes the probability estimate of each tree that the instance xij belongs to the k-th class, and MG(xij, k) denotes the classification margin of the instance xij for the k-th class;
generating instance-level labels yij according to the fixed distribution p and subject to the weak supervision constraint;
constructing an RF with the fixed instance-level labels and maximizing the margin MG;
adjusting the distribution by means of the objective function.
Optionally, generating the instance-level labels yij according to the fixed distribution p and subject to the weak supervision constraint comprises:
for each instance xij in each bag, re-adjusting the distribution while taking the legal labels into account;
where a normalization factor rescales the distribution restricted to the legal labels, and the instance-level label yij is generated from the re-adjusted distribution;
for each legal bag-level label of each bag:
if no instance satisfies yij = k, one instance of the bag is assigned the label k so that each legal bag-level label has at least one associated instance.
The technical solution provided by the invention has the following beneficial effects:
By turning weakly supervised clustering into a constrained objective-function minimization problem, a discriminative code library can be realized without violating the bag-level labels. Applied to three popular computer vision tasks, namely image clustering, semantic image segmentation, and multi-target localization, the method effectively improves the performance of the related applications.
Detailed description of the invention
To illustrate the technical solution of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the multi-label clustering method for computer vision provided by the present invention.
Specific embodiment
To make the structure and advantages of the invention clearer, the structure of the invention is further described below in conjunction with the accompanying drawings.
Embodiment one
The present invention provides a multi-label clustering method for computer vision. As shown in Fig. 1, the multi-label clustering method comprises:
Step 1: determine the matching probability between a target instance and the sample instances in the code library, and discriminate the label of the target instance based on the matching probability;
Step 2: for an instance x0, compute a probability estimate for each class, and compute the classification margin of the instance x0 whose label is y0;
Step 3: construct the WSRF objective function, minimize the objective function to obtain a minimization function, and, under weak supervision, process the obtained minimization function to complete the clustering.
In implementation, by turning weakly supervised clustering into a constrained objective-function minimization problem, a discriminative code library can be realized without violating the bag-level labels. WSRF, a weakly supervised clustering method, is then posed as a constrained margin maximization problem, and this margin maximization problem is solved with the DA (deterministic annealing) algorithm. Finally, the instances are clustered under the guidance of the given bag-level labels. The present invention improves on traditional clustering methods, clusters more effectively and efficiently, and can be applied to three popular computer vision tasks, namely image clustering, semantic image segmentation, and multi-target localization, effectively improving the performance of the related applications.
Step 1 determines the matching probability between the target instance and the sample instances in the code library and discriminates the label of the target instance based on the matching probability; it consists of two processing stages.
Stage one obtains the matching probability according to formula one.
Let {1, 2, ..., K} denote the label set and {Bi}, i = 1, ..., n, denote the set of training bags, where each bag contains multiple instances and each bag is associated with multiple bag-level labels. When a bag-level label is related to a bag, it is called a legal label of that bag (indicator 1); when a bag-level label is unrelated to the bag, it is called an illegal label (indicator 0).
In the formula, C(·) is the clustering function whose return value is the cluster to which an instance belongs, and C(x0) denotes the cluster of the instance x0. Based on C(·), a matching function F(·) between instances and classes is assumed: F(x0, k) denotes the probability that the instance x0 matches the k-th class and can be defined as the density of the class label k within the cluster of x0, in order to derive the subsequent L(x0, y0). yij is the instance-level label, which is unavailable during the training stage.
Stage two performs label discrimination according to the obtained matching probability, yielding L(x0, y0).
The maximizing term in the formula refers to the value of the class k at which the probability that the instance x0 belongs to class k is maximal, and F(x0, y0) refers to the probability that the instance x0 carries the instance-level label y0. L(x0, y0) is further defined as the discrimination of predicting the label of an instance x0 as y0; the objective function is later built on the global loss over these discriminations.
Clustering is thus expressed as a constrained objective-function minimization problem, and the above derivation serves exactly to formulate this objective function.
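To make stage one and stage two concrete, the following is a minimal Python sketch added here for illustration (not part of the original filing): it reads F(x0, k) as the fraction of instances in the cluster C(x0) whose instance-level label equals k, matching the "density of the class label" description above, and predicts the label as the class with the largest matching probability; the exact formulas one and two may differ in detail.

```python
# Hypothetical sketch of the matching probability F(x0, k) as the class-label
# density inside the cluster of x0, and label prediction as the argmax over k.
# The cluster membership and the instance-level labels are toy inputs here.
import numpy as np

def matching_probability(cluster_labels, k):
    """F(x0, k): fraction of instances in x0's cluster carrying label k."""
    cluster_labels = np.asarray(cluster_labels)
    return float(np.mean(cluster_labels == k))

def predict_label(cluster_labels, n_classes):
    """Label discrimination: the class k whose matching probability is largest."""
    probs = [matching_probability(cluster_labels, k) for k in range(n_classes)]
    return int(np.argmax(probs)), probs

# Toy example: x0 falls into a cluster whose members carry these labels.
labels_in_cluster = [2, 2, 0, 2, 1]
best_k, probs = predict_label(labels_in_cluster, n_classes=3)
print(best_k, probs)   # -> 2, [0.2, 0.2, 0.6]
```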
Since in weakly supervised clustering the true instance-level labels yij are unknown, the clustering process leads to the following objective function:
The instances are divided into K partitions, and each bag contains ni samples. The first constraint means that if any bag-level label k from {1, 2, ..., K} is legal for the i-th bag, then at least one instance in that bag belongs to the label k ∈ {1, 2, ..., K}. The second constraint means that if any bag-level label k from {1, 2, ..., K} is illegal for the i-th bag, then no instance in that bag belongs to the label k ∈ {1, 2, ..., K}.
The last two terms of the formula express that the global discrimination loss is minimized under the weak supervision constraints (the objective function). For each bag, every legal bag-level label should be associated with one or more instances; at the same time, an illegal bag-level label should not be associated with any instance.
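For illustration, the two constraints can be checked mechanically; the helper below is a sketch introduced here, and its function name and data layout are assumptions rather than part of the patent.

```python
# Illustrative check of the two weak-supervision constraints: every legal
# bag-level label must cover at least one instance of its bag, and no instance
# may carry a label that is illegal for its bag.
def satisfies_weak_supervision(instance_labels, legal_labels, n_classes):
    """instance_labels: list (one entry per bag) of lists of instance labels.
    legal_labels: list of sets of legal bag-level labels, one set per bag."""
    for y_bag, legal in zip(instance_labels, legal_labels):
        used = set(y_bag)
        if not legal.issubset(used):          # constraint 1: legal labels covered
            return False
        illegal = set(range(n_classes)) - legal
        if used & illegal:                    # constraint 2: no illegal labels used
            return False
    return True

print(satisfies_weak_supervision([[0, 2, 0], [1, 1]], [{0, 2}, {1}], 3))  # True
print(satisfies_weak_supervision([[0, 0, 0], [1, 1]], [{0, 2}, {1}], 3))  # False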
Step 2 builds the weakly supervised random forest, WSRF (weakly supervised random forest).
Random forest (RF): an RF is an algorithm that optimizes decisions by combining multiple decision trees. A decision tree is an algorithm that makes decisions with a tree structure: the sample data are split according to known conditions or features until a tree is built, and the leaf nodes of the tree identify the final decisions. New data can be judged with this tree, and the classification result of a data point is decided by the votes of the decision trees, that is, by how many trees vote for each class.
Effect: the solution to the above weakly supervised clustering problem is an extension of RF, which has an obvious speed advantage; the leaves of an RF contain valuable information about positions in feature space, and this position attribute is particularly useful for clustering. Second, in an empirical sense, an RF maximizes the classification margin.
An RF consists of multiple decision trees F = (t1, t2, ..., tN), each trained independently. In the training stage, the RF learns the classification function by constructing the trees, and each internal node of the RF is used to split the data space.
In step 2, the RF is used to compute, for the instance x0, a probability estimate for each class:
N denotes the number of decision trees, and pi(k|x0) denotes, for the instance x0, the probability estimate of the i-th tree for the k-th class; it can be calculated as the vote ratio obtained for class k in the leaf of the i-th tree:
li(x0) denotes the leaf node of the i-th tree into which x0 falls. The overall decision function of the RF is defined as:
where Gk(x0) = p(k | x0) (formula seven).
Gk(x0) denotes the probability estimated by the RF; formula six returns, for the instance x0, the value of k at which the probability of the k-th class is maximal.
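As a concrete reading of formulas five to seven, the sketch below uses scikit-learn as an assumed implementation choice: each tree's leaf vote ratio is what its predict_proba returns, the forest estimate p(k|x0) is the average over the N trees, and the decision function takes the argmax.

```python
# Sketch of the forest probability estimate and decision function: per-tree
# leaf vote ratios are averaged over the N trees, and the decision returns the
# class with the largest estimated probability.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

x0 = X[:1]                                    # a single query instance
# p_i(k | x0): vote ratio of class k in the leaf of the i-th tree reached by x0.
per_tree = np.stack([t.predict_proba(x0)[0] for t in forest.estimators_])
p_k = per_tree.mean(axis=0)                   # average over the N trees
decision = int(np.argmax(p_k))                # argmax_k G_k(x0)

print(p_k, decision)
# forest.predict_proba(x0) computes the same per-tree average internally.
```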
The classification margin of the instance x0 whose label is y0 is then calculated; specifically, the margin of the instance x0 with label y0 is computed by formula eight.
From formula eight and formula two it can be seen that the two are very similar. Their relationship is connected by defining the clustering function in the random-forest setting: the cluster result of an instance is determined by the set of leaf nodes into which it falls. Therefore, an N-dimensional vector can be used to represent the cluster result:
C(x0) = [l1(x0), ..., lN(x0)] (formula nine)
This means that an instance x0 falls into N leaf nodes of the N trees of the random forest RF; each leaf contains a subset of several training samples, and these samples belong to the same cluster. Formula one is then re-stated as formula ten.
Substituting formula ten into formula eight and combining it with formula two yields a combined expression.
This formula reflects that constructing an RF is equivalent to minimizing the global clustering loss over the discriminations.
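Formula nine has a direct counterpart in common random-forest libraries; in scikit-learn (again an assumed tooling choice) the vector of leaf indices [l1(x0), ..., lN(x0)] is returned by apply(), so two instances fall in the same fine-grained cluster exactly when their leaf-index vectors agree.

```python
# Sketch of the leaf-node cluster descriptor: the cluster result of an instance
# is the vector of leaf nodes it reaches in the N trees (forest.apply).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

leaf_vectors = forest.apply(X)          # shape (n_samples, N): [l_1(x), ..., l_N(x)]
c_x0, c_x1 = leaf_vectors[0], leaf_vectors[1]

# Two instances share the cluster exactly when all N leaf indices match;
# the fraction of matching trees gives a natural soft similarity.
same_cluster = bool(np.array_equal(c_x0, c_x1))
similarity = float(np.mean(c_x0 == c_x1))
print(same_cluster, similarity)
```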
Constructing the WSRF objective function:
Since an RF is generally used for instance-level clustering, a weak supervision constraint is introduced to overcome this limitation, so that clustering based on the weakly supervised random forest WSRF supports multiple bag-level labels. A WSRF is built to minimize the objective function (formula twelve).
Step 3 uses the DA algorithm to optimize and learn the model parameters.
An ordinary RF can maximize the margin of multi-class classification, but since the instance-level labels are unknown, the objective function of WSRF is a non-convex problem for which even an approximate solution is computationally hard to obtain. The DA algorithm is therefore used in this method: DA, as a continuous optimization, replaces the discrete variables with random continuous variables and defines a space of probability distributions. The optimization target is converted into finding, in this space of distributions, the distribution that minimizes the expected value of the objective function; an entropy term is further added, which makes the problem better behaved.
Here y denotes the unknown variables, J(y) is the objective function defined by formula twelve, p is the distribution over y, Ep is the expectation under the distribution p, H is the entropy of the distribution, and T is a cooling parameter called the temperature.
When the temperature is large, the entropy is the leading term and the model is uncertain. As T is cooled towards zero, the optimization gradually concentrates on the original form. Therefore, at a given temperature, WSRF learning can be re-formulated as minimizing the following function:
In the formula, the summation term is the definition of the entropy, T denotes the cooling parameter, p(k|xij) denotes the probability estimate of each tree that the instance xij belongs to the k-th class, and MG(xij, k) denotes the classification margin of the instance xij for the k-th class.
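As an illustration of the annealed objective, the sketch below combines an expected loss with a temperature-weighted entropy term; the E_p[J(y)] - T*H(p) form is an assumption based on the standard deterministic-annealing formulation described above, not necessarily the patent's exact expression.

```python
# Illustrative free-energy style objective for deterministic annealing: the
# expected loss under the distribution p minus a temperature-weighted entropy.
import numpy as np

def entropy(p, eps=1e-12):
    """H(p) = -sum_k p_k log p_k for one instance's class distribution."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def annealed_objective(expected_loss, p_all, T):
    """E_p[J(y)] - T * sum over instances of H(p(. | x_ij))."""
    return expected_loss - T * sum(entropy(p) for p in p_all)

p_all = [[0.7, 0.2, 0.1], [1 / 3, 1 / 3, 1 / 3]]
print(annealed_objective(expected_loss=2.5, p_all=p_all, T=1.0))
print(annealed_objective(expected_loss=2.5, p_all=p_all, T=0.01))  # after cooling
```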
Under weak supervision, the minimization function from the previous step is processed as follows:
(1) Generate the instance-level labels yij according to the fixed distribution p and subject to the weak supervision constraint.
In order to minimize Q(G, p), the following three iterative steps are applied to formula fourteen under the weak supervision constraints:
(a) According to the initialized distribution and under the weak supervision constraint,
generate the instance-level labels yij; the specific algorithm is as follows:
a1. For each instance xij in each bag, the distribution is re-adjusted while taking the legal labels into account; the effect is that every bag generates instance-level labels only from its list of legal bag-level labels.
Here a normalization factor rescales the distribution restricted to the legal labels, and the instance-level label yij is then drawn from this re-adjusted distribution.
a2. For each legal bag-level label of each bag:
if no instance satisfies yij = k, one instance of the bag is reassigned the label k (effect: each legal bag-level label is checked to ensure that it has at least one associated instance).
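Step (a) can be sketched as follows (an illustration added here; in particular, the a2 reassignment rule of taking the instance with the highest restricted probability for the missing label is an assumption, since the text only requires that every legal bag-level label end up with at least one instance).

```python
# Sketch of step (a): sample instance-level labels from the class distribution
# restricted to the legal bag-level labels, then make sure every legal label is
# used at least once.
import numpy as np

def generate_instance_labels(p_bag, legal, rng):
    """p_bag: (n_i, K) class distribution per instance; legal: set of legal labels."""
    n_i, K = p_bag.shape
    legal = sorted(legal)
    # a1: zero out illegal classes and renormalise (this renormalisation plays
    # the role of the normalization factor mentioned in the text).
    q = np.zeros_like(p_bag)
    q[:, legal] = p_bag[:, legal]
    q /= q.sum(axis=1, keepdims=True)
    y = np.array([rng.choice(K, p=q[j]) for j in range(n_i)])
    # a2: every legal bag-level label must have at least one associated instance.
    # (A fuller version would also guard against one reassignment undoing another.)
    for k in legal:
        if not np.any(y == k):
            y[int(np.argmax(q[:, k]))] = k
    return y

rng = np.random.default_rng(0)
p_bag = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]])
print(generate_instance_labels(p_bag, legal={0, 2}, rng=rng))
```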
(b) Construct the RF with the fixed instance-level labels and maximize the margin MG.
For the given instance-level labels, the classification margin is maximized; the optimized function is given by the corresponding formula.
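A minimal sketch of step (b), again using scikit-learn as an assumed tooling choice: the forest is fitted on the pooled instances with the sampled labels held fixed, and MG(x, k) is computed as p(k|x) minus the largest competing class probability, a common margin definition assumed here rather than the patent's exact expression.

```python
# Sketch of step (b): fit a random forest on the pooled instances with the
# sampled instance-level labels, then compute per-class margins MG(x_ij, k).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_forest_and_margins(bags, instance_labels, n_estimators=50, seed=0):
    X = np.vstack(bags)
    y = np.concatenate(instance_labels)
    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    forest.fit(X, y)
    proba = forest.predict_proba(X)                       # p(k | x_ij) per class
    margins = np.empty_like(proba)
    for k in range(proba.shape[1]):
        rest = np.delete(proba, k, axis=1)
        margins[:, k] = proba[:, k] - rest.max(axis=1)    # assumed MG(x_ij, k)
    return forest, margins

rng = np.random.default_rng(0)
bags = [rng.normal(size=(5, 2)), rng.normal(loc=3.0, size=(4, 2))]
instance_labels = [np.array([0, 0, 2, 2, 0]), np.array([1, 1, 1, 1])]
forest, margins = fit_forest_and_margins(bags, instance_labels)
print(margins.shape)    # (9, number of classes)
```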
(c) Adjust the distribution through the objective function (formula fourteen): Q(G, p) is differentiated
and the derivative is set to zero, yielding the solution for p(k | xij):
where a normalization factor ensures that the probabilities sum to one. The iteration stops when the set number of iterations is reached.
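Setting the derivative of an entropy-regularized objective to zero typically yields a temperature-scaled softmax; the sketch below adopts that form for the step-(c) update as an assumption, with the denominator playing the role of the normalization factor.

```python
# Sketch of step (c): update p(k | x_ij) as a temperature-scaled softmax of the
# margins (assumed closed form of the zero-derivative condition).
import numpy as np

def update_distribution(margins, T):
    """margins: (n_instances, K) array of MG(x_ij, k); returns p(k | x_ij)."""
    z = margins / T
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)   # the normalization factor

margins = np.array([[0.4, -0.1, -0.3], [-0.2, 0.5, -0.3]])
print(update_distribution(margins, T=1.0))
print(update_distribution(margins, T=0.05))   # low temperature: nearly one-hot
```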
The model parameters yij are finally obtained through the above optimization process and are substituted into the objective function of formula twelve to obtain the solution of the clustering function C(·).
Weakly supervised clustering is thus turned into a constrained objective-function minimization problem, modeled by the weakly supervised random forest WSRF as a constrained margin maximization problem (the value of the random forest lies in the feature-space position information contained in its leaves and in the fact that it empirically maximizes the classification margin; margin maximization here simply means maximizing the classification boundary), after which the instance-level descriptors are clustered under the guidance of the given bag-level labels.
The present invention provides a multi-label clustering method for computer vision, which determines the matching probability between a target instance and the sample instances in the code library and discriminates the label of the target instance based on the matching probability; computes, for an instance x0, a probability estimate for each class and the classification margin of the instance x0 with label y0; and constructs the WSRF objective function, minimizes it to obtain a minimization function, and processes the obtained minimization function under weak supervision to complete the clustering. By turning weakly supervised clustering into a constrained objective-function minimization problem, a discriminative code library can be realized without violating the bag-level labels. Applied to three popular computer vision tasks, namely image clustering, semantic image segmentation, and multi-target localization, the method effectively improves the performance of the related applications.
The serial numbers in the above embodiment are for description only and do not represent the order of assembly or use of the components.
The above description is only an embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. A multi-label clustering method for computer vision, characterized in that the multi-label clustering method comprises:
Step 1: determining the matching probability between a target instance and the sample instances in a code library, and discriminating the label of the target instance based on the matching probability;
Step 2: for an instance x0, computing a probability estimate for each class, and computing the classification margin of the instance x0 whose label is y0;
Step 3: constructing the WSRF objective function, minimizing the objective function to obtain a minimization function, and, under weak supervision, processing the obtained minimization function to complete the clustering.
2. The multi-label clustering method for computer vision according to claim 1, characterized in that determining the matching probability between the target instance and the sample instances in the code library and discriminating the label of the target instance based on the matching probability comprises:
obtaining the matching probability F(x0, k) according to formula one;
where C(·) is the clustering function whose return value is the cluster to which an instance belongs, C(x0) denotes the cluster of the instance x0, and yij is the instance-level label;
performing label discrimination according to the obtained matching probability F(x0, k) to obtain L(x0, y0);
where the maximizing term in the formula refers to the value of the class k at which the probability that the instance x0 belongs to class k is maximal.
3. The multi-label clustering method for computer vision according to claim 2, characterized in that computing a probability estimate for each class for the instance x0 and computing the classification margin of the instance x0 whose label is y0 comprises:
performing probability estimation based on formula three;
where pi(k|x0) denotes, for the instance x0, the probability estimate of the i-th tree for the k-th class;
computing the classification margin of the instance x0 whose label is y0.
4. The multi-label clustering method for computer vision according to claim 2, characterized in that minimizing the objective function to obtain the minimization function and processing the obtained minimization function under weak supervision to complete the clustering comprises:
re-formulating WSRF learning as minimizing the function Q(G, p);
where T denotes the cooling parameter, p(k|xij) denotes the probability estimate of each tree that the instance xij belongs to the k-th class, and MG(xij, k) denotes the classification margin of the instance xij for the k-th class;
generating instance-level labels yij according to the fixed distribution p and subject to the weak supervision constraint;
constructing an RF with the fixed instance-level labels and maximizing the margin MG;
adjusting the distribution by means of the objective function.
5. The multi-label clustering method for computer vision according to claim 4, characterized in that generating the instance-level labels yij according to the fixed distribution p and subject to the weak supervision constraint comprises:
for each instance xij in each bag, re-adjusting the distribution while taking the legal labels into account;
where a normalization factor rescales the distribution restricted to the legal labels, and the instance-level label yij is generated from the re-adjusted distribution;
for each legal bag-level label of each bag:
if no instance satisfies yij = k, one instance of the bag is assigned the label k.
CN201810622492.6A 2018-06-15 2018-06-15 Multi-tag clustering method for computer vision Pending CN109117859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810622492.6A CN109117859A (en) 2018-06-15 2018-06-15 Multi-tag clustering method for computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810622492.6A CN109117859A (en) 2018-06-15 2018-06-15 Multi-tag clustering method for computer vision

Publications (1)

Publication Number Publication Date
CN109117859A true CN109117859A (en) 2019-01-01

Family

ID=64822411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810622492.6A Pending CN109117859A (en) 2018-06-15 2018-06-15 Multi-tag clustering method for computer vision

Country Status (1)

Country Link
CN (1) CN109117859A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608471A (en) * 2015-12-28 2016-05-25 苏州大学 Robust transductive label estimation and data classification method and system
CN106971201A (en) * 2017-03-23 2017-07-21 重庆邮电大学 Multi-tag sorting technique based on integrated study
CN107273927A (en) * 2017-06-13 2017-10-20 西北工业大学 Sorting technique is adapted to based on the unsupervised field matched between class

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872350A (en) * 2019-02-18 2019-06-11 重庆市勘测院 A kind of new point cloud autoegistration method

Similar Documents

Publication Publication Date Title
Bendale et al. Towards open world recognition
Papernot et al. Practical black-box attacks against machine learning
Zhang et al. Zero-shot learning via joint latent similarity embedding
JP7266674B2 (en) Image classification model training method, image processing method and apparatus
CN103425996B (en) A kind of large-scale image recognition methods of parallel distributed
Ge et al. Modelling local deep convolutional neural network features to improve fine-grained image classification
Yan et al. Unsupervised and semi‐supervised learning: The next frontier in machine learning for plant systems biology
CN113887643B (en) New dialogue intention recognition method based on pseudo tag self-training and source domain retraining
Claypo et al. Opinion mining for thai restaurant reviews using K-Means clustering and MRF feature selection
Gabourie et al. Learning a domain-invariant embedding for unsupervised domain adaptation using class-conditioned distribution alignment
Wang et al. Towards calibrated hyper-sphere representation via distribution overlap coefficient for long-tailed learning
Deng et al. Citrus disease recognition based on weighted scalable vocabulary tree
Najar et al. A new hybrid discriminative/generative model using the full-covariance multivariate generalized Gaussian mixture models
CN111611395B (en) Entity relationship identification method and device
CN107993311B (en) Cost-sensitive latent semantic regression method for semi-supervised face recognition access control system
Jung et al. A novel on automatic K value for efficiency improvement of K-means clustering
CN109117859A (en) Multi-tag clustering method for computer vision
Mehmood et al. Classifier ensemble optimization for gender classification using genetic algorithm
Chen et al. DVHN: A deep hashing framework for large-scale vehicle re-identification
Jaffel et al. A symbiotic organisms search algorithm for feature selection in satellite image classification
Huang et al. Efficient optimization for linear dynamical systems with applications to clustering and sparse coding
CN113516199B (en) Image data generation method based on differential privacy
Zamzami et al. An accurate evaluation of msd log-likelihood and its application in human action recognition
Saito et al. Demian: Deep modality invariant adversarial network
Zhang et al. EMD metric learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-01-01