CN105844303A - Sampling type clustering integration method based on local and global information - Google Patents

Sampling type clustering integration method based on local and global information

Info

Publication number
CN105844303A
CN105844303A CN201610217372.9A CN201610217372A CN 105844303 A
Authority
CN
China
Prior art keywords
clustering
sampling
local
global information
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610217372.9A
Other languages
Chinese (zh)
Inventor
杨云 (Yang Yun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN201610217372.9A priority Critical patent/CN105844303A/en
Publication of CN105844303A publication Critical patent/CN105844303A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10: Character recognition
    • G06V 30/19: Recognition using electronic means
    • G06V 30/192: Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V 30/194: References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a sampling-type clustering ensemble method based on local and global information. First, mixed sampling is performed on the target data set to generate a learning sample; cluster analysis is performed on the learning-sample space to generate a clustering partition; the quality of the partition is evaluated, and the weight vector of the target data set is updated according to the evaluation result. These steps are repeated for multiple rounds, producing multiple clustering partitions. The partitions are then fused into one new feature representation, a traditional clustering algorithm performs cluster analysis on this feature representation, and the ensemble clustering result is generated. With this method, ensemble learning gains strong noise immunity as well as a strong ability to target problem data; the new features effectively and comprehensively express global and local cluster-structure information, so the ensemble learning algorithm performs well on data sets with different characteristics.

Description

A sampling-type clustering ensemble method based on local and global information
Technical field
The invention belongs to the field of machine learning, and in particular relates to a sampling-type clustering ensemble method based on local and global information.
Background technology
The invention discloses a sampling-type clustering ensemble method based on local and global information, mainly involving two aspects: the learning-sample sampling mechanism and the clustering ensemble learning algorithm.

(1) Learning-sample sampling mechanism. Data sampling techniques mainly include three kinds: undersampling, oversampling, and mixed sampling. Random sampling is one of the simpler undersampling techniques: it randomly removes majority-class samples from the data set, reducing the computational cost of the learning process, and it works particularly well when the data set contains noise. Another is the weighted sampling method, which assigns a weight to each sample in the data set; the probability that a sample is drawn into the learning sample is determined by the size of its weight. This makes the learning process directed, focusing on problem samples; however, such methods are sensitive to noise and outlier samples, which easily make learning inaccurate. The drawbacks of sampling techniques are that they easily cause classifier overfitting and may also increase computational cost. When undersampling the majority classes, some useful information in the majority-class data may be lost; when oversampling the minority classes, the training time and complexity can be very high, sometimes even causing classifier overfitting. The above sampling methods are designed mainly for supervised learning problems: their sampling mechanisms rely on prior class-label information about the data set, which unsupervised learning problems do not provide. Hence, for clustering ensemble algorithms, only the random sampling of Bagging and the weighted sampling of Boosting are feasible, and both methods have their own defects.

(2) Clustering ensemble algorithm. The purpose of clustering ensemble learning is to combine multiple cluster analysis results of the same target data set into one final ensemble clustering result of higher quality. In general, an ensemble learning algorithm consists of two parts, the generation of member clusterers and their fusion, and existing clustering ensemble learning algorithms mainly differ in these two aspects. When generating multiple member clusterers in the first step, whether a set of high-quality and diverse member clusterers can be produced is an important factor determining the quality of the ensemble learning result. For clustering ensemble learning algorithms, many methods can be used to produce multiple member clusterers; the common ones are: applying different clustering algorithms to the same data set to produce different cluster results; applying the same clustering algorithm with different initializations and parameter settings to produce different cluster results; applying the same clustering algorithm in multiple feature spaces of the same data set to produce different cluster results; and sampling learning samples from the target data set and applying the same clustering algorithm in the different learning-sample spaces to produce different cluster results.
Existing clustering ensemble learning algorithms have a significant limitation: they are effective only for data sets of a single characteristic and make strong assumptions about the cluster structure of the data set. There is therefore an urgent need for a clustering ensemble method that is generally applicable to data sets with different characteristics.
Summary of the invention
It is an object of the invention to provide a sampling-type clustering ensemble method based on local and global information, intended to solve the problem that existing clustering ensemble learning algorithms are effective only for data sets of a single characteristic and make strong assumptions about the cluster structure of the data set.
The present invention is achieved as follows. The sampling-type clustering ensemble method based on local and global information comprises the following steps:
First, mixed sampling is performed on the target data set to generate a learning sample; cluster analysis is performed on the learning-sample space to generate a clustering partition; the quality of the clustering partition is then evaluated, and the weight vector of the target data set is updated according to the evaluation result; these steps are repeated for multiple rounds, producing multiple clustering partitions;
Then the multiple clustering partitions are fused into one new feature representation, a clustering algorithm performs cluster analysis on this feature representation, and the ensemble clustering result is generated.
Further, the clustering ensemble fusion function of the clustering algorithm converts multiple partitions of the same data set into one new feature representation according to the local and global information of the cluster structure; in this feature space, a clustering algorithm generates the final ensemble partition.
Further, the mixed sampling combines two sampling methods, random sampling and weighted sampling; specifically (a code sketch follows the two steps):
Step 1: extract an initial learning sample from the target data set by random sampling;
Step 2: extract the final learning sample from the initial learning sample by weighted sampling.
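As an illustration of these two steps, the following is a minimal sketch in Python; the function name hybrid_sample, the NumPy-based form, and the default rates (mirroring the S_R = 50%, S_W = 20% setting used in the experiments below) are assumptions, not part of the original disclosure.

```python
import numpy as np

def hybrid_sample(X, w, s_r=0.5, s_w=0.2, seed=None):
    """Two-stage hybrid sampling sketch: random undersampling at rate s_r,
    then weighted sampling at rate s_w, giving an overall rate s_r * s_w."""
    rng = np.random.default_rng(seed)
    # Step 1: uniform random sampling -> initial learning sample.
    idx0 = rng.choice(len(X), size=max(1, int(len(X) * s_r)), replace=False)
    # Step 2: weighted sampling from the initial sample; points with higher
    # weight (clustered poorly so far) are more likely to be selected.
    p = w[idx0] / w[idx0].sum()
    idx1 = rng.choice(idx0, size=max(1, int(len(idx0) * s_w)),
                      replace=False, p=p)
    return X[idx1], idx1
```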
Further, multiple rounds of sampling are performed on the target data set, and the K-means clustering algorithm is used to perform cluster analysis on each sample space, producing multiple initial clustering partitions P_1, …, P_T. The K-means clustering algorithm is as follows (a code sketch follows the steps):
Step 1: arbitrarily select k objects from the n data objects as initial cluster centers;
Step 2: according to the mean of each cluster, compute the distance from each object to the center objects, and reassign each object to the nearest cluster according to the minimum distance;
Step 3: recompute the mean of each cluster;
Step 4: repeat steps 2 and 3 until the clusters no longer change.
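The four steps above admit a direct sketch in Python, assuming NumPy; the handling of empty clusters (keeping the previous center) is an assumption the text does not specify:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=None):
    """Plain K-means following steps 1-4 above (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Step 1: arbitrarily pick k objects as initial cluster centers.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign every object to its nearest center (minimum distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute the mean of each cluster.
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # Step 4: stop when the clusters no longer change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```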
Further, the clustering ensemble fusion function integrates the local and global information of the cluster structure, converting multiple partitions P_1, …, P_T of the same data set into one new feature representation H = {α_1H_1, …, α_TH_T}, where H_t and α_t are respectively the feature representation and the weight of partition P_t;
h_{i,j} = \exp\!\left( -\Big( \frac{1}{|NB|} \sum_{x \in NB(x_i)} d(x, \mu_j) \Big)^{2} \Big/ \, 2\sigma_i \sigma_j \right)
where x_i denotes sample i, |NB| is the user-defined neighborhood size, the inner sum gives the average distance from the points in the neighborhood NB(x_i) to the cluster representative point μ_j, and σ_i and σ_j are respectively the local scaling factors at the point x_i and at the cluster representative point μ_j.
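For concreteness, one plausible reading of the formula above in Python follows; the neighborhood size nb and the use of the nb-th-nearest-neighbor distance as the local scaling factor are assumptions, since the text leaves both user-defined:

```python
import numpy as np
from scipy.spatial.distance import cdist

def partition_features(X, centers, nb=7):
    """Feature block H_t for one partition, entries h_{i,j} as above.
    nb and the local-scaling choice are illustrative assumptions."""
    D = cdist(X, X)
    order = np.argsort(D, axis=1)
    nb_idx = order[:, 1:nb + 1]                    # NB(x_i), excluding x_i
    sigma_i = D[np.arange(len(X)), order[:, nb]]   # local scale at x_i
    Dc = cdist(X, centers)                         # d(x, mu_j)
    sigma_j = np.sort(cdist(centers, X), axis=1)[:, nb - 1]  # scale at mu_j
    mean_d = Dc[nb_idx].mean(axis=1)               # avg. dist. NB(x_i) -> mu_j
    return np.exp(-mean_d ** 2 / (2.0 * sigma_i[:, None] * sigma_j[None, :]))
```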
The mixed sampling method proposed by the invention merges random sampling and weighted sampling into one: the random sampling within the mixed sampling mechanism reduces noisy data points in the learning sample, while the weighted sampling within the mixed sampling mechanism selects the harder learning samples for targeted cluster analysis, combining the advantages of the two classical ensemble algorithms Bagging and Boosting. The clustering ensemble fusion function converts multiple partitions of the same data set into one new feature representation according to the local and global information of the cluster structure; in this feature space, any traditional clustering algorithm can generate the final ensemble partition, and this feature representation effectively and comprehensively characterizes both global and local cluster-structure information, so the ensemble learning algorithm performs well on data sets with different characteristics. Concretely, the invention first performs mixed sampling on the target data set to generate a learning sample and generates a clustering partition; the quality of the clustering partition is then evaluated, and the weight vector of the target data set is updated according to the evaluation result: the weights of data points clustered with high quality are reduced, while the weights of data points clustered with poor quality are raised. Weighted sampling according to the updated weight vector then selects the poorly clustered data points in the next round for targeted cluster analysis. The above work is repeated for multiple rounds, producing multiple clustering partitions; these are finally fused into one new feature representation, a traditional clustering algorithm performs cluster analysis on this feature representation, and the clustering ensemble result is generated.
Accompanying drawing explanation
Fig. 1 is a flow diagram of the sampling-type clustering ensemble method based on local and global information provided by an embodiment of the present invention.
Fig. 2 shows a method comparison on artificial data with a cluster structure of different density distributions, provided by an embodiment of the present invention.
Fig. 3 shows a method comparison on artificial data with an imbalanced cluster structure, provided by an embodiment of the present invention.
Fig. 4 shows a method comparison on artificial data with a special cluster structure, provided by an embodiment of the present invention.
Detailed description of the invention
So that the content, features, and effects of the present invention can be further appreciated, the following embodiments are given and described in detail with reference to the accompanying drawings.
Referring to Fig. 1, a sampling-type clustering ensemble method based on local and global information includes:
S101: the target data set;
S102: an initial clustering generation module based on the hybrid learning-sample collection mechanism;
S103: a clustering ensemble fusion function module based on global and local cluster-structure information;
S104: the clustering ensemble result.
Further, the initial clustering generation module based on the hybrid learning-sample collection mechanism optimally combines two sampling methods, random sampling and weighted sampling, to perform hybrid sampling; specifically:
Step 1: extract an initial learning sample from the target data set by random sampling;
Step 2: extract the final learning sample from the initial learning sample by weighted sampling.
This new learning-sample acquisition method gives ensemble learning not only the same noise immunity as the Bagging ensemble algorithm, but also the same targeted ability to handle problem data as the Boosting ensemble algorithm. From the standpoint of theoretical analysis, the soundness of the mixed sampling mechanism can be proved by deriving the cost function of clustering ensemble learning. As shown in formula (eq. 1), it can be concluded that the exponential-form ensemble learning cost function is in fact an upper bound of the actual cost function.
L e = Σ n p ( x n ) exp ( Σ t α t l t ( x n ) ) = Σ n p ( x n ) exp ( α T l T ( x n ) ) Π t = 1 T - 1 exp ( α t l t ( x n ) ) = Σ n p ( x n ) w T ( x n ) exp ( α T l T ( x n ) ) - - - ( e q .1 )
where w_T(x_n) = \prod_{t=1}^{T-1} \exp(\alpha_t l_t(x_n)) collects the contributions of the first T-1 rounds, and α_t is the weight of the initial clustering partition P_t, reflecting the clustering quality of P_t; p(x_n) is the prior probability over the data set, which is unknown and is commonly defined as p(x_n) = 1/N; and l_t(x_n) is the cost value of clustering partition P_t at data point x_n, computed as follows:
l_t(x_n) = 1 - \max(h_n^t) + \min(h_n^t) \qquad (eq. 2)
For clustering partition P_t, max(h_n^t) and min(h_n^t) respectively represent the confidence with which data point x_n is assigned to its nearest cluster and to its farthest cluster; the confidence is computed as in (eq. 5).
Further deriving from formula (eq. 1), the actual cost function is bounded by its exponential form as follows:
L = \sum_t \alpha_t \sum_n p(x_n) l_t(x_n) = \sum_n p(x_n) \sum_t \alpha_t l_t(x_n) < \exp\Big( \sum_n p(x_n) \sum_t \alpha_t l_t(x_n) \Big) \le \sum_n p(x_n) \exp\Big( \sum_t \alpha_t l_t(x_n) \Big) \qquad (eq. 3)
It can be seen that the cost L actually reflects the cluster analysis quality of the ensemble learning model, which is consistent with the Boosting algorithm. Therefore, after each round generates an initial clustering partition P_t, the weighted sampling mechanism within the mixed sampling redistributes a weight to each data point, computed as follows:
w_{t+1}(x_n) = \frac{w_t(x_n) \exp\big( \alpha_t l_t(x_n) \big)}{\sum_n w_t(x_n) \exp\big( \alpha_t l_t(x_n) \big)} \qquad (eq. 4)
In addition, p(x_n) = 1/N in formula (eq. 3) corresponds exactly to the random sampling mechanism of the Bagging algorithm. Thus the mixed sampling mechanism proposed by the invention optimally optimizes the exponential-form cost function L_e of clustering ensemble learning.
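Combining (eq. 2) and (eq. 4), one round of the weight redistribution can be sketched as follows; treating H_t as an n-by-k confidence matrix for partition P_t is an assumption about data layout, and the vectorized NumPy form is illustrative:

```python
import numpy as np

def update_weights(w, H_t, alpha_t):
    """One round of the weight redistribution in (eq. 4), using the
    per-point cost of (eq. 2). H_t is the n-by-k confidence matrix of
    the current partition P_t; alpha_t is its partition weight."""
    l_t = 1.0 - H_t.max(axis=1) + H_t.min(axis=1)  # cost l_t(x_n), (eq. 2)
    w_new = w * np.exp(alpha_t * l_t)              # up-weight poorly clustered points
    return w_new / w_new.sum()                     # normalize, (eq. 4)
```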
Further, the initial clustering generation module based on the hybrid learning-sample collection mechanism performs multiple rounds of sampling on the target data set and uses the K-means clustering algorithm to perform cluster analysis on each sample space, producing multiple initial clustering partitions P_1, …, P_T.
Further, the K-means clustering algorithm is described as follows:
Step 1: arbitrarily select k objects from the n data objects as initial cluster centers;
Step 2: according to the mean (center object) of each cluster, compute the distance from each object to the center objects, and reassign each object to the nearest cluster according to the minimum distance;
Step 3: recompute the mean (center object) of each (changed) cluster;
Step 4: repeat steps 2 and 3 until the clusters no longer change.
Further, in the clustering ensemble fusion function module based on global and local cluster-structure information, the new fusion function integrates the local and global information of the cluster structure, converting multiple partitions P_1, …, P_T of the same data set into one new feature representation H = {α_1H_1, …, α_TH_T}, where H_t and α_t are respectively the feature representation and the weight of partition P_t;
h_{i,j} = \exp\!\left( -\Big( \frac{1}{|NB|} \sum_{x \in NB(x_i)} d(x, \mu_j) \Big)^{2} \Big/ \, 2\sigma_i \sigma_j \right) \qquad (eq. 5)
In formula (eq. 5), x_i denotes sample i, |NB| is the user-defined neighborhood size, the inner sum gives the average distance from the points in the neighborhood of x_i to the cluster representative point μ_j, and σ_i and σ_j are respectively the local scaling factors at the point x_i and at the cluster representative point μ_j. In this feature space, any traditional clustering algorithm, such as K-means, can be used to generate the final clustering ensemble result.
To verify the performance of the present invention, several two-dimensional data sets with complicated cluster structures were generated, and the clustering ensemble algorithm disclosed by the invention, the K-means algorithm, the Bagging ensemble algorithm, and the Boosting ensemble algorithm were each used to perform cluster analysis on these two-dimensional data sets. Performance is measured by the uniform standard of classification accuracy, computed as follows:
CA(L, P) = \left( \sum_{i=1}^{K^*} \max_{j \in \{1, \dots, K\}} \frac{2\, |G_i \cap C_j|}{|G_i| + |C_j|} \right) \Big/ K^* \qquad (eq. 6)
where L = {G_1, …, G_{K*}} represents the true class labels of the data set and P = {C_1, …, C_K} represents the cluster result.
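A direct transcription of (eq. 6) into Python, assuming integer label arrays:

```python
import numpy as np

def classification_accuracy(true_labels, pred_labels):
    """CA(L, P) as in (eq. 6): for each true class G_i, take the
    best-matching cluster C_j under the Dice overlap
    2|Gi & Cj| / (|Gi| + |Cj|), then average over the K* true classes."""
    score = 0.0
    classes = np.unique(true_labels)
    for g in classes:
        Gi = true_labels == g
        score += max(2.0 * np.count_nonzero(Gi & (pred_labels == c))
                     / (np.count_nonzero(Gi) + np.count_nonzero(pred_labels == c))
                     for c in np.unique(pred_labels))
    return score / len(classes)
```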
Because the clustering ensemble algorithm disclosed by the invention, the Bagging ensemble algorithm, and the Boosting ensemble algorithm are all sampling-based learning models, for fairness of comparison the three algorithms were given the same sampling rate of 10%. According to the developed mixed sampling mechanism, S = S_R × S_W, with the random sampling rate set to S_R = 50% and the weighted sampling rate to S_W = 20%. As shown in Figs. 2-4.a, three sets of two-dimensional data sets were artificially generated, with different markers representing data points of different classes. Fig. 2.a shows an artificial data set whose cluster structure has different density distributions: it has three classes with equal numbers of data points, but the data points of each class have different density distributions. Fig. 3.a shows an artificial data set with an imbalanced cluster structure: it has four classes with equal density distributions, but the numbers of data points per class differ. Fig. 4.a shows an artificial data set with a special cluster structure: it has two classes with different numbers of data points, each distributed with a special cluster structure. The K-means algorithm, the Bagging ensemble algorithm, the Boosting ensemble algorithm, and the clustering ensemble algorithm disclosed by the invention were tested on the three sets of two-dimensional data. As shown in Figs. 2-4, after adopting the mixed sampling method, the present invention outperforms the other compared algorithms in cluster analysis on artificial data with different cluster structures (Figs. 2-4.e); after adding the new fusion function, its performance (Figs. 2-4.f) is further improved.
The above describes only preferred embodiments of the present invention and does not limit the present invention in any form. Any simple modification, equivalent variation, or alteration of the above embodiments made according to the technical spirit of the present invention falls within the scope of the technical solution of the present invention.

Claims (5)

1. A sampling-type clustering ensemble method based on local and global information, characterized in that the sampling-type clustering ensemble method based on local and global information comprises the following steps:
first performing mixed sampling on a target data set to generate a learning sample, performing cluster analysis on the learning-sample space to generate a clustering partition, then evaluating the quality of the clustering partition and updating the weight vector of the target data set according to the evaluation result, and repeating for multiple rounds to produce multiple clustering partitions;
then fusing the multiple clustering partitions into one new feature representation, performing cluster analysis on this feature representation with a clustering algorithm, and generating the ensemble clustering result.
2. The sampling-type clustering ensemble method based on local and global information of claim 1, characterized in that the clustering ensemble fusion function of the clustering algorithm converts multiple partitions of the same data set into one new feature representation according to the local and global information of the cluster structure, and a clustering algorithm is used in this feature space to generate the final ensemble partition.
3. The sampling-type clustering ensemble method based on local and global information of claim 1, characterized in that the mixed sampling includes two sampling methods, random sampling and weighted sampling; specifically:
Step 1: extracting an initial learning sample from the target data set by random sampling;
Step 2: extracting the final learning sample from the initial learning sample by weighted sampling.
4. The sampling-type clustering ensemble method based on local and global information of claim 1, characterized in that multiple rounds of sampling are performed on the target data set and the K-means clustering algorithm is used to perform cluster analysis on each sample space, producing multiple initial clustering partitions P_1, …, P_T; the K-means clustering algorithm is as follows:
Step 1: arbitrarily selecting k objects from the n data objects as initial cluster centers;
Step 2: according to the mean of each cluster, computing the distance from each object to the center objects, and reassigning each object to the nearest cluster according to the minimum distance;
Step 3: recomputing the mean of each cluster;
Step 4: repeating steps 2 and 3 until the clusters no longer change.
5. The sampling-type clustering ensemble method based on local and global information of claim 2, characterized in that the clustering ensemble fusion function integrates the local and global information of the cluster structure, converting multiple partitions P_1, …, P_T of the same data set into one new feature representation H = {α_1H_1, …, α_TH_T}, where H_t and α_t are respectively the feature representation and the weight of partition P_t;

h_{i,j} = \exp\!\left( -\Big( \frac{1}{|NB|} \sum_{x \in NB(x_i)} d(x, \mu_j) \Big)^{2} \Big/ \, 2\sigma_i \sigma_j \right)

where x_i denotes sample i, |NB| is the user-defined neighborhood size, the inner sum gives the average distance from the points in the neighborhood of x_i to the cluster representative point μ_j, and σ_i and σ_j are respectively the local scaling factors at the point x_i and at the cluster representative point μ_j.
CN201610217372.9A 2016-04-08 2016-04-08 Sampling type clustering integration method based on local and global information Pending CN105844303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610217372.9A CN105844303A (en) 2016-04-08 2016-04-08 Sampling type clustering integration method based on local and global information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610217372.9A CN105844303A (en) 2016-04-08 2016-04-08 Sampling type clustering integration method based on local and global information

Publications (1)

Publication Number Publication Date
CN105844303A true CN105844303A (en) 2016-08-10

Family

ID=56597227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610217372.9A Pending CN105844303A (en) 2016-04-08 2016-04-08 Sampling type clustering integration method based on local and global information

Country Status (1)

Country Link
CN (1) CN105844303A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273915A (en) * 2017-05-17 2017-10-20 西北工业大学 The target classification identification method that a kind of local message is merged with global information
CN107273915B (en) * 2017-05-17 2019-10-29 西北工业大学 A kind of target classification identification method that local message is merged with global information
CN107423764A (en) * 2017-07-26 2017-12-01 西安交通大学 K Means clustering methods based on NSS AKmeans and MapReduce processing big data
CN110766032A (en) * 2018-07-27 2020-02-07 国网江西省电力有限公司九江供电分公司 Power distribution network data clustering integration method based on hierarchical progressive strategy
CN111126419A (en) * 2018-10-30 2020-05-08 顺丰科技有限公司 Dot clustering method and device
CN111126419B (en) * 2018-10-30 2023-12-01 顺丰科技有限公司 Dot clustering method and device
CN109582706A (en) * 2018-11-14 2019-04-05 重庆邮电大学 The neighborhood density imbalance data mixing method of sampling based on Spark big data platform
CN113918785A (en) * 2021-10-11 2022-01-11 广东工业大学 Enterprise data analysis method based on cluster ensemble learning
CN113918785B (en) * 2021-10-11 2024-06-25 广东工业大学 Enterprise data analysis method based on cluster ensemble learning

Similar Documents

Publication Publication Date Title
CN105844303A (en) Sampling type clustering integration method based on local and global information
Cai et al. Classification of power quality disturbances using Wigner-Ville distribution and deep convolutional neural networks
Xu et al. Accurate and interpretable bayesian mars for traffic flow prediction
CN112382352B (en) Method for quickly evaluating structural characteristics of metal organic framework material based on machine learning
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN107766929B (en) Model analysis method and device
CN109033170B (en) Data repairing method, device and equipment for parking lot and storage medium
Biswas et al. Visualization of time-varying weather ensembles across multiple resolutions
CN108540988B (en) Scene division method and device
CN106127179A (en) Based on the Classification of hyperspectral remote sensing image method that adaptive layered is multiple dimensioned
CN103971136A (en) Large-scale data-oriented parallel structured support vector machine classification method
CN105046323A (en) Regularization-based RBF network multi-label classification method
CN115526246A (en) Self-supervision molecular classification method based on deep learning model
CN109961129A (en) A kind of Ocean stationary targets search scheme generation method based on improvement population
Zheng et al. Increase: Inductive graph representation learning for spatio-temporal kriging
CN109242039A (en) It is a kind of based on candidates estimation Unlabeled data utilize method
CN109978051A (en) Supervised classification method based on hybrid neural networks
CN102521202B (en) Automatic discovery method of complex system oriented MAXQ task graph structure
CN108764296A (en) More sorting techniques of study combination are associated with multitask based on K-means
CN108846845A (en) SAR image segmentation method based on thumbnail and hierarchical fuzzy cluster
CN115759291B (en) Spatial nonlinear regression method and system based on ensemble learning
Hamid et al. Marginalising over stationary kernels with Bayesian quadrature
CN105183804A (en) Ontology based clustering service method
CN109345537A (en) Based on the SAR image segmentation method that the multiple dimensioned CRF of high-order is semi-supervised
Bakhtiarnia et al. PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160810