CN109388664A - Similarity discrimination method for small and medium-sized river basins - Google Patents
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
Abstract
The invention discloses a similarity discrimination method for small and medium-sized river basins. First, a clustering ensemble is constructed from the data: selected feature indicators form feature subsets, which are fed into a base clustering algorithm to obtain the clustering ensemble. Next, the similarity matrix of the clustering ensemble is constructed and used as the input matrix of a preset cluster-fusion algorithm. Finally, matrix-level cluster fusion is performed: the preset cluster-fusion algorithm is applied to the similarity matrix to realize the similarity judgement. The invention makes full use of the geographic and hydrological data features of small and medium-sized basins and applies data-mining techniques to their similarity analysis, solving the technical problem that hydrological basin similarity is difficult to judge.
Description
Technical field
The invention belongs to the field of data-mining technology, and in particular relates to a similarity discrimination method for small and medium-sized river basins.
Background art
In China, floods are among the most frequent and most damaging natural disasters. The regulation of large rivers has by now become fairly complete, and the enormous investment in dykes, river training and reservoirs has laid a solid foundation for national flood control. However, small and medium-sized basins can also produce floods of varying severity after rainstorms or heavy rain, and compared with large rivers, many of these basins have received too little attention in flood control. According to incomplete statistics, China has more than 50,000 small and medium-sized basins, and 85% of Chinese cities are located on their banks. In recent years, China has repeatedly experienced extraordinary and extreme weather, putting immense pressure on flood prevention in these basins. Flood management for small and medium-sized basins is therefore urgent; yet many disaster-prone basins lie in areas with no, or insufficient, hydrological data, and cannot provide hydrologists with enough data and parameters for hydrological forecasting.
To solve the forecasting problem in these data-scarce areas, the most common current method is the hydrological analogy method: a reference basin similar to the design basin is found through basin similarity analysis, and the hydrological features of the reference basin, such as its hydrological data, statistical parameters and characteristic values, are transplanted to the design basin on which hydrological analysis and prediction are to be carried out, finally completing the relevant hydrological simulation and enabling effective flood control and disaster relief in data-scarce basins. The first step of transplanting data to a data-scarce small or medium-sized basin, however, is the similarity analysis itself. At present, this analysis relies mainly on manual work, with no complete automated analysis model.
At present, the analysis and selection of analogous basins depend essentially on the personal judgement of hydrology experts, a mode of analysis that is often subjective and inaccurate. Moreover, the process of basin similarity analysis carried out by purely hydrological means itself involves many uncertainties, so it is genuinely difficult to turn the usual qualitative approach into a quantitative similarity analysis.
Summary of the invention
To solve the above problems, the present invention proposes a similarity discrimination method for small and medium-sized river basins that makes full use of the geographic and hydrological data features of such basins and applies data-mining techniques to their similarity analysis, solving the technical problem that hydrological basin similarity is difficult to judge.
The present invention adopts the following technical scheme. A similarity discrimination method for small and medium-sized river basins comprises the following steps:
1) construct the clustering ensemble from the data: select feature indicators to construct feature subsets, and feed the feature subsets into a base clustering algorithm to obtain the clustering ensemble;
2) construct the similarity matrix of the clustering ensemble, which serves as the input matrix of a preset cluster-fusion algorithm;
3) perform matrix-level cluster fusion: apply the preset cluster-fusion algorithm to the similarity matrix to realize the similarity judgement.
Preferably, the clustering ensemble in step 1) is constructed as follows: the ensemble is built with the weighted-random-sampling-based construction algorithm CCE-WRS, i.e. weighted random sampling (WRS) is applied to the feature indicators of the data set to obtain multiple different feature subsets from which the ensemble is constructed. The specific steps are:
11) judge whether this is the first iteration; if so, initialize the weights of the feature indicators of the data set and then enter step 12); otherwise enter step 12) directly;
12) perform weighted random sampling according to the weight proportions of the feature indicators, constructing n different feature subsets;
13) run the base clustering algorithm on the feature subsets to construct a clustering ensemble containing m cluster members, where the number of feature subsets n equals the number of cluster members m;
14) update the feature weights according to the evaluation index of the clustering ensemble; specifically, if the evaluation index of the current iteration's ensemble exceeds that of the previous iteration, record the current ensemble, update the ensemble's evaluation index, and update the weights of the feature indicators in the specific feature subset, namely the subset corresponding to the cluster member with the highest overall cluster quality (OCQ);
15) repeat steps 12) to 14) until the iteration count or the iteration termination condition is met.
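The weighted random sampling of steps 11) and 12) can be sketched as follows. This is a minimal illustration, not the patented CCE-WRS implementation: it assumes uniform initial weights, a subset size of 8 (an arbitrary choice for the sketch), and uses the Efraimidis-Spirakis key trick (each feature gets the key u^(1/w) for a uniform random u, and the features with the largest keys are kept), which draws features without replacement with probability proportional to their weights; all function names here are ours.

```python
import random

def weighted_feature_subset(weights, subset_size, rng):
    # One weighted random sample without replacement: key = u ** (1/w),
    # larger weight -> larger expected key -> more likely to be kept.
    keys = {f: rng.random() ** (1.0 / w) for f, w in weights.items()}
    ranked = sorted(keys, key=keys.get, reverse=True)
    return sorted(ranked[:subset_size])

def build_subsets(weights, n_subsets, subset_size, seed=0):
    # Step 12): draw n different feature subsets under the current weights.
    rng = random.Random(seed)
    return [weighted_feature_subset(weights, subset_size, rng)
            for _ in range(n_subsets)]

# 27 feature indicators with uniform initial weights, as in the first iteration
weights = {i: 1.0 for i in range(27)}
subsets = build_subsets(weights, n_subsets=10, subset_size=8)
```

Raising a feature's weight in step 14) then simply makes it more likely to appear in the next iteration's subsets.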
Preferably, the data set is normalized before step 11).
Preferably, the base clustering algorithm in step 13) is the K-Medoids clustering algorithm, and the ensemble is constructed as follows:
131) randomly select k data points from the input feature subset as initial cluster centers; assign each remaining data point to its nearest initial cluster center, forming k clusters;
132) traverse every non-center point p in each cluster, compute the cost E incurred if p replaces the cluster's center o, and let the non-center point p with the smallest cost E replace the original center o;
133) repeat steps 131) and 132) until the cluster centers no longer change, finally obtaining a clustering ensemble comprising m cluster members.
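Steps 131) to 133) amount to the classic PAM-style swap loop. The sketch below is a simplified, brute-force rendering of that loop under assumed Euclidean distance and a fixed seed, not the exact embodiment: every candidate swap of a medoid for a non-medoid point is costed, and the cheapest configuration is kept until no swap lowers the total cost E.

```python
import random

def k_medoids(points, k, dist, seed=0, max_iter=100):
    """Brute-force K-Medoids: swap medoids while the total cost E decreases."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)

    def cost(ms):
        # E: total distance from every point to its nearest medoid
        return sum(min(dist(points[i], points[m]) for m in ms)
                   for i in range(len(points)))

    best = cost(medoids)
    for _ in range(max_iter):
        improved = False
        for mi in range(k):
            for p in range(len(points)):
                if p in medoids:
                    continue
                cand = medoids[:mi] + [p] + medoids[mi + 1:]
                c = cost(cand)
                if c < best:   # step 132): keep the cheapest swap
                    medoids, best, improved = cand, c, True
        if not improved:       # step 133): centers no longer change
            break
    labels = [min(range(k), key=lambda j: dist(points[i], points[medoids[j]]))
              for i in range(len(points))]
    return medoids, labels

euclid = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
medoids, labels = k_medoids(pts, 2, euclid)
```

Because the centers are always actual data points, a single outlier cannot drag a center away the way it can with a K-Means mean, which is the robustness argument the description makes.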
Preferably, the evaluation index of the clustering ensemble in step 14) is the OCQ-NMI comprehensive evaluation index: an evaluation index based on overall cluster quality (OCQ) and a diversity index based on normalized mutual information (NMI) are combined, and a balance weight between cluster quality and diversity is set, giving an index that measures the comprehensive quality of the ensemble. The OCQ-NMI(ω) index is:

OCQ-NMI(ω) = ω × OCQ + (1 − ω) × NMIBDM

where ω is the balance weight between cluster quality and diversity; the larger ω is, the more the comprehensive index is influenced by cluster quality. OCQ is the average cluster quality of the members in the ensemble; setting a balance weight between cluster compactness and cluster proximity gives the cluster quality of each member, i.e. the cluster quality Ocq(ξ):

Ocq(ξ) = 1 − [ξ × Cmp + (1 − ξ) × Prox]

where Cmp denotes cluster compactness, Prox denotes cluster proximity, and ξ is the balance weight between them. The compactness Cmp expresses how concentrated each cluster of a cluster member is, and is computed as:

Cmp = (1 / k_m) × Σ_{i=1}^{k_m} var(C_i^m) / var(X)

where k_m is the number of clusters, C_i^m denotes the i-th cluster of the m-th cluster member, and var(C_i^m) is its within-cluster variance; the within-cluster variance var(X) is computed as:

var(X) = (1 / N) × Σ_{i=1}^{N} d(x_i, x̄)²

where N is the number of data points of data set X, d(x_i, x̄) is the distance between data point x_i and x̄, and x̄ is the mean of data set X.

The proximity Prox expresses the closeness between clusters and is inversely proportional to the distance between them:

Prox = (2 / (k_m(k_m − 1))) × Σ_{i<j} exp(−d(c_i^m, c_j^m)² / (2σ²))

where σ is a Gaussian constant, c_i^m and c_j^m are the center points of clusters C_i^m and C_j^m, and d(c_i^m, c_j^m) is the distance between these center points.

Mutual information (MI) measures the degree of interdependence between two random variables; normalized mutual information (NMI) restricts MI to the range 0 to 1, from which the NMI value between two cluster members π_a and π_b follows:

NMI(π_a, π_b) = Σ_{i=1}^{k_a} Σ_{j=1}^{k_b} n_{i,j} log(N·n_{i,j} / (n_i·n_j)) / sqrt((Σ_{i=1}^{k_a} n_i log(n_i/N)) × (Σ_{j=1}^{k_b} n_j log(n_j/N)))

where k_a and k_b are the numbers of clusters in members π_a and π_b respectively; n_{i,j} is the number of data points shared by the i-th cluster of π_a and the j-th cluster of π_b; n_i is the number of data points in the i-th cluster of π_a; and n_j is the number of data points in the j-th cluster of π_b.

The NMI value of the entire ensemble is the average pairwise NMI of its cluster members:

NMI = (2 / (C(C − 1))) × Σ_{a<b} NMI(π_a, π_b)

where C is the number of cluster members. Since the NMI value is inversely related to diversity (the larger the NMI value, the smaller the diversity), the diversity index NMIBDM of the final ensemble is computed as:

NMIBDM = 1 − NMI
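The NMI-based diversity index can be computed directly from the members' label vectors. A small sketch under the standard definitions (the helper names are ours, not the patent's): nmi() computes the normalized mutual information between two cluster members, and nmibdm() turns the average pairwise NMI of an ensemble into the diversity score NMIBDM = 1 − NMI.

```python
from math import log
from collections import Counter
from itertools import combinations

def nmi(a, b):
    """Normalized mutual information between two label vectors."""
    n = len(a)
    ca, cb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))                    # n_ij counts
    mi = sum(nij / n * log(n * nij / (ca[i] * cb[j]))
             for (i, j), nij in joint.items())
    ha = -sum(c / n * log(c / n) for c in ca.values())   # entropy of a
    hb = -sum(c / n * log(c / n) for c in cb.values())   # entropy of b
    return mi / (ha * hb) ** 0.5 if ha > 0 and hb > 0 else 1.0

def nmibdm(members):
    """Ensemble diversity: 1 - average pairwise NMI of the cluster members."""
    pairs = list(combinations(members, 2))
    return 1.0 - sum(nmi(a, b) for a, b in pairs) / len(pairs)

# three cluster members over four data points; two agree, one differs
ensemble = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 0, 1]]
score = nmibdm(ensemble)
```

Identical members give NMI = 1 (no diversity); independent labelings give NMI = 0, so a larger NMIBDM signals a more diverse ensemble.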
Preferably, step 2) constructs the connected-triple-based similarity matrix (CTS) of the clustering ensemble using the weighted connected-triple algorithm (WCT). The specific steps are:
21) compute the similarity between clusters C_i and C_j from the set of connected triples of C_i and C_j;
22) compute the similarity between data points x_i and x_j within the cluster members;
23) from the similarities between all data points in the ensemble, form the CTS similarity matrix.
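To make the connected-triple idea of step 21) concrete, here is a hedged sketch. It assumes the shared-member (Jaccard) ratio as the weight between two clusters, which matches the later description that the weight depends on the two clusters' data-point sets, and accumulates the weaker of the two links over every common third cluster; it illustrates the principle rather than reproducing the exact WCT formulas.

```python
def cluster_weight(Xi, Xj):
    # Assumed WCT edge weight: shared members over all members (Jaccard).
    Xi, Xj = set(Xi), set(Xj)
    union = Xi | Xj
    return len(Xi & Xj) / len(union) if union else 0.0

def triple_similarity(ci, cj, clusters):
    # Clusters ci and cj are similar if both overlap a common third cluster
    # ck (a "connected triple"); take the weaker of the two links each time.
    return sum(min(cluster_weight(ci, ck), cluster_weight(cj, ck))
               for ck in clusters if ck is not ci and ck is not cj)

A, B, C = {1, 2, 3}, {3, 4, 5}, {2, 3, 4}
sim = triple_similarity(A, B, [A, B, C])
```

Note that A and B overlap only slightly on their own, yet both overlap C strongly, so the triple through C lends them similarity; this is exactly the indirect evidence the CTS matrix encodes.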
Preferably, in step 3) the cluster-fusion algorithm is the fuzzy-C-means-based spectral clustering fusion algorithm SP-FCM, and the cluster fusion proceeds as follows:
31) process the CTS similarity matrix with spectral clustering; specifically, convert the CTS similarity matrix into a Laplacian matrix, decompose the matrix via Laplacian eigenmaps (LM), and select the k smallest eigenvalues of the CTS similarity matrix together with their corresponding eigenvectors;
32) analyze the data set formed by the k eigenvectors with the fuzzy C-means clustering algorithm (FCM) to obtain the clustering result, and perform the final similarity analysis of the small and medium-sized basins according to this result.
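The fuzzy C-means stage of step 32) can be sketched in isolation (the spectral step 31), the Laplacian eigen-decomposition, is omitted here). This is a minimal FCM with the usual fuzzifier m = 2, a deterministic initialization of our own choosing (evenly spaced data points as initial centers) rather than the embodiment's, and toy 2-D data standing in for the eigenvector data set.

```python
def fcm(points, c, m=2.0, iters=50):
    """Minimal fuzzy C-means: alternate membership and center updates."""
    n, d = len(points), len(points[0])
    # deterministic toy initialization: evenly spaced points as centers
    centers = [list(points[i * (n - 1) // (c - 1)]) for i in range(c)]
    U = []
    for _ in range(iters):
        U = []
        for x in points:
            dists = [max(sum((xi - ci) ** 2
                             for xi, ci in zip(x, cc)) ** 0.5, 1e-12)
                     for cc in centers]
            # membership of x in cluster j (standard FCM update, m = 2)
            U.append([1.0 / sum((dists[j] / dists[l]) ** (2 / (m - 1))
                                for l in range(c)) for j in range(c)])
        for j in range(c):
            # center j: membership-weighted mean of all points
            w = [U[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers[j] = [sum(w[i] * points[i][k] for i in range(n)) / tot
                          for k in range(d)]
    labels = [max(range(c), key=lambda j: U[i][j]) for i in range(n)]
    return centers, labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, labels = fcm(pts, 2)
```

In the method itself the rows fed to fcm() would be the spectral embedding of the basins, and the soft memberships U indicate how strongly each basin belongs to each similarity group before the hard labels are taken.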
Beneficial effects achieved by the invention: the present invention, a similarity discrimination method for small and medium-sized river basins, makes full use of the geographic and hydrological data features of such basins, applies data-mining techniques to their similarity analysis, and solves the technical problem that hydrological basin similarity is difficult to judge. From the geographic and meteorological elements of small and medium-sized basins, the invention screens out the features that significantly influence basin similarity as analysis indicators, preprocesses the basin data set, and obtains the data cube needed for the similarity analysis. Combined with the OCQ-NMI comprehensive evaluation index, an iterative ensemble construction algorithm based on weighted random sampling is proposed and applied to the preprocessed basin data set, ultimately constructing a clustering ensemble with high cluster quality and diversity. Meanwhile, based on clustering-ensemble theory, the invention focuses on a clustering ensemble algorithm tailored to small and medium-sized basin data and builds a comparatively objective and reasonable ensemble model, replacing the traditional discrimination approach founded on human judgement. It thus realizes the similarity discrimination of small and medium-sized river basins, so that the hydrological features of the reference basin found can be transplanted onto the design basin, providing more accurate theoretical and technical support for expanding the hydrological data of design basins.
Detailed description of the invention
Fig. 1 is the method flow chart of the embodiment of the present invention;
Fig. 2 is the two-dimensional mapping of the small and medium-sized basin data set in the embodiment of the present invention;
Fig. 3 compares the clustering efficiency of the CCE-RS and CCE-WRS algorithms of the present invention.
Specific embodiment
The technical solution of the present invention is further elaborated below with reference to the drawings and in combination with the embodiments.
Fig. 1 is the method flow diagram of the embodiment of the present invention.
A similarity discrimination method for small and medium-sized river basins comprises the following steps:
1) construct the clustering ensemble from the data: select feature indicators to construct feature subsets, and feed the feature subsets into a base clustering algorithm to obtain the clustering ensemble;
2) construct the similarity matrix of the clustering ensemble, which serves as the input matrix of a preset cluster-fusion algorithm;
3) perform matrix-level cluster fusion: apply the preset cluster-fusion algorithm to the similarity matrix to realize the similarity judgement.
In the present embodiment, according to the screening standard prescribed by the Ministry of Water Resources, small and medium-sized basins with catchment areas of less than 50 square kilometers are selected from digital elevation model (Digital Elevation Model, DEM) data, and 27 feature indicators are chosen, comprising 18 topographic indicators and 9 meteorological indicators. The topographic indicators are: drainage area, basin length, basin mean slope, form factor, elongation ratio, river density, river maintenance constant, average river-chain length, average river-chain catchment area, total channel length, river frequency, river-chain frequency, perennial main-stream length, perennial main-stream gradient, maximum flow-path distance, area under the basin hypsometric curve, approximation constant K, and area gradient. The meteorological indicators are: multi-year average rainfall, the average rainfall of each month from June to September, and the flood-season (June to September) maximum rainfalls over 1, 3, 6 and 12 hours. From the hydrological database, 88 small and medium-sized basins with complete data are screened out to form the data set. Since the data set contains 88 basins and 27 feature indicators, its size is 88 × 27. Because the basin data set draws multiple feature dimensions from both the DEM and meteorological elements, it has a high-dimensional data characteristic.
In the present embodiment:
Data set: X = {x_1, x_2, …, x_N}, where x_1, x_2, …, x_N denote the 1st, 2nd, …, N-th data points and N is the number of data points contained in the data set;
Clustering ensemble: Π = {π_1, π_2, …, π_M}, where π_1, π_2, …, π_M denote the 1st, 2nd, …, M-th cluster members and M is the number of cluster members contained in the ensemble;
Cluster member: π_m = {C_1^m, C_2^m, …, C_K^m}, where C_1^m, C_2^m, …, C_K^m denote the 1st, 2nd, …, K-th clusters of the m-th cluster member π_m and K is the number of clusters the member contains;
Cluster: C_k^m denotes the k-th cluster of the m-th cluster member π_m.
As a preferred embodiment, step 1) constructs the clustering ensemble as follows: the ensemble is built with the weighted-random-sampling-based construction algorithm (Constructing Clustering Ensembles by Weighted Random Sampling, CCE-WRS), i.e. weighted random sampling (WRS) is applied to the feature indicators of the data set to obtain multiple different feature subsets from which the ensemble is constructed. The specific steps are:
11) judge whether this is the first iteration; if so, initialize the weights of the feature indicators of the data set and then enter step 12); otherwise enter step 12) directly;
12) perform weighted random sampling according to the weight proportions of the feature indicators, constructing n different feature subsets;
13) run the base clustering algorithm on the feature subsets to construct a clustering ensemble containing m cluster members; in Fig. 1 the number of feature subsets n equals the number of cluster members m;
14) update the feature weights according to the evaluation index of the clustering ensemble; specifically, if the evaluation index of the current iteration's ensemble exceeds that of the previous iteration, record the current ensemble, update the ensemble's evaluation index, and update the weights of the feature indicators in the specific feature subset; in the present embodiment the specific feature subset is the one corresponding to the cluster member with the highest overall cluster quality (OCQ);
15) repeat steps 12) to 14) until the iteration count or the iteration termination condition is met; in the present embodiment the termination condition is that the evaluation index of the ensemble levels off.
As a preferred embodiment, the data set is normalized before step 11). Considering that the magnitudes of the feature indicators in the data set differ considerably, normalization brings every dimension of the data set to the same magnitude, yielding an 88 × 27 data set that can be used directly for basin similarity discrimination.
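The normalization step can be sketched as plain min-max scaling, which maps every feature column to [0, 1] and thereby puts all 27 indicators on the same magnitude. The function name and the min-max choice are ours; the embodiment states only that the data set is normalized, not which scaler is used.

```python
def min_max_normalize(rows):
    """Scale each feature column of a row-major data set to [0, 1]."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

# toy stand-in for the 88 x 27 basin data set (3 basins, 3 indicators)
data = [[50.0, 3.2, 120.0],
        [10.0, 1.1, 300.0],
        [30.0, 2.0, 210.0]]
norm = min_max_normalize(data)
```

A constant column (identical values for every basin) carries no information, so the sketch simply maps it to 0.0 instead of dividing by zero.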
As a preferred embodiment, the base clustering algorithm in step 13) is the K-Medoids clustering algorithm, and the ensemble is constructed as follows:
131) randomly select k data points from the input feature subset as initial cluster centers; assign each remaining data point to its nearest initial cluster center, forming k clusters;
132) traverse every non-center point p in each cluster, compute the cost E incurred if p replaces the cluster's center o, and let the non-center point p with the smallest cost E replace the original center o;
133) repeat steps 131) and 132) until the cluster centers no longer change, finally obtaining a clustering ensemble comprising m cluster members.
Compared with the error introduced by noise points when repeatedly clustering with the K-Means algorithm in traditional ensemble construction, the present embodiment uses the K-Medoids clustering algorithm instead of the common K-Means algorithm as the base clustering algorithm, which effectively improves the cluster quality of each member of the ensemble, i.e. it is more robust.
As a kind of preferred embodiment, the evaluation index of cluster collective is OCQ-NMI overall merit in the step 14)
Index both guarantees that clustering each cluster member that collective includes gathers around in order to obtain the cluster collective of a high comprehensive quality
There is high clustering result quality, and guarantee to keep otherness between each cluster member, by evaluation index and base based on OCQ clustering result quality
It is combined, sets in the difference appraisal index of normalised mutual information (NormalizedMutualInformation:NMI)
Balance weight between clustering result quality and otherness obtains measuring the OCQ-NMI comprehensive evaluation index of cluster collective's comprehensive quality,
OCQ-NMI (ω) index are as follows:
Wherein ω represents the balance weight between clustering result quality and otherness, and the bigger expression comprehensive evaluation index of ω is more by poly-
The influence of class quality, for the balanced clustering result quality and otherness for considering cluster member, the value of ω is set as in the present embodiment
0.5, that is, indicate that the two importance is identical;For the average cluster quality for clustering member in cluster collective, setting cluster is intensively
Property and cluster propinquity between balance weight obtain it is each cluster member clustering result quality, i.e. clustering result quality Ocq (ξ) are as follows:
Ocq (ξ)=1- [ξ × Cmp+ (1- ξ) × Prox]
Wherein Cmp indicates cluster intensive;Prox indicates cluster propinquity;ξ indicates cluster intensive and cluster propinquity
Between balance weight, be set as 0.5 in the present embodiment, that is, both indicate that importance is identical;Clustering intensive Cmp indicates one
The concentration for clustering each clustering cluster in member, is mainly calculated by cluster internal variance.In general, cluster internal variance is smaller
Then illustrate that this cluster member is more intensive, cluster intensive calculation formula is as follows:
Wherein kmIndicate clustering cluster number;Indicate the ith cluster cluster of m-th of cluster member;Indicate poly-
Class clusterVariance, the calculation formula of cluster internal variance var (X) is as follows:
Wherein N indicates the data point number of data set X;Indicate data point xiWithThe distance between;It indicates
The mean value of data set X;
The close degree between neighbour's property Prox expression clustering cluster is clustered, the distance between clustering cluster is inversely proportional, generally
For, the distance the big between cluster, illustrates these clustering clusters more disperses, and cluster neighbour's property Prox calculation formula is as follows:
It is wherein σ Gaussian constant;For clustering clusterCentral point;For clustering clusterCentral point;
Indicate clustering clusterCentral point and clustering clusterThe distance between central point;
Mutual information (Mutual Information, MI) measures the degree of interdependence between two random variables. Unlike the correlation coefficient, it is not limited to real-valued random variables; it is generally determined jointly by the joint probability distribution of the two random variables and their respective marginal probability distributions. Normalized mutual information (Normalized Mutual Information, NMI) restricts the mutual information MI to the range 0 to 1, from which the NMI value between two cluster members π_a and π_b is:
where k_a and k_b denote the numbers of clusters of the cluster members π_a and π_b respectively; n_{i,j} denotes the number of data points shared by the i-th cluster of π_a and the j-th cluster of π_b; n_i denotes the number of data points in the i-th cluster of π_a; and n_j denotes the number of data points in the j-th cluster of π_b;
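A Python sketch of the pairwise NMI computation, using the counts n_{i,j}, n_i, n_j described above; the square-root entropy normalization is an assumption, since the exact normalization used in the patent is not reproduced in this text:

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    # NMI between two cluster members given as label sequences over the same
    # N data points; n_ij, n_i, n_j follow the definitions in the text.
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    mi = sum(nij / n * math.log(n * nij / (ca[i] * cb[j]))
             for (i, j), nij in joint.items())
    ha = -sum(ni / n * math.log(ni / n) for ni in ca.values())
    hb = -sum(nj / n * math.log(nj / n) for nj in cb.values())
    if ha == 0.0 or hb == 0.0:
        return 1.0 if ha == hb else 0.0
    return mi / math.sqrt(ha * hb)   # assumed sqrt normalization into [0, 1]

# Identical partitions give NMI = 1; independent ones give NMI near 0,
# i.e. maximal diversity (the diversity index 1 - NMI is then near 1).
assert abs(nmi([0, 0, 1, 1], [1, 1, 0, 0]) - 1.0) < 1e-9
assert nmi([0, 0, 1, 1], [0, 1, 0, 1]) < 1e-9
```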
The NMI value of the entire cluster collective is the average of the pairwise NMI values of its cluster members, defined as follows:
where C denotes the number of cluster members. NMI applied to the entire cluster collective is used to compute its overall diversity; since the NMI value is inversely related to diversity (a larger NMI value means smaller diversity), the diversity evaluation index of the final cluster collective, NMIBDM, is computed as:
NMIBDM = 1 - NMI
After the iterations of step 1) are complete, m cluster members based on the small and medium-sized watershed data set are obtained; these cluster members have high clustering quality and large mutual diversity, and together they constitute the cluster collective based on the small and medium-sized watershed data set.
As a preferred embodiment, step 2) specifically constructs the connected-triple-based similarity matrix (Connected-Triple Based Similarity, CTS) of the cluster collective with the weighted connected-triple algorithm (Weighted Connected-Triple, WCT).
The weight between two clusters is computed as follows:
where w_{ij} denotes the weight between clusters C_i and C_j, and X_i and X_j denote the sets of data points in C_i and C_j respectively;
A connected triple Λ = {V, E} consists of a vertex set V = {v1, v2, v3} and an edge set E: if vertices v1 and v2 each share an edge with the same vertex v3, then v1 and v2 are considered similar, and the vertex set formed by these three vertices together with the edge set formed by the two edges constitutes a connected triple;
The specific steps of constructing the similarity matrix of the cluster collective with the WCT algorithm are:
21) computing the similarity of clusters C_i and C_j from their sets of connected triples;
22) computing the similarity between data points x_i, x_j in the cluster members;
23) computing the similarity between every pair of data points in the cluster collective Π, forming the CTS similarity matrix.
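Steps 21) to 23) can be sketched as follows. The Jaccard weighting of shared data points between clusters, the min-weight similarity over common neighbours (the connected triples), and the decay constant dc are modeling assumptions in the spirit of the link-based WCT/CTS literature; the patent's exact formulas are not reproduced in this text:

```python
from itertools import combinations

def cts_matrix(members, dc=0.8):
    # members: list of label sequences, one per cluster member, over the same N points.
    n = len(members[0])
    clusters, point_cluster = [], []
    for m in members:
        lab2id = {}
        for lab in sorted(set(m)):
            lab2id[lab] = len(clusters)
            clusters.append({i for i, l in enumerate(m) if l == lab})
        point_cluster.append([lab2id[l] for l in m])
    k = len(clusters)
    # Prerequisite for step 21: edge weight between two clusters of the
    # collective = Jaccard overlap of their data-point sets (assumed weighting).
    w = [[0.0] * k for _ in range(k)]
    for p, q in combinations(range(k), 2):
        inter = len(clusters[p] & clusters[q])
        if inter:
            w[p][q] = w[q][p] = inter / len(clusters[p] | clusters[q])
    # Step 21: two clusters are similar when they share strong edges to a
    # common third cluster -- sum of min edge weights over all triples.
    wct = [[sum(min(w[p][t], w[q][t]) for t in range(k)) for q in range(k)]
           for p in range(k)]
    top = max(max(row) for row in wct) or 1.0
    # Steps 22-23: point-pair similarity averaged over the members; 1 when the
    # member puts both points in one cluster, else a dc-scaled cluster similarity.
    cts = [[0.0] * n for _ in range(n)]
    for cl in point_cluster:
        for i in range(n):
            for j in range(n):
                cts[i][j] += 1.0 if cl[i] == cl[j] else dc * wct[cl[i]][cl[j]] / top
    return [[v / len(members) for v in row] for row in cts]

members = [[0, 0, 1, 1], [0, 0, 0, 1]]
C = cts_matrix(members)
assert abs(C[0][1] - 1.0) < 1e-9   # co-clustered in every member
assert C[0][3] < C[0][1]           # separated in both members
```

Even for point pairs never placed in the same cluster, the triple term can yield a positive similarity, which is how the CTS matrix uncovers relationships that a plain co-occurrence count misses.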
As a preferred embodiment, in step 3) the cluster fusion algorithm is the spectral clustering fusion algorithm based on fuzzy C-means (Spectral Clustering based on Fuzzy C-means, SP-FCM), and the specific steps of the cluster fusion are:
31) processing the CTS similarity matrix with spectral clustering, specifically: converting the CTS similarity matrix into a Laplacian matrix, performing matrix decomposition by Laplacian eigenmaps (Laplacian Eigenmaps, LM), and generating and selecting the k smallest eigenvalues of the CTS similarity matrix together with the corresponding eigenvectors;
32) analyzing the data set formed by the k eigenvectors with the fuzzy C-means clustering algorithm (Fuzzy C-means, FCM) to obtain the clustering result, and performing the final similarity analysis of the small and medium-sized watersheds according to this clustering result.
Spectral clustering takes spectral graph partitioning as its basic criterion and converts clustering into a multiway partition of an undirected graph. The spectral clustering algorithm is simple to use, and its treatment of complex data types is often better than the result obtained directly with K-Means. Since classical spectral clustering uses K-Means and therefore yields a hard-partition result, and since, given the purpose of small and medium-sized watershed similarity analysis, a hard partition cannot intuitively reveal the degree of similarity between a design watershed and its analog watersheds, this embodiment uses only the first stage of spectral clustering, obtaining the k eigenvectors of the similarity matrix, and replaces the base clustering algorithm of the final step with the FCM clustering algorithm, so as to obtain a clustering result based on fuzzy partitioning.
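As an illustration of steps 31) and 32), the sketch below embeds a similarity matrix through the graph Laplacian and clusters the k eigenvectors with a small fuzzy C-means loop. The unnormalized Laplacian, the fuzzifier m = 2, and the fixed iteration count are assumptions; the patent text does not fix these details:

```python
import numpy as np

def sp_fcm(S, k, m=2.0, iters=100, seed=0):
    # Step 31: Laplacian of the similarity matrix, then the k eigenvectors
    # belonging to the k smallest eigenvalues.
    S = np.asarray(S, dtype=float)
    L = np.diag(S.sum(axis=1)) - S
    _, vecs = np.linalg.eigh(L)          # eigh returns ascending eigenvalues
    Y = vecs[:, :k]
    # Step 32: fuzzy C-means on the embedded points Y; returns the N x k
    # membership matrix (each row sums to 1, i.e. a fuzzy partition).
    rng = np.random.default_rng(seed)
    U = rng.random((len(Y), k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ Y) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(Y[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U

# Two obvious blocks in the similarity matrix separate into two fuzzy clusters.
S = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
U = sp_fcm(S, k=2)
assert np.allclose(U.sum(axis=1), 1.0)
```

The soft membership rows are exactly what the hard K-Means step of classical spectral clustering cannot provide: each watershed receives a graded degree of belonging to every cluster.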
Experiment 1: comparing cluster collectives generated by the cluster collective generation algorithm based on K-Means and on K-Medoids. Specifically: K-Means and K-Medoids are each called repeatedly on the input data set to obtain multiple cluster members, which together form a cluster collective; the resulting collectives are then evaluated and compared with the proposed evaluation index. Since K-Means and K-Medoids can give different results because the initial centers are chosen at random, this embodiment compares the results averaged over many runs. The parameters of the algorithms are as follows:
Input data: the 88 × 27 small and medium-sized watershed data set of Jiangxi Province;
Number of clusters S: obtained according to CCE-WRS, with the maximum number of clusters defined as follows:
where S_max is the maximum number of clusters and N is the number of samples of the data set; in this embodiment S is taken as the integer value closest to it;
Number of cluster members I: 10;
The experiment is run with the above parameter settings; the final results are shown in Table 1:
Table 1: comparison of the clustering effects of the traditional cluster collective construction algorithm with different base clusterings
From the experimental results in Table 1 it can be seen that the cluster members generated with K-Medoids as the base clustering algorithm have better clustering quality. The reason is that the small and medium-sized watershed data set used as experimental input contains some noise points. This was verified by visualization with multidimensional scaling (Multidimensional Scaling, MDS), reducing the 27-dimensional watershed data set to a two-dimensional data set and re-mapping it onto a figure; the visualization result is shown in Figure 2. The experiment shows that the insensitivity of K-Medoids to noise points can, to a certain extent, improve the clustering quality of each cluster member in the collective, demonstrating its superiority on the small and medium-sized watershed data set.
Since the centers of K-Medoids are chosen from the points of the data set itself, it can also be seen from Table 1 that the diversity among the cluster members of the collective ultimately generated by the K-Medoids algorithm is smaller than that obtained with K-Means. And since the OCQ-NMI comprehensive evaluation index considers clustering quality and clustering diversity simultaneously, the OCQ-NMI comprehensive evaluation index of the cluster collective obtained by the traditional construction algorithm based on K-Medoids is instead lower than that based on K-Means.
Experiment 2: with K-Medoids as the base clustering algorithm, cluster collectives are constructed with the traditional construction method and with the iterative construction method based on random sampling (Constructing Clustering Ensembles by Random Sampling, CCE-RS) respectively, and the clustering effects are compared with the evaluation index. The parameters are as follows:
Input data: the 88 × 27 small and medium-sized watershed data set of Jiangxi Province;
Number of clusters S: obtained according to CCE-WRS;
Number of cluster members I: 10;
Number of iterations: 1000;
The experiment is run with the above parameter settings; the final results are shown in Table 2:
Table 2: comparison of the clustering effects of the traditional construction method and the CCE-RS algorithm
From the experimental results in Table 2 it can be seen that the comprehensive evaluation index of the collective constructed by the traditional method is very low, because its diversity index is too low, i.e. the cluster members of the collective are too similar. One reason is that the traditional method always takes the complete data set as input and therefore cannot maintain diversity in the input data; another is that the algorithmic characteristics of K-Medoids cause the diversity to decline. The RS-based construction method randomly selects feature indexes from the complete data set to form diverse feature subsets, which then serve as input data sets, improving the diversity among the cluster members of the collective. Although the missing dimensions cause some decline in clustering quality, judged by the OCQ-NMI comprehensive evaluation index the overall quality of the cluster collective is greatly improved. Since the RS-based construction method selects feature subsets at random, however, it is difficult for it to guarantee the stability of the collective's overall quality.
Experiment 3: with K-Medoids as the base clustering algorithm, cluster collectives are constructed with CCE-RS and with CCE-WRS respectively, and then compared in terms of clustering effect and clustering efficiency. The parameters are as follows:
Input data: the 88 × 27 small and medium-sized watershed data set of Jiangxi Province;
Number of clusters S: obtained according to CCE-WRS;
Number of cluster members I: 10;
Number of iterations: 1000;
The experiment is run with the above parameter settings; the final clustering effect is shown in Table 3 and the clustering efficiency in Fig. 3:
Table 3: comparison of the clustering effects of the CCE-RS and CCE-WRS algorithms
From the experimental results in Table 3 it can be seen that the clustering quality of the cluster collective generated by the CCE-WRS algorithm is slightly higher than that of the collective generated by the RS-based construction algorithm, while its diversity is slightly lower; the OCQ-NMI comprehensive evaluation indexes of the two collectives differ little. This is because WRS reduces the probability that feature indexes which harm clustering are selected, which guarantees the stability of the quality of the cluster members, but it likewise reduces the variety of the feature subsets and thus lowers the diversity among the cluster members.
From Fig. 3 it can be seen that the comprehensive index of the cluster collective generated by the CCE-RS algorithm shows many peaks and fluctuates strongly; it may still be in an unstable state after many iterations, and the iteration can only terminate after the full iteration count has been completed, so its clustering efficiency is low. With the CCE-WRS algorithm, the comprehensive index reaches a large value and levels off after the iteration count reaches about 100, because the algorithm dynamically updates the weights of the feature indexes, reducing the probability that feature indexes which harm clustering are selected, and can thus terminate the iteration quickly. It follows that the clustering efficiency of CCE-WRS is much higher than that of CCE-RS.
Experiment 4: the cluster collective based on the small and medium-sized watershed data set is processed to generate the co-association matrix and the CTS matrix of the collective respectively. The parameters are as follows:
Input data: the cluster collective generated by the CCE-WRS algorithm in Experiment 3; the co-association matrix of the collective is computed from the input data, and part of the matrix is shown in Table 4:
Table 4: co-association matrix obtained from the cluster collective based on the Jiangxi Province watershed data set
Input data: the cluster collective generated by the CCE-WRS algorithm in Experiment 3; the CTS matrix of the collective is computed from the input data with the WCT algorithm, and part of the matrix is shown in Table 5:
Table 5: CTS matrix of the cluster collective based on the small and medium-sized watershed data set
Comparison of the experimental results shows that the values in the CTS matrix are all larger than those in the co-association matrix, indicating that the CTS matrix used in this embodiment discovers the hidden relationships between data points, i.e. it strengthens the degree of similarity between data points.
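For reference, the co-association ("mutual correlation") matrix compared against the CTS matrix in this experiment can be computed directly from the cluster members; the sketch below assumes the usual definition, i.e. the fraction of members that place each pair of data points in the same cluster:

```python
def co_association(members):
    # Entry (i, j): fraction of cluster members in which data points i and j
    # fall into the same cluster.
    n, M = len(members[0]), len(members)
    return [[sum(1 for m in members if m[i] == m[j]) / M for j in range(n)]
            for i in range(n)]

members = [[0, 0, 1, 1], [0, 0, 0, 1]]
A = co_association(members)
assert A[0][1] == 1.0   # together in both members
assert A[1][2] == 0.5   # together in one of the two members
```

Because this matrix only counts direct co-occurrence, it assigns zero similarity to pairs never clustered together, whereas the CTS matrix can still score them through shared neighbouring clusters.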
Experiment 5: a direct FCM clustering experiment and an SP-FCM cluster fusion experiment are carried out respectively, and the two clustering results are analyzed. The parameters of the experiment are as follows:
Input data: the 88 × 27 small and medium-sized watershed data set of Jiangxi Province;
Direct clustering with the FCM algorithm is run on the above input data; part of the final results are shown in Table 6:
Table 6: clustering result after direct FCM clustering
| Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Cluster 5 | Cluster 6 | Cluster 7 | Cluster 8 | Cluster 9 | Cluster 10
---|---|---|---|---|---|---|---|---|---|---
Mukou | 0.0014 | 0.0010 | 0.0014 | 0.0022 | 0.0016 | 0.0092 | 0.9764 | 0.0017 | 0.0036 | 0.0015
Xianfeng | 0.0037 | 0.0026 | 0.0035 | 0.0058 | 0.0042 | 0.0291 | 0.9333 | 0.0042 | 0.0099 | 0.0039
Yongxin | 0.0100 | 0.0077 | 0.0097 | 0.0144 | 0.0114 | 0.0485 | 0.8538 | 0.0113 | 0.0227 | 0.0106
Sandu | 0.0235 | 0.0149 | 0.0215 | 0.0409 | 0.0266 | 0.3775 | 0.3651 | 0.0280 | 0.0772 | 0.0247
Ganzhou | 0.1103 | 0.0872 | 0.1109 | 0.1046 | 0.1180 | 0.0858 | 0.0605 | 0.1085 | 0.1003 | 0.1139
… | … | … | … | … | … | … | … | … | … | …
Input data: the 88 × 88 CTS matrix obtained in Experiment 4;
The SP-FCM cluster fusion algorithm is run on the above input data; part of the final results are shown in Table 7:
Table 7: clustering result after SP-FCM cluster fusion
| Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 | Cluster 5 | Cluster 6 | Cluster 7 | Cluster 8 | Cluster 9 | Cluster 10
---|---|---|---|---|---|---|---|---|---|---
Mukou | 0.0005 | 0.9830 | 0.0028 | 0.0069 | 0.0003 | 0.0049 | 0.0000 | 0.0000 | 0.0003 | 0.0013
Yongxin | 0.0005 | 0.9830 | 0.0028 | 0.0069 | 0.0003 | 0.0049 | 0.0000 | 0.0000 | 0.0003 | 0.0013
Xianfeng | 0.0005 | 0.9830 | 0.0028 | 0.0069 | 0.0003 | 0.0049 | 0.0000 | 0.0000 | 0.0003 | 0.0013
Sandu | 0.0013 | 0.9428 | 0.0084 | 0.0246 | 0.0011 | 0.0172 | 0.0000 | 0.0001 | 0.0009 | 0.0035
Woods hole | 0.0120 | 0.4630 | 0.0812 | 0.1715 | 0.0070 | 0.2191 | 0.0001 | 0.0004 | 0.0085 | 0.0370
… | … | … | … | … | … | … | … | … | … | …
From the partial data in Tables 6 and 7 it can be seen that in the result of direct FCM clustering, the Mukou, Xianfeng and Yongxin stations each have a probability of more than 85% of belonging to the same cluster (cluster 7), while the probability of the Sandu station belonging to cluster 7 is only 36%. In the result after SP-FCM cluster fusion, the Mukou, Xianfeng and Yongxin stations have a probability of more than 98% of belonging to the same cluster (cluster 2), consistent with the judgment from the direct FCM clustering result; but here the probability of the Sandu station belonging to cluster 2 reaches 94%, so the Sandu station and the Mukou station are also considered very likely to be similar.
If the difference between the long-term maximum flood peaks and flood volumes of the watersheds of two gauging stations is within 10%, the two can essentially be considered similar. The method of comparing long-term maximum flood peaks and flood volumes is therefore used here to verify the similarity of the Mukou station with the Yongxin, Xianfeng and Sandu stations. Since long-term data are required and the Yongxin station has only 2 years of data, it is excluded from the verification. Table 8 lists the comparison of the long-term flood peak discharges and the maximum 1-, 3-, 6- and 12-hour flood volumes of the Mukou station with the Xianfeng and Sandu stations:
Table 8: comparative analysis of flood volumes
From the table above it can be concluded that the differences in flood peak and flood volume between the Sandu station and the Mukou station are similar to those between the Xianfeng station and the Mukou station, nearly all within 10%. The Sandu station and the Mukou station can therefore also be considered similar, i.e. the watershed of the Sandu station is an analog watershed of the watershed of the Mukou station, which proves that the small and medium-sized watershed similarity analysis based on clustering ensembles accurately finds the analog watersheds of a design watershed.
Furthermore, the differences in flood peak and flood volume between the Sandu station and the Mukou station are all larger than those between the Xianfeng station and the Mukou station, so the watershed of the Mukou station can be considered more similar to the watershed of the Xianfeng station; this conclusion agrees with Table 7, the final clustering result, in which the membership degree of the Xianfeng station is greater than that of the Sandu station.
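The 10% verification rule used above can be sketched as a small helper; the choice of the relative-gap denominator and the example figures are assumptions, since the text only states that peak and volume gaps should be within about 10%:

```python
def basins_similar(peaks_a, peaks_b, volumes_a, volumes_b, tol=0.10):
    # Two gauged basins are taken as similar when every paired long-term
    # flood peak and flood volume differs by no more than tol (relative gap).
    def gap(a, b):
        return abs(a - b) / max(abs(a), abs(b))
    pairs = list(zip(peaks_a, peaks_b)) + list(zip(volumes_a, volumes_b))
    return all(gap(a, b) <= tol for a, b in pairs)

# Hypothetical discharge and volume figures, for illustration only.
assert basins_similar([500.0], [530.0], [120.0, 300.0], [125.0, 290.0])
assert not basins_similar([500.0], [600.0], [120.0], [125.0])
```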
Claims (7)
1. A similarity discrimination method for small and medium-sized river basins, characterized by comprising the following steps:
1) constructing a cluster collective from the data: selecting feature indexes to construct feature subsets, and inputting the feature subsets into a base clustering algorithm to obtain a cluster collective;
2) constructing the similarity matrix of the cluster collective, the similarity matrix serving as the input matrix of a preset cluster fusion algorithm;
3) matrix cluster fusion: performing cluster fusion on the similarity matrix with the preset cluster fusion algorithm to realize the similarity discrimination.
2. The similarity discrimination method for small and medium-sized river basins according to claim 1, characterized in that the specific method of constructing the cluster collective in step 1) is: constructing the cluster collective with the cluster collective construction algorithm based on weighted random sampling, CCE-WRS, i.e. performing weighted random sampling on the feature indexes of the data set with the weighted random sampling method WRS to obtain multiple different feature subsets and construct the cluster collective, with the specific steps:
11) judging whether this is the first iteration; if so, initializing the weights of the feature indexes of the data set and then entering step 12); otherwise entering step 12) directly;
12) performing weighted random sampling according to the weight proportions of the feature indexes of the data set, constructing n different feature subsets;
13) performing base clustering on the feature subsets with the base clustering algorithm, constructing a cluster collective containing m cluster members, the number n of feature subsets being equal to the number m of cluster members;
14) updating the weights of the feature indexes according to the evaluation index of the cluster collective, specifically: if the evaluation index of the cluster collective of the current iteration is greater than that of the cluster collective obtained in the previous iteration, recording the cluster collective of the current iteration, updating the evaluation index of the cluster collective, and updating the weights of the feature indexes in a specific feature subset, the specific feature subset being the feature subset corresponding to the cluster member with the largest clustering quality OCQ;
15) repeating steps 12) to 14) until the iteration count or an iteration termination condition is met.
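Steps 11) to 15) can be sketched as follows. The claim fixes only that feature indexes are drawn with probability proportional to their weights and that the weights of the best member's feature subset are then updated; the multiplicative reward factor below is an assumed concrete form of that update, for illustration:

```python
import random

def wrs_subset(weights, size, rng):
    # Step 12: one weighted-random-sampling draw of `size` distinct feature
    # indexes, each chosen with probability proportional to its current weight.
    idx, w, chosen = list(range(len(weights))), list(weights), []
    for _ in range(size):
        r = rng.random() * sum(w)
        acc, pos = 0.0, 0
        while acc + w[pos] < r:
            acc += w[pos]
            pos += 1
        chosen.append(idx.pop(pos))
        w.pop(pos)
    return chosen

def reward_best_subset(weights, best_subset, factor=1.1):
    # Step 14 sketch: raise the weights of the feature indexes belonging to the
    # subset of the member with the largest OCQ (exact update rule assumed).
    best = set(best_subset)
    return [w * factor if i in best else w for i, w in enumerate(weights)]

rng = random.Random(0)
weights = [1.0] * 27                 # e.g. 27 watershed feature indexes, equal start
subset = wrs_subset(weights, 10, rng)
assert len(subset) == len(set(subset)) == 10
weights = reward_best_subset(weights, subset)
assert sum(weights) > 27.0           # rewarded indexes become more likely draws
```

Over the iterations this feedback concentrates sampling on feature indexes that produce high-quality members, which is why CCE-WRS stabilizes faster than plain CCE-RS.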
3. The similarity discrimination method for small and medium-sized river basins according to claim 2, characterized in that the data set is normalized as preprocessing before step 11).
4. The similarity discrimination method for small and medium-sized river basins according to claim 2, characterized in that the base clustering algorithm in step 13) is the K-Medoids clustering algorithm, with the specific steps of constructing the cluster collective being:
131) randomly selecting k data points from the input feature subset as initial cluster centers, and assigning each remaining data point of the feature subset to the nearest initial cluster center, forming k clusters;
132) traversing the non-center points p of each cluster, computing the cost E of replacing the cluster center o of that cluster with p, and selecting the non-center point p with the smallest cost E to replace the original cluster center o;
133) repeating steps 131) and 132) until the cluster centers no longer change, finally obtaining the cluster collective containing m cluster members.
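A minimal Python sketch of steps 131) to 133); the Euclidean distance function and the exhaustive swap search order are implementation choices, not details fixed by the claim:

```python
import random

def k_medoids(points, k, dist, rng):
    # 131) random initial medoids; points are assigned to the nearest medoid.
    medoids = rng.sample(range(len(points)), k)
    def cost(meds):
        # Cost E: total distance of every point to its nearest medoid.
        return sum(min(dist(p, points[m]) for m in meds) for p in points)
    improved = True
    while improved:                          # 133) repeat until medoids stop changing
        improved = False
        for mi in range(k):                  # 132) try swapping each medoid ...
            for cand in range(len(points)):  # ... with each non-medoid point
                if cand in medoids:
                    continue
                trial = medoids[:mi] + [cand] + medoids[mi + 1:]
                if cost(trial) < cost(medoids):
                    medoids, improved = trial, True
    labels = [min(range(k), key=lambda c: dist(p, points[medoids[c]]))
              for p in points]
    return medoids, labels

pts = [(0.0, 0.0), (0.2, 0.0), (0.1, 0.1), (5.0, 5.0), (5.2, 5.0), (5.1, 5.1)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
_, labels = k_medoids(pts, 2, euclid, random.Random(1))
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5] != labels[0]
```

Because the medoids are always actual data points, a single outlier cannot drag a center away from its cluster, which is the noise robustness the experiments attribute to K-Medoids.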
5. The similarity discrimination method for small and medium-sized river basins according to claim 2, characterized in that the evaluation index of the cluster collective in step 14) is the OCQ-NMI comprehensive evaluation index, which combines the evaluation index based on the OCQ clustering quality with the diversity evaluation index based on normalized mutual information NMI; a balance weight between clustering quality and diversity is set to obtain the OCQ-NMI comprehensive evaluation index measuring the overall quality of the cluster collective, the OCQ-NMI(ω) index being as follows:
where ω represents the balance weight between clustering quality and diversity, a larger ω indicating that the comprehensive evaluation index is influenced more by clustering quality, and the average clustering quality of the cluster members of the collective is used; a balance weight between cluster compactness and cluster proximity is set to obtain the clustering quality of each cluster member, i.e. the clustering quality Ocq(ξ):
Ocq(ξ) = 1 - [ξ × Cmp + (1 - ξ) × Prox]
where Cmp denotes the cluster compactness, Prox denotes the cluster proximity, and ξ denotes the balance weight between cluster compactness and cluster proximity; the cluster compactness Cmp expresses the concentration of the clusters within a cluster member and is computed as follows:
where k_m denotes the number of clusters, C_i^m denotes the i-th cluster of the m-th cluster member, and var(C_i^m) denotes the variance of cluster C_i^m; the intra-cluster variance var(X) is computed as follows:
where N denotes the number of data points of data set X, d(x_i, x̄) denotes the distance between data point x_i and x̄, and x̄ denotes the mean of data set X;
the cluster proximity Prox expresses the closeness between clusters and is inversely proportional to the distance between them; the cluster proximity Prox is computed as follows:
where σ is a Gaussian constant, c_i^m and c_j^m are the center points of clusters C_i^m and C_j^m respectively, and d(c_i^m, c_j^m) denotes the distance between the two center points;
the mutual information MI measures the degree of interdependence between two random variables; the normalized mutual information NMI restricts the mutual information MI to the range 0 to 1, from which the NMI value between two cluster members π_a and π_b is:
where k_a and k_b denote the numbers of clusters of the cluster members π_a and π_b respectively, n_{i,j} denotes the number of data points shared by the i-th cluster of π_a and the j-th cluster of π_b, n_i denotes the number of data points in the i-th cluster of π_a, and n_j denotes the number of data points in the j-th cluster of π_b;
the NMI value of the entire cluster collective is the average of the pairwise NMI values of its cluster members, defined as follows:
where C denotes the number of cluster members; since the NMI value is inversely related to diversity (a larger NMI value means smaller diversity), the diversity evaluation index of the final cluster collective, NMIBDM, is computed as:
NMIBDM = 1 - NMI
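The Ocq(ξ) formula is given explicitly in the claim; the OCQ-NMI(ω) combination itself is not reproduced in this text, so the sketch below assumes the natural convex combination of average member quality and the diversity index NMIBDM = 1 - NMI, weighted by ω:

```python
def ocq(cmp_val, prox_val, xi):
    # Ocq(xi) = 1 - [xi * Cmp + (1 - xi) * Prox], exactly as in the claim.
    return 1.0 - (xi * cmp_val + (1.0 - xi) * prox_val)

def ocq_nmi(avg_ocq, nmi_mean, omega):
    # Assumed form of the elided comprehensive index: omega weighs average
    # clustering quality against the diversity index NMIBDM = 1 - NMI.
    return omega * avg_ocq + (1.0 - omega) * (1.0 - nmi_mean)

# Small Cmp and Prox give a quality near 1; a large omega makes the
# comprehensive index track quality, a small omega makes it track diversity.
assert abs(ocq(0.2, 0.3, xi=0.5) - 0.75) < 1e-12
assert ocq_nmi(avg_ocq=0.9, nmi_mean=0.8, omega=0.9) > ocq_nmi(0.5, 0.1, 0.9)
```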
6. The similarity discrimination method for small and medium-sized river basins according to claim 1, characterized in that step 2) specifically constructs the connected-triple-based similarity matrix CTS of the cluster collective with the weighted connected-triple algorithm WCT, with the specific steps:
21) computing the similarity of clusters C_i and C_j from their sets of connected triples;
22) computing the similarity between data points x_i, x_j in the cluster members;
23) computing the similarity between every pair of data points in the cluster collective, forming the CTS similarity matrix.
7. The similarity discrimination method for small and medium-sized river basins according to claim 1, characterized in that in step 3) the cluster fusion algorithm is the spectral clustering fusion algorithm based on fuzzy C-means, SP-FCM, with the specific steps of the cluster fusion being:
31) processing the CTS similarity matrix with spectral clustering, specifically: converting the CTS similarity matrix into a Laplacian matrix, performing matrix decomposition by Laplacian eigenmaps LM, and generating and selecting the k smallest eigenvalues of the CTS similarity matrix together with the corresponding eigenvectors;
32) analyzing the data set formed by the k eigenvectors with the FCM clustering algorithm to obtain the clustering result, and performing the final similarity analysis of the small and medium-sized watersheds according to this clustering result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811147711.6A CN109388664A (en) | 2018-09-29 | 2018-09-29 | A kind of middle and small river basin similitude method of discrimination |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109388664A true CN109388664A (en) | 2019-02-26 |
Family
ID=65418306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811147711.6A Pending CN109388664A (en) | 2018-09-29 | 2018-09-29 | A kind of middle and small river basin similitude method of discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109388664A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110020607A (en) * | 2019-03-13 | 2019-07-16 | 河海大学 | A method of analogy basin is found based on Spatial Fractal Dimension theory |
CN110020607B (en) * | 2019-03-13 | 2022-08-16 | 河海大学 | Method for searching similar watershed based on space dimension division theory |
CN111698700A (en) * | 2019-03-15 | 2020-09-22 | 大唐移动通信设备有限公司 | Method and device for judging working state of cell |
CN111698700B (en) * | 2019-03-15 | 2021-08-27 | 大唐移动通信设备有限公司 | Method and device for judging working state of cell |
CN110659823A (en) * | 2019-09-21 | 2020-01-07 | 四川大学工程设计研究院有限公司 | Similar watershed analysis method, model, system and computer storage medium |
CN113887635A (en) * | 2021-10-08 | 2022-01-04 | 河海大学 | Basin similarity classification method and classification device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109388664A (en) | A kind of middle and small river basin similitude method of discrimination | |
CN106650767B (en) | Flood forecasting method based on cluster analysis and real-time correction | |
Philipp et al. | Cost733cat–A database of weather and circulation type classifications | |
Hargrove et al. | Potential of multivariate quantitative methods for delineation and visualization of ecoregions | |
CN111582386A (en) | Random forest based geological disaster multi-disaster comprehensive risk evaluation method | |
CN107578104A (en) | A kind of Chinese Traditional Medicine knowledge system | |
CN108733966A (en) | A kind of multidimensional electric energy meter field thermodynamic state verification method based on decision woodlot | |
CN113378473B (en) | Groundwater arsenic risk prediction method based on machine learning model | |
Roshan et al. | Assessment of the climatic potential for tourism in Iran through biometeorology clustering | |
CN109657616A (en) | A kind of remote sensing image land cover pattern automatic classification method | |
Jacobeit | Classifications in climate research | |
Nhita | A rainfall forecasting using fuzzy system based on genetic algorithm | |
CN112700324A (en) | User loan default prediction method based on combination of Catboost and restricted Boltzmann machine | |
CN109242174A (en) | A kind of adaptive division methods of seaonal load based on decision tree | |
Hu et al. | A modified regional L-moment method for regional extreme precipitation frequency analysis in the Songliao River Basin of China | |
CN113379116A (en) | Cluster and convolutional neural network-based line loss prediction method for transformer area | |
CN113570191B (en) | Intelligent diagnosis method for dangerous situations of ice plugs in river flood | |
CN111461197A (en) | Spatial load distribution rule research method based on feature extraction | |
Dalelane et al. | A robust estimator for the intensity of the Poisson point process of extreme weather events | |
CN117010274B (en) | Intelligent early warning method for harmful elements in underground water based on integrated incremental learning | |
Goy et al. | Grouping techniques for building stock analysis: A comparative case study | |
Koolagudi | Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches | |
CN116258279B (en) | Landslide vulnerability evaluation method and device based on comprehensive weighting | |
Ma et al. | Anomaly Detection of Mountain Photovoltaic Power Plant Based on Spectral Clustering | |
CN116701974A (en) | Precipitation multi-element space-time change analysis and attribution identification method under climate change |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190226