CN108510050A - A feature selection method based on the shuffled frog leaping algorithm - Google Patents

A feature selection method based on the shuffled frog leaping algorithm

Info

Publication number
CN108510050A
CN108510050A (application CN201810265031.8A)
Authority
CN
China
Prior art keywords
frog
group
population
worst
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810265031.8A
Other languages
Chinese (zh)
Inventor
张涛
丁碧云
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201810265031.8A
Publication of CN108510050A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A feature selection method based on the shuffled frog leaping algorithm: input a data set whose initial feature set has dimension M, each feature containing several samples; initialize the frog population and determine the maximum number of iterations; compute the fitness of each frog's position using the fitness function; sort the entire frog population in descending order of the frogs' fitness values and group the population by the memetic grouping method; determine the group-best frog of each group, the group-worst frog of each group, and the global-best frog of the entire population; first update each group's worst frog using that group's best frog, and if the updated worst frog's fitness value is better than that of the worst frog before the update and the optimization constraints are satisfied, one update step is complete; update the updated frog population again until the number of iterations reaches the maximum, obtaining the best feature subset. The present invention improves the efficiency of data mining and pattern recognition.

Description

A feature selection method based on the shuffled frog leaping algorithm
Technical field
The present invention relates to a feature selection method, and more particularly to a feature selection method based on the shuffled frog leaping algorithm that addresses the challenges of high-dimensional feature description.
Background technology
1. Brief introduction to the shuffled frog leaping algorithm
The shuffled frog leaping algorithm (SFLA) is a novel swarm intelligence optimization algorithm proposed by Eusuff et al. in 2003 that simulates the foraging and migration behavior of a frog population. The whole process can be described as follows. Many frogs together form a large population, which is further divided into several initial sub-populations, each containing several frogs. Every frog has a different fitness to its environment; here, the distance to the food represents a frog's fitness. To obtain food, every frog jumps toward the frog in its sub-population that is closest to the food, thereby shortening its own distance to the food and improving its own fitness. After the entire population completes one round of jumps, information is exchanged and shared among the frogs in order to improve the fitness of the whole population. To prevent frogs from accumulating at the same position within a sub-population, new sub-populations are re-formed after each information exchange, and every frog carries its own information into its new sub-population. The fittest frog within a sub-population sets the jump direction for that sub-population, and regenerating the sub-populations each round guarantees information exchange between them. By alternating jumps toward the best frog within each sub-population (local search) with reshuffling of the whole frog population (global search), the entire population is guaranteed to advance in the optimal direction.
The mathematical description of the algorithm is as follows. First generate an initial population P = {X1, X2, …, XN} of N frogs. After generation, sort the frogs in descending order of fitness and denote the first-ranked frog, the fittest in the entire population, by Xg. Then divide the entire population into n sub-populations of m frogs each, satisfying N = m*n. The division rule for the initial sub-populations is: the 1st frog goes into the 1st sub-population, the 2nd frog into the 2nd sub-population, the n-th frog into the n-th sub-population, the (n+1)-th frog back into the 1st sub-population, and so on until every frog has been assigned to some sub-population. The whole assignment process can be described as follows:
Mk = {X(k+n(l-1)) ∈ P | 1 ≤ l ≤ m}, 1 ≤ k ≤ n
where x(i,j) (i = 1, 2, …, n; j = 1, 2, …, m) denotes the position of the j-th frog in the i-th sub-population. In each sub-population, the fittest frog is denoted Xb and the least fit frog Xw. The Xw of each sub-population then jumps according to the following update rule:
D=r* (Xb-Xw)
X′w=Xw+D
where r is a random number between 0 and 1 and D denotes the frog's jump distance. If the new frog X'w is fitter than the original after the update, it replaces the original; if there is no improvement, Xg is used in place of Xb for the update; if there is still no improvement, a new randomly generated frog replaces Xw. Once every sub-population has completed its local search, all frogs are remixed and sorted, the sub-populations are repartitioned, and local search is carried out again, until a predetermined convergence condition is reached, such as reaching the maximum number of shuffling iterations or no frog jumping within a given number of iterations.
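The local-search and shuffling loop described above can be sketched in Python for a generic continuous minimization problem; all function names and parameter values here are illustrative, not taken from the patent:

```python
import random

def sfla_minimize(f, dim, n_frogs=30, n_memeplexes=5, n_iter=50, seed=0):
    # Minimal SFLA sketch for continuous minimization; names and
    # parameter choices are illustrative, not part of the patent.
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(n_iter):
        pop.sort(key=f)                          # best frog first
        xg = pop[0]                              # global best Xg
        # Round-robin split into memeplexes (sub-populations).
        groups = [pop[k::n_memeplexes] for k in range(n_memeplexes)]
        for g in groups:
            xb, xw = g[0], g[-1]                 # group best / group worst
            r = rng.random()
            cand = [w + r * (b - w) for b, w in zip(xb, xw)]
            if f(cand) >= f(xw):                 # no gain: try global best
                cand = [w + r * (b - w) for b, w in zip(xg, xw)]
            if f(cand) >= f(xw):                 # still none: random frog
                cand = [rng.random() for _ in range(dim)]
            g[-1] = cand
        pop = [x for g in groups for x in g]     # shuffle back together
    return min(pop, key=f)

# Minimize the sphere function over [0, 1]^3.
best = sfla_minimize(lambda x: sum(v * v for v in x), dim=3)
```

The three-stage fallback (group best, then global best, then a fresh random frog) mirrors the cascade in the paragraph above; because a group's best frog is never replaced, the population's best solution can only improve across iterations.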
2. The principle of feature selection
Whether for signal data or feature data, once the data volume grows beyond a certain point, building a model becomes difficult. Features generally fall into three kinds: strongly relevant, weakly relevant, and irrelevant; in addition, correlations also exist between the features themselves, and such features are called redundant features. The presence of irrelevant and redundant features not only degrades the performance of data mining and pattern recognition but also increases the complexity of data processing.
One way to solve the problem of having too many features is feature selection. Feature selection means choosing from the original feature set F = {f1, f2, …, fn} a feature subset S = {f1, f2, …, fm} satisfying m << n, where n is the dimension of the original feature set and m is the dimension of the subset. The goal of feature selection is to pick out a subset of features, without changing the properties of the original feature set, and form a new feature space.
A typical feature selection method consists of four basic steps: a generation procedure, an evaluation function, a stopping criterion, and a validation procedure. The generation procedure is a search process; the evaluation function assesses the subset under examination; the stopping criterion determines when feature selection stops; and the validation procedure checks whether the subset is effective. A feature selection problem can be viewed as an optimization problem whose ultimate aim is a feature subset of sufficiently small dimension that can still effectively discriminate the target. A feature subset is represented by an unordered, non-repeating sequence of m integers from 0 to n-1, each number in the sequence corresponding to one feature.
Clearly, the larger the dimension of the initial feature set, the more candidate feature subsets there are, and the relationship between the two is exponential. For an initial feature set of dimension n, a feature subset generated in a random manner corresponds to a sequence S = {s1, s2, …, sm}, s1, s2, …, sm ∈ {0, 1, …, n-1}. There are 2^n different combinations of features, and feature selection must search out the optimal combination from among these 2^n combinations.
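The sequence encoding and the size of the search space can be illustrated with a short sketch (helper names are illustrative):

```python
import random
from math import comb

def random_subset(n_features, m, seed=None):
    # One candidate subset: m distinct integers drawn from 0..n-1,
    # mirroring the unordered non-repeating sequence encoding above.
    return random.Random(seed).sample(range(n_features), m)

n, m = 20, 5
s = random_subset(n, m, seed=1)

# n features admit 2**n subsets in total; even fixing the subset size
# to m leaves C(n, m) candidates, so exhaustive search scales badly.
total_subsets = 2 ** n              # 1_048_576 for n = 20
fixed_size_subsets = comb(n, m)     # 15_504 for n = 20, m = 5
```

Even this small example makes the exponential blow-up concrete, which is why the patent turns to a swarm search rather than enumeration.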
3. Existing feature selection methods
The result of feature selection directly affects the complexity, precision, and stability of the classifier. With the creation of huge databases and the demand for good machine learning techniques, large numbers of features are extracted to characterize the target better. However, not all features are effective. Therefore, when developing a pattern classification system, selecting a suitable feature subset is vital.
Current research on feature selection methods focuses mainly on search strategies and evaluation criteria. According to how the feature subset is formed, the basic search strategies of feature selection can be divided into three kinds: globally optimal search, random search, and heuristic search. Globally optimal search methods such as branch and bound can obtain the optimal feature subset, but suffer from problems such as high computational complexity. Random search methods such as the Relief family of algorithms have high uncertainty, require many cycles, and make parameter setting difficult. Heuristic search methods include independent optimal combination, sequential forward selection, generalized sequential forward selection (GSFS), sequential backward selection (SBS), generalized sequential backward selection (GSBS), plus-l minus-r selection, generalized plus-l minus-r selection, floating search, and so on. Heuristic search is efficient, but at the cost of global optimality. A specific search algorithm may combine two or more basic search strategies; the genetic algorithm, for example, is based on both random search and heuristic search.
Feature selection is highly challenging because the search space is large and correlations exist between features. At present, a great many effective feature selection methods have been proposed, but many deficiencies remain in practical applications: most feature selection methods suffer either from local optima or from high computational complexity.
Summary of the invention
The technical problem to be solved by the invention is to provide a feature selection method, based on the shuffled frog leaping algorithm, that can improve the efficiency of data mining and pattern recognition.
The technical solution adopted by the present invention is a feature selection method based on the shuffled frog leaping algorithm, comprising the following steps:
1) Input a data set whose initial feature set has dimension M, each feature containing several samples;
2) Initialize the frog population and determine the algorithm's maximum number of iterations;
3) Compute the fitness of each frog's position using the fitness function, which is defined in terms of mutual information. Here S is the feature subset, si and sj denote the i-th and j-th features in S, L denotes the target class label corresponding to the sample data, I(si; L) denotes the mutual information between the i-th feature in S and the target class label, and I(si; sj) denotes the mutual information between the i-th and j-th features in S.
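The fitness formula itself appears as an image in the original document and is not reproduced in the text. A criterion consistent with the definitions above is the standard max-relevance, min-redundancy (mRMR) score; the sketch below uses that form as an assumption, not as the patent's exact formula:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    # I(X; Y) in bits for two equal-length discrete sequences.
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def fitness(subset, data, labels):
    # Assumed mRMR-style score: mean relevance I(s_i; L) minus mean
    # pairwise redundancy I(s_i; s_j). The patent's exact formula is an
    # image not reproduced in the source, so this form is an assumption.
    cols = [[row[i] for row in data] for i in subset]
    relevance = sum(mutual_information(c, labels) for c in cols) / len(cols)
    pairs = [(a, b) for a in range(len(cols)) for b in range(a + 1, len(cols))]
    redundancy = (sum(mutual_information(cols[a], cols[b]) for a, b in pairs)
                  / len(pairs)) if pairs else 0.0
    return relevance - redundancy

# Toy check: feature 0 duplicates the label, feature 1 is constant,
# feature 2 duplicates feature 0.
labels = [0, 1, 0, 1]
data = [[0, 0, 0], [1, 0, 1], [0, 0, 0], [1, 0, 1]]
# {f0} is fully relevant with no redundancy; {f0, f2} is penalized
# because the two features carry identical information.
```

Under this score a subset of informative but mutually independent features beats one that repeats the same information twice, which matches the role of the I(si; sj) term in the definitions.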
4) Sort the entire frog population in descending order of the frogs' fitness values and group the population by the memetic grouping method: the population is divided into p groups of q frogs each, where p and q satisfy N = p*q with N the number of frogs. Determine the group-best frog of each group, the group-worst frog of each group, and the global-best frog of the entire population;
5) First update each group's worst frog using that group's best frog. If the updated worst frog's fitness value is better than that of the worst frog before the update, and the optimization constraints are satisfied, one update step is complete. Otherwise, update using the global-best frog; if the updated worst frog's fitness value is better than that before the update, and the constraint that no feature repeats within the subset is satisfied, the update step is complete. If the update still fails, replace the group's worst frog in a random manner. The iteration count is incremented by 1, yielding the updated frog population;
6) Repeat steps 4) and 5) on the updated frog population until the number of iterations reaches the maximum; the position of the global-best frog in the resulting population is the best feature subset.
Step 2) comprises: first randomly generate a frog population of N frogs. The position of each frog corresponds to an unordered, non-repeating sequence of m integers, each integer between 0 and M-1 and each corresponding to one feature; thus the position of a frog corresponds to a randomly generated feature subset of dimension m drawn from the initial feature set of dimension M, where M and m satisfy m << M. The algorithm's maximum number of iterations is set to IterMax, and the initial iteration count is iter = 0.
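Step 2) amounts to sampling N random m-element subsets of {0, …, M-1}; a minimal sketch with illustrative names:

```python
import random

def init_population(n_frogs, M, m, seed=0):
    # Each frog position is an unordered, non-repeating sequence of m
    # integers in 0..M-1, i.e. a random m-dimensional feature subset
    # drawn from the M-dimensional initial feature set.
    rng = random.Random(seed)
    return [rng.sample(range(M), m) for _ in range(n_frogs)]

iter_max = 1000          # IterMax as in the embodiment; iter starts at 0
population = init_population(n_frogs=20, M=100, m=10)
```

`random.sample` guarantees the no-duplicate constraint that the update steps later have to preserve.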
In step 4): N denotes the number of frogs in the whole frog population, and the relationship between a frog's sorted sequence number x and its group number k is k = ((x - 1) mod p) + 1.
At the same time, mark the frog with the largest fitness value in each group as the group-best frog Xb, the frog with the smallest fitness value as the group-worst frog Xw, and the frog with the globally largest fitness value in the entire frog population as the global-best frog Xg.
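The descending sort plus modular grouping can be sketched as follows; with the round-robin rule, each group's first member is its group-best frog and its last member its group-worst frog:

```python
def group_frogs(sorted_population, p):
    # Modular grouping of a population already sorted by descending
    # fitness: frog number x (1-based) lands in group ((x - 1) mod p) + 1,
    # so each group's first member is its best and its last its worst.
    return [sorted_population[k::p] for k in range(p)]

# 12 frogs already sorted best-to-worst, split into p = 3 groups of q = 4.
groups = group_frogs(list(range(12)), p=3)
# groups[0] == [0, 3, 6, 9]: 1-based frogs 1, 4, 7, 10 form group 1.
```

This round-robin assignment spreads the fittest frogs evenly across groups, so every group starts its local search from a reasonably good Xb.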
The formula in step 5) for updating the group-worst frog with the group-best frog is as follows:
X'w=Xw+round(r)×(Xb-Xw)
where Xb denotes the group-best frog, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
The global-best frog update formula in step 5) is as follows:
X'w=Xw+round(r)×(Xg-Xw)
where Xg denotes the global-best frog of the entire frog population, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
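The update cascade of step 5) (group best, then global best, then random replacement) can be sketched for subset-encoded positions. Since positions here are lists of feature indices rather than real vectors, the formula X'w = Xw + round(r)×(Xb - Xw) is read below, as an assumption, as copying each of the target's indices with probability round(r) ∈ {0, 1} while keeping the subset duplicate-free:

```python
import random

def update_worst(xw, xb, xg, fitness, all_features, m, rng):
    # Cascade from step 5): try the group best Xb, then the global best
    # Xg, then a random restart. Positions are duplicate-free lists of
    # feature indices, so the leap copies each of the target's indices
    # with probability round(r) in {0, 1} -- an illustrative discrete
    # reading of X'w = Xw + round(r) * (Xb - Xw), not the patent's text.
    def leap(target):
        cand = list(xw)
        for i in range(len(cand)):
            if round(rng.random()) == 1 and target[i] not in cand:
                cand[i] = target[i]          # keeps the subset duplicate-free
        return cand

    for cand in (leap(xb), leap(xg)):
        if fitness(cand) > fitness(xw):      # strict improvement required
            return cand
    return rng.sample(all_features, m)       # random restart

rng = random.Random(0)
new_worst = update_worst(xw=[0, 1, 2], xb=[7, 8, 9], xg=[5, 6, 9],
                         fitness=sum, all_features=list(range(10)), m=3,
                         rng=rng)
```

The `target[i] not in cand` guard enforces the "no feature repeats within the subset" constraint that the text attaches to a successful update.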
The feature selection method of the present invention, based on the shuffled frog leaping algorithm, finds the best feature subset for a given high-dimensional initial feature set by exploiting the search and optimization ability of the shuffled frog leaping algorithm, improving the efficiency of data mining and pattern recognition. It can effectively overcome the poor search ability and slow convergence of current feature selection algorithms, complete the feature selection task more efficiently, and ultimately improve the efficiency of data mining and pattern recognition.
Description of the drawings
Fig. 1 is a flow chart of the feature selection method based on the shuffled frog leaping algorithm of the invention.
Detailed description of the embodiments
The feature selection method of the present invention based on the shuffled frog leaping algorithm is explained in detail below with reference to an embodiment and the attached drawing.
As shown in Fig. 1, the feature selection method of the present invention based on the shuffled frog leaping algorithm comprises the following steps:
1) Input a data set whose initial feature set has dimension M, each feature containing several samples;
2) Initialize the frog population and determine the algorithm's maximum number of iterations. This comprises: first randomly generate a frog population of N frogs. The position of each frog corresponds to an unordered, non-repeating sequence of m integers, each integer between 0 and M-1 and each corresponding to one feature; thus the position of a frog corresponds to a randomly generated feature subset of dimension m drawn from the initial feature set of dimension M, where M and m satisfy m << M. The algorithm's maximum number of iterations is set to IterMax, and the initial iteration count is iter = 0. In the present embodiment, the maximum number of iterations IterMax = 1000.
3) Compute the fitness of each frog's position using the fitness function, i.e. compute the value of the evaluation function of the feature subset corresponding to that selection scheme. The fitness function is defined in terms of mutual information: S is the feature subset, si and sj denote the i-th and j-th features in S, L denotes the target class label corresponding to the sample data, I(si; L) denotes the mutual information between the i-th feature in S and the target class label, and I(si; sj) denotes the mutual information between the i-th and j-th features in S.
The position of each individual in the frog population is assessed, i.e. its coded sequence is fed into an evaluation module. The evaluation function chosen by the evaluation module takes feature subset effectiveness as its objective, and the evaluation result is used to find the current best feature subset.
4) Sort the entire frog population in descending order of the frogs' fitness values and group the population by the memetic grouping method: the population is divided into p groups of q frogs each, where p and q satisfy N = p*q. Determine the group-best frog of each group, the group-worst frog of each group, and the global-best frog of the entire population. Here N denotes the number of frogs in the whole frog population, and the relationship between a frog's sorted sequence number x and its group number k is k = ((x - 1) mod p) + 1.
At the same time, mark the frog with the largest fitness value in each group as the group-best frog Xb, the frog with the smallest fitness value as the group-worst frog Xw, and the frog with the globally largest fitness value in the entire frog population as the global-best frog Xg.
5) First update each group's worst frog using that group's best frog. If the updated worst frog's fitness value is better than that of the worst frog before the update, and the optimization constraints are satisfied, one update step is complete. Otherwise, update using the global-best frog; if the updated worst frog's fitness value is better than that before the update, and the constraint that no feature repeats within the subset is satisfied, the update step is complete. If the update still fails, replace the group's worst frog in a random manner. The iteration count is incremented by 1, yielding the updated frog population. Here,
the formula for updating the group-worst frog with the group-best frog is as follows:
X'w=Xw+round(r)×(Xb-Xw)
where Xb denotes the group-best frog, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
The global-best frog update formula is as follows:
X'w=Xw+round(r)×(Xg-Xw)
where Xg denotes the global-best frog of the entire frog population, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
6) Repeat steps 4) and 5) on the updated frog population until the number of iterations reaches the maximum; the position of the global-best frog in the resulting population is the best feature subset.
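Putting the steps together, an end-to-end sketch of the whole method on toy data; the mRMR-style fitness, the discrete reading of the leap formula, and all names are assumptions, since the patent's formula images are not reproduced in the text:

```python
import random
from collections import Counter
from math import log2

def mi(xs, ys):
    # Mutual information in bits between two discrete sequences.
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def fitness(subset, data, labels):
    # Assumed mRMR-style score: mean relevance minus mean pairwise redundancy.
    cols = [[row[i] for row in data] for i in subset]
    rel = sum(mi(c, labels) for c in cols) / len(cols)
    pairs = [(a, b) for a in range(len(cols)) for b in range(a + 1, len(cols))]
    red = (sum(mi(cols[a], cols[b]) for a, b in pairs) / len(pairs)
           if pairs else 0.0)
    return rel - red

def sfla_feature_select(data, labels, m, n_frogs=15, p=3, iter_max=30, seed=0):
    rng = random.Random(seed)
    M = len(data[0])
    fit = lambda s: fitness(s, data, labels)
    pop = [rng.sample(range(M), m) for _ in range(n_frogs)]     # step 2
    for _ in range(iter_max):                                   # step 6
        pop.sort(key=fit, reverse=True)                         # step 4: sort
        xg = pop[0]
        groups = [pop[k::p] for k in range(p)]                  # modular groups
        for g in groups:                                        # step 5
            xb, xw = g[0], g[-1]
            updated = False
            for target in (xb, xg):
                cand = list(xw)
                for i in range(m):
                    if round(rng.random()) and target[i] not in cand:
                        cand[i] = target[i]
                if fit(cand) > fit(xw):
                    g[-1], updated = cand, True
                    break
            if not updated:
                g[-1] = rng.sample(range(M), m)                 # random restart
        pop = [x for g in groups for x in g]
    return max(pop, key=fit)

# Toy data: feature 0 copies the label; the others are noise or constant.
noise = random.Random(42)
labels = [i % 2 for i in range(40)]
data = [[labels[i], noise.randint(0, 1), 0, noise.randint(0, 3)]
        for i in range(40)]
best_subset = sfla_feature_select(data, labels, m=2)
```

On this toy data the perfectly label-aligned feature 0 dominates the relevance term, so the returned subset should contain it; since each group's best frog is never replaced, the best subset found can only improve over iterations.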

Claims (5)

1. A feature selection method based on the shuffled frog leaping algorithm, characterized in that it comprises the following steps:
1) inputting a data set whose initial feature set has dimension M, each feature containing several samples;
2) initializing the frog population and determining the algorithm's maximum number of iterations;
3) computing the fitness of each frog's position using the fitness function, which is defined in terms of mutual information, where S is the feature subset, si and sj denote the i-th and j-th features in S, L denotes the target class label corresponding to the sample data, I(si; L) denotes the mutual information between the i-th feature in S and the target class label, and I(si; sj) denotes the mutual information between the i-th and j-th features in S;
4) sorting the entire frog population in descending order of the frogs' fitness values and grouping the population by the memetic grouping method, the population being divided into p groups of q frogs each, where p and q satisfy N = p*q with N the number of frogs; and determining the group-best frog of each group, the group-worst frog of each group, and the global-best frog of the entire population;
5) first updating each group's worst frog using that group's best frog; if the updated worst frog's fitness value is better than that of the worst frog before the update, and the optimization constraints are satisfied, one update step is complete; otherwise updating with the global-best frog, and if the updated worst frog's fitness value is better than that before the update, and the constraint that no feature repeats within the subset is satisfied, the update step is complete; if the update still fails, replacing the group's worst frog in a random manner; the iteration count is incremented by 1, yielding the updated frog population;
6) repeating steps 4) and 5) on the updated frog population until the number of iterations reaches the maximum; the position of the global-best frog in the resulting population is the best feature subset.
2. The feature selection method based on the shuffled frog leaping algorithm according to claim 1, characterized in that step 2) comprises: first randomly generating a frog population of N frogs, where the position of each frog corresponds to an unordered, non-repeating sequence of m integers, each integer between 0 and M-1 and each corresponding to one feature, so that the position of a frog corresponds to a randomly generated feature subset of dimension m drawn from the initial feature set of dimension M, where M and m satisfy m << M; the algorithm's maximum number of iterations is set to IterMax and the initial iteration count is iter = 0.
3. The feature selection method based on the shuffled frog leaping algorithm according to claim 1, characterized in that in step 4): N denotes the number of frogs in the whole frog population, and the relationship between a frog's sorted sequence number x and its group number k is k = ((x - 1) mod p) + 1;
at the same time, the frog with the largest fitness value in each group is marked as the group-best frog Xb, the frog with the smallest fitness value as the group-worst frog Xw, and the frog with the globally largest fitness value in the entire frog population as the global-best frog Xg.
4. The feature selection method based on the shuffled frog leaping algorithm according to claim 1, characterized in that the formula in step 5) for updating the group-worst frog with the group-best frog is as follows:
X'w=Xw+round(r)×(Xb-Xw)
where Xb denotes the group-best frog, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
5. The feature selection method based on the shuffled frog leaping algorithm according to claim 1, characterized in that the global-best frog update formula in step 5) is as follows:
X'w=Xw+round(r)×(Xg-Xw)
where Xg denotes the global-best frog of the entire frog population, Xw the group-worst frog, and X'w the updated group-worst frog; r denotes a random number between 0 and 1 generated in a random manner, and round(r) denotes rounding r to the nearest integer.
CN201810265031.8A 2018-03-28 2018-03-28 A feature selection method based on the shuffled frog leaping algorithm Pending CN108510050A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810265031.8A CN108510050A (en) 2018-03-28 2018-03-28 A feature selection method based on the shuffled frog leaping algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810265031.8A CN108510050A (en) 2018-03-28 2018-03-28 A feature selection method based on the shuffled frog leaping algorithm

Publications (1)

Publication Number Publication Date
CN108510050A true CN108510050A (en) 2018-09-07

Family

ID=63379023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810265031.8A Pending CN108510050A (en) A feature selection method based on the shuffled frog leaping algorithm

Country Status (1)

Country Link
CN (1) CN108510050A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109640080A * 2018-11-01 2019-04-16 广州土圭垚信息科技有限公司 A depth image mode division simplification method based on the SFLA algorithm
CN110014550A * 2019-04-17 2019-07-16 明光利拓智能科技有限公司 Opening control method for material inlet and outlet valves of a material dryer based on the shuffled frog leaping algorithm
WO2020233083A1 (en) * 2019-05-21 2020-11-26 深圳壹账通智能科技有限公司 Image restoration method and apparatus, storage medium, and terminal device
CN110580517A (en) * 2019-09-12 2019-12-17 石家庄铁道大学 Feature extraction method and device based on stacked self-encoder and terminal equipment
CN112185419A (en) * 2020-09-30 2021-01-05 天津大学 Glass bottle crack detection method based on machine learning
CN112350745A (en) * 2020-11-27 2021-02-09 中国人民解放军空军通信士官学校 Sorting method of frequency hopping communication radio station
CN112836721A (en) * 2020-12-17 2021-05-25 北京仿真中心 Image identification method and device, computer equipment and readable storage medium
CN112836721B (en) * 2020-12-17 2024-03-22 北京仿真中心 Image recognition method and device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN108510050A A feature selection method based on the shuffled frog leaping algorithm
CN103116762B An image classification method based on self-modulated dictionary learning
CN103116766B An image classification method based on incremental neural network and subgraph coding
CN110379463A Machine-learning-based marine algae genetic analysis and concentration prediction method and system
CN107229940A Data companion analysis method and device
CN106228183A A semi-supervised learning classification method and device
CN103631928A LSH (locality-sensitive hashing)-based clustering and indexing method and system
CN104268629A A complex network community detection method based on prior information and inherent network information
CN106991442A Adaptive-kernel k-means method and system based on the shuffled frog leaping algorithm
CN110059875A Public bicycle demand forecasting method based on a distributed whale optimization algorithm
CN106021990A A method for classification and self-recognition of biological genes using specific characters
CN106708659A An adaptive nearest-neighbor filling method for missing data
CN107392307A A prediction method for parallelized time-series data
CN104866903B A most-beautiful-path navigation algorithm based on the genetic algorithm
CN105678401A A global optimization method based on strategy-adaptive differential evolution
CN110288075A A feature selection method based on an improved shuffled frog leaping algorithm
CN104966106A A stepwise biological age prediction method based on support vector machines
CN103473465B A land resource spatial allocation optimization method based on a multi-objective artificial immune system
CN109840551A A method of optimizing random forest parameters for machine learning model training
CN105740949A A population global optimization method based on a randomized best strategy
Sun et al. A survey of MEC: 1998-2001
CN107067035A A support vector machine wetland remote sensing method optimized by a co-evolution algorithm
CN105825075A A protein structure prediction method based on the NGA-TS algorithm
CN113011091A An automatically-grouped multi-scale lightweight deep convolutional neural network optimization method
CN104392317A A project scheduling method based on a memetic genetic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180907