CN107908688A - Data classification and prediction method and system based on an improved grey wolf optimization algorithm - Google Patents

Data classification and prediction method and system based on an improved grey wolf optimization algorithm

Info

Publication number
CN107908688A
CN107908688A (application number CN201711048597.7A)
Authority
CN
China
Prior art keywords
grey
grey wolf
wolf
wolves
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711048597.7A
Other languages
Chinese (zh)
Other versions
CN107908688B (en)
Inventor
陈慧灵
罗杰
赵学华
蔡振闹
童长飞
黄辉
李俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN201711048597.7A priority Critical patent/CN107908688B/en
Publication of CN107908688A publication Critical patent/CN107908688A/en
Application granted granted Critical
Publication of CN107908688B publication Critical patent/CN107908688B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/285Clustering or classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention disclose a data classification and prediction method and system based on an improved grey wolf optimization algorithm. The method includes: acquiring historical data, and normalizing and classifying the acquired historical data; using the normalized historical data as training samples for a support vector machine, and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm; building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine; and acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class. Implementing the invention alleviates problems of the grey wolf optimization algorithm such as getting trapped in local optima and slow convergence, enables classification and prediction of domain-specific problems, and improves the accuracy of decision making.

Description

Data classification and prediction method and system based on an improved grey wolf optimization algorithm
Technical field
The present invention relates to the field of big data technology, and in particular to a method and system that use an improved grey wolf optimization algorithm to optimize data classification and prediction.
Background technology
With the development of technology, big data is being applied in an ever wider range of fields, which poses new challenges for processing tasks such as the classification and prediction of big data; in particular, swarm intelligence algorithms are increasingly used for big data classification and prediction.
As is well known, swarm intelligence algorithms achieve optimization by simulating the collective intelligent behavior exhibited by various living and non-living systems in nature, relying on cooperation and information exchange among the individuals of a population. Well-known swarm intelligence algorithms include the ant colony algorithm, particle swarm optimization, the artificial bee colony algorithm and the chicken swarm algorithm.
The Grey Wolf Optimizer, proposed by Mirjalili et al. in 2014, is another swarm intelligence algorithm. It is a new type of swarm intelligence algorithm that searches for the optimal solution by simulating the hunting behavior of grey wolves. The algorithm introduces the social hierarchy of grey wolves: the three wolves with the highest fitness are designated Alpha, Beta and Delta in turn, the remaining wolves are designated Omega, and the direction of movement of the Omega wolves is determined by the Alpha, Beta and Delta wolves. Experiments show that the algorithm has strong search capability. However, when handling problems with many local optima, the algorithm easily gets trapped in a local optimum and has difficulty finding the globally optimal solution, so that deviations occur in data classification and prediction. To address this problem, the algorithm needs to be improved from the perspective of the grey wolves' hierarchical organization, so as to improve the accuracy of data classification and prediction.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a data classification and prediction method and system based on an improved grey wolf optimization algorithm, which can alleviate problems of the grey wolf optimization algorithm such as getting trapped in local optima and slow convergence, enable classification and prediction of domain-specific problems, and improve the accuracy of decision making.
In order to solve the above technical problem, an embodiment of the present invention provides a data classification and prediction method based on an improved grey wolf optimization algorithm, comprising the steps of:
Step S1: acquiring historical data, and normalizing and classifying the acquired historical data;
Step S2: using the normalized historical data as training samples for a support vector machine, and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
Step S3: building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
Step S4: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
Wherein, the step S2 specifically comprises:
Step 2.1: parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
Step 2.2: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
Step 2.3: calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
Step 2.4: generating Beta wolves from the Alpha wolf on the basis of step 2.3; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
where Delta_best is the Delta wolf with the highest fitness;
Step 2.5: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
A = 2*τ*r1 - τ   (6);
C = 2*r2   (7);
L = |C*Alpha_j - Delta_{i,j}|   (8);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
Step 2.6: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
Step 2.7: judging whether the maximum number of iterations T has been reached; if so, going to step 2.8, otherwise going back to step 2.3;
Step 2.8: outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
Wherein, the "prediction model" in step S3 is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
An embodiment of the present invention further provides another method that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, comprising the following steps:
Step S21: acquiring historical data, and normalizing and classifying the acquired historical data;
Step S22: selecting the grey wolf optimization algorithm and initializing its parameters, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
Step S23: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (b) and (c) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (b);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (c);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
Step S24: using the normalized historical data as training samples for an intelligent machine learning model, calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
Step S25: generating Beta wolves from the Alpha wolf on the basis of step S24; specifically, generating β Beta wolves according to formula (d) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (d);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (e);
where Delta_best is the Delta wolf with the highest fitness;
Step S26: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (f)-(i);
A = 2*τ*r1 - τ   (f);
C = 2*r2   (g);
L = |C*Alpha_j - Delta_{i,j}|   (h);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (i);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
Step S27: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (b) and (c);
Step S28: judging whether the current iteration number has reached the maximum number of iterations T; if so, going to step S29, otherwise going back to step S24;
Step S29: outputting the position of the Alpha wolf, obtaining the optimal penalty coefficient C and kernel width γ;
Step S30: building a prediction model from the optimized penalty coefficient and kernel width of the intelligent machine learning model;
Step S31: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
Wherein, the "prediction model" in step S30 is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
An embodiment of the present invention further provides a system that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, comprising:
a data acquisition and processing unit, for acquiring historical data and normalizing and classifying the acquired historical data;
a model parameter improvement unit, for using the normalized historical data as training samples for a support vector machine and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
a model reconstruction unit, for building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
a data classification and prediction unit, for acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
Wherein, the model parameter improvement unit comprises:
a first initialization module, for parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
a second initialization module, for initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
a first computing module, for calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order, selecting among the n grey wolves the wolf whose fitness is greater than that of the Alpha wolf and is the largest, replacing the Alpha wolf with that wolf if it exists, and then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
a second computing module, for generating Beta wolves from the Alpha wolf; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
where Delta_best is the Delta wolf with the highest fitness;
a first update module, for updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
A = 2*τ*r1 - τ   (6);
C = 2*r2   (7);
L = |C*Alpha_j - Delta_{i,j}|   (8);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
a second update module, for updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
a judgment module, for judging whether the maximum number of iterations T has been reached;
a parameter output module, for outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
Wherein, the prediction model is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
Implementing the embodiments of the present invention has the following beneficial effects:
The present invention introduces a new hierarchical structure mechanism into the grey wolf optimization algorithm, which improves the search capability of the grey wolf optimization algorithm and avoids getting trapped in local optima. The method is then used for parameter optimization of machine learning models such as support vector machines and kernel extreme learning machines, so as to build an optimal machine learning model and to classify and predict domain-specific problems.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings obtained from these drawings without creative work still fall within the scope of the present invention.
Fig. 1 is a flowchart of a method that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another method that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a system that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit the present invention.
As shown in Fig. 1, an embodiment of the present invention proposes a method that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, comprising the steps of:
Step S1: acquiring historical data, and normalizing and classifying the acquired historical data;
The detailed process is to acquire historical data related to the problem under study and to normalize and classify it, where standard normalization is applied to the data using formula (1);
wherein the classified attributes include data attributes and a class attribute.
For example, taking data for distinguishing benign from malignant thyroid nodules based on ultrasound features, the attribute values of a sample fall into two major classes: the data attributes X1-X8 represent the ultrasound attributes of the thyroid-nodule case, and X9 represents the class of the data sample, distinguishing benign nodules from malignant nodules; if the sample is a malignant nodule the value is 1, and if the sample is a benign nodule the value is -1.
As another example, taking enterprise bankruptcy risk prediction data, the attribute values of a single sample fall into two major classes: the data attributes X1-Xn are related financial indicators such as the debt ratio and total assets, and Xn+1 represents the class label, i.e. whether a bankruptcy risk exists within two years; the label is 1 if a bankruptcy risk exists, and -1 if the enterprise has no bankruptcy risk.
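The normalization formula (1) referred to above is not reproduced in the text. As a minimal, non-authoritative sketch, the following Python snippet assumes ordinary min-max scaling of each data attribute to [0, 1], a common choice when preparing SVM training samples; the function name and the per-column handling are illustrative and not taken from the patent.

```python
import numpy as np

def normalize_min_max(X):
    """Scale each data attribute (column) to [0, 1].

    Assumption: formula (1) is taken to be standard min-max scaling.
    X has shape (n_samples, n_features), e.g. the attributes X1-X8 of the
    thyroid-nodule example; the class attribute (+1/-1) is kept separately
    and is not scaled.
    """
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min + 1e-12)  # small term avoids division by zero
```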
Step S2: using the normalized historical data as training samples for a support vector machine, and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
The detailed process is to optimize the penalty coefficient C and the kernel width γ of the support vector machine with the improved grey wolf algorithm:
Step 2.1: parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
Step 2.2: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
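A minimal Python sketch of step 2.2: each wolf position is a pair (C, γ) drawn with formulas (2) and (3). The helper name and the use of an independent random number r per coordinate are assumptions; the text does not state whether the same r is shared by both coordinates.

```python
import numpy as np

def init_wolves(n, c_min, c_max, g_min, g_max, rng=None):
    """Step 2.2: initialize n wolf positions X_i = (C_i, gamma_i)
    using formulas (2) and (3)."""
    if rng is None:
        rng = np.random.default_rng()
    positions = np.empty((n, 2))
    positions[:, 0] = (c_max - c_min) * rng.random(n) + c_min  # formula (2): C_i
    positions[:, 1] = (g_max - g_min) * rng.random(n) + g_min  # formula (3): gamma_i
    return positions
```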
Step 2.3: calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
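A minimal sketch of the fitness evaluation in step 2.3, assuming scikit-learn is used for the RBF-kernel support vector machine and the internal K-fold cross-validation; the patent does not name a particular library.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wolf_fitness(position, X_train, y_train, k=5):
    """Step 2.3 fitness: mean accuracy (ACC) of an RBF-kernel SVM whose
    penalty coefficient C and kernel width gamma are taken from the wolf's
    position, estimated by internal K-fold cross-validation."""
    C, gamma = position
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X_train, y_train, cv=k, scoring="accuracy").mean()
```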
Step 2.4: generating Beta wolves from the Alpha wolf on the basis of step 2.3; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
where Delta_best is the Delta wolf with the highest fitness;
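A minimal sketch of step 2.4 under the same assumptions: β candidate Beta wolves are generated around the Alpha wolf with formulas (4) and (5), where D is the Euclidean distance between the Alpha wolf and the best Delta wolf.

```python
import numpy as np

def generate_beta_wolves(alpha_pos, delta_best_pos, beta_count, rng=None):
    """Step 2.4: spawn beta_count candidate Beta wolves around the Alpha wolf.

    Each coordinate is perturbed by 2*D*r - D (formula (4)), i.e. uniformly
    within +-D of the Alpha position, with D from formula (5).
    """
    if rng is None:
        rng = np.random.default_rng()
    D = np.sqrt(np.sum((alpha_pos - delta_best_pos) ** 2))  # formula (5)
    r = rng.random((beta_count, alpha_pos.size))
    return alpha_pos + 2.0 * D * r - D                      # formula (4)
```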
Step 2.5: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
A = 2*τ*r1 - τ   (6);
C = 2*r2   (7);
L = |C*Alpha_j - Delta_{i,j}|   (8);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
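A minimal sketch of the Delta-wolf update of step 2.5 with formulas (6)-(9). Note that the coefficient C of formula (7) is an internal algorithm quantity and is unrelated to the SVM penalty coefficient C.

```python
import numpy as np

def update_delta_wolves(delta_pos, alpha_pos, t, T, rng=None):
    """Step 2.5: move each Delta wolf towards the Alpha wolf.

    delta_pos has shape (m, 2); tau decreases linearly from 2 to 0 over
    the T iterations (t is the current iteration index).
    """
    if rng is None:
        rng = np.random.default_rng()
    tau = 2.0 * (1.0 - t / T)                              # linear decrease 2 -> 0
    new_pos = np.empty_like(delta_pos)
    for i in range(delta_pos.shape[0]):
        for j in range(delta_pos.shape[1]):
            A = 2.0 * tau * rng.random() - tau             # formula (6)
            C = 2.0 * rng.random()                         # formula (7)
            L = abs(C * alpha_pos[j] - delta_pos[i, j])    # formula (8)
            new_pos[i, j] = alpha_pos[j] - A * L           # formula (9)
    return new_pos
```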
Step 2.6: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
Step 2.7: judging whether the maximum number of iterations T has been reached; if so, going to step 2.8, otherwise going back to step 2.3;
Step 2.8: outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
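The hypothetical helpers sketched above can be combined into one optimization loop covering steps 2.2-2.8: Omega wolves are re-randomized with formulas (2) and (3), the loop stops after T iterations, and the Alpha position is returned as the optimized (C, γ). Population size, iteration count, search ranges and the clipping of positions back into the search ranges are illustrative choices, not values or operations prescribed by the patent.

```python
import numpy as np

def improved_gwo_svm(X_train, y_train, n=20, T=50, beta_count=5, omega_count=5,
                     c_range=(0.01, 1000.0), g_range=(1e-4, 10.0), k=5, rng=None):
    """Sketch of steps 2.2-2.8: improved GWO search for the SVM (C, gamma).

    Relies on the hypothetical helpers init_wolves, wolf_fitness,
    generate_beta_wolves and update_delta_wolves from the earlier sketches.
    Returns the Alpha position, i.e. the optimized (C, gamma).
    """
    if rng is None:
        rng = np.random.default_rng()
    low = np.array([c_range[0], g_range[0]])
    high = np.array([c_range[1], g_range[1]])
    wolves = init_wolves(n, *c_range, *g_range, rng)                    # step 2.2
    fit = np.array([wolf_fitness(w, X_train, y_train, k) for w in wolves])
    alpha, alpha_fit = wolves[fit.argmax()].copy(), fit.max()
    for t in range(T):                                                  # step 2.7 loop
        order = fit.argsort()                                           # step 2.3 (ascending)
        omega_idx, delta_idx = order[:omega_count], order[omega_count:]
        delta_best = wolves[delta_idx[fit[delta_idx].argmax()]]
        betas = np.clip(generate_beta_wolves(alpha, delta_best, beta_count, rng),
                        low, high)                                      # step 2.4
        beta_fit = np.array([wolf_fitness(b, X_train, y_train, k) for b in betas])
        if beta_fit.max() > alpha_fit:                                  # Beta may replace Alpha
            alpha, alpha_fit = betas[beta_fit.argmax()].copy(), beta_fit.max()
        wolves[delta_idx] = update_delta_wolves(wolves[delta_idx], alpha, t, T, rng)  # step 2.5
        wolves[omega_idx] = init_wolves(omega_count, *c_range, *g_range, rng)         # step 2.6
        wolves = np.clip(wolves, low, high)     # keep (C, gamma) valid; safeguard not in the patent
        fit = np.array([wolf_fitness(w, X_train, y_train, k) for w in wolves])
        if fit.max() > alpha_fit:
            alpha, alpha_fit = wolves[fit.argmax()].copy(), fit.max()
    return alpha                                                        # step 2.8: optimal (C, gamma)
```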
Step S3: building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
The detailed process is to build the prediction model with the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
Step S4: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
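For steps S3 and S4, a minimal sketch assuming scikit-learn: the final RBF-kernel SVM is trained on all training samples with the optimized penalty coefficient and kernel width, and the library solves internally for the Lagrange coefficients α_i and the threshold b of the decision function above.

```python
from sklearn.svm import SVC

def build_prediction_model(X_train, y_train, best_position):
    """Step S3: train the final RBF-kernel SVM with the optimized
    penalty coefficient C and kernel width gamma."""
    C, gamma = best_position
    model = SVC(C=C, gamma=gamma, kernel="rbf")
    model.fit(X_train, y_train)
    return model

# Step S4 (usage sketch): classify normalized test samples with the model.
# best = improved_gwo_svm(X_train, y_train)
# y_pred = build_prediction_model(X_train, y_train, best).predict(X_test)
```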
As shown in Fig. 2, an embodiment of the present invention provides another method that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, comprising the following steps:
Step S21: acquiring historical data, and normalizing and classifying the acquired historical data;
The detailed process is to acquire historical data related to the problem under study and to normalize and classify it, where standard normalization is applied to the data using formula (a);
wherein the classified attributes include data attributes and a class attribute.
For example, taking data for distinguishing benign from malignant thyroid nodules based on ultrasound features, the attribute values of a sample fall into two major classes: the data attributes X1-X8 represent the ultrasound attributes of the thyroid-nodule case, and X9 represents the class of the data sample, distinguishing benign nodules from malignant nodules; if the sample is a malignant nodule the value is 1, and if the sample is a benign nodule the value is -1.
As another example, taking enterprise bankruptcy risk prediction data, the attribute values of a single sample fall into two major classes: the data attributes X1-Xn are related financial indicators such as the debt ratio and total assets, and Xn+1 represents the class label, i.e. whether a bankruptcy risk exists within two years; the label is 1 if a bankruptcy risk exists, and -1 if the enterprise has no bankruptcy risk.
Step S22: selecting the grey wolf optimization algorithm and initializing its parameters, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
Step S23: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (b) and (c) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (b);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (c);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
Step S24: using the normalized historical data as training samples for an intelligent machine learning model, calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
Step S25: generating Beta wolves from the Alpha wolf on the basis of step S24; specifically, generating β Beta wolves according to formula (d) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (d);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (e);
where Delta_best is the Delta wolf with the highest fitness;
Step S26: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (f)-(i);
A = 2*τ*r1 - τ   (f);
C = 2*r2   (g);
L = |C*Alpha_j - Delta_{i,j}|   (h);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (i);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
Step S27: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (b) and (c);
Step S28: judging whether the current iteration number has reached the maximum number of iterations T; if so, going to step S29, otherwise going back to step S24;
Step S29: outputting the position of the Alpha wolf, obtaining the optimal penalty coefficient C and kernel width γ;
Step S30: building a prediction model from the optimized penalty coefficient and kernel width of the intelligent machine learning model;
Step S31: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
Wherein, the prediction model is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
As shown in Fig. 3, an embodiment of the present invention provides a system that uses an improved grey wolf optimization algorithm to optimize data classification and prediction, comprising:
a data acquisition and processing unit 110, for acquiring historical data and normalizing and classifying the acquired historical data;
a model parameter improvement unit 120, for using the normalized historical data as training samples for a support vector machine and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
a model reconstruction unit 130, for building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
a data classification and prediction unit 140, for acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
Wherein, the model parameter improvement unit 120 comprises:
a first initialization module, for parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
a second initialization module, for initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
a first computing module, for calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order, selecting among the n grey wolves the wolf whose fitness is greater than that of the Alpha wolf and is the largest, replacing the Alpha wolf with that wolf if it exists, and then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
a second computing module, for generating Beta wolves from the Alpha wolf; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
where Delta_best is the Delta wolf with the highest fitness;
a first update module, for updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
A = 2*τ*r1 - τ   (6);
C = 2*r2   (7);
L = |C*Alpha_j - Delta_{i,j}|   (8);
Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
a second update module, for updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
a judgment module, for judging whether the maximum number of iterations T has been reached;
a parameter output module, for outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
Wherein, the prediction model is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); where
K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
Implementing the embodiments of the present invention has the following beneficial effects:
The present invention introduces a new hierarchical structure mechanism into the grey wolf optimization algorithm, which improves the search capability of the grey wolf optimization algorithm and avoids getting trapped in local optima. The method is then used for parameter optimization of machine learning models such as support vector machines and kernel extreme learning machines, so as to build an optimal machine learning model and to classify and predict domain-specific problems.
It is worth noting that, in the above system embodiments, the included system units are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A data classification and prediction method based on an improved grey wolf optimization algorithm, characterized in that it comprises the following steps:
    Step S1: acquiring historical data, and normalizing and classifying the acquired historical data;
    Step S2: using the normalized historical data as training samples for a support vector machine, and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
    Step S3: building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
    Step S4: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
2. The method according to claim 1, characterized in that the step S2 specifically comprises:
    Step 2.1: parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
    Step 2.2: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
    x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
    x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
    where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
    Step 2.3: calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
    Step 2.4: generating Beta wolves from the Alpha wolf on the basis of step 2.3; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
    Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
    D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
    where Delta_best is the Delta wolf with the highest fitness;
    Step 2.5: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
    A = 2*τ*r1 - τ   (6);
    C = 2*r2   (7);
    L = |C*Alpha_j - Delta_{i,j}|   (8);
    Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
    where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
    Step 2.6: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
    Step 2.7: judging whether the maximum number of iterations T has been reached; if so, going to step 2.8, otherwise going back to step 2.3;
    Step 2.8: outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
3. The method according to claim 1, characterized in that the "prediction model" in step S3 is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); wherein
    K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
4. A data classification and prediction method based on an improved grey wolf optimization algorithm, characterized in that it comprises the following steps:
    Step S21: acquiring historical data, and normalizing and classifying the acquired historical data;
    Step S22: selecting the grey wolf optimization algorithm and initializing its parameters, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
    Step S23: initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (b) and (c) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
    x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (b);
    x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (c);
    where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
    Step S24: using the normalized historical data as training samples for an intelligent machine learning model, calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order; selecting, among the n grey wolves, the wolf whose fitness is greater than that of the Alpha wolf and is the largest, and, if such a wolf exists, replacing the Alpha wolf with it; then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
    Step S25: generating Beta wolves from the Alpha wolf on the basis of step S24; specifically, generating β Beta wolves according to formula (d) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
    Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (d);
    D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (e);
    where Delta_best is the Delta wolf with the highest fitness;
    Step S26: updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (f)-(i);
    A = 2*τ*r1 - τ   (f);
    C = 2*r2   (g);
    L = |C*Alpha_j - Delta_{i,j}|   (h);
    Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (i);
    where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
    Step S27: updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (b) and (c);
    Step S28: judging whether the current iteration number has reached the maximum number of iterations T; if so, going to step S29, otherwise going back to step S24;
    Step S29: outputting the position of the Alpha wolf, obtaining the optimal penalty coefficient C and kernel width γ;
    Step S30: building a prediction model from the optimized penalty coefficient and kernel width of the intelligent machine learning model;
    Step S31: acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
5. The method according to claim 4, characterized in that the "prediction model" in step S30 is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); wherein
    K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
6. A data classification and prediction system based on an improved grey wolf optimization algorithm, characterized in that it comprises:
    a data acquisition and processing unit, for acquiring historical data and normalizing and classifying the acquired historical data;
    a model parameter improvement unit, for using the normalized historical data as training samples for a support vector machine and optimizing the penalty coefficient and kernel width of the support vector machine with a preset improved grey wolf optimization algorithm;
    a model reconstruction unit, for building a prediction model from the optimized penalty coefficient and kernel width of the support vector machine;
    a data classification and prediction unit, for acquiring data to be tested, importing the data to be tested into the prediction model as samples to be tested, and obtaining the class of the data to be tested and the predicted value corresponding to each class.
7. The system according to claim 6, characterized in that the model parameter improvement unit comprises:
    a first initialization module, for parameter initialization, specifically including: the maximum number of iterations T, the population size n, the number β of Beta wolves, the number ω of Omega wolves, the search space [C_min, C_max] of the penalty coefficient C, and the search space [γ_min, γ_max] of the kernel width γ;
    a second initialization module, for initializing the positions of the n grey wolves; specifically, mapping the position of each grey wolf into the set search range with formulas (2) and (3) below, obtaining the positions X_i = (x_{i,1}, x_{i,2}) of the n grey wolves;
    x_{i,1} = (C_max - C_min) * r + C_min, (i = 1, 2, ..., n)   (2);
    x_{i,2} = (γ_max - γ_min) * r + γ_min, (i = 1, 2, ..., n)   (3);
    where r is a random decimal in [0, 1]; x_{i,1} = C_i denotes the C value of grey wolf i at its current position, and x_{i,2} = γ_i denotes the γ value of grey wolf i at its current position;
    a first computing module, for calculating the fitness f_i of each grey wolf i and sorting the f_i in descending order, selecting among the n grey wolves the wolf whose fitness is greater than that of the Alpha wolf and is the largest, replacing the Alpha wolf with that wolf if it exists, and then, according to the fitness of the n grey wolves, labelling the ω wolves with the lowest fitness as Omega wolves and the remaining (n - ω) wolves as Delta wolves; where the fitness f_i of grey wolf i is the accuracy ACC of the support vector machine, computed with an internal K-fold cross-validation strategy using the C and γ values of the current position of grey wolf i;
    a second computing module, for generating Beta wolves from the Alpha wolf; specifically, generating β Beta wolves according to formula (4) below, computing the fitness of the β Beta wolves, selecting among them the wolf whose fitness exceeds that of the Alpha wolf, and replacing the Alpha wolf with the selected wolf;
    Beta_{i,j} = Alpha_j + 2*D*r - D, (i = 1, 2, ..., β; j = 1, 2)   (4);
    D = sqrt( Σ_{j=1}^{2} (Alpha_j - Delta_{best,j})^2 )   (5);
    where Delta_best is the Delta wolf with the highest fitness;
    a first update module, for updating the positions of the Delta wolves; specifically, computing the new position of each Delta wolf according to formulas (6)-(9);
    A = 2*τ*r1 - τ   (6);
    C = 2*r2   (7);
    L = |C*Alpha_j - Delta_{i,j}|   (8);
    Delta_{i,j}^{(t+1)} = Alpha_j - A*L   (9);
    where τ decreases linearly from 2 to 0 with the number of iterations, and r1 and r2 are random numbers in [0, 1];
    a second update module, for updating the positions of the Omega wolves; specifically, the Omega wolves are not guided by the Alpha wolf but move randomly in the search space, so the random position of each Omega wolf can be computed according to formulas (2) and (3);
    a judgment module, for judging whether the maximum number of iterations T has been reached;
    a parameter output module, for outputting the position of the Alpha wolf, i.e. obtaining the optimal penalty coefficient C and kernel width γ.
8. The system according to claim 6, characterized in that the prediction model is implemented by the formula f(x_j) = sign( Σ_{i=1}^{l} α_i * y_i * K(x_i, x_j) + b ); wherein
    K(x_i, x_j) = exp(-γ||x_i - x_j||^2); x_j is the sample to be tested, x_i (i = 1...l) are the training samples, y_i (i = 1...l) are the class labels of the training samples, b is a preset threshold, and α_i are the Lagrange coefficients.
CN201711048597.7A 2017-10-31 2017-10-31 Data classification and prediction method and system based on an improved grey wolf optimization algorithm Active CN107908688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711048597.7A CN107908688B (en) 2017-10-31 2017-10-31 Data classification and prediction method and system based on an improved grey wolf optimization algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711048597.7A CN107908688B (en) 2017-10-31 2017-10-31 Data classification and prediction method and system based on an improved grey wolf optimization algorithm

Publications (2)

Publication Number Publication Date
CN107908688A true CN107908688A (en) 2018-04-13
CN107908688B CN107908688B (en) 2018-12-28

Family

ID=61842162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711048597.7A Active CN107908688B (en) 2017-10-31 2017-10-31 Data classification and prediction method and system based on an improved grey wolf optimization algorithm

Country Status (1)

Country Link
CN (1) CN107908688B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131086B2 (en) * 2008-09-24 2012-03-06 Microsoft Corporation Kernelized spatial-contextual image classification
CN106022517A (en) * 2016-05-17 2016-10-12 温州大学 Risk prediction method and device based on nucleus limit learning machine
CN106355192A (en) * 2016-08-16 2017-01-25 温州大学 Support vector machine method based on chaos and grey wolf optimization

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIANG LI 等: "An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning", 《COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE》 *
YAN WEI 等: "An Improved Grey Wolf Optimization Strategy Enhanced SVM and Its Application in Predicting the Second Major", 《MATHEMATICAL PROBLEMS IN ENGINEERING》 *
徐达宇 (XU Dayu): "改进GWO优化SVM的云计算资源负载短期预测研究" [Short-term prediction of cloud computing resource load based on improved-GWO-optimized SVM], 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694390A (en) * 2018-05-15 2018-10-23 南京邮电大学 A kind of cuckoo search improves the modulated signal sorting technique of grey wolf Support Vector Machines Optimized
CN108694390B (en) * 2018-05-15 2022-06-14 南京邮电大学 Modulation signal classification method for cuckoo search improved wolf optimization support vector machine
CN108875793A (en) * 2018-05-25 2018-11-23 云南电网有限责任公司电力科学研究院 A kind of dirty area's grade appraisal procedure based on CSO-LSSVM
CN108875793B (en) * 2018-05-25 2021-10-08 云南电网有限责任公司电力科学研究院 Dirty area grade evaluation method based on CSO-LSSVM
CN110069817A (en) * 2019-03-15 2019-07-30 温州大学 A method of prediction model is constructed based on California gray whale optimization algorithm is improved
CN110007661A (en) * 2019-04-10 2019-07-12 河北工业大学 A kind of boiler combustion control system intelligent failure diagnosis method
CN110119778A (en) * 2019-05-10 2019-08-13 辽宁大学 A kind of equipment method for detecting health status improving chicken group's optimization RBF neural
CN110119778B (en) * 2019-05-10 2024-01-05 辽宁大学 Equipment health state detection method for improving chicken flock optimization RBF neural network
CN110167138A (en) * 2019-05-23 2019-08-23 西安电子科技大学 Based on the Location System for Passive TDOA optimizing location method for improving grey wolf optimization algorithm
CN110376458A (en) * 2019-07-03 2019-10-25 东华大学 Optimize the diagnosing fault of power transformer system of twin support vector machines
CN110378526A (en) * 2019-07-15 2019-10-25 安徽理工大学 The mobile method for predicting of bus station based on GW and SVR, system and storage medium
CN110333462B (en) * 2019-08-08 2021-04-30 首都师范大学 DGWO-ELM-based lithium ion battery life prediction method in random discharge environment
CN110333462A (en) * 2019-08-08 2019-10-15 首都师范大学 A kind of lithium ion battery life-span prediction method under random discharge environment based on DGWO-ELM
CN110619176A (en) * 2019-09-18 2019-12-27 福州大学 Aviation kerosene flash point prediction method based on DBN-RLSSVM
CN111024433A (en) * 2019-12-30 2020-04-17 辽宁大学 Industrial equipment health state detection method for optimizing support vector machine by improving wolf algorithm
CN111242005A (en) * 2020-01-10 2020-06-05 西华大学 Heart sound classification method based on improved wolf colony algorithm optimization support vector machine
CN111242005B (en) * 2020-01-10 2023-05-23 西华大学 Heart sound classification method based on improved wolf's swarm optimization support vector machine
CN111429003A (en) * 2020-03-23 2020-07-17 北京互金新融科技有限公司 Data processing method and device
CN111429003B (en) * 2020-03-23 2023-11-03 北京互金新融科技有限公司 Data processing method and device
CN111639695A (en) * 2020-05-26 2020-09-08 温州大学 Method and system for classifying data based on improved drosophila optimization algorithm
CN111639695B (en) * 2020-05-26 2024-02-20 温州大学 Method and system for classifying data based on improved drosophila optimization algorithm
CN111708865A (en) * 2020-06-18 2020-09-25 海南大学 Technology forecasting and patent early warning analysis method based on improved XGboost algorithm
CN111708865B (en) * 2020-06-18 2021-07-09 海南大学 Technology forecasting and patent early warning analysis method based on improved XGboost algorithm
CN113449464B (en) * 2021-06-11 2023-09-22 淮阴工学院 Wind power prediction method based on improved deep extreme learning machine
CN113449464A (en) * 2021-06-11 2021-09-28 淮阴工学院 Wind power prediction method based on improved depth extreme learning machine
CN117276600A (en) * 2023-09-05 2023-12-22 淮阴工学院 PSO-GWO-DELM-based proton exchange membrane fuel cell system fault diagnosis method
CN117276600B (en) * 2023-09-05 2024-06-11 淮阴工学院 PSO-GWO-DELM-based fault diagnosis method for proton exchange membrane fuel cell system
CN117726461A (en) * 2024-02-07 2024-03-19 湖南招采猫信息技术有限公司 Financial risk prediction method and system for electronic recruitment assistance

Also Published As

Publication number Publication date
CN107908688B (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN107908688A (en) A kind of data classification Forecasting Methodology and system based on improvement grey wolf optimization algorithm
Frazzetto et al. Prescriptive analytics: a survey of emerging trends and technologies
Vanegas et al. Inverse design of urban procedural models
US20160217371A1 (en) Method and System for Universal Problem Resolution with Continuous Improvement
Zhang et al. An explainable artificial intelligence approach for financial distress prediction
CN107230108A (en) The processing method and processing device of business datum
Rahman et al. Discretization of continuous attributes through low frequency numerical values and attribute interdependency
CN103034922A (en) Refinement and calibration method and system for improving classification of information assets
Akerkar Advanced data analytics for business
Liu et al. A multi-objective model for discovering high-quality knowledge based on data quality and prior knowledge
De Bock et al. Explainable AI for operational research: A defining framework, methods, applications, and a research agenda
Wu et al. Optimized deep learning framework for water distribution data-driven modeling
Li et al. Explain graph neural networks to understand weighted graph features in node classification
Shehab et al. Toward feature selection in big data preprocessing based on hybrid cloud-based model
Cai et al. A survey on deep reinforcement learning for data processing and analytics
Moran et al. Curious instance selection
Akman et al. Assessing innovation capabilities of manufacturing companies by combination of unsupervised and supervised machine learning approaches
Giráldez et al. Knowledge-based fast evaluation for evolutionary learning
Baumann et al. Complexity and competitive advantage
Dutta et al. Linking reaction systems with rough sets
Hilal et al. Artificial intelligence based optimal functional link neural network for financial data Science
Elwakil Knowledge discovery based simulation system in construction
Abiteboul et al. Research directions for Principles of Data Management (Dagstuhl perspectives workshop 16151)
Rofi'i Analysis of E-Commerce Purchase Patterns Using Big Data: An Integrative Approach to Understanding Consumer Behavior
Raamesh et al. Data mining based optimization of test cases to enhance the reliability of the testing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180413

Assignee: Big data and Information Technology Research Institute of Wenzhou University

Assignor: Wenzhou University

Contract record no.: X2020330000098

Denomination of invention: A data classification and prediction method and system based on improved gray wolf optimization algorithm

Granted publication date: 20181228

License type: Common License

Record date: 20201115

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180413

Assignee: Zhejiang SOS Technology Co.,Ltd.

Assignor: Wenzhou University

Contract record no.: X2023330000972

Denomination of invention: A Data Classification Prediction Method and System Based on Improved Grey Wolf Optimization Algorithm

Granted publication date: 20181228

License type: Common License

Record date: 20231229