CN108830370A - Feature selection method based on a reinforcement-learning-type bacterial foraging algorithm - Google Patents

Feature selection method based on a reinforcement-learning-type bacterial foraging algorithm Download PDF

Info

Publication number
CN108830370A
CN108830370A (application number CN201810508479.8A)
Authority
CN
China
Prior art keywords
value
microorganism
feedback
thallus
updated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810508479.8A
Other languages
Chinese (zh)
Other versions
CN108830370B (en)
Inventor
姜慧研
董万鹏
马连博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201810508479.8A priority Critical patent/CN108830370B/en
Publication of CN108830370A publication Critical patent/CN108830370A/en
Application granted granted Critical
Publication of CN108830370B publication Critical patent/CN108830370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a feature selection method based on a reinforcement-learning-type bacterial foraging algorithm. The method includes: initializing the positions of a bacterial colony, a maximum cycle number, and an initial iteration count, where each bacterium in the colony represents a weight vector for the candidate feature vector; selecting and executing a movement behavior for each bacterium according to the maximize-historical-experience strategy of reinforcement learning (RL), and obtaining each bacterium's updated position and its fitness value at that position; obtaining a feedback value from the change in each bacterium's fitness according to the RL rule; updating each bacterium's accumulated historical experience according to the feedback value; and incrementing the iteration count by 1 and repeating the above process until the iteration count exceeds the maximum cycle number, at which point the bacterial colony is output. By replacing the traditional probabilistic optimization scheme with a reinforcement-learning-based one, the method of the invention obtains better recognition results in less time.

Description

Feature selection method based on a reinforcement-learning-type bacterial foraging algorithm
Technical field
The invention belongs to the field of feature selection, and in particular relates to a feature selection method based on a reinforcement-learning-type bacterial foraging algorithm.
Background technique
In recent years, biologically inspired computation has developed rapidly. Inspired by the robustness and adaptivity that biological systems exhibit when coping with complex environments, researchers have proposed many computational models and algorithms that simulate biological foraging behavior in order to solve all kinds of complex optimization problems in engineering, with convenient applications in fields such as network engineering computation and image processing.
Swarm intelligence algorithms are a class of biologically inspired methods. These novel heuristics have inherent parallelism, distribution, and reconfigurability. They are mathematical models built by simulating the group behavior of natural organisms, with the optimization problem to be solved expressed in the form of an objective function. The bacterial foraging optimization algorithm (BFO) is an optimization model that simulates the foraging behavior of a bacterial colony and is a member of this class. Although BFO shows good search characteristics and global optimization ability on low-dimensional continuous optimization problems, when facing high-dimensional discrete problems it converges prematurely because it is easily trapped in local optima. How BFO can overcome these problems has therefore become a research hotspot in the swarm intelligence field.
Summary of the invention
To address these problems in the prior art, the present invention provides a feature selection method based on a reinforcement-learning-type bacterial foraging algorithm that avoids convergence to local optima when facing high-dimensional discrete problems.
In a first aspect, the present invention provides a feature selection method based on a reinforcement-learning-type bacterial foraging algorithm, including:
Step S1: initialize the positions of a bacterial colony, and set a maximum cycle number and an initial iteration count; each bacterium in the colony represents a weight vector for the candidate feature vector.
Step S2: according to the maximize-historical-experience strategy of reinforcement learning (RL), select a movement behavior for each bacterium in the colony.
Step S3: after each bacterium executes its movement behavior, obtain its updated position.
Step S4: obtain the fitness value of each bacterium at its updated position.
Step S5: based on the RL rule, obtain a feedback value from the change in each bacterium's fitness between its previous and updated positions.
Step S6: update each bacterium's accumulated historical experience according to the feedback value.
Step S7: increment the iteration count by 1 and repeat steps S2 to S6; when the iteration count reaches the maximum cycle number, output the bacterial colony.
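For illustration, steps S1 to S7 can be sketched in Python as follows. This is a minimal sketch only, not the patented implementation: the fitness function, the per-action move operator, and the Q-table update constants are placeholder assumptions.

```python
import numpy as np

def rbcfo_feature_selection(fitness, n_bacteria, dim, max_cycle, seed=None):
    """Steps S1-S7: RL-guided bacterial foraging over feature-weight vectors."""
    rng = np.random.default_rng(seed)
    # S1: each bacterium is a weight vector for the candidate feature vector.
    positions = rng.random((n_bacteria, dim))
    n_actions = 4  # chemotaxis, reproduction, dispersal, crossover
    q_table = np.zeros((n_bacteria, n_actions))  # accumulated experience, init 0
    for _ in range(max_cycle):                   # S7: loop to the max cycle number
        for i in range(n_bacteria):
            a = int(np.argmax(q_table[i]))       # S2: action with max history value
            old_fit = fitness(positions[i])
            # S3: placeholder move - a small random perturbation per action.
            positions[i] = positions[i] + 0.01 * (a + 1) * rng.standard_normal(dim)
            new_fit = fitness(positions[i])      # S4: fitness at updated position
            r = 1.0 if new_fit > old_fit else -1.0      # S5: feedback value
            q_table[i, a] += 0.1 * (r - q_table[i, a])  # S6: update experience
    return positions

final_colony = rbcfo_feature_selection(
    lambda w: -float(np.sum((w - 0.5) ** 2)), n_bacteria=5, dim=8,
    max_cycle=10, seed=0)
```

The returned array plays the role of the output bacterial colony: one weight-vector solution per row.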
Optionally, the movement behavior includes one or more of the following:
adaptive chemotaxis behavior;
reproduction behavior;
reinforced elimination-dispersal behavior;
fusion crossover behavior.
Optionally, step S2 includes:
Step 21: set an initial state and an initial action for each bacterium; allocate a Q-matrix for each bacterium to store its accumulated historical experience, and initialize the Q-matrix to 0.
Step 22: for the current state s_t of each bacterium, select the optimal action a_t according to the contents of the Q-matrix.
Step 23: each bacterium executes its action a_t.
Correspondingly, obtaining the updated position of each bacterium in step S3 includes:
Step S31: update the Q-matrix entry (s_t, a_t) and advance the state to s_{t+1}.
Correspondingly, obtaining the feedback value in step S5 includes:
obtaining the immediate feedback of each bacterium, i.e. the feedback value r_{t+1}.
Optionally, step 22 includes:
selecting the optimal action a_t according to the following formula (1):

a_t = Max[Q(state, actions)]   (1)

where the optimal action a_t is the action with the maximum Q-value for the current state in the Q-matrix.
Optionally, step S5 includes:
Step S51: compare each bacterium's fitness value before and after the position update, and obtain the feedback value r according to the following formula (2):

r = 1 if the bacterium's fitness value improved; r = -1 otherwise   (2)
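The comparison behind formula (2) reduces to a two-branch rule; a minimal sketch follows (treating an unchanged fitness value as no improvement is an assumption):

```python
def feedback(old_fitness, new_fitness):
    """Feedback value r per formula (2): 1 when the bacterium's fitness
    improved after the position update, -1 otherwise."""
    return 1 if new_fitness > old_fitness else -1
```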
Optionally, the Q-matrix in step S31 is computed as:

Q(s_t, a_t) = (1 - α(t))·Q(s_t, a_t) + α(t)·[r_{t+1} + γ·max_a Q(s_{t+1}, a)]

where γ denotes the discount factor in [0, 1]; r_{t+1} denotes the immediate feedback obtained after the agent in state s_t executes action a_t; α denotes the learning rate, which balances exploration and exploitation; and iter and MaxCycle denote the current and total iteration counts, respectively.
In a second aspect, the present invention also provides an electronic device including a memory, a processor, a bus, and a computer program stored in the memory and executable by the processor, the processor implementing the steps of the method described above when executing the program.
The invention has the following advantages:
Traditional BFO struggles to balance global search against local search and converges poorly on high-dimensional discrete optimization problems; the present invention uses a reinforcement learning mechanism to solve both problems. The improved optimization algorithm mainly comprises several behaviors: adaptive chemotaxis behavior, reproduction behavior, reinforced migration behavior, and crossover behavior. Because different behaviors search for different solutions, deciding which behavior to invoke becomes the key problem. Under the reinforcement learning mechanism, each learning agent (i.e. each bacterium) can select its next behavior from historical experience so as to obtain the maximum reward from the learning environment. Combining the reinforcement learning mechanism thus solves the critical problem of when to invoke which behavior.
In addition, a set of benchmark functions can be used in practice to verify the convergence and effectiveness of the reinforcement-learning-type bacterial foraging algorithm.
In swarm intelligence terms, feature selection is a discrete, high-dimensional challenge, so traditional optimization models combined with classification criteria struggle to achieve both high classification accuracy and low runtime. The present invention improves the optimization method by replacing the traditional probabilistic optimization scheme with a reinforcement-learning-based one, thereby obtaining better recognition results in less time.
Further, as a biologically inspired heuristic, the present invention strikes an appropriate balance between exploration and exploitation. Using the RL mechanism improves the foraging efficiency of the bacteria, so the objective function converges sooner. Using the RL-based optimization algorithm with the Fisher criterion as the feature selection criterion helps improve classification accuracy.
Detailed description of the invention
To explain the technical solutions of the embodiments of the invention or the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 and Fig. 2 are flow diagrams of the feature selection method based on the reinforcement-learning-type bacterial foraging algorithm provided by one embodiment of the invention.
Specific embodiment
To better explain the present invention and facilitate understanding, the invention is described in detail below through specific embodiments with reference to the drawings.
In the following description, multiple different aspects of the invention are described; however, those of ordinary skill in the art can practice the invention using only some or all of its structures or processes. For clarity of exposition, specific numbers, configurations, and orders are set forth, but it will be apparent that the invention can also be practiced without these specific details. In other cases, well-known features are not described in detail so as not to obscure the invention.
At present, the key to designing a robust swarm intelligence algorithm is how to balance exploration and exploitation during the search. In theory, solving this problem reduces to effective management of local search, i.e. of allocated time and invocation frequency. In addition, the traditional bacterial foraging algorithm has some disadvantages, namely premature convergence and high computational cost. The novelty of the reinforcement-learning-type bacterial colony foraging optimization model (RBCFO for short) lies in the following aspects:
(1) A reinforcement-learning-based bacterial foraging algorithm gives the colony intelligence. Based on the RL rule, intelligent behavior can be selected among the multi-level behaviors (chemotaxis, reproduction, elimination-dispersal, and crossover). This bacterium model endows the bacterial foraging optimization method with adaptivity, cooperation, and intelligence.
(2) Exploration and exploitation are dynamically balanced during the search; that is, each bacterium can adaptively adjust its own movement step size.
(3) An information crossover mechanism between bacterial cells keeps information shared within the colony.
Referring to Fig. 1 and Fig. 2, the method of this embodiment includes the following steps:
Step S1: initialize the positions of the bacterial colony, set the maximum cycle number, and set the iteration count initially to 0.
In this embodiment, each bacterium in the colony represents a weight vector for the candidate feature vector.
To better understand this embodiment, the population, solutions, and feature weights are illustrated below. Suppose the initialized population is represented by a 50x100 matrix:
50 is the number of bacteria; 100 is the dimension of the feature vector before feature selection.
Each 1x100 row vector represents one weight vector for the feature vector, i.e. one solution.
The initialized population is updated repeatedly to produce the final population; this update process is exactly what the bacterial foraging algorithm implements. The only difference is that the foraging algorithm is improved here by adding the reinforcement learning mechanism. When the program terminates, a set of solutions, i.e. a set of weight vectors, is obtained.
One solution can be chosen at random from this set; the chosen solution is the weight vector of the feature vector. For example, if the final solution is the 1x100 vector (0.2, 0.3, 0.8, 0.96, 0.2, 0.1, ...), a threshold is set, the columns whose weights fall below the threshold are deleted, and the remaining columns form the finally selected feature vector.
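The thresholding step of the example above can be sketched as follows; the threshold value 0.5 and the use of "greater than or equal" are assumptions for illustration.

```python
import numpy as np

def select_features(weight_solution, threshold):
    """Return the indices of the feature columns whose weight reaches the
    threshold; columns below the threshold are deleted."""
    return np.flatnonzero(np.asarray(weight_solution) >= threshold)

# With a 0.5 threshold, only the columns weighted 0.8 and 0.96 survive.
kept = select_features([0.2, 0.3, 0.8, 0.96, 0.2, 0.1], threshold=0.5)
```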
Step S2: according to the maximize-historical-experience strategy of reinforcement learning, select a suitable behavior for each bacterium (i.e. adaptive chemotaxis behavior, reproduction behavior, reinforced migration behavior, or crossover behavior) and execute it.
For example, reinforcement learning in this embodiment can consist of three parts: 1) policy: a mapping from states to action behaviors, the core of reinforcement learning; 2) reward: the feedback obtained after an action has been executed, i.e. the immediate reward brought by the environment-state change caused by a single behavior, such as the feedback value r_{t+1}; 3) value function: the long-term reward, i.e. the accumulated reward Q(s_t, a_t).
Step S3: compute the fitness value (i.e. the Fisher criterion value) f(ω) for the updated population positions:

f(ω) = (ω^T S_B ω) / (ω^T S_W ω)   (3)

where F_{i,j} denotes the feature vector of the j-th sample in the i-th class; n_i denotes the number of samples in the i-th class; c denotes the number of classes; m_i denotes the mean feature vector of the i-th class; m denotes the mean of all feature vectors; S_W denotes the within-class scatter, i.e. the average weighted distance between the feature vectors and their class means; and S_B denotes the between-class scatter, i.e. the average weighted distance between the class-mean feature vectors. The ω in formula (3) is exactly what the feature selection method of this embodiment finds.
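A sketch of the Fisher criterion computation, built from the quantities named above (class means m_i, overall mean m, within-class scatter S_W, between-class scatter S_B). The exact weighting the patent applies to the scatters is not reproduced in the text, so the standard ratio form is an assumption here.

```python
import numpy as np

def fisher_score(F, y, w):
    """Weighted Fisher criterion f(w) = (w^T S_B w) / (w^T S_W w)."""
    F = np.asarray(F, dtype=float)
    y = np.asarray(y)
    w = np.asarray(w, dtype=float)
    m = F.mean(axis=0)                     # mean of all feature vectors
    d_dim = F.shape[1]
    Sw = np.zeros((d_dim, d_dim))          # within-class scatter S_W
    Sb = np.zeros((d_dim, d_dim))          # between-class scatter S_B
    for c in np.unique(y):
        Fc = F[y == c]
        mc = Fc.mean(axis=0)               # class mean m_i
        dc = Fc - mc
        Sw += dc.T @ dc
        diff = (mc - m).reshape(-1, 1)
        Sb += len(Fc) * (diff @ diff.T)    # weighted by class size n_i
    return float(w @ Sb @ w) / float(w @ Sw @ w)

# Two well-separated classes along the first feature dimension.
score = fisher_score([[0, 1], [1, 0], [10, 1], [11, 0]], [0, 0, 1, 1], [1.0, 0.0])
```

A larger score means the weighted features separate the classes better, which is why f(ω) serves as the fitness value.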
That is, after each bacterium executes its movement behavior, its updated position is obtained, and the fitness value of each bacterium at the updated position is computed.
Step S4: according to the RL rule (i.e. the reinforcement learning mechanism), obtain a feedback value from the change in each bacterium's fitness between its position before and after the update.
That is, observe the change in each bacterium's fitness value and give it the corresponding feedback.
Step S5: update each bacterium's accumulated historical experience according to the feedback value.
That is, add the reward to the bacterium's accumulated historical experience according to the RL rule.
Step S6: increment the iteration count by 1 and repeat steps S2 to S5; if the current iteration count exceeds the maximum cycle number, terminate the program and output the bacterial colony; otherwise return to step S2.
In this embodiment, when the program terminates, a set of solutions, i.e. a set of weight vectors, is obtained, and one solution can be chosen from the set at random; the chosen solution is the weight vector of the feature vector. For example, if the final solution is (0.2, 0.3, 0.8, 0.96, 0.2, 0.1, ...), a threshold is set, the columns whose weights fall below the threshold are deleted, and the remaining columns form the finally selected feature vector.
In the iterative process of this embodiment, the optimal solution output for the bacterial colony serves as the input of a logistic regression classifier, so as to improve classification accuracy.
The bacterial foraging algorithm of this embodiment always maintains and updates one set of solutions; if a better solution is generated along the way, it replaces an earlier one. At all times there is exactly one bacterial colony, and a colony contains multiple bacteria.
The method of this embodiment provides an effective means of improving classification, and can also be applied to other aspects of image processing.
Further, for example, the aforementioned step S2 may include the following sub-steps:
Sub-step S21: set an initial state and an initial action for each learning agent (i.e. the agent corresponding to each bacterium); allocate a Q-matrix for each agent to store its accumulated historical experience, and initialize the Q-matrix to 0.
In this embodiment, the reinforcement learning agents correspond to the individual bacteria of the bacterial foraging algorithm, and the actions and states of reinforcement learning correspond to the algorithm's adaptive chemotaxis behavior, reproduction behavior, reinforced elimination-dispersal behavior, and fusion crossover behavior.
Step S22: for the agent's current state s_t, select the optimal action a_t according to the contents of the Q-matrix (i.e. the data in the Q-matrix, the accumulated historical experience mentioned above):

a_t = Max[Q(state, actions)]   (1)

where the optimal action a_t is the action with the maximum Q-value for the current state in the Q-matrix.
Step S23: the agent executes action a_t and receives the immediate feedback r_{t+1}.
The "immediate feedback" in this embodiment is a feedback value that can, for example, be obtained as follows:
compute the fitness value of the individual's new position and compare the fitness values before and after the position update;
when the individual's fitness value improves (e.g. the fitness value increases), the immediate feedback is r = 1; in the opposite case, the immediate feedback is r = -1.
Step S24: update the Q-matrix entry (s_t, a_t) and advance the state to s_{t+1}.
The Q-matrix is computed as:

Q(s_t, a_t) = (1 - α(t))·Q(s_t, a_t) + α(t)·[r_{t+1} + γ·max_a Q(s_{t+1}, a)]   (9)

where γ denotes the discount factor in [0, 1]; r_{t+1} denotes the immediate feedback obtained after the agent in state s_t executes action a_t; α denotes the learning rate, which balances exploration and exploitation; and iter and MaxCycle denote the current and total iteration counts, respectively.
In formula (9), α(t) is a scalar but not a constant: when the iteration count iter is small, α should be larger, so that this stage emphasizes exploration; when iter is large, α is smaller, placing more weight on the existing experience (the Q-values).
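The Q-matrix update and the iteration-dependent learning rate can be sketched as follows. The update uses the standard tabular Q-learning form, and the linear α(t) schedule (large early, small late) is an assumed instance of the behavior described for formula (9), not the patent's exact schedule.

```python
def q_update(q_row_by_state, s, a, r_next, s_next, gamma, alpha):
    """One tabular Q-learning step with discount factor gamma in [0, 1],
    immediate feedback r_next, and learning rate alpha."""
    q = q_row_by_state
    q[s][a] = (1 - alpha) * q[s][a] + alpha * (r_next + gamma * max(q[s_next]))
    return q

def alpha_schedule(iter_, max_cycle, alpha_start=0.9, alpha_end=0.1):
    """Assumed schedule: alpha is large when iter is small (emphasizing
    exploration) and shrinks as iter grows (emphasizing experience)."""
    return alpha_start + (alpha_end - alpha_start) * iter_ / max_cycle

# Example: two states, two actions, all Q-values initialized as described.
Q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
Q = q_update(Q, s=0, a=1, r_next=1.0, s_next=1, gamma=0.5, alpha=0.5)
```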
The behaviors referred to in the aforementioned step S2 are illustrated below:
1) Adaptive chemotaxis behavior:
Simulating how Escherichia coli swims toward regions better suited for survival by rotating its flagella:

P_i(t) = P_i(t-1) + C_i(t-1)·φ(t-1)   (2)

where φ(t-1) denotes a random tumble direction and P_i(t) denotes the position of the i-th bacterium at time t; here a position corresponds to the per-dimension weights of the feature vector in the feature selection method, and the optimization algorithm searches for an optimal combination of weights (a weight matrix).
Typically, a vital task in designing a swarm intelligence algorithm is how to adaptively balance the two major processes of exploration and exploitation during the search. A population in the exploration state moves toward unfamiliar regions to look for a potential global optimum, while a population in the exploitation state searches near promising regions. The movement step size C_i dynamically balances these two processes, where a is a constant factor, iter denotes the current iteration count, and MaxCycle the total iteration count.
2) Reproduction behavior:
Step 1: sort the entire population in descending order of fitness value;
Step 2: according to the ranking of step 1, remove the bottom-ranked N/2 individuals and duplicate the top-ranked N/2 individuals (N denotes the population size).
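The two steps above can be sketched directly (assuming a higher fitness value is better):

```python
import numpy as np

def reproduce(positions, fitness_values):
    """Reproduction behavior: sort by fitness descending, drop the worse
    half, and duplicate the better half so the population stays size N."""
    order = np.argsort(fitness_values)[::-1]       # step 1: descending sort
    top = positions[order[: len(positions) // 2]]  # step 2: keep best N/2
    return np.concatenate([top, top.copy()])       # duplicate back to size N

offspring = reproduce(np.array([[1.0], [2.0], [3.0], [4.0]]),
                      np.array([0.1, 0.9, 0.2, 0.8]))
```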
3) Reinforced elimination-dispersal behavior:
Owing to nutrient depletion or other unknown causes, the bacterial population may disperse to a new region. The position of individual i is updated according to the best position found so far:

P_id = rand_1·(X_max - X_min) + rand_2·(gbest_d - P_id)   (4)

where P_id denotes dimension d of the position of individual i; X_max and X_min denote the upper and lower boundaries of the search space; rand_1 and rand_2 are each drawn from a normal distribution with mean 0 and standard deviation 1; and gbest_d denotes the population's best position in dimension d.
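A sketch of formula (4); drawing rand_1 and rand_2 from a standard normal distribution follows the text, while applying the formula to all dimensions at once is a vectorization assumption.

```python
import numpy as np

def disperse(position, gbest, x_min, x_max, seed=None):
    """Reinforced elimination-dispersal per formula (4):
    P_id = rand_1*(X_max - X_min) + rand_2*(gbest_d - P_id)."""
    rng = np.random.default_rng(seed)
    r1 = rng.standard_normal(position.shape)   # rand_1 ~ N(0, 1), per dimension
    r2 = rng.standard_normal(position.shape)   # rand_2 ~ N(0, 1), per dimension
    return r1 * (x_max - x_min) + r2 * (gbest - position)

new_position = disperse(np.zeros(3), np.ones(3), x_min=0.0, x_max=1.0, seed=0)
```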
4) Fusion crossover behavior:
A bacterium exchanges information with its neighboring individuals in the hope of combining useful information, defined as:

v_ij = gbest_j + beta·(P_aj - P_bj)   (6)

where P_ij denotes dimension j of the position of individual i; CR denotes the crossover probability in the range [0, 1]; K is a random dimension of the individual; P_aj and P_bj denote the positions of individuals a and b in dimension j; gbest_j denotes the best position of the entire population in dimension j; and beta denotes a scale factor in the range [0, 1].
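A sketch of the fusion crossover: the mutant vector follows formula (6), while combining it with the current position through the crossover probability CR and a guaranteed random dimension K follows the usual differential-evolution pattern and is an assumption here.

```python
import numpy as np

def crossover(p_i, p_a, p_b, gbest, cr, beta, seed=None):
    """Fusion crossover: mix the mutant v_ij = gbest_j + beta*(P_aj - P_bj)
    into the current position with probability CR per dimension, always
    crossing at least one random dimension K."""
    rng = np.random.default_rng(seed)
    v = gbest + beta * (p_a - p_b)     # formula (6)
    k = rng.integers(len(p_i))         # random dimension K
    mask = rng.random(len(p_i)) < cr   # per-dimension crossover with rate CR
    mask[k] = True                     # guarantee at least one crossed dimension
    return np.where(mask, v, p_i)

# With cr=1.0 every dimension takes the mutant value gbest + beta*(p_a - p_b).
child = crossover(np.zeros(3), np.ones(3), np.zeros(3), 2.0 * np.ones(3),
                  cr=1.0, beta=0.5, seed=0)
```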
The application scenarios of the method of this embodiment are illustrated below:
Application scenario 1:
For the classification of bronchitis in lung CT images, the following features are extracted:
7 kinds of texture features, namely entropy, mean, variance, gray-level co-occurrence matrix (GLCM), local binary patterns (LBP), Haralick texture features, and local phase quantization (LPQ);
5 kinds of geometric features, namely area, perimeter, circularity, rectangularity, and elongation;
giving a 315-D feature vector in total.
Using the feature selection method of this embodiment, a feature vector subset is selected from the 315-D feature vector and used as the input of a support vector machine (SVM) classifier, so as to improve classification accuracy.
Application scenario 2:
Brain tumor classification (5 classes: background, edema, necrosis, enhancing tumor, and non-enhancing tumor):
Data: multi-modal brain MRI images.
For each pixel of every slice, a 25x25 neighborhood is chosen, and Gabor features and the mean gray level are extracted from the neighborhood.
That is, each neighborhood yields a 164-D feature vector (the image has 4 modalities; for each modality, the mean gray level plus Gabor features at 5 scales and 8 orientations).
Note: 4*(5*8+1) = 164.
Using the feature selection method of this embodiment, a feature vector subset is selected from the 164-D feature vector and used as the input of a support vector machine (SVM) classifier, so as to improve classification accuracy.
According to another aspect of the embodiments of the invention, an electronic device is also provided, the electronic device including a memory, a processor, a bus, and a computer program stored in the memory and executable by the processor, the processor implementing the method steps of any of the above embodiments when executing the program. The electronic device of this embodiment may be a mobile terminal, a fixed terminal, or the like.
Further, this embodiment also provides a computer storage medium storing a computer program that, when executed by a processor, implements the method steps of any of the above embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some or all of the technical features; such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the various embodiments of the invention.

Claims (7)

1. A feature selection method based on a reinforcement-learning-type bacterial foraging algorithm, characterized by including:
Step S1: initializing the positions of a bacterial colony and setting a maximum cycle number and an initial iteration count, each bacterium in the colony representing a weight vector for the candidate feature vector;
Step S2: selecting a movement behavior for each bacterium in the colony according to the maximize-historical-experience strategy of reinforcement learning (RL);
Step S3: after each bacterium executes its movement behavior, obtaining its updated position;
Step S4: obtaining the fitness value of each bacterium at its updated position;
Step S5: based on the RL rule, obtaining a feedback value from the change in each bacterium's fitness between its previous and updated positions;
Step S6: updating each bacterium's accumulated historical experience according to the feedback value;
Step S7: incrementing the iteration count by 1 and repeating steps S2 to S6; when the iteration count reaches the maximum cycle number, outputting the bacterial colony.
2. The method according to claim 1, characterized in that the movement behavior includes one or more of the following:
adaptive chemotaxis behavior;
reproduction behavior;
reinforced elimination-dispersal behavior;
fusion crossover behavior.
3. The method according to claim 2, characterized in that step S2 includes:
Step 21: setting an initial state and an initial action for each bacterium; allocating a Q-matrix for each bacterium to store its accumulated historical experience, and initializing the Q-matrix to 0;
Step 22: for the current state s_t of each bacterium, selecting the optimal action a_t according to the contents of the Q-matrix;
Step 23: each bacterium executing its action a_t;
correspondingly, obtaining the updated position of each bacterium in step S3 includes:
Step S31: updating the Q-matrix entry (s_t, a_t) and advancing the state to s_{t+1};
correspondingly, obtaining the feedback value in step S5 includes:
obtaining the immediate feedback of each bacterium, i.e. the feedback value r_{t+1}.
4. The method according to claim 3, characterized in that step 22 includes:
selecting the optimal action a_t according to the following formula (1):

a_t = Max[Q(state, actions)]   (1)

where the optimal action a_t is the action with the maximum Q-value for the current state in the Q-matrix.
5. The method according to claim 2, characterized in that step S5 includes:
Step S51: comparing each bacterium's fitness value before and after the position update, and obtaining the feedback value r according to the following formula (2):

r = 1 if the bacterium's fitness value improved; r = -1 otherwise   (2)
6. The method according to claim 3, characterized in that the Q-matrix in step S31 is computed as:

Q(s_t, a_t) = (1 - α(t))·Q(s_t, a_t) + α(t)·[r_{t+1} + γ·max_a Q(s_{t+1}, a)]

where γ denotes the discount factor in [0, 1]; r_{t+1} denotes the immediate feedback obtained after the agent in state s_t executes action a_t; α denotes the learning rate, which balances exploration and exploitation; and iter and MaxCycle denote the current and total iteration counts, respectively.
7. An electronic device, characterized by including a memory, a processor, a bus, and a computer program stored in the memory and executable by the processor, the processor implementing the steps of the method of any one of claims 1-6 when executing the program.
CN201810508479.8A 2018-05-24 2018-05-24 Feature selection method based on reinforced learning type flora foraging algorithm Active CN108830370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810508479.8A CN108830370B (en) 2018-05-24 2018-05-24 Feature selection method based on reinforced learning type flora foraging algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810508479.8A CN108830370B (en) 2018-05-24 2018-05-24 Feature selection method based on reinforced learning type flora foraging algorithm

Publications (2)

Publication Number Publication Date
CN108830370A true CN108830370A (en) 2018-11-16
CN108830370B CN108830370B (en) 2020-11-10

Family

ID=64148590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810508479.8A Active CN108830370B (en) 2018-05-24 2018-05-24 Feature selection method based on reinforced learning type flora foraging algorithm

Country Status (1)

Country Link
CN (1) CN108830370B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101136080A (en) * 2007-09-13 2008-03-05 北京航空航天大学 Intelligent unmanned operational aircraft self-adapting fairway planning method based on ant colony satisfactory decision-making
US20170278018A1 (en) * 2013-10-08 2017-09-28 Google Inc. Methods and apparatus for reinforcement learning
CN107480702A (en) * 2017-07-20 2017-12-15 东北大学 Towards the feature selecting and Feature fusion of the identification of HCC pathological images
US20180129648A1 (en) * 2016-09-12 2018-05-10 Sriram Chakravarthy Methods and systems of automated assistant implementation and management
CN108038538A (en) * 2017-12-06 2018-05-15 西安电子科技大学 Multi-objective Evolutionary Algorithm based on intensified learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
D.H.KIM ETAL.: "Ahybridgeneticalgorithmandbacterial foragingapproachforglobaloptimization", 《INFORMATIONSCIENCE》 *
Yan Xiaohui et al.: "Mechanical design optimization based on a multi-population bacterial foraging algorithm", 《Modern Manufacturing Engineering》 *
Liang Xiaodan: "Research and application of intelligent optimization algorithms based on foraging behavior", 《China Doctoral Dissertations Full-text Database, Information Science and Technology》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232971A (en) * 2019-05-24 2019-09-13 深圳市翩翩科技有限公司 A kind of doctor's recommended method and device
CN110232971B (en) * 2019-05-24 2022-04-12 深圳市翩翩科技有限公司 Doctor recommendation method and device

Also Published As

Publication number Publication date
CN108830370B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
Zhao et al. An effective multi-objective artificial hummingbird algorithm with dynamic elimination-based crowding distance for solving engineering design problems
Gong et al. A novel hybrid multi-objective artificial bee colony algorithm for blocking lot-streaming flow shop scheduling problems
Das et al. Recent advances in differential evolution–an updated survey
Ergezer et al. Oppositional biogeography-based optimization
Sun et al. Improved monarch butterfly optimization algorithm based on opposition-based learning and random local perturbation
Hammouche et al. A comparative study of various meta-heuristic techniques applied to the multilevel thresholding problem
Tsai et al. ACODF: a novel data clustering approach for data mining in large databases
Mahajan et al. Image segmentation using multilevel thresholding based on type II fuzzy entropy and marine predators algorithm
Wang et al. A self-adaptive weighted differential evolution approach for large-scale feature selection
Li et al. A survey on firefly algorithms
Bonyadi et al. A hybrid particle swarm with a time-adaptive topology for constrained optimization
Sheng et al. Adaptive multisubpopulation competition and multiniche crowding-based memetic algorithm for automatic data clustering
Cai et al. Imbalanced evolving self-organizing learning
Zhang et al. Evolving ensembles using multi-objective genetic programming for imbalanced classification
Zhou et al. Advanced orthogonal learning and Gaussian barebone hunger games for engineering design
Pilát et al. Aggregate meta-models for evolutionary multiobjective and many-objective optimization
Sheng et al. Multilocal search and adaptive niching based memetic algorithm with a consensus criterion for data clustering
Xiong et al. Multi-feature fusion and selection method for an improved particle swarm optimization
Sharifai et al. Multiple filter-based rankers to guide hybrid grasshopper optimization algorithm and simulated annealing for feature selection with high dimensional multi-class imbalanced datasets
Wang et al. Multiple surrogates and offspring-assisted differential evolution for high-dimensional expensive problems
Ma et al. Multi-neighborhood learning for global alignment in biological networks
Wang et al. Efficient utilization on PSSM combining with recurrent neural network for membrane protein types prediction
Kalra et al. A Novel Binary Emperor Penguin Optimizer for Feature Selection Tasks.
Zhen et al. Neighborhood evolutionary sampling with dynamic repulsion for expensive multimodal optimization
CN110069498A (en) High quality mode method for digging based on multi-objective evolutionary algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant