CN115481727A - Intention recognition neural network generation and optimization method based on evolutionary computation - Google Patents


Info

Publication number
CN115481727A
CN115481727A
Authority
CN
China
Prior art keywords
neural network
individual
node
individuals
fitness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211120997.5A
Other languages
Chinese (zh)
Inventor
陈爱国
罗光春
赵太银
付波
沙泽鑫
宣朋羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202211120997.5A priority Critical patent/CN115481727A/en
Publication of CN115481727A publication Critical patent/CN115481727A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/086: Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physiology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intention recognition neural network generation and optimization method based on evolutionary computation, which comprises the following steps: constructing a neural network loss function according to the problem characteristics and training set data, and taking the reciprocal of the loss function as the objective function and fitness function of the evolutionary search; designing an encoding method for the nodes and connections of the neural network, and on that basis designing crossover and mutation evolution operators for the neural network to serve as the main operations of the evolutionary search; over iterations of the evolutionary search, the population fitness gradually increases and the neural network loss function gradually decreases, and once the maximum number of generations is reached, the optimized neural network is output for predicting the complex nonlinear problem. The method is used for generating and optimizing neural networks for complex nonlinear problems and can effectively improve neural network performance.

Description

Intention recognition neural network generation and optimization method based on evolutionary computation
Technical Field
The invention relates to the field of neural network optimization, in particular to an intention recognition neural network generation and optimization method based on evolutionary computation.
Background
Existing neural network generation and optimization methods can be divided into two types: methods using back propagation and gradient descent, and neural network parameter updating methods combined with a genetic algorithm. Both achieve network update and optimization by optimizing the weight parameters in the neural network, but because both only update parameters under a fixed network structure, the resulting neural network has poor prediction performance and is difficult to meet actual use requirements.
Traditional neural network generation and parameter updating adopts the error back propagation method: after the signal propagates forward to the network output layer, the output error is calculated against the actual results in the training set, and then each weight and threshold parameter in the hidden layers is adjusted along the direction of gradient descent of the error, continually reducing the network error; when the minimum error is reached, the corresponding model parameters are the optimal neural network parameters. In practical applications this method often fails to achieve the expected effect, and it has three disadvantages: first, the final performance of the neural network depends on the initialization of the weight parameters; second, the fixed hierarchical structure of the network limits the model's generalization ability across different problems; finally, gradient-descent parameter updating may fall into local convergence when the model is complex.
A neural network parameter optimization method combined with a genetic algorithm uses the encoding and population-optimization ideas of the genetic algorithm: the parameters of the neural network are encoded into decision vectors on which the loss function can be evaluated; a group of different parameter combinations is then obtained by random-strategy initialization as the initial population of the optimization algorithm; a new population is generated through the crossover and mutation operations of the genetic algorithm and sorted and filtered according to fitness, i.e. the loss function value, completing one full round of genetic optimization; finally, when the maximum round is reached, the optimized neural network parameters are obtained, realizing optimal generation of the neural network, which is further used to output results for the intention recognition problem model. Compared with neural network generation based on error back propagation, this method can avoid local convergence, but it still has the following defects: first, because the algorithm encodes only the network parameters, the network structure cannot be optimized, so the network's generalization ability is poor; in addition, the population-based optimization of the genetic algorithm requires a large amount of error computation over parameter combinations, easily making the algorithm's time complexity excessive. The resulting neural network performs only moderately.
Disclosure of Invention
Aiming at the above technical defects in the prior art, the invention provides a neural network generation and optimization method based on evolutionary computation, to solve the technical problems that neural network generation and optimization may face local convergence, low generalization, or dependence on parameter initialization across different complex scenarios, and finally to realize generation and optimization of supervised-learning neural networks for complex problem scenarios.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an intention recognition neural network generation and optimization method based on evolutionary computation comprises the following steps:
step 1, according to a training set of a specific problem scene, providing a loss function representation method of a neural network training model as a fitness function in an evolutionary search algorithm;
the loss function adopts a mean square error loss function, and in order to ensure that a neural network with a smaller loss function has higher fitness, a fitness function value is the reciprocal of the loss function;
in the invention, the oriented intention recognition problem is a nonlinear regression problem in a supervised learning problem, an output result is taken as a label of corresponding data in a training set of known input data and an output result, and the error between an output prediction label of a current neural network and an actual label is gradually reduced in the training or optimizing search process, so that the fitting of a trained function and an actual problem function is realized.
Step 2, neural network coding and initializing:
respectively carrying out multidimensional vector coding operation on the hidden layer node weight parameters of the neural network and the connection between the nodes, so that the neural network can be converted into a linear structured data structure which can be optimally searched by using evolutionary computation, namely individual chromosomes of a population in the evolutionary computation;
and generating a plurality of neural networks with different structures and different weight parameters by using a random strategy, and using the neural networks as initial population input of the evolutionary computation.
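A sketch of step 2's random initialization, under an assumed genome layout (a node-weight list plus a connection list of (source, target, enabled) triples; the concrete layout and names are illustrative, not prescribed by the patent):

```python
import random

def random_genome(n_inputs, n_outputs, max_hidden, weight_range=1.0):
    """Build one randomly structured, randomly weighted encoded network:
    a weight vector over all nodes plus a connection list."""
    n_hidden = random.randint(1, max_hidden)          # random structure
    n_nodes = n_inputs + n_hidden + n_outputs
    weights = [random.uniform(-weight_range, weight_range)
               for _ in range(n_nodes)]
    connections = []
    hidden = range(n_inputs, n_inputs + n_hidden)
    outputs = range(n_inputs + n_hidden, n_nodes)
    for i in range(n_inputs):                         # input -> hidden edges
        for h in hidden:
            connections.append((i, h, True))
    for h in hidden:                                  # hidden -> output edges
        for o in outputs:
            connections.append((h, o, True))
    return {"weights": weights, "connections": connections}

def initial_population(pop_size, n_inputs, n_outputs, max_hidden=8):
    """Several networks with different structures and different weight
    parameters, used as the initial population of the evolutionary search."""
    return [random_genome(n_inputs, n_outputs, max_hidden)
            for _ in range(pop_size)]
```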
Step 3, designing an evolution operator aiming at the coded neural network data structure:
Design crossover operators for the node weights and connection states of the neural network: after randomly selecting two neural network models as parent individuals, exchange part of the parents' genetic information through the crossover operation to generate new individuals, thereby searching the neighborhood of individuals; the crossover operation itself generates no new genetic information.
Design mutation operators for the node weights and connection states of the neural network: after the crossover operation, if a mutation probability α ∈ (0,1) is met, apply the mutation operation to the newly generated neural network model individual, changing part of its genetic information by altering its node weights or connection pattern, so as to generate new neural network model individuals carrying entirely new genetic information.
Design a selection operator of the optimization search process: according to the fitness of the neural network models, control both the selection of individuals for crossover before the crossover and mutation operations and the selection of individuals forming the next generation population after them.
Step 4, evolutionary computing search optimization:
Based on the loss function of the intention recognition problem in step 1, perform optimization search on the neural network model using the evolution operations of step 3; when the set maximum number of iterations is reached, the optimized neural network population and individuals are obtained and used as the prediction classifier for the specific problem scenario.
The fitness function in the optimization search is L = -Σ_k t_k log y_k, where L represents the degree of conformity of the neural network to the problem scenario, t_k is the prediction result label for the kth test data (t_k = 1 when the prediction is correct and t_k = 0 when it is wrong), log is the logarithm with base e (the natural logarithm), and y_k is the objective function value.
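The conformity measure above can be written directly (a sketch; the eps term is an added numerical guard, not part of the patent's formula):

```python
import math

def conformity(t, y, eps=1e-12):
    """L = -sum_k t_k * log(y_k): t_k is 1 for a correct prediction and 0
    for a wrong one, y_k is the objective function value, and log is the
    natural logarithm (base e)."""
    return -sum(t_k * math.log(y_k + eps) for t_k, y_k in zip(t, y))
```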
Step 5, applying the generated neural network to perform prediction and classification on the actual problem, i.e. using the evolutionary-computation-based neural network prediction classifier to produce optimized prediction results for the complex problem.
And analyzing or collecting a data set for training for a given specific problem scene, and calculating a loss function expression as a fitness function for expressing the goodness and badness of the neural network model to the search environment in the evolutionary search.
A coding mode for the neural network is designed based on problem characteristics, and multidimensional vector coding is carried out on the node weight numerical values of the neural network and the connection modes between the nodes respectively, so that the neural network can be directly operated in an evolutionary computation process. And designing an evolution operation operator aiming at the network node weight and the network structure. A plurality of initial neural networks are constructed through a random strategy and used as initial input of the evolutionary search.
Before the number of iterations reaches the maximum number of generations, evolution operations are applied to the neural network model population through the evolution operators to obtain a new generation of neural network individuals; all individuals are then ranked by fitness according to each network's prediction error, and the individuals forming the next generation population are selected. After the last generation of optimization, the optimal neural network is output as the prediction classifier for the current problem.
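The search loop just described can be sketched generically (the operator arguments are placeholders standing in for the patent's crossover, mutation, and fitness routines; the toy usage at the end is purely illustrative):

```python
import random

def evolve(population, fitness_fn, crossover, mutate,
           max_generations, mutation_prob=0.1):
    """Evolve for max_generations rounds: breed offspring by crossover,
    mutate them with probability mutation_prob, then keep the fittest
    individuals (parents and offspring ranked together) as the next
    generation, and finally return the best individual found."""
    pop_size = len(population)
    for _ in range(max_generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(population, 2)
            child = crossover(p1, p2)
            if random.random() < mutation_prob:
                child = mutate(child)
            offspring.append(child)
        # rank by fitness, descending, and select the survivors
        population = sorted(population + offspring,
                            key=fitness_fn, reverse=True)[:pop_size]
    return max(population, key=fitness_fn)

# toy usage: individuals are numbers, fitness peaks at x = 5
random.seed(1)
best = evolve([random.uniform(0, 10) for _ in range(20)],
              fitness_fn=lambda x: -abs(x - 5),
              crossover=lambda a, b: (a + b) / 2,
              mutate=lambda x: x + random.uniform(-1, 1),
              max_generations=50)
```

Because survivors are drawn from parents and offspring together, the best individual never gets worse from one generation to the next.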
Further, in step 1, the problem scenario is mainly a supervised learning problem of prediction classification that can use nonlinear function fitting, and a neural network with a hidden layer is required to be used for model construction.
Further, in step 1, the data in the training set is constructed as a raw data sample and the true tag data of the corresponding data.
Further, in step 2, the specific encoding mode of the neural network is a multidimensional vector/matrix mode, which includes two multidimensional structures: the first is a vector of length n, in which the value w_i of the ith component represents the weight parameter of the ith node, n being the total number of nodes in the neural network; the second is a matrix of m columns, in which each column stores the connection information between nodes, including the node numbers, the current connection state, and so on.
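Under that description, the two encoded structures might look like this in numpy (the concrete numbers and row layout are illustrative only):

```python
import numpy as np

# first structure: a length-n vector, w[i] = weight parameter of node i
w = np.array([0.5, -1.2, 0.8, 0.3])             # n = 4 nodes

# second structure: an m-column matrix, one connection per column:
# row 0 = source node number, row 1 = target node number,
# row 2 = current connection state (1 = enabled, 0 = disabled)
connections = np.array([
    [0, 0, 1, 2],
    [2, 3, 3, 3],
    [1, 1, 0, 1],
])

n = w.shape[0]                                   # total number of nodes
m = connections.shape[1]                         # number of encoded connections
active = connections[:, connections[2] == 1]     # keep only enabled columns
```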
In summary, the invention provides a neural network generation and optimization method, based on the principle of evolutionary computation, for training and prediction on complex nonlinear problems. By encoding the neural network comprehensively, the method optimizes both the weight parameters and the network structure, overcoming the low generalization ability, large redundant computation, and limited prediction performance caused by the fixed network structures of other methods; in particular, the prediction results of existing back-propagation neural networks depend excessively on parameter initialization. By designing the evolution operators in a targeted way, the invention further improves neural network prediction performance on complex problems.
Drawings
FIG. 1 is a schematic diagram of neural network generation and optimization of the present invention;
FIG. 2 is a diagram of the evolutionary search concept of the present invention;
FIG. 3 is a diagram of a model overview of the present invention;
fig. 4 is a schematic diagram of the training process of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
Referring to fig. 1, the present invention provides an intent recognition neural network generation and optimization method based on evolutionary computation, for a given specific complex nonlinear problem, performing corresponding neural network generation and actual prediction through the following steps:
example one
Step S1, identifying the actual characteristics of the complex nonlinear intention recognition problem, and analyzing and providing a loss function between the prediction results and the actual results;
s2, obtaining a coding mode for converting the neural network into a multi-dimensional vector;
s3, utilizing the neural network coding mode obtained in the S2, and using random trial and error initialization to obtain a plurality of neural network models which are used as initial input of evolutionary computing search;
s4, taking the loss function obtained in the step S1 as an optimization target of evolutionary search, and realizing search and optimization of a neural network model population through a cross operator and a mutation operator which are provided in a targeted manner;
and S5, simultaneously using the new and old neural network individuals obtained in the step S4 as input, respectively performing prediction through input of a neural network input layer, and detecting an error between an actual result and an actual result to be used as a fitness index of a corresponding neural network. Sequencing all individuals in the population according to the fitness, and preferentially selecting the neural network individuals as the next generation of population;
and S6, based on a maximum evolution algebra threshold value set in advance, carrying out fitness sequencing on the neural network population obtained in the step S5 after the last generation of optimization, and selecting the neural network with the highest fitness as the final neural network model for output and using the neural network model for final complex problem prediction.
In this embodiment, the main process is the evolutionary search of step S4. Unlike the error gradient descent of a back-propagation network, the evolutionary search expands the search space through algorithmic randomness, avoiding the local convergence of conventional neural network parameter generation, and uses the loss function obtained in step S1 as the optimization target in place of gradient descent. The core operations of the evolutionary search are steps S3 to S5 above.
The evolution search is specifically implemented as follows:
the evolutionary search mainly comprises three parts, namely a crossover operation, a mutation operation and a selection operation, and is divided into the following three steps as shown in fig. 2:
step 401, crossover operation for individual neural networks, an example is shown in fig. 3. Before the crossover operation is carried out, the fitness of the individual coded by each neural network is firstly calculated, namely the error degree generated by prediction by using the neural network. By roulette algorithm, according to
Figure BDA0003846627840000051
The probability of whether to participate in the crossover operation is selected. Wherein f is i′ (x) The fitness value of the ith' individual is ensured, and meanwhile, the individual more adaptive to the problem environment is provided with a higher chance to participate in the generation of a new individual while each individual is ensured to have an opportunity to participate in the cross operation. After two parent individuals participating in cross operation are randomly selected, connecting mode chromosomes of two neural networks are aligned according to the same adjacent nodes, and then new individuals of filial generations are selected from aligned chromosome sites of the two parent individuals to form a new connecting mode chromosome. Secondly, a weight parameter chromosome of an offspring individual is constructed, when a certain node exists in both the two parent individuals, the weight of the node in the offspring individual is any one of the weight parameters in the two parent individuals, and when a certain node of the offspring neural network only exists in one parent, the weight information of the node is directly inherited. And finally obtaining a child neural network generated by crossing two parent neural networks, and adding the child neural network into a population formed by the neural network coding individuals.
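A sketch of step 401 under an assumed dict-based genome (weights keyed by node number, connections keyed by (source, target) pairs; this layout and the function names are illustrative, not from the patent):

```python
import random

def roulette_pick(population, fitnesses):
    """Select one parent with probability f_i' / sum_j' f_j' (roulette
    wheel): every individual keeps a nonzero chance, fitter ones a
    proportionally larger one."""
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return individual
    return population[-1]

def crossover(pa, pb):
    """Genomes are dicts: weights[node] -> float, conns[(src, dst)] -> enabled.
    Aligned connection genes are chosen at random from either parent; shared
    nodes take either parent's weight, unique nodes inherit directly."""
    child_conns = {}
    for key in set(pa["conns"]) | set(pb["conns"]):
        if key in pa["conns"] and key in pb["conns"]:
            child_conns[key] = random.choice([pa["conns"][key], pb["conns"][key]])
        else:
            child_conns[key] = pa["conns"].get(key, pb["conns"].get(key))
    child_weights = {}
    for node in set(pa["weights"]) | set(pb["weights"]):
        if node in pa["weights"] and node in pb["weights"]:
            child_weights[node] = random.choice(
                [pa["weights"][node], pb["weights"][node]])
        else:
            child_weights[node] = pa["weights"].get(node, pb["weights"].get(node))
    return {"weights": child_weights, "conns": child_conns}
```

Keying connections by (source, target) makes the "align by shared adjacent nodes" step implicit: matching keys in the two parents are exactly the aligned chromosome sites.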
Step 402, mutation operation for individual neural networks, an example of which is shown in fig. 4. The new offspring individuals obtained in step 401 undergo the mutation operation with mutation probability α (α ∈ (0,1)), generating node and connection information that has not appeared so far in the evolutionary search; rand in fig. 2 denotes a random number generated in the range 0 to 1. Chromosome mutation methods are designed separately for the node information and the connection patterns in the neural network: first, for node information mutation, when the mutation probability is met, a node may be added or deleted, with the corresponding connection information modified at the same time; then, for the mutation operator on connections between nodes, when the mutation probability is met, attributes of a connection edge such as its endpoints may be modified, or a connection may be added or deleted. The mutated individuals are added to the neural network population.
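Step 402 might be sketched as follows, reusing the same assumed genome layout as the crossover sketch (the even split between node mutation and connection mutation is an illustrative choice, not specified by the patent):

```python
import random

def mutate(genome, alpha=0.2, new_node_id=None):
    """With probability alpha, either mutate node information (add a node
    and fix up its connection info) or mutate a connection (flip its
    state). Genome layout: weights[node] -> float, conns[(src, dst)] -> bool."""
    if random.random() >= alpha:
        return genome                          # no mutation this round
    if random.random() < 0.5 and genome["weights"]:
        # node mutation: add a new node attached to a random existing one,
        # modifying the corresponding connection information as well
        node = (new_node_id if new_node_id is not None
                else max(genome["weights"]) + 1)
        anchor = random.choice(list(genome["weights"]))
        genome["weights"][node] = random.uniform(-1.0, 1.0)
        genome["conns"][(anchor, node)] = True
    elif genome["conns"]:
        # connection mutation: flip the state of a random connection edge
        key = random.choice(list(genome["conns"]))
        genome["conns"][key] = not genome["conns"][key]
    return genome
```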
Step 403, for the neural network population obtained in step 402, the selection operation is executed according to the population size N, selecting N individuals to form the next generation population of the current iteration round. After the crossover and mutation operations, each neural network is used in turn to predict on the training data and its fitness function value is calculated; the fitness values are then sorted in descending order, and the top N neural networks become the individuals of the new population.
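Step 403's survivor selection reduces to a descending sort and truncation (a minimal sketch; names are illustrative):

```python
def select_next_generation(population, fitness_fn, pop_size):
    """After crossover and mutation, rank every neural network individual
    by its fitness function value in descending order and keep the top
    pop_size individuals as the next generation population."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    return ranked[:pop_size]
```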
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (3)

1. An intention recognition neural network generation and optimization method based on evolutionary computation is characterized by comprising the following steps of:
step 1: according to a training set of the intention recognition problem scene, a loss function representation method of a neural network training model aiming at the intention recognition problem scene is provided and used as a fitness function in an evolution search algorithm; the loss function adopts a mean square error loss function, and in order to ensure that a neural network with a smaller loss function has higher fitness, a fitness function value is the reciprocal of the loss function;
step 2: neural network coding and initializing:
respectively carrying out multidimensional vector coding operation on the hidden layer node weight parameters of the neural network and the connection between the nodes, so that the neural network can be converted into a linear structured data structure which can be optimally searched by using evolutionary computation, namely individual chromosomes of a population in the evolutionary computation; generating a plurality of neural networks with different structures and different weight parameters by using a random strategy, and using the neural networks as initial population input of evolutionary computation;
and 3, step 3: designing an evolution operator aiming at the coded neural network data structure:
designing crossover operators for the node weights and connection states of the neural network: after randomly selecting two neural network models as parent individuals, exchanging part of the parents' genetic information through the crossover operation to generate new individuals, thereby searching the neighborhood of individuals, the crossover operation itself generating no new genetic information; designing mutation operators for the node weights and connection states of the neural network: after the crossover operation, if a mutation probability α ∈ (0,1) is met, applying the mutation operation to the newly generated neural network model individual, changing part of its genetic information by altering its node weights or connection pattern, so as to generate new neural network model individuals carrying entirely new genetic information; designing a selection operator of the optimization search process: according to the fitness of the neural network models, controlling both the selection of individuals for crossover before the crossover and mutation operations and the selection of individuals forming the next generation population after them;
and 4, step 4: evolutionary computing search optimization:
based on the loss function of the neural network training model for the intention recognition problem scenario in step 1, carrying out optimization search on the neural network by using the evolution operations in step 3, and when the set maximum number of iterations is reached, obtaining an optimized neural network population and individuals as a prediction classifier for the intention recognition problem scenario; wherein the fitness function in the optimization search is L = -Σ_k t_k log y_k, where L represents the degree of conformity of the neural network to the intention recognition problem scenario, t_k is the prediction result label for the kth test data (t_k = 1 when the prediction result is correct and t_k = 0 when it is wrong), log is the logarithm with base e (the natural logarithm), and y_k is the objective function value;
and 5: and performing predictive classification of the actual problem based on the predictive classifier for the purpose recognition problem scene.
2. The method for generating and optimizing an evolutionary-computation-based intention recognition neural network as claimed in claim 1, wherein the specific encoding mode of the neural network adopted in step 2 is a multidimensional vector/matrix mode, comprising two multidimensional structures: the first is a vector of length n, in which the value w_i of the ith component represents the weight parameter of the ith node, n being the total number of nodes in the neural network; the second is a matrix of m columns, in which each column stores the connection information between nodes, including the node numbers and the current connection state.
3. The method for generating and optimizing an evolutionary-computing-based intent-recognition neural network of claim 2, wherein the step 3 specifically comprises:
step 301: crossover operation for individual neural networks: before the crossover operation, the fitness of each encoded neural network individual is calculated, namely the degree of error produced when predicting with that network, and by the roulette wheel algorithm each individual is selected to participate in the crossover operation with probability

p_i' = f_i'(x) / Σ_{j'=1}^{n'} f_j'(x)

where f_i'(x) is the fitness value of the i'th neural network individual and n' is the total number of neural network individuals, guaranteeing that each individual has a probability of participating in the crossover operation while individuals better adapted to the problem environment have a higher chance of participating in the generation of new individuals; after two parent individuals participating in the crossover operation are randomly selected, the connection-pattern chromosomes of the two neural networks are aligned according to shared adjacent nodes, and the offspring's new connection-pattern chromosome is formed by choosing, at each aligned chromosome site, a gene from one of the two parents; next the weight-parameter chromosome of the offspring individual is constructed: when a node exists in both parent individuals, the offspring's weight for that node is either of the two parents' weight parameters, and when a node of the offspring network exists in only one parent, that node's weight information is inherited directly; finally the child neural network generated by crossing the two parent networks is obtained and added to the population of encoded neural network individuals;
step 302: mutation operation for individual neural networks: the offspring neural networks obtained in step 301 undergo the mutation operation with mutation probability α ∈ (0,1), generating node and connection information that has not appeared so far in the evolutionary search, with chromosome mutation methods designed separately for the node information and the connection patterns in the neural network: first, for node information mutation, when the mutation probability is met, a node is added or deleted and the corresponding connection information is modified at the same time; then, for the mutation operator on connections between nodes, when the mutation probability is met, attributes such as the endpoints of a connection edge are modified, or a connection is added or deleted, and the mutated individuals are added to the neural network population;
step 303: and (2) aiming at the neural network population obtained in the step (302), according to the population size N, executing selection operation, selecting N individuals to form a next generation population of the current iteration round, after crossing and mutation operation, sequentially using each neural network to predict by using training set data, calculating a fitness function value, and then performing descending ordering on the fitness, wherein the first N sorted neural networks are individuals of a new population.
CN202211120997.5A 2022-09-15 2022-09-15 Intention recognition neural network generation and optimization method based on evolutionary computation Pending CN115481727A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211120997.5A CN115481727A (en) 2022-09-15 2022-09-15 Intention recognition neural network generation and optimization method based on evolutionary computation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211120997.5A CN115481727A (en) 2022-09-15 2022-09-15 Intention recognition neural network generation and optimization method based on evolutionary computation

Publications (1)

Publication Number Publication Date
CN115481727A true CN115481727A (en) 2022-12-16

Family

ID=84392397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211120997.5A Pending CN115481727A (en) 2022-09-15 2022-09-15 Intention recognition neural network generation and optimization method based on evolutionary computation

Country Status (1)

Country Link
CN (1) CN115481727A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117194740A (en) * 2023-11-08 2023-12-08 武汉大学 Geographic information retrieval intention updating method and system based on guided iterative feedback
CN117194740B (en) * 2023-11-08 2024-01-30 武汉大学 Geographic information retrieval intention updating method and system based on guided iterative feedback
CN117668701A (en) * 2024-01-30 2024-03-08 云南迅盛科技有限公司 AI artificial intelligence machine learning system and method
CN117668701B (en) * 2024-01-30 2024-04-12 云南迅盛科技有限公司 AI artificial intelligence machine learning system and method

Similar Documents

Publication Publication Date Title
Lobato et al. Multi-objective genetic algorithm for missing data imputation
CN115481727A (en) Intention recognition neural network generation and optimization method based on evolutionary computation
US20030055614A1 (en) Method for optimizing a solution set
CN111861013B (en) Power load prediction method and device
Lin et al. A self-adaptive neural fuzzy network with group-based symbiotic evolution and its prediction applications
WO2022252455A1 (en) Methods and systems for training graph neural network using supervised contrastive learning
JP2007200302A (en) Combining model-based and genetics-based offspring generation for multi-objective optimization using convergence criterion
CN114373101A (en) Image classification method for neural network architecture search based on evolution strategy
CN114328048A (en) Disk fault prediction method and device
CN112734051A (en) Evolutionary ensemble learning method for classification problem
Bai et al. A joint multiobjective optimization of feature selection and classifier design for high-dimensional data classification
CN113887694A (en) Click rate estimation model based on characteristic representation under attention mechanism
Zhang et al. Embedding multi-attribute decision making into evolutionary optimization to solve the many-objective combinatorial optimization problems
CN116611504A (en) Neural architecture searching method based on evolution
CN115620046A (en) Multi-target neural architecture searching method based on semi-supervised performance predictor
Chen et al. On balancing neighborhood and global replacement strategies in MOEA/D
CN115661546A (en) Multi-objective optimization classification method based on feature selection and classifier joint design
Hu et al. Apenas: An asynchronous parallel evolution based multi-objective neural architecture search
Naldi et al. Genetic clustering for data mining
Jarraya et al. Evolutionary multi-objective optimization for evolving hierarchical fuzzy system
Li et al. A multi-granularity genetic algorithm
CN117591675B (en) Node classification prediction method, system and storage medium for academic citation network
Sherstnev et al. Comparative analysis of algorithms for optimizing the weight coefficients of a neural network when adjusting its structure applying genetic programming method
Feng et al. Enhanced hierarchical fuzzy model using evolutionary GA with modified ABC algorithm for classification problem
Chen et al. Evolving MIMO Flexible Neural Trees for Nonlinear System Identification.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination