CN113971367A - Automatic design method of convolutional neural network framework based on shuffled frog-leaping algorithm - Google Patents
- Publication number
- CN113971367A CN113971367A CN202110993267.5A CN202110993267A CN113971367A CN 113971367 A CN113971367 A CN 113971367A CN 202110993267 A CN202110993267 A CN 202110993267A CN 113971367 A CN113971367 A CN 113971367A
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- network framework
- individual
- frog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/04—Constraint-based CAD
Abstract
A method for automatically designing a convolutional neural network framework based on the shuffled frog-leaping algorithm comprises the following steps: the framework design problem is modeled as a constrained combinatorial optimization problem so that the designed network satisfies a target parameter-count requirement. Hyper-parameters such as the output size of each convolutional layer, the number of fully-connected layers, and the number of neurons in each layer are encoded to represent a convolutional neural network framework; the shuffled frog-leaping algorithm then searches for the best framework, using classification accuracy on the MNIST data set as the fitness evaluation value, to obtain the optimal convolutional neural network framework. The method is fully automatic and efficient, and is very friendly to inexperienced research beginners.
Description
Technical Field
The invention relates to a technology for automatically designing convolutional neural network frameworks, which helps researchers with limited experience to design high-performance neural network frameworks fully autonomously.
Background
1. Convolutional neural network framework design problem
In recent years, Convolutional Neural Networks (CNNs) have achieved remarkable success in image classification, object detection, natural language processing, and other fields thanks to their excellent performance. A CNN architecture has a complex structure and numerous interacting hyper-parameters: a network that is too small or too shallow tends to under-fit, while one that is too large or too deep tends to over-fit. A well-performing CNN model therefore depends heavily on the configuration of its architecture.
At present, the common approach to the framework-design problem is for practitioners to adjust the network manually, drawing on extensive experience, until a high-performance framework is obtained. Manual design, however, requires repeated trial and error, which is time-consuming, and it presupposes research experience: for inexperienced practitioners it is very difficult to obtain a high-performance network this way. As industries everywhere move from traditional manual operation toward automated, information-driven, and intelligent production, CNN framework design faces the new challenge of fully automatic design.
2. Convolutional neural network framework
A convolutional neural network consists of an input layer, hidden layers, and an output layer; the hidden layers mainly comprise convolutional layers, pooling layers, and fully-connected layers. The convolutional layer is the core of the network: its parameters form a set of learnable filters, and by applying convolution kernels to data such as images it extracts features, with deeper stacks of convolutional layers extracting more abstract features. The pooling layer, inserted between convolutional layers and divided into max pooling and average pooling, compresses the amount of data and parameters by down-sampling the input while retaining the important information. The fully-connected layers form the final part of the network; their main task is to classify the features extracted by the convolutional layers.
3. Shuffled frog leaping algorithm
The Shuffled Frog-Leaping Algorithm (SFLA) is a heuristic intelligent evolutionary algorithm first proposed by Eusuff and Lansey, inspired by the foraging behavior of frogs. SFLA performs local search within subgroups of frogs (memeplexes) in the manner of a memetic algorithm, and uses a shuffling strategy to exchange information between the local searches. It thus combines the advantages of memetic algorithms and particle swarm optimization, exchanging information in both local and global search and coupling the two modes of information exchange effectively. Generally speaking, SFLA has strong search ability, is easy to implement, and can be used to solve many nonlinear, non-differentiable, and multimodal problems.
The flow of the SFLA is shown in fig. 1; its mathematical description is as follows:
Randomly generate an initial population Ω = {x_1, x_2, ..., x_N} of size N; let d be the dimension of the problem to be solved, so that each individual is x_i = [x_i1, x_i2, ..., x_id];
Group the population into memeplexes. Assuming the population is divided into m memeplexes with n individuals each, so that N = m × n, the grouping proceeds as follows: all individuals are sorted by fitness value, the first frog is placed in the first memeplex, the second frog in the second memeplex, the m-th frog in the m-th memeplex, the (m+1)-th frog again in the first memeplex, and so on until all frogs are assigned to a memeplex.
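The grouping rule above can be sketched in a few lines of Python (an illustration only; the fitness values and function are made up):

```python
def partition_into_memeplexes(population, fitness, m):
    """Sort frogs by fitness (best first) and deal them round-robin into m
    memeplexes: frog 1 -> memeplex 1, frog 2 -> memeplex 2, ...,
    frog m+1 -> memeplex 1, and so on."""
    ranked = sorted(population, key=fitness, reverse=True)
    memeplexes = [[] for _ in range(m)]
    for i, frog in enumerate(ranked):
        memeplexes[i % m].append(frog)
    return memeplexes

# Example: six frogs whose fitness is the value itself, split into m = 2 groups
pop = [0.91, 0.85, 0.97, 0.78, 0.88, 0.93]
groups = partition_into_memeplexes(pop, lambda f: f, 2)
# memeplex 1 receives ranks 1, 3, 5; memeplex 2 receives ranks 2, 4, 6
```

Note that every memeplex receives a mix of good and bad frogs, which is what makes the later within-memeplex search meaningful.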
Update the population. In each memeplex, the individuals with the best and worst fitness values are denoted x_best and x_worst respectively, and the individual with the globally best fitness value is denoted x_great. Within each memeplex, the worst individual is updated toward the memeplex-best individual according to:

x_new = x_worst + rand() × (x_best − x_worst)

where rand() is a random number between 0 and 1.
In this position adjustment, if the above process produces a better position for the worst individual in the memeplex, x_new replaces x_worst. Otherwise a global update is performed, moving the worst individual toward the globally best individual, i.e. x_great takes the place of x_best in the update rule:

x_new = x_worst + rand() × (x_great − x_worst)
If the updated individual still fails to satisfy the condition, a new individual is generated at random, following the same procedure used to create the initial individuals.
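The three-stage update (jump toward the memeplex best, then toward the global best, then random regeneration) can be sketched as follows. This is shown on a continuous toy encoding for clarity, not the discrete encoding of the invention, and `random_frog` is a hypothetical regeneration helper:

```python
import random

def update_worst(x_worst, x_best, x_great, fitness, random_frog):
    """One SFLA position-adjustment step for the worst frog in a memeplex:
    try x_new = x_worst + rand() * (leader - x_worst) with the memeplex-best
    leader, then the global-best leader, then fall back to a random frog."""
    f_old = fitness(x_worst)
    for leader in (x_best, x_great):
        r = random.random()
        x_new = [xw + r * (xl - xw) for xw, xl in zip(x_worst, leader)]
        if fitness(x_new) > f_old:      # better position found: accept it
            return x_new
    return random_frog()                # both jumps failed: regenerate

# Toy fitness: positions nearer the origin are better (higher value)
fit = lambda x: -sum(v * v for v in x)
new = update_worst([4.0, 4.0], [1.0, 1.0], [0.0, 0.0],
                   fit, lambda: [0.0, 0.0])
```

For the discrete variable-length encoding of the invention, the arithmetic jump would have to be replaced by a rounding or recombination operator; the control flow stays the same.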
Disclosure of Invention
The technical problem solved by the invention is as follows: existing convolutional neural network framework design relies entirely on experienced experts performing traditional, repeated manual trial and error, making it costly and inefficient. The invention provides a simple and easy-to-use automatic design method for convolutional neural network frameworks that is very friendly to inexperienced junior researchers and yields network architectures meeting the performance requirements.
The technical scheme adopted by the invention is as follows: a convolutional neural network framework automatic design method based on a shuffled frog-leaping algorithm comprises the following steps:
1. modeling the automatic design method of the convolutional neural network framework into a constrained combinatorial optimization problem, wherein the constraint is the parameter number of the model;
2. designing a discrete variable length coding mode to represent a convolutional neural network framework to obtain a mathematical representation of the convolutional neural network framework;
3. designing the convolutional neural network framework with the shuffled frog-leaping algorithm: the image classification accuracy on the MNIST data set serves as the fitness evaluation value, continually improving classification accuracy is the optimization target, and repeated iterative optimization with the shuffled frog-leaping algorithm yields the convolutional neural network framework with the best performance;
4. fully training the optimal convolutional neural network framework again;
Drawings
Fig. 1 is a flow chart of the shuffled frog-leaping algorithm of the present invention.
Fig. 2 is a schematic diagram of an algorithm encoding method.
FIG. 3 is a flow chart of the convolutional neural network framework automatic design based on the shuffled frog-leaping algorithm of the present invention.
Detailed Description
The following describes in detail a convolutional neural network framework automatic design method based on the shuffled frog-leaping algorithm with reference to examples and drawings.
As shown in fig. 3, the automatic design method of the convolutional neural network framework based on the shuffled frog-leaping algorithm of the present invention includes the following steps:
modeling the automatic design method of the convolutional neural network framework into a constrained combinatorial optimization problem, wherein the constraint is the parameter number of the model;
1. Let λ be the parameter count of the convolutional neural network and Ψ its constraint interval, and define the framework to be evaluated as a vector x = {x_1, x_2, ..., x_d}, where each x_i is a hyper-parameter determining the convolutional neural network framework, d is the number of hyper-parameters, and each hyper-parameter x_i has constraint range Φ_i. Under the parameter-count constraint, the automatic framework-design task is modeled as the combinatorial optimization problem

max f(Γ(x))  subject to  λ(Γ(x)) ∈ Ψ  and  x_i ∈ Φ_i, i = 1, ..., d

where Γ(·) denotes the complete convolutional neural network framework built from x and f is the fitness; the objective of the method is to obtain the framework with the best evaluation.
2. A discrete variable-length encoding is designed to represent the convolutional neural network framework, giving a mathematical representation of the framework, as shown in fig. 2. The steps are as follows:
(1) First define a block containing m convolutional layers followed by one max-pooling layer, denoted θ = {ε_1, ε_2, ..., ε_m, γ}, where ε_i is the output size of the i-th convolutional layer and γ is the max-pooling layer connected at the end;
(2) Then stack the blocks: nb blocks are stacked in sequence, giving {θ_1, θ_2, ..., θ_nb};
(3) Define the number of fully-connected layers as nf and the number of neurons in each fully-connected layer as η; the stacked fully-connected layers are represented as {η_1, η_2, ..., η_nf};
(4) Connect the stacked blocks and the fully-connected layers to form the convolutional neural network framework, expressed as x = {θ_1, θ_2, ..., θ_nb, η_1, η_2, ..., η_nf}.
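One way to estimate the parameter count λ of an encoded framework, which is needed for the constraint check, is sketched below. The 3×3 kernel size, 2×2 pooling, and 28×28×1 MNIST input are illustrative assumptions not fixed by the encoding itself:

```python
def count_parameters(blocks, fc_sizes, in_ch=1, in_hw=28, k=3, n_classes=10):
    """Estimate the parameter count of an encoded framework.
    blocks:   list of blocks, each a list of conv output channel counts
              (the trailing max-pool of each block has no parameters);
    fc_sizes: neurons per fully-connected layer (the eta values)."""
    params, hw = 0, in_hw
    for block in blocks:
        for out_ch in block:
            params += (k * k * in_ch + 1) * out_ch   # weights + biases
            in_ch = out_ch
        hw //= 2                                     # 2x2 max-pool halves H and W
    features = in_ch * hw * hw                       # flatten for the FC part
    for n in fc_sizes + [n_classes]:                 # FC layers, then output layer
        params += (features + 1) * n
        features = n
    return params

# A candidate frog: two blocks (2 convs, then 1 conv) and one FC layer of 64
lam = count_parameters(blocks=[[8, 16], [32]], fc_sizes=[64])
```

A count like this lets the constraint λ ∈ Ψ be checked without building or training the network.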
3. The shuffled frog-leaping algorithm is used to design the convolutional neural network framework: the image classification accuracy on the MNIST data set is the fitness evaluation value, continually improving classification accuracy is the optimization target, and repeated iterative optimization yields the framework with the best performance. The method comprises the following steps:
(1) Set the initial parameters of the shuffled frog-leaping algorithm: the frog population size N, the number of memeplexes, the maximum number of iterations, an iteration counter iter initialized to 0, the range of the number of blocks in the encoding, the output range of the convolutional layers in each block, the range of the number of fully-connected layers, the range of the number of neurons per layer, and the parameter-count constraint range of the model;
(2) Initialize the frog population with the set parameters, encoding each individual in the discrete variable-length encoding. Whenever a frog individual is generated, first check whether the parameter count of its corresponding convolutional neural network satisfies the constraint range; if so, the individual is valid, otherwise regenerate a new individual, until N individuals satisfying the parameter-count constraint have been produced to form the frog population;
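The generate-and-filter initialization of step (2) can be sketched as follows; the hyper-parameter ranges and the constraint predicate shown here are hypothetical placeholders:

```python
import random

def random_individual(rng):
    """Sample one frog under assumed, illustrative ranges: 1-4 blocks,
    1-5 convs per block, channel counts from a small menu, 1-10 FC layers."""
    blocks = [[rng.choice([8, 16, 32, 64]) for _ in range(rng.randint(1, 5))]
              for _ in range(rng.randint(1, 4))]
    fc = [rng.randint(32, 1024) for _ in range(rng.randint(1, 10))]
    return {"blocks": blocks, "fc": fc}

def init_population(n, satisfies_constraint, rng=None):
    """Keep regenerating frogs until n of them pass the constraint check
    (in the invention, a check that the parameter count lies in Psi)."""
    rng = rng or random.Random(0)
    pop = []
    while len(pop) < n:
        frog = random_individual(rng)
        if satisfies_constraint(frog):
            pop.append(frog)
    return pop

# A structural stand-in for the parameter-count predicate:
pop = init_population(5, lambda f: len(f["fc"]) <= 10)
```

Rejection sampling like this keeps the encoding simple at the cost of some wasted draws when the constraint interval is tight.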
(3) carrying out fitness evaluation on the generated population;
(a) To reduce training time, only a randomly selected θ% of the data set is used for training and testing during fitness evaluation, i.e. α × θ% training pictures and β × θ% test pictures;
(b) Set the individual index n = 0 and decode each individual according to the encoding to obtain its corresponding convolutional neural network framework;
(c) Train the decoded convolutional neural network framework on the training pictures, then test it on the test pictures; the resulting test accuracy is the fitness evaluation value;
(d) Increment n by one. If n is less than the population size, go to step (b); otherwise output the fitness values of all individuals in the population.
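Steps (a)–(d) amount to the following loop; the `train_and_test` callback, standing in for decoding, training, and testing one frog, is a hypothetical placeholder:

```python
import random

def evaluate_population(population, train_set, test_set, theta=0.10,
                        train_and_test=None, rng=None):
    """Fitness evaluation as in steps (a)-(d): sample a theta fraction of each
    split once, then score every frog on the same subsets.
    train_and_test: hypothetical callback (frog, train, test) -> accuracy."""
    rng = rng or random.Random(0)
    sub_train = rng.sample(train_set, int(len(train_set) * theta))
    sub_test = rng.sample(test_set, int(len(test_set) * theta))
    return [train_and_test(frog, sub_train, sub_test) for frog in population]

# With a stand-in evaluator, the loop just threads the subsets through:
accs = evaluate_population([{"id": 0}, {"id": 1}],
                           train_set=list(range(100)),
                           test_set=list(range(50)),
                           train_and_test=lambda frog, tr, te: 0.5)
```

Scoring every frog on the same fixed subsets keeps the fitness comparison between individuals fair within one evaluation round.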
(4) Sort the whole frog population in descending order of fitness value, group it by the memeplex grouping method, and determine the best individual in each memeplex, the worst individual in each memeplex, and the globally best individual in the whole population;
(5) Update the individuals: the worst individual in each memeplex is updated using the information of the memeplex-best individual, and if the updated individual's fitness value is better than before, the update is complete; otherwise the worst individual is updated using the information of the globally best individual, and if the result is better than before, the update is complete; if the update still fails, the worst individual in the memeplex is replaced by a randomly generated individual;
(6) Increment the iteration counter iter by one;
(7) Shuffle all individuals. If iter is less than the maximum number of iterations, go to step (4); otherwise output the globally best individual, which yields the optimal convolutional neural network framework;
4. fully training the optimal convolutional neural network framework again;
(1) Set the number of iterations required for the full training process to iiter and use all pictures in the MNIST data set, i.e. α training pictures and β test pictures;
(2) Train the optimal convolutional neural network framework on the training data and test it on the test data; the final accuracy obtained is the accuracy for practical application.
The optimal parameter settings are given below:
(1) Among the initialization parameters of the shuffled frog-leaping algorithm, the optimal frog population size is 20, the optimal number of frogs per memeplex is 5, and the optimal maximum number of iterations is 50.
(2) Among the encoding parameters, the optimal range for the number of blocks is [1, 4], the optimal range for the convolution output in each block is [1, 5], the optimal range for the number of fully-connected layers is [1, 10], and the optimal range for the number of neurons per fully-connected layer is [32, 1024].
(3) For the data set used during optimization, the training set contains 60000 pictures, the test set contains 10000 pictures, and the optimal random sampling proportion is 10%.
(4) The optimal parameter for the number of iterations required to fully train the optimal convolutional neural network framework is set to 200.
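Collected into one place, the settings above might look like the following configuration record (the key names are illustrative; the patent specifies only the values):

```python
SFLA_CONFIG = {
    "population_size": 20,       # N: number of frogs
    "frogs_per_memeplex": 5,     # so m = 20 // 5 = 4 memeplexes
    "max_iterations": 50,
    "blocks_range": (1, 4),
    "convs_per_block_range": (1, 5),
    "fc_layers_range": (1, 10),
    "fc_neurons_range": (32, 1024),
    "train_images": 60000,       # full MNIST training split
    "test_images": 10000,
    "subset_fraction": 0.10,     # theta% used during fitness evaluation
    "final_training_epochs": 200,
}

# The population must divide evenly into memeplexes for the grouping step
assert SFLA_CONFIG["population_size"] % SFLA_CONFIG["frogs_per_memeplex"] == 0
```

Keeping all tunables in one record makes it easy to rerun the search under a different parameter-count budget.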
Claims (4)
1. A convolutional neural network framework automatic design method based on a shuffled frog-leaping algorithm is characterized by comprising the following steps:
1) modeling the automatic design method of the convolutional neural network framework into a constrained combinatorial optimization problem, wherein the constraint is the parameter number of the model;
2) designing a discrete variable length coding mode to represent a convolutional neural network framework to obtain a mathematical representation of the convolutional neural network framework;
3) designing a convolutional neural network frame by using a shuffled frog-leaping algorithm, taking the image classification precision on an MNIST data set as a fitness evaluation value, taking the continuously improved classification precision as an optimization target, and continuously iterating and optimizing by using the shuffled frog-leaping algorithm to obtain the convolutional neural network frame with optimal performance;
4) the optimal convolutional neural network framework is fully trained again.
2. The automatic design method of the convolutional neural network framework based on the shuffled frog-leaping algorithm as claimed in claim 1, wherein the step 1) comprises:
(1) modeling the automatic design method of the convolutional neural network framework as a combined optimization problem under the constraint of model parameters.
3. The automatic design method of the convolutional neural network framework based on the shuffled frog-leaping algorithm as claimed in claim 1, wherein the step 2) comprises:
(1) defining a block, wherein the block comprises m convolutional layers and a maximum pooling layer, and the pooling layer is connected to the back of the convolutional layers to form the block;
(2) stacking blocks, namely stacking and connecting a plurality of blocks, wherein the number of the blocks is nb;
(3) defining the number of full connection layers as nf, wherein the number of neurons in each full connection layer is eta, and connecting nf full connection layers;
(4) and finally, connecting the stacked blocks and the full connection layer again to form a convolutional neural network framework.
4. The automatic design method of the convolutional neural network framework based on the shuffled frog-leaping algorithm as claimed in claim 1, wherein the step 3) comprises:
(1) Setting the initial parameters of the shuffled frog-leaping algorithm: the frog population size N, the number of memeplexes, the maximum number of iterations, an iteration counter iter initialized to 0, the range of the number of blocks in the encoding, the output range of the convolutional layers in each block, the range of the number of fully-connected layers, the range of the number of neurons per layer, and the parameter-count constraint range of the model;
(2) Initializing the frog population with these parameters, encoding each individual in the discrete variable-length encoding; whenever a frog individual is generated, first judging whether the parameter count of its corresponding convolutional neural network satisfies the constraint range; if so, the individual is valid, otherwise regenerating a new individual, until N individuals satisfying the parameter-count constraint have been produced to form the frog population;
(3) carrying out fitness evaluation on the generated population;
(a) To reduce training time, only a randomly selected θ% of the data set is used for training and testing during fitness evaluation, i.e. α × θ% training pictures and β × θ% test pictures;
(b) Setting the individual index n = 0 and decoding each individual according to the encoding to obtain its corresponding convolutional neural network framework;
(c) Training the decoded convolutional neural network framework on the training pictures, then testing it on the test pictures; the resulting test accuracy is the fitness evaluation value;
(d) Incrementing n by one; if n is less than the population size, going to step (b), otherwise outputting the fitness values of all individuals in the population;
(4) Sorting the whole frog population in descending order of fitness value, grouping it by the memeplex grouping method, and determining the best individual in each memeplex, the worst individual in each memeplex, and the globally best individual in the whole population;
(5) Updating the individuals: the worst individual in each memeplex is updated using the information of the memeplex-best individual, and if the updated individual's fitness value is better than before, the update is complete; otherwise the worst individual is updated using the information of the globally best individual, and if the result is better than before, the update is complete; if the update still fails, the worst individual in the memeplex is replaced by a randomly generated individual;
(6) incrementing the iteration counter iter by one;
(7) Shuffling all individuals; if iter is less than the maximum number of iterations, going to step (4), otherwise outputting the globally best individual to obtain the optimal convolutional neural network framework;
(8) Setting the number of iterations required for the full training process to iiter and using all pictures in the MNIST data set, i.e. α training pictures and β test pictures; training the optimal convolutional neural network framework on the training data and testing it on the test data; the final accuracy obtained is the accuracy for practical application.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110993267.5A CN113971367B (en) | 2021-08-27 | 2021-08-27 | Automatic convolutional neural network framework design method based on shuffled frog-leaping algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110993267.5A CN113971367B (en) | 2021-08-27 | 2021-08-27 | Automatic convolutional neural network framework design method based on shuffled frog-leaping algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113971367A true CN113971367A (en) | 2022-01-25 |
CN113971367B CN113971367B (en) | 2024-07-12 |
Family
ID=79586373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110993267.5A Active CN113971367B (en) | 2021-08-27 | 2021-08-27 | Automatic convolutional neural network framework design method based on shuffled frog-leaping algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113971367B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114819151A (en) * | 2022-06-23 | 2022-07-29 | 天津大学 | Biochemical path planning method based on improved agent-assisted shuffled frog leaping algorithm |
CN115801451A (en) * | 2023-01-29 | 2023-03-14 | 河北因朵科技有限公司 | Dual-network isolation method for file protection cloud perception management platform |
CN118153633A (en) * | 2023-07-14 | 2024-06-07 | 天津大学 | Improved CNN architecture optimization design method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106849814A (en) * | 2017-03-27 | 2017-06-13 | 无锡开放大学 | Leapfroged Fuzzy Neural PID linear synchronous generator control method based on fruit bat |
CN110929775A (en) * | 2019-11-18 | 2020-03-27 | 南通大学 | Convolutional neural network weight optimization method for retinopathy classification |
CN111612096A (en) * | 2020-06-01 | 2020-09-01 | 南通大学 | Large-scale fundus image classification system training method based on Spark platform |
CN113011091A (en) * | 2021-03-08 | 2021-06-22 | 西安理工大学 | Automatic-grouping multi-scale light-weight deep convolution neural network optimization method |
CN113052489A (en) * | 2021-04-15 | 2021-06-29 | 淮阴工学院 | MPPT method of photovoltaic system based on leapfrog and mode search neural network |
-
2021
- 2021-08-27 CN CN202110993267.5A patent/CN113971367B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106849814A (en) * | 2017-03-27 | 2017-06-13 | 无锡开放大学 | Leapfroged Fuzzy Neural PID linear synchronous generator control method based on fruit bat |
CN110929775A (en) * | 2019-11-18 | 2020-03-27 | 南通大学 | Convolutional neural network weight optimization method for retinopathy classification |
CN111612096A (en) * | 2020-06-01 | 2020-09-01 | 南通大学 | Large-scale fundus image classification system training method based on Spark platform |
CN113011091A (en) * | 2021-03-08 | 2021-06-22 | 西安理工大学 | Automatic-grouping multi-scale light-weight deep convolution neural network optimization method |
CN113052489A (en) * | 2021-04-15 | 2021-06-29 | 淮阴工学院 | MPPT method of photovoltaic system based on leapfrog and mode search neural network |
Non-Patent Citations (1)
Title |
---|
Zhang Tao; Zuo Qian: "Parameter identification of the extended Debye equivalent circuit of oil-paper-insulated transformers based on the shuffled frog-leaping algorithm", Water Resources and Power, no. 10, 25 October 2016 (2016-10-25) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114819151A (en) * | 2022-06-23 | 2022-07-29 | 天津大学 | Biochemical path planning method based on improved agent-assisted shuffled frog leaping algorithm |
CN115801451A (en) * | 2023-01-29 | 2023-03-14 | 河北因朵科技有限公司 | Dual-network isolation method for file protection cloud perception management platform |
CN118153633A (en) * | 2023-07-14 | 2024-06-07 | 天津大学 | Improved CNN architecture optimization design method |
Also Published As
Publication number | Publication date |
---|---|
CN113971367B (en) | 2024-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liashchynskyi et al. | Grid search, random search, genetic algorithm: a big comparison for NAS | |
CN113971367A (en) | Automatic design method of convolutional neural network framework based on shuffled frog-leaping algorithm | |
Cai et al. | Path-level network transformation for efficient architecture search | |
CN109948029A (en) | Based on the adaptive depth hashing image searching method of neural network | |
CN109829541A (en) | Deep neural network incremental training method and system based on learning automaton | |
CN111079795B (en) | Image classification method based on CNN (content-centric networking) fragment multi-scale feature fusion | |
CN104751842B (en) | The optimization method and system of deep neural network | |
CN104850890B (en) | Instance-based learning and the convolutional neural networks parameter regulation means of Sadowsky distributions | |
CN109389207A (en) | A kind of adaptive neural network learning method and nerve network system | |
CN111681718B (en) | Medicine relocation method based on deep learning multi-source heterogeneous network | |
CN112465120A (en) | Fast attention neural network architecture searching method based on evolution method | |
CN106503654A (en) | A kind of face emotion identification method based on the sparse autoencoder network of depth | |
CN111898689A (en) | Image classification method based on neural network architecture search | |
CN109582782A (en) | A kind of Text Clustering Method based on Weakly supervised deep learning | |
Bakhshi et al. | Fast automatic optimisation of CNN architectures for image classification using genetic algorithm | |
CN116503676B (en) | Picture classification method and system based on knowledge distillation small sample increment learning | |
CN114943345A (en) | Federal learning global model training method based on active learning and model compression | |
Bakhshi et al. | Fast evolution of CNN architecture for image classification | |
Chen et al. | Application of improved convolutional neural network in image classification | |
CN113011091A (en) | Automatic-grouping multi-scale light-weight deep convolution neural network optimization method | |
CN109902808A (en) | A method of convolutional neural networks are optimized based on floating-point numerical digit Mutation Genetic Algorithms Based | |
Li et al. | Few-shot image classification via contrastive self-supervised learning | |
CN114329233A (en) | Cross-region cross-scoring collaborative filtering recommendation method and system | |
CN115294402B (en) | Semi-supervised vehicle classification method based on redundancy elimination multi-stage hybrid training | |
CN111967528A (en) | Image identification method for deep learning network structure search based on sparse coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||