CN110929798A - Image classification method and medium based on a structure-optimized sparse convolutional neural network


Info

Publication number
CN110929798A
Authority
CN
China
Prior art keywords
model
training
neural network
connection structure
coding sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911197205.2A
Other languages
Chinese (zh)
Inventor
唐贤伦
徐瑾
李洁
代宇艳
陈瑛洁
余新弦
孔德松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201911197205.2A
Publication of CN110929798A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)

Abstract

The invention claims an image classification method and medium based on a structure-optimized sparse convolutional neural network. For the convolutional layers of a convolutional neural network, a genetic algorithm sparsifies the connection structure of the input-feature-map channels, and the resulting sparse convolutional model performs the image classification. First, a convolutional model is pre-trained and its pre-training weights are saved. Second, the connections of the input feature channels are binary-coded for every convolutional layer except the model's input layer, and several binary sequences are generated as the initial population. The genetic algorithm then applies selection, crossover, and mutation to the binary codes. Finally, after several iterations, the optimal binary sequence obtained is decoded into a sparse feature-channel connection structure, and the model's classification accuracy is restored by weight fine-tuning.

Description

Image classification method and medium based on a structure-optimized sparse convolutional neural network
Technical Field
The invention belongs to the technical field of image processing, and in particular to model structure sparsification and image classification methods.
Background
Convolutional neural networks (CNNs) already occupy an important position in the field of image recognition: their powerful feature-extraction capability avoids the complex preprocessing that conventional image recognition methods require. In addition, CNNs feature weight sharing, local receptive fields, and downsampling, so compared with earlier multilayer perceptrons they have far fewer parameters and lower computational complexity. In recent years, various improved convolutional neural networks have also achieved good image classification accuracy.
The development of convolutional neural networks shows roughly two trends. First, networks are getting deeper, which strengthens their capacity to express and abstract features but introduces exploding and vanishing gradients; approaches typified by the residual module in ResNet largely solve this problem. Second, convolutional layers are getting wider: a larger number of convolution kernels means more features can be extracted, but in practice it not only raises the computational cost but also produces many redundant features that contribute nothing to the model's classification performance. Model pruning methods developed in recent years discard redundant connections or channels, but they introduce new variables, sometimes several, to measure the importance of a connection or channel, and no unified measure exists. The two states of a convolutional-layer channel connection, kept or deleted, can be naturally represented by binary coding. The genetic algorithm is a computational model that simulates biological evolution and, combined with binary coding, can find an optimal or near-optimal solution in a discrete space.
Therefore, an image classification method based on a structure-optimized sparse convolutional neural network is needed that sparsifies the channel connections of the model's convolutional layers in a discrete space according to the current learning task and determines the importance of channel connections without introducing a new measurement standard.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art by providing an image classification method based on a structure-optimized sparse convolutional neural network. The technical scheme of the invention is as follows:
An image classification method based on a structure-optimized sparse convolutional neural network comprises the following steps:
Step 1: acquire training-set samples of images; pre-train a convolutional neural network with the image training-set samples as input and the minimization of a loss function between predicted values and true labels as the optimization target, and save the network's pre-training weights;
Step 2: perform binary coding on the convolutional-layer channel connection structure of the pre-trained model, randomly generating several binary sequences as the initial population, each corresponding to a random channel connection structure;
Step 3: apply the channel connection structure corresponding to each coding sequence in the current population to the pre-trained model, compute the fitness of each connection structure, and select several binary coding sequences with replacement, a coding sequence with higher fitness having a higher probability of being selected;
Step 4: apply crossover and mutation to the binary coding sequences selected in step 3 to obtain a new generation; the crossover probability and mutation probability are adjustable parameters;
Step 5: repeat steps 3-4 until the current iteration count equals the total iteration count, then end the iteration;
Step 6: select the coding sequence with the highest fitness in the final generation, decode it into the corresponding channel connection structure, apply that structure, load the pre-trained model's weights as initial weights, fine-tune the weights, and restore the model's classification accuracy; then input the test set into the fine-tuned model to classify the test-set images. An illustrative sketch of how these steps fit together is given below.
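For illustration only, a minimal Python sketch of the six-step pipeline follows. Every function named here (pretrain, num_connections, init_population, evaluate_fitness, roulette_select, crossover_mutate, decode_structure, finetune) is a hypothetical placeholder rather than anything named in the patent; several of them are sketched in more detail under the corresponding steps below.

```python
# Hypothetical orchestration of steps 1-6; all helpers are placeholders.
def sparsify_by_ga(model, train_set, pop_size, n_generations,
                   p_cross=0.8, p_mut=0.003):
    weights = pretrain(model, train_set)                            # step 1
    population = init_population(pop_size, num_connections(model))  # step 2
    for _ in range(n_generations):                                  # step 5 loop
        fitness = [evaluate_fitness(model, weights, s, train_set)
                   for s in population]                             # step 3
        parents = roulette_select(population, fitness)              # step 3 selection
        population = crossover_mutate(parents, p_cross, p_mut)      # step 4
    best = max(population,
               key=lambda s: evaluate_fitness(model, weights, s, train_set))
    sparse_model = decode_structure(model, weights, best)           # step 6 decode
    return finetune(sparse_model, train_set)                        # step 6 fine-tune
```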
further, the step 1: with a loss function between a minimized predicted value and a real label as an optimization target and a training set sample as an input, pre-training a convolutional neural network, and storing model pre-training weights, specifically: building a convolutional neural network, training by using an image training set to be learned until a model converges, and updating the weight of the model by using an Adam adaptive moment estimation optimization algorithm with a minimum loss function as a target, wherein the Adam algorithm specifically comprises the following steps:
(1) Initialize the network parameters θ, the first-moment variable s = 0, the second-moment variable r = 0, and the time step t = 0;
(2) take a mini-batch of m samples {x^(1), ..., x^(m)} from the training set, with corresponding label values {y^(1), ..., y^(m)};
(3) compute the gradient and advance the time step:
g ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)), t ← t + 1,
where L and f denote the loss function and the model's mapping from input to output, respectively;
(4) update the biased first-moment and second-moment estimates, where ρ1 and ρ2 are exponential decay rates with values in the interval [0, 1):
s ← ρ1s + (1-ρ1)g;
r ← ρ2r + (1-ρ2)g⊙g;
s and r denote the biased first-moment and second-moment estimates, respectively;
(5) correct the bias of the first and second moments:
ŝ = s/(1-ρ1^t);
r̂ = r/(1-ρ2^t);
ŝ and r̂ denote the bias-corrected first-moment and second-moment estimates, respectively;
(6) compute the update:
Δθ = -ε·ŝ/(√r̂ + σ),
where ε denotes the learning rate and σ is a small constant for numerical stability, typically 10^-8;
(7) apply the update: θ ← θ + Δθ;
(8) repeat (2)-(7) until t reaches the weight-update iteration count.
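As a concrete reference, here is a minimal NumPy sketch of steps (1)-(8). The decay rates ρ1 = 0.9 and ρ2 = 0.999 are the common defaults, not values stated in the patent, and grad_fn stands in for the mini-batch gradient computation of step (3).

```python
import numpy as np

def adam(theta, grad_fn, lr=0.001, rho1=0.9, rho2=0.999,
         sigma=1e-8, n_iters=1000):
    s = np.zeros_like(theta)                 # (1) biased first-moment estimate
    r = np.zeros_like(theta)                 # (1) biased second-moment estimate
    for t in range(1, n_iters + 1):          # (8) iterate to the iteration count
        g = grad_fn(theta)                   # (3) mini-batch gradient
        s = rho1 * s + (1 - rho1) * g        # (4) first-moment update
        r = rho2 * r + (1 - rho2) * g * g    # (4) second-moment update
        s_hat = s / (1 - rho1 ** t)          # (5) bias correction
        r_hat = r / (1 - rho2 ** t)
        theta = theta - lr * s_hat / (np.sqrt(r_hat) + sigma)  # (6)-(7) update
    return theta

# e.g. minimizing f(theta) = ||theta||^2, whose gradient is 2*theta:
# adam(np.ones(3), lambda th: 2 * th)  # converges toward the zero vector
```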
The cross-entropy function is used as the loss function; its expression is:

H(p, q) = -Σx p(x) log q(x)

The cross entropy measures the degree of difference between the true label p and the predicted label q, where x denotes the input sample, H(p, q) the cross entropy between the predicted distribution and the true distribution, p(x) the true distribution of the input sample's label, and q(x) the model's predicted distribution for the input sample's label.
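A small sketch of this loss for a single sample, assuming p is the one-hot true distribution and q the predicted class distribution; the eps clipping is an implementation guard, not part of the patent's formula:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_x p(x) * log q(x)."""
    q = np.clip(q, eps, 1.0)          # guard against log(0)
    return float(-np.sum(p * np.log(q)))

# e.g. true class 0 of 3, prediction [0.7, 0.2, 0.1]:
# cross_entropy(np.array([1., 0., 0.]), np.array([0.7, 0.2, 0.1]))  # ~0.357
```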
Further, step 2 specifically comprises:
randomly generating p binary sequences, each of length d, collected as the population matrix

S = [s1, s2, ..., sn, ..., sp]^T, S ∈ {0, 1}^(p×d)

where p is the population size and the n-th coding sequence sn is expressed as:

sn = [i1, i2, ..., im, ..., id]1×d, 1 ≤ m ≤ d

where d, the number of genes of each individual, is the total number of input-feature-channel connections of all convolutional layers other than the input layer of the convolutional model pre-trained in step 1; the m-th element im takes the value 1 to indicate that the connection at the corresponding position is kept and 0 to indicate that it is deleted.
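A minimal sketch of this initialization, assuming the population is held as a p×d NumPy matrix of 0/1 genes:

```python
import numpy as np

def init_population(p, d, rng=None):
    """p binary coding sequences of length d; a gene of 1 keeps the
    corresponding channel connection, 0 deletes it."""
    rng = rng or np.random.default_rng()
    return rng.integers(0, 2, size=(p, d), dtype=np.int8)
```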
Further, step 3 specifically comprises:
decoding each coding sequence in the current population into its corresponding channel connection structure according to the positions and values of its elements, loading the weights pre-trained in step 1 under that connection structure, and computing each sequence's fitness, which yields the fitness set of the current population F = {f1, f2, ..., fn, ..., fp}, where the n-th element fn is the fitness of the n-th coding sequence; p coding sequences are then selected from the current population with replacement, the n-th coding sequence being selected with probability fn/(f1 + f2 + ... + fp).
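The selection rule above is ordinary roulette-wheel (fitness-proportional) selection with replacement; a sketch follows, assuming the fitness values have already been computed by decoding each sequence and evaluating the model:

```python
import numpy as np

def roulette_select(population, fitness, rng=None):
    """Draw len(population) sequences with replacement; sequence n is
    chosen with probability f_n / (f_1 + ... + f_p)."""
    rng = rng or np.random.default_rng()
    f = np.asarray(fitness, dtype=float)
    idx = rng.choice(len(population), size=len(population), p=f / f.sum())
    return population[idx]
```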
Further, step 4 (applying crossover and mutation to the binary coding sequences selected in step 3 to obtain a new generation, with the crossover probability and mutation probability as adjustable parameters) specifically comprises: setting the crossover probability and the mutation probability as, respectively, the probability that a crossover operation occurs between paired individuals and the probability that each individual mutates in the genetic algorithm. A sketch of one realization is given below.
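The patent fixes only the two probabilities; the sketch below uses single-point crossover on consecutive pairs and independent per-gene bit flips, which is one conventional realization rather than necessarily the one intended:

```python
import numpy as np

def crossover_mutate(parents, p_cross, p_mut, rng=None):
    rng = rng or np.random.default_rng()
    children = parents.copy()
    d = children.shape[1]
    for i in range(0, len(children) - 1, 2):
        if rng.random() < p_cross:                 # crossover with prob p_cross
            cut = rng.integers(1, d)               # single crossover point
            children[i, cut:], children[i + 1, cut:] = (
                children[i + 1, cut:].copy(), children[i, cut:].copy())
    flip = rng.random(children.shape) < p_mut      # per-gene mutation mask
    children[flip] ^= 1                            # flip the selected 0/1 genes
    return children
```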
Further, step 6 specifically comprises:
after the iterations of the genetic algorithm end, selecting the coding sequence with the highest fitness from the final generation and decoding it into the corresponding channel connection structure; according to this new channel connection structure, the parameters at the corresponding positions of the pre-trained convolutional model are loaded as initial values, and the weights of the model under the new channel connection structure are fine-tuned with the Adam optimization algorithm, that is, the training-set images are input again to retrain the new channel connection structure and restore the model's classification accuracy.
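One plausible reading of "decoding into a channel connection structure" is a binary mask over (output channel, input channel) pairs that zeroes the deleted connections in the pre-trained kernels before fine-tuning; this interpretation is an assumption, sketched below:

```python
import numpy as np

def apply_connection_mask(conv_weight, mask):
    """conv_weight: (out_channels, in_channels, k, k) kernels loaded from
    the pre-trained model; mask: (out_channels, in_channels) 0/1 matrix
    reshaped from the best coding sequence. Deleted connections are zeroed."""
    return conv_weight * mask[:, :, None, None]
```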
A medium in which a computer program is stored, the computer program, when read by a processor, performing any of the methods described above.
The invention has the following advantages and beneficial effects:
the method utilizes a genetic algorithm to carry out sparsification on the channel connection structure of the convolution layer of the convolution neural network, and is applied to an image classification task. By encoding the channel connection structure, a genetic algorithm is used in a discrete space, and after a plurality of iterations, the optimal sparse channel connection structure for the current learning task is obtained.
First, in theory, the more convolution kernels a convolutional layer has, the more features it can extract and the more expressive it is. In practice, however, an excess of convolution kernels often extracts many redundant features and raises the model's computational complexity. According to the current learning task, the invention sparsifies the convolutional layers' channel connection structure while keeping the model's accuracy from declining noticeably, automatically searching for the optimal sparse channel connection structure and deleting redundant connections. Second, because the sparsification is driven by a genetic algorithm, no hyperparameter tied to the sparsity rate is introduced, sparing the model additional hyperparameter tuning. Finally, existing model pruning and sparsification methods usually try to define some dimension along which to measure the importance of the objects to be pruned, so many different measurement standards have appeared in recent years without any unified one; the proposed method instead lets the fitness-driven search decide which connections to keep, requiring no such measure.
Drawings
FIG. 1 is a flow chart of an image classification method based on a structure optimization sparse convolutional neural network according to a preferred embodiment of the present invention.
FIG. 2 is an architecture diagram of an image classification method based on a structure-optimized sparse convolutional neural network.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme of the invention for solving the above technical problems is as follows:
As shown in fig. 1, the image classification method based on the structure-optimized sparse convolutional neural network provided in this embodiment comprises the following steps:
Step 1: with minimization of the cross-entropy function between predicted values and true labels as the optimization target, the training-set samples as input, and the Adam algorithm as the optimizer with a learning rate of 0.001, pre-train a LeNet-5 convolutional neural network until the model converges on the training set, and save the model's pre-training weights. The cross-entropy function is:

H(p, q) = -Σx p(x) log q(x) (1)

where p is the distribution of the sample label and q is the model's predicted distribution.
Step 2: and coding the first pooling layer and the second pooling layer of the pre-trained LeNet-5 convolution model. Randomly generating a plurality of binary sequences as an initial population, wherein each sequence corresponds to a channel connection structure, and the specific coding mode is as follows:
randomly generate p binary sequences, each of length d, collected as the population matrix

S = [s1, s2, ..., sn, ..., sp]^T, S ∈ {0, 1}^(p×d) (2)

where p is the population size and the n-th coding sequence sn is expressed as:

sn = [i1, i2, ..., im, ..., id]1×d, 1 ≤ m ≤ d (3)

where d, the number of genes of each individual, is the number of input-feature-channel connections of the second convolutional layer of the convolutional model pre-trained in step 1; the m-th element im takes the value 1 to indicate that the connection at the corresponding position is kept and 0 to indicate that it is deleted;
Step 3: decoding and fitness computation. Decode each coding sequence in the current population into its corresponding channel connection structure according to the positions and values of its elements, load the weights pre-trained in step 1 under that connection structure, and compute each sequence's fitness, which yields the fitness set of the current population F = {f1, f2, ..., fn, ..., fp}, where the n-th element fn is the fitness of the n-th coding sequence and an individual's fitness value is the reciprocal of its cross entropy; p coding sequences are then selected from the current population with replacement, the n-th coding sequence being selected with probability fn/(f1 + f2 + ... + fp), as sketched below;
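Since the embodiment defines an individual's fitness as the reciprocal of the model's cross entropy, the evaluation step can be sketched as follows; decode_and_load and mean_cross_entropy are hypothetical helpers standing for decoding the sequence, loading the pre-trained weights, and measuring the loss on the training data:

```python
def evaluate_fitness(model, pretrained_weights, sequence, train_set):
    """Fitness of one coding sequence: the reciprocal of the model's
    cross entropy under the decoded channel connection structure."""
    sparse_model = decode_and_load(model, pretrained_weights, sequence)
    return 1.0 / mean_cross_entropy(sparse_model, train_set)
```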
Step 4: apply crossover and mutation to the binary coding sequences selected in step 3 to obtain a new generation. The crossover probability is set to 0.8 and the mutation probability to 0.003; these values can be adjusted to the actual situation.
And 5: repeating the step 3-4 until the current iteration times are equal to the total iteration times, and ending the iteration;
Step 6: after the iterations of the genetic algorithm end, select the coding sequence with the highest fitness from the final generation and decode it into the corresponding channel connection structure, which serves as the new channel connection structure of the LeNet-5 convolution model; using the Adam optimization algorithm again with a learning rate of 0.001, load the weights of the LeNet-5 model pre-trained in step 1 as initial weights according to the new channel connection structure, fine-tune the weights of the convolutional model with the new channel connection structure, and restore the model's classification accuracy.
As shown in fig. 2, in terms of the overall architecture, the image classification method based on the structure-optimized sparse convolutional neural network provided in this embodiment can be divided into the following two modules:
Module 1: the pre-trained convolution model. In this part, a convolutional neural network is pre-trained on the image classification data set until the model converges on the training set, and its pre-training parameters are saved; after the channel-connection-structure sparsification module has searched out the optimal connections, this part applies the new channel connection structure, loads the saved pre-training weights as initial weights, and fine-tunes the weights to restore the model's classification accuracy.
Module 2: the channel-connection-structure sparsification module. This part comprises the encoding and decoding of the channel connection structure of the pre-trained convolution model, the initialization of the coding sequences, the selection, crossover, and mutation operations on the coding sequences in the discrete space, and, after the iterations end, the selection of the optimal coding sequence according to fitness.
The above examples should be regarded as merely illustrative and not as limiting the disclosure in any way. After reading the description of the invention, those skilled in the art can make various changes or modifications to it, and such equivalent changes and modifications likewise fall within the scope of the invention defined by the claims.

Claims (7)

1. An image classification method based on a structure-optimized sparse convolutional neural network, characterized by comprising the following steps:
Step 1: acquire training-set samples of images; pre-train a convolutional neural network with the image training-set samples as input and the minimization of a loss function between predicted values and true labels as the optimization target, and save the network's pre-training weights;
Step 2: perform binary coding on the convolutional-layer channel connection structure of the pre-trained model, randomly generating several binary sequences as the initial population, each corresponding to a random channel connection structure;
Step 3: apply the channel connection structure corresponding to each coding sequence in the current population to the pre-trained model, compute the fitness of each connection structure, and select several binary coding sequences with replacement, a coding sequence with higher fitness having a higher probability of being selected;
Step 4: apply crossover and mutation to the binary coding sequences selected in step 3 to obtain a new generation; the crossover probability and mutation probability are adjustable parameters;
Step 5: repeat steps 3-4 until the current iteration count equals the total iteration count, then end the iteration;
Step 6: select the coding sequence with the highest fitness in the final generation, decode it into the corresponding channel connection structure, apply that structure, load the pre-trained model's weights as initial weights, fine-tune the weights, and restore the model's classification accuracy; then input the test set into the fine-tuned model to classify the test-set images.
2. The image classification method based on the structure-optimized sparse convolutional neural network of claim 1, wherein step 1 (pre-training a convolutional neural network with the minimization of a loss function between predicted values and true labels as the optimization target and the training-set samples as input, and saving the model's pre-training weights) specifically comprises: building a convolutional neural network, training it on the image training set to be learned until the model converges, and updating the model weights with the Adam adaptive moment estimation optimization algorithm, with minimization of the loss function as the goal. The Adam algorithm proceeds as follows:
(1) Initialize the network parameters θ, the first-moment variable s = 0, the second-moment variable r = 0, and the time step t = 0;
(2) take a mini-batch of m samples {x^(1), ..., x^(m)} from the training set, with corresponding label values {y^(1), ..., y^(m)};
(3) compute the gradient and advance the time step:
g ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)), t ← t + 1,
where L and f denote the loss function and the model's mapping from input to output, respectively;
(4) update the biased first-moment and second-moment estimates, where ρ1 and ρ2 are exponential decay rates with values in the interval [0, 1):
s ← ρ1s + (1-ρ1)g;
r ← ρ2r + (1-ρ2)g⊙g;
s and r denote the biased first-moment and second-moment estimates, respectively;
(5) correct the bias of the first and second moments:
ŝ = s/(1-ρ1^t);
r̂ = r/(1-ρ2^t);
ŝ and r̂ denote the bias-corrected first-moment and second-moment estimates, respectively;
(6) compute the update:
Δθ = -ε·ŝ/(√r̂ + σ),
where ε denotes the learning rate and σ is a small constant for numerical stability, typically 10^-8;
(7) apply the update: θ ← θ + Δθ;
(8) repeat (2)-(7) until t reaches the weight-update iteration count.
The cross-entropy function is used as the loss function; its expression is:

H(p, q) = -Σx p(x) log q(x)

The cross entropy measures the degree of difference between the true label p and the predicted label q, where x denotes the input sample, H(p, q) the cross entropy between the predicted distribution and the true distribution, p(x) the true distribution of the input sample's label, and q(x) the model's predicted distribution for the input sample's label.
3. The image classification method based on the structure-optimized sparse convolutional neural network of claim 2, wherein step 2 specifically comprises:
randomly generating p binary sequences, each of length d, collected as the population matrix

S = [s1, s2, ..., sn, ..., sp]^T, S ∈ {0, 1}^(p×d)

where p is the population size and the n-th coding sequence sn is expressed as:

sn = [i1, i2, ..., im, ..., id]1×d, 1 ≤ m ≤ d

where d, the number of genes of each individual, is the total number of input-feature-channel connections of all convolutional layers other than the input layer of the convolutional model pre-trained in step 1; the m-th element im takes the value 1 to indicate that the connection at the corresponding position is kept and 0 to indicate that it is deleted.
4. The image classification method based on the structure-optimized sparse convolutional neural network of claim 3, wherein step 3 specifically comprises:
decoding each coding sequence in the current population into its corresponding channel connection structure according to the positions and values of its elements, loading the weights pre-trained in step 1 under that connection structure, and computing each sequence's fitness, which yields the fitness set of the current population F = {f1, f2, ..., fn, ..., fp}, where the n-th element fn is the fitness of the n-th coding sequence; p coding sequences are then selected from the current population with replacement, the n-th coding sequence being selected with probability fn/(f1 + f2 + ... + fp).
5. The image classification method based on the structure-optimized sparse convolutional neural network of claim 4, wherein step 4 (applying crossover and mutation to the binary coding sequences selected in step 3 to obtain a new generation, with the crossover probability and mutation probability as adjustable parameters) specifically comprises: setting the crossover probability and the mutation probability as, respectively, the probability that a crossover operation occurs between paired individuals and the probability that each individual mutates in the genetic algorithm.
6. The image classification method based on the structure-optimized sparse convolutional neural network of claim 5, wherein step 6 specifically comprises:
after the iterations of the genetic algorithm end, selecting the coding sequence with the highest fitness from the final generation and decoding it into the corresponding channel connection structure; according to this new channel connection structure, the parameters at the corresponding positions of the pre-trained convolutional model are loaded as initial values, and the weights of the model under the new channel connection structure are fine-tuned with the Adam optimization algorithm, that is, the training-set images are input again to retrain the new channel connection structure and restore the model's classification accuracy.
7. A medium in which a computer program is stored, wherein the computer program, when read by a processor, performs the method of any one of claims 1 to 6.
CN201911197205.2A (priority date 2019-11-29, filed 2019-11-29): Image classification method and medium based on structure optimization sparse convolution neural network; status: Pending; published as CN110929798A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197205.2A CN110929798A (en) 2019-11-29 2019-11-29 Image classification method and medium based on structure optimization sparse convolution neural network


Publications (1)

Publication Number Publication Date
CN110929798A 2020-03-27

Family

ID=69847623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197205.2A Pending CN110929798A (en) 2019-11-29 2019-11-29 Image classification method and medium based on structure optimization sparse convolution neural network

Country Status (1)

Country Link
CN (1) CN110929798A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020059154A1 (en) * 2000-04-24 2002-05-16 Rodvold David M. Method for simultaneously optimizing artificial neural network inputs and architectures using genetic algorithms
CN107609525A (en) * 2017-09-19 2018-01-19 吉林大学 Remote Sensing Target detection method based on Pruning strategy structure convolutional neural networks
CN110232341A (en) * 2019-05-30 2019-09-13 重庆邮电大学 Based on convolution-stacking noise reduction codes network semi-supervised learning image-recognizing method
CN110427965A (en) * 2019-06-25 2019-11-08 重庆邮电大学 Convolutional neural networks structural reduction and image classification method based on evolution strategy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIANGYU ZHANG et al.: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
唐贤伦 (Tang Xianlun) et al.: "Hybrid PSO optimization of convolutional neural network structure and parameters" [混合PSO优化卷积神经网络结构和参数], Journal of University of Electronic Science and Technology of China [电子科技大学学报] *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561069A (en) * 2020-12-23 2021-03-26 北京百度网讯科技有限公司 Model processing method, device, equipment, storage medium and product
CN113657576A (en) * 2021-07-21 2021-11-16 浙江大华技术股份有限公司 Convolutional neural network model lightweight method and device, and image identification method
CN114509825A (en) * 2021-12-31 2022-05-17 河南大学 Strong convection weather prediction method and system for improving three-dimensional confrontation generation neural network based on hybrid evolution algorithm
CN114509825B (en) * 2021-12-31 2022-11-08 河南大学 Strong convection weather prediction method and system for improving three-dimensional confrontation generation neural network based on hybrid evolution algorithm
CN114723960A (en) * 2022-04-02 2022-07-08 湖南三湘银行股份有限公司 Additional verification method and system for enhancing bank account security
CN114723960B (en) * 2022-04-02 2023-04-28 湖南三湘银行股份有限公司 Additional verification method and system for enhancing bank account security
CN117067921A (en) * 2023-10-18 2023-11-17 北京航空航天大学 Fault detection method of electric automobile and electric automobile
CN117261599A (en) * 2023-10-18 2023-12-22 北京航空航天大学 Fault detection method and device of electric automobile, electronic equipment and electric automobile
CN117067921B (en) * 2023-10-18 2024-01-05 北京航空航天大学 Fault detection method of electric automobile and electric automobile
CN117261599B (en) * 2023-10-18 2024-05-03 北京航空航天大学 Fault detection method and device of electric automobile, electronic equipment and electric automobile

Similar Documents

Publication Publication Date Title
CN110929798A (en) Image classification method and medium based on structure optimization sparse convolution neural network
CN107729999B (en) Deep neural network compression method considering matrix correlation
Gordon et al. Meta-learning probabilistic inference for prediction
CN107239825B (en) Deep neural network compression method considering load balance
CN110909926A (en) TCN-LSTM-based solar photovoltaic power generation prediction method
Peters et al. Probabilistic binary neural networks
JPH07296117A (en) Constitution method of sort weight matrix for pattern recognition system using reduced element feature section set
CN112465120A (en) Fast attention neural network architecture searching method based on evolution method
CN110188827B (en) Scene recognition method based on convolutional neural network and recursive automatic encoder model
CN111898689A (en) Image classification method based on neural network architecture search
CN110659725A (en) Neural network model compression and acceleration method, data processing method and device
WO2021042857A1 (en) Processing method and processing apparatus for image segmentation model
CN110110845B (en) Learning method based on parallel multi-level width neural network
CN108154186B (en) Pattern recognition method and device
CN111371611B (en) Weighted network community discovery method and device based on deep learning
CN110288002B (en) Image classification method based on sparse orthogonal neural network
CN116976405A (en) Variable component shadow quantum neural network based on immune optimization algorithm
CN113590748B (en) Emotion classification continuous learning method based on iterative network combination and storage medium
CN112949599B (en) Candidate content pushing method based on big data
CN115063374A (en) Model training method, face image quality scoring method, electronic device and storage medium
CN115392594A (en) Electrical load model training method based on neural network and feature screening
CN112884019B (en) Image language conversion method based on fusion gate circulation network model
CN108985371B (en) Image multi-resolution dictionary learning method and application thereof
Siegel et al. Training sparse neural networks using compressed sensing
Lee et al. Adaptive network sparsification via dependent variational beta-bernoulli dropout

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327