CN108334843A - Arcing recognition method based on improved AlexNet - Google Patents

Arcing recognition method based on improved AlexNet

Info

Publication number
CN108334843A
Authority
CN
China
Prior art keywords
layer
convolution
image
alexnet
arcing
Prior art date
Legal status
Granted
Application number
CN201810109141.5A
Other languages
Chinese (zh)
Other versions
CN108334843B (en
Inventor
范国海
张娜
何洪伟
何进
Current Assignee
Chengdu National Railways Electric Equipment Co Ltd
Original Assignee
Chengdu National Railways Electric Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu National Railways Electric Equipment Co Ltd filed Critical Chengdu National Railways Electric Equipment Co Ltd
Priority to CN201810109141.5A
Publication of CN108334843A
Application granted
Publication of CN108334843B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an arcing recognition method based on an improved AlexNet, comprising the following steps: S1, establishing a convolutional neural network and obtaining a training model; S2, acquiring an image; S3, inputting the acquired image into the training model for image recognition. The convolutional neural network is an improved AlexNet network whose structure is, in order, an input layer, several sequentially connected convolutional layers, a fully connected layer, and an output layer. The convolutional layer connected to the input layer uses an M × M convolution kernel, while the remaining convolutional layers use 1 × M and M × 1 convolution kernels. By replacing each M × M convolution kernel of the original AlexNet with a series connection of 1 × M and M × 1 kernels, the invention greatly reduces the number of parameters and adds a nonlinear layer, making the network structure deeper; the model is simplified, training time is substantially reduced, training efficiency is improved, and recognition accuracy is higher than that of the original AlexNet network.

Description

Arcing identification method based on improved AlexNet
Technical Field
The invention relates to the field of image recognition, in particular to an arcing recognition method based on an improved AlexNet.
Background
Arcing on a railway overhead contact system is an arc that forms when contact-wire galloping and pantograph bounce cause the voltage across the air gap to exceed the tolerance of the air, ionizing the air into a conductor. Arcing makes electric locomotive operation unstable: the power supply becomes intermittent, causing abnormal deceleration and acceleration while the train is running and reducing ride comfort. An arcing alarm allows a faulty contact wire or pantograph to be maintained in time, reducing railway power-supply accidents. After an arcing alarm is transmitted back to the data terminal, it must be interpreted and confirmed manually; if the false-alarm rate is too high, a large amount of labor is consumed, and if the missed-detection rate is too high, the probability of accidents increases. Effective arcing recognition therefore greatly reduces labor cost and lowers both the false-alarm and missed-detection rates, which is of great significance.
The convolutional neural network is a type of artificial neural network and a current research hotspot in speech analysis and image recognition. Its weight-sharing structure resembles a biological neural network, reducing the complexity of the network model and the number of weights. This advantage is most evident when the network input is a multi-dimensional image: the image can be fed into the network directly, avoiding the complicated feature extraction and data reconstruction of traditional recognition algorithms. A convolutional network is a multilayer perceptron designed specifically for recognizing two-dimensional shapes, and its structure is highly invariant to translation, scaling, tilting, and other forms of deformation.
AlexNet, proposed in 2012, is a classic network that achieved the best ImageNet performance of that year. However, the traditional AlexNet model has too many parameters and low training efficiency. The invention provides an improved AlexNet model and applies it to the recognition of contact-network arcing.
Disclosure of Invention
In order to solve the above problems, the invention provides an arcing identification method based on an improved AlexNet.
Specifically, the arcing identification method based on the improved AlexNet comprises the following steps:
s1, establishing a convolutional neural network and obtaining a training model;
s2, acquiring an image;
s3, inputting the obtained image into a training model for image recognition;
the convolutional neural network is an improved AlexNet network whose structure comprises, in order, an input layer, a plurality of sequentially connected convolutional layers, a fully connected layer, and an output layer; the convolutional layer connected to the input layer adopts an M × M convolution kernel architecture, and the other convolutional layers adopt 1 × M and M × 1 convolution kernel architectures.
Further, a first convolution layer Conv1, a first pooling layer, a second convolution layer Conv2, a second pooling layer, a third convolution layer Conv3, a fourth convolution layer Conv4, a fifth convolution layer Conv5, a third pooling layer, a first full-connection layer FC6 and a second full-connection layer FC7 are sequentially arranged between the input layer and the output layer of the improved AlexNet network, and the convolution kernel size of the first convolution layer Conv1 is M × M.
Further, the second convolution layer Conv2, the third convolution layer Conv3, the fourth convolution layer Conv4 and the fifth convolution layer Conv5 respectively include convolution layers Conv2_1, Conv2_2, Conv3_1, Conv3_2, Conv4_1, Conv4_2, Conv5_1 and Conv5_2, which are sequentially arranged, wherein convolution kernel sizes of the convolution layers Conv2_1, Conv3_1, Conv4_1 and Conv5_1 are 1 × M, and convolution kernel sizes of the convolution layers Conv2_2, Conv3_2, Conv4_2 and Conv5_2 are M × 1.
Further, the input layer accepts image input of a size of 224 × 224.
Further, the number of nodes of the output layer is N, and N represents the total number of categories of the image.
Further, the output layer is a softmax classifier.
Further, the convolution kernel size of the first convolution layer Conv1 is 7 × 7.
Furthermore, the first pooling layer, the second pooling layer and the third pooling layer adopt a maximum pooling mode.
Further, the step S1 includes:
s11, establishing a convolutional neural network and initializing;
s12, collecting images and preprocessing them, wherein the preprocessing comprises cropping, compression, mean removal, and normalization;
s13, adding a label to each image, wherein the label information indicates whether the image contains arcing or not;
s14, dividing the image into a training sample and a verification sample;
s15, inputting a training sample to train the convolutional neural network, verifying through a verification sample, judging whether the loss value of the convolutional neural network converges to a stable value or reaches a set maximum iteration step number, if so, executing S16, otherwise, executing S15;
and S16, finishing training and outputting a training model.
Further, the specific implementation of step S15 is as follows: training samples are input into the convolutional neural network in batches to execute forward propagation; the output results are compared with the actual classes and loss values are calculated; if the loss value has not converged to a stable value and the set maximum number of iterations has not been reached, backward propagation is executed and the weights are updated; every P iterations, verification samples are input into the network for forward propagation to verify network performance.
The beneficial effects of the invention are as follows: convolution kernels of sizes 1 × M and M × 1 connected in series replace the M × M convolution kernels of the original AlexNet network, greatly reducing the number of parameters, and an added nonlinear layer makes the network structure deeper; the model is simplified, training time is greatly shortened, training efficiency is improved, and recognition accuracy is higher than that of the original AlexNet network.
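The parameter saving from this factorization can be checked with a quick calculation. The sketch below is a minimal illustration and not part of the patent; the example values M = 5 and the 6 → 16 channel counts are borrowed from the Conv2 stage described in the embodiment, and biases are ignored:

```python
def conv_params(kernel_h, kernel_w, in_ch, out_ch):
    """Weight count of one convolution layer (biases omitted for clarity)."""
    return kernel_h * kernel_w * in_ch * out_ch

M, in_ch, out_ch = 5, 6, 16  # hypothetical example: 5x5 kernel, 6 -> 16 channels

square = conv_params(M, M, in_ch, out_ch)        # a single M x M layer
factored = (conv_params(1, M, in_ch, out_ch)     # a 1 x M layer ...
            + conv_params(M, 1, out_ch, out_ch)) # ... followed in series by an M x 1 layer

print(square, factored)  # prints: 2400 1760
```

The factorized pair also inserts an extra activation between the two convolutions, which is the additional nonlinear layer the text refers to.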
Drawings
FIG. 1 is a training flow diagram of an improved AlexNet-based arcing identification method of the present invention;
fig. 2 is a diagram of an improved AlexNet network architecture of the present invention.
Detailed Description
In order to more clearly understand the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described with reference to the accompanying drawings.
As shown in fig. 1, an arcing identification method based on an improved AlexNet, which is applied here to arcing identification but is not limited to it, includes the following steps:
s1, establishing a convolutional neural network and obtaining a training model;
s2, acquiring an image;
s3, inputting the acquired images into the training model for arcing identification;
further, step S1 includes:
s11, establishing a convolutional neural network and initializing;
s12, extracting images from the server storing the image data and preprocessing them, wherein the preprocessing comprises cropping, compression, mean removal, and normalization;
s13, adding a label to each image: images containing arcing are labeled as class 1, and images without arcing as class 0;
s14, dividing the images into training samples and verification samples, here 40000 training samples and 10000 verification samples;
s15, inputting a training sample to train the convolutional neural network, verifying through a verification sample, judging whether the loss value of the convolutional neural network converges to a stable value or reaches a set maximum iteration step number, if so, executing S16, otherwise, executing S15;
and S16, finishing training and outputting a training model.
Further, the specific implementation of step S15 is as follows: training samples are input into the convolutional neural network in batches to execute forward propagation and obtain the network output; the output is compared with the actual class and the loss value is calculated; if the loss value has not converged to a stable value and the set maximum number of iterations has not been reached, backward propagation is executed and the weights are updated; to check the training effect during model training, every 100 iterations the verification samples are input into the network for forward propagation to verify network performance. The model parameters, namely the connection weights W and thresholds b of each layer, are continuously updated through forward and backward propagation, so that the output of the training model approaches the expected output, until the loss value of the network converges to a stable value or the maximum number of iterations is reached.
The forward propagation is implemented as follows:
Input: a picture sample, the number of model layers L, and the type of each hidden layer. For a convolutional layer, the number of convolution kernels filter_num, the kernel size kernel_size, the padding size pad, the step size stride, and the activation function must be defined; for a pooling layer, the pooling size pool_size, the step size stride, and the pooling criterion (maximum or average pooling; maximum pooling in this embodiment); for a fully connected layer, the number of neurons and the activation function of each layer.
Output: the model output a^L.
The calculation steps are as follows:
(1) Initialize the input layer with the original input image pixels to obtain the input tensor a^1.
(2) Initialize the weights W and biases b of each layer.
(3) For l = 2 to L−1:
(a) if the l-th layer is a convolutional layer, the output is a^l = f(z^l) = f(a^{l−1} * W^l + b^l), where * denotes convolution and f(·) is the ReLU activation function;
(b) if the l-th layer is a pooling layer, the output is a^l = pool(a^{l−1}), where pool takes the maximum;
(c) if the l-th layer is a fully connected layer, the output is a^l = f(z^l) = f(W^l a^{l−1} + b^l), where f(·) is the ReLU activation function.
(4) For the output layer, i.e. the L-th layer, a^L = softmax(z^L) = softmax(W^L a^{L−1} + b^L); the output layer uses the softmax function to compute the probability that the sample belongs to each class.
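The forward-propagation steps above can be sketched on a toy single-channel input. This is an illustrative NumPy implementation under simplifying assumptions (one channel, a 'valid' convolution implemented as cross-correlation, non-overlapping pooling, random toy weights), not the patent's actual network:

```python
import numpy as np

def relu(x):
    """ReLU activation f(.)."""
    return np.maximum(0.0, x)

def conv2d(a, w, b):
    """Single-channel 'valid' convolution a * w + b, as in step (3a)."""
    H, W = a.shape
    k, _ = w.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + k, j:j + k] * w) + b
    return out

def max_pool(a, size=2):
    """Non-overlapping max pooling, as in step (3b)."""
    H, W = a.shape
    H2, W2 = H // size, W // size
    return a[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def softmax(z):
    """Output-layer softmax, as in step (4)."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
a1 = rng.random((8, 8))                            # toy input image, step (1)
a2 = relu(conv2d(a1, rng.random((3, 3)), 0.1))     # convolutional layer
a3 = max_pool(a2)                                  # pooling layer
Wfc, bfc = rng.random((2, a3.size)), rng.random(2)
aL = softmax(Wfc @ a3.ravel() + bfc)               # output layer: 2 classes (arcing / no arcing)
print(aL.sum())                                    # class probabilities sum to 1
```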
The specific implementation of the back propagation is as follows:
Input: a picture sample, the number of model layers L, and the type of each hidden layer. For a convolutional layer, define the number of convolution kernels filter_num, the kernel size kernel_size, the padding size pad, the step size stride, and the activation function; for a pooling layer, define the pooling size pool_size, the step size stride, and the pooling criterion (maximum or average pooling; maximum pooling in this embodiment); for a fully connected layer, define the number of neurons and the activation function of each layer; in addition, set the gradient iteration step size α (the learning rate) and the maximum number of iterations MAX_STEP.
Output: W and b of each hidden layer and of the output layer of the model.
The calculation steps are as follows:
(1) Compute the residual δ^L of the output layer from the loss function; this embodiment adopts the cross-entropy loss function
C = −(1/n) Σ_x Σ_k y_k ln a_k^L,
for which δ^L is calculated as follows:
δ^L = a^L − y,
where x is an input sample, n is the total number of samples, and y is the expected output.
(2) For l = L−1 to 2, compute the residual δ^l of the l-th layer by the back-propagation algorithm according to the following 3 cases:
(a) if the (l+1)-th layer is a fully connected layer: δ^l = (W^{l+1})^T δ^{l+1} ⊙ f′(z^l);
(b) if the (l+1)-th layer is a convolutional layer: δ^l = δ^{l+1} * rot180(W^{l+1}) ⊙ f′(z^l);
(c) if the (l+1)-th layer is a pooling layer: δ^l = upsample(δ^{l+1}) ⊙ f′(z^l);
where ⊙ denotes element-wise multiplication and rot180(·) rotates the kernel by 180 degrees.
(3) For l = 2 to L, update W^l and b^l of the l-th layer according to the following 2 cases:
(a) if the l-th layer is a fully connected layer:
W^l = W^l − α δ^l (a^{l−1})^T
b^l = b^l − α δ^l
(b) if the l-th layer is a convolutional layer, for each convolution kernel:
W^l = W^l − α δ^l * rot180(a^{l−1})
(4) If the maximum number of iterations MAX_STEP is reached, exit the iteration loop and go to step (5); otherwise continue iterating.
(5) Output the coefficient matrices W and bias vectors b of each hidden layer and of the output layer.
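The output-layer residual of step (1), δ^L = a^L − y for softmax with cross-entropy loss, can be verified numerically. The sketch below is illustrative only: it compares the analytic residual against a central-difference gradient of the loss with respect to z^L on random toy values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    """Cross-entropy loss for one sample: -sum_k y_k ln a_k^L."""
    return -np.sum(y * np.log(softmax(z)))

rng = np.random.default_rng(1)
z = rng.normal(size=5)           # toy output-layer pre-activation z^L
y = np.zeros(5)
y[2] = 1.0                       # one-hot expected output

delta_analytic = softmax(z) - y  # delta^L = a^L - y

# Numerical gradient of the loss w.r.t. z, for comparison
eps = 1e-6
delta_numeric = np.array([
    (cross_entropy(z + eps * np.eye(5)[k], y)
     - cross_entropy(z - eps * np.eye(5)[k], y)) / (2 * eps)
    for k in range(5)
])

print(np.max(np.abs(delta_analytic - delta_numeric)) < 1e-6)  # prints: True
```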
As shown in fig. 2, the convolutional neural network is an improved AlexNet network, and the structure of the improved AlexNet network sequentially comprises an input layer, a plurality of convolutional layers, a full-link layer and an output layer, which are sequentially connected; the convolution layers connected with the input layer adopt an M multiplied by M convolution kernel architecture, and the other convolution layers adopt 1 multiplied by M and M multiplied by 1 convolution kernel architectures.
Further, a first convolution layer Conv1, a first pooling layer, a second convolution layer Conv2, a second pooling layer, a third convolution layer Conv3, a fourth convolution layer Conv4, a fifth convolution layer Conv5, a third pooling layer, a first full-connection layer FC6 and a second full-connection layer FC7 are sequentially arranged between the input layer and the output layer of the improved AlexNet network; the second convolutional layer Conv2, the third convolutional layer Conv3, the fourth convolutional layer Conv4 and the fifth convolutional layer Conv5 respectively comprise convolutional layers Conv2_1, Conv2_2, Conv3_1, Conv3_2, Conv4_1, Conv4_2, Conv5_1 and Conv5_2 which are arranged in sequence.
The specific parameters of each layer are as follows:
Conv1: 6 convolution kernels of size 7 × 7 with stride 4, padding='SAME', ReLU activation; the output size is 56 × 56 × 6. The first pooling layer then applies 3 × 3 maximum pooling with stride 2, followed by Local Response Normalization, giving a 27 × 27 × 6 output.
Conv2_1: 16 convolution kernels of size 1 × 5 with stride 1, padding='SAME', ReLU activation; output 27 × 27 × 16.
Conv2_2: 16 convolution kernels of size 5 × 1 with stride 1, padding='SAME', ReLU activation; output 27 × 27 × 16. The second pooling layer applies 3 × 3 maximum pooling with stride 2, followed by Local Response Normalization, giving a 13 × 13 × 16 output.
Conv3_1: 32 convolution kernels of size 1 × 3 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 32.
Conv3_2: 32 convolution kernels of size 3 × 1 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 32.
Conv4_1: 16 convolution kernels of size 1 × 3 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 16.
Conv4_2: 16 convolution kernels of size 3 × 1 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 16.
Conv5_1: 6 convolution kernels of size 1 × 3 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 6.
Conv5_2: 6 convolution kernels of size 3 × 1 with stride 1, padding='SAME', ReLU activation; output 13 × 13 × 6. The third pooling layer applies 3 × 3 maximum pooling with stride 2 and padding='VALID', giving a 6 × 6 × 6 output.
FC6: a fully connected layer with 512 nodes and ReLU activation;
FC7: a fully connected layer with 256 nodes and ReLU activation;
softmax: the output layer with 2 nodes, using the softmax function.
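The spatial sizes quoted above follow from the standard output-size formulas for 'SAME' convolutions and 'VALID' pooling. A short check (illustrative, assuming TensorFlow-style padding semantics) reproduces the 224 → 56 → 27 → 13 → 6 progression:

```python
import math

def conv_same(size, stride):
    """Spatial output size of a convolution with padding='SAME'."""
    return math.ceil(size / stride)

def pool_valid(size, pool, stride):
    """Spatial output size of pooling with padding='VALID'."""
    return (size - pool) // stride + 1

s = 224                  # input: 224 x 224 x 1
s = conv_same(s, 4)      # Conv1, 7x7 stride 4          -> 56
s = pool_valid(s, 3, 2)  # first pooling, 3x3 stride 2  -> 27
s = conv_same(s, 1)      # Conv2_1 / Conv2_2, stride 1  -> 27
s = pool_valid(s, 3, 2)  # second pooling               -> 13
s = conv_same(s, 1)      # Conv3 .. Conv5 keep          -> 13
s = pool_valid(s, 3, 2)  # third pooling                -> 6
print(s)                 # prints: 6, matching the 6 x 6 x 6 input to FC6
```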
Compared with the original AlexNet network, the model is simplified: convolution kernels of sizes 1 × M and M × 1 connected in series replace the M × M convolution kernels of the original network, greatly reducing the number of parameters, and the added nonlinear layers make the network deeper; the number of convolution kernels in each layer and the number of nodes in the fully connected layers are also reduced.
Further, the input of the input layer is 224 pixels × 224 pixels × 1 channel.
Further, the number of nodes of the output layer is N, and N represents the total number of categories of the image.
Further, the output layer is a softmax classifier.
Further, 11 × 11 convolution kernels in the first convolution layer Conv1 of the original AlexNet network are replaced with 7 × 7 convolution kernels.
Furthermore, the first pooling layer, the second pooling layer and the third pooling layer adopt a maximum pooling mode.
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a ROM, a RAM, etc.
The above disclosure describes only preferred embodiments of the present invention; the scope of the invention is not limited thereto but is defined by the appended claims.

Claims (10)

1. An improved AlexNet-based arcing identification method is characterized by comprising the following steps:
s1, establishing a convolutional neural network and obtaining a training model;
s2, acquiring an image;
s3, inputting the obtained image into a training model for image recognition;
the convolutional neural network is an improved AlexNet network whose structure comprises, in order, an input layer, a plurality of sequentially connected convolutional layers, a fully connected layer, and an output layer; the convolutional layer connected to the input layer adopts an M × M convolution kernel architecture, and the other convolutional layers adopt 1 × M and M × 1 convolution kernel architectures.
2. The improved AlexNet-based arcing identification method according to claim 1, wherein: a first convolution layer Conv1, a first pooling layer, a second convolution layer Conv2, a second pooling layer, a third convolution layer Conv3, a fourth convolution layer Conv4, a fifth convolution layer Conv5, a third pooling layer, a first full-connection layer FC6 and a second full-connection layer FC7 are sequentially arranged between an input layer and an output layer of the improved AlexNet network, and the convolution kernel size of the first convolution layer Conv1 is M x M.
3. The improved AlexNet-based arcing identification method according to claim 2, characterized in that: the second convolution layer Conv2, the third convolution layer Conv3, the fourth convolution layer Conv4 and the fifth convolution layer Conv5 respectively comprise convolution layers Conv2_1, Conv2_2, Conv3_1, Conv3_2, Conv4_1, Conv4_2, Conv5_1 and Conv5_2 which are sequentially arranged, wherein convolution kernel sizes of the convolution layers Conv2_1, Conv3_1, Conv4_1 and Conv5_1 are 1 × M, and convolution kernel sizes of the convolution layers Conv2_2, Conv3_2, Conv4_2 and Conv5_2 are M × 1.
4. The improved AlexNet-based arcing identification method according to claim 1, wherein: the input layer accepts image input of size 224 x 224.
5. The improved AlexNet-based arcing identification method according to claim 1, wherein: the number of nodes of the output layer is N, which represents the total number of categories of the image.
6. The improved AlexNet-based arcing identification method according to claim 1, wherein: the output layer is the softmax classifier.
7. The improved AlexNet-based arcing identification method according to claim 2, characterized in that: the convolution kernel size of the first convolution layer Conv1 is 7 × 7.
8. The improved AlexNet-based arcing identification method according to claim 2, characterized in that: the first pooling layer, the second pooling layer and the third pooling layer adopt a maximum pooling mode.
9. The improved AlexNet-based arcing identification method according to claim 1, wherein said step S1 includes:
s11, establishing a convolutional neural network and initializing;
s12, collecting images and preprocessing them, wherein the preprocessing comprises cropping, compression, mean removal, and normalization;
s13, adding a label to each image, wherein the label information indicates whether the image contains arcing or not;
s14, dividing the image into a training sample and a verification sample;
s15, inputting a training sample to train the convolutional neural network, verifying through a verification sample, judging whether the loss value of the convolutional neural network converges to a stable value or reaches a set maximum iteration step number, if so, executing S16, otherwise, executing S15;
and S16, finishing training and outputting a training model.
10. The improved AlexNet-based arcing identification method according to claim 9, wherein step S15 is specifically implemented as follows: training samples are input into the convolutional neural network in batches to execute forward propagation; the output results are compared with the actual classes and loss values are calculated; if the loss value has not converged to a stable value and the set maximum number of iterations has not been reached, backward propagation is executed and the weights are updated; every P iterations, verification samples are input into the network for forward propagation to verify network performance.
CN201810109141.5A 2018-02-02 2018-02-02 Arcing identification method based on improved AlexNet Active CN108334843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810109141.5A CN108334843B (en) 2018-02-02 2018-02-02 Arcing identification method based on improved AlexNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810109141.5A CN108334843B (en) 2018-02-02 2018-02-02 Arcing identification method based on improved AlexNet

Publications (2)

Publication Number Publication Date
CN108334843A true CN108334843A (en) 2018-07-27
CN108334843B CN108334843B (en) 2022-03-25

Family

ID=62928009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810109141.5A Active CN108334843B (en) 2018-02-02 2018-02-02 Arcing identification method based on improved AlexNet

Country Status (1)

Country Link
CN (1) CN108334843B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284863A (en) * 2018-09-04 2019-01-29 南京理工大学 A kind of power equipment temperature predicting method based on deep neural network
CN109326299A (en) * 2018-11-14 2019-02-12 平安科技(深圳)有限公司 Sound enhancement method, device and storage medium based on full convolutional neural networks
CN109508627A (en) * 2018-09-21 2019-03-22 国网信息通信产业集团有限公司 The unmanned plane dynamic image identifying system and method for shared parameter CNN in a kind of layer
CN109740697A (en) * 2019-03-05 2019-05-10 重庆大学 Arena micro-image visible component recognition methods based on deep learning
CN109858495A (en) * 2019-01-16 2019-06-07 五邑大学 A kind of feature extracting method, device and its storage medium based on improvement convolution block
CN110197166A (en) * 2019-06-04 2019-09-03 西安建筑科技大学 A kind of car body loading condition identification device and method based on image recognition
CN110443801A (en) * 2019-08-23 2019-11-12 电子科技大学 A kind of salt dome recognition methods based on improvement AlexNet
CN110763958A (en) * 2019-09-23 2020-02-07 华为技术有限公司 Direct current arc detection method, device, equipment, system and storage medium
CN111461298A (en) * 2020-03-26 2020-07-28 广西电网有限责任公司电力科学研究院 Convolutional neural network and method for circuit breaker fault identification
CN111612733A (en) * 2020-04-02 2020-09-01 杭州电子科技大学 Convolutional neural network optimization method for medical image data analysis
CN111681215A (en) * 2020-05-29 2020-09-18 无锡赛睿科技有限公司 Convolutional neural network model training method, and workpiece defect detection method and device
CN111832581A (en) * 2020-09-21 2020-10-27 平安科技(深圳)有限公司 Lung feature recognition method and device, computer equipment and storage medium
CN111882535A (en) * 2020-07-21 2020-11-03 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN112906626A (en) * 2021-03-12 2021-06-04 李辉 Fault identification method based on artificial intelligence
CN113255729A (en) * 2021-04-27 2021-08-13 埃特曼(北京)半导体技术有限公司 Epitaxial layer growth state judgment method and device based on convolutional neural network
CN113343791A (en) * 2021-05-21 2021-09-03 浙江邦业科技股份有限公司 Kiln head fire-watching video brightness identification method and device based on convolutional neural network
CN113761983A (en) * 2020-06-05 2021-12-07 杭州海康威视数字技术股份有限公司 Method and device for updating human face living body detection model and image acquisition equipment
CN114580487A (en) * 2020-11-30 2022-06-03 深圳市瑞图生物技术有限公司 Chromosome recognition method, device, equipment and storage medium based on deep learning
CN115049583A (en) * 2022-04-08 2022-09-13 上海电气集团股份有限公司 Pantograph arcing detection method and system, electronic device, and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103357987A (en) * 2013-06-28 2013-10-23 广州中医药大学 Automatic stability detection method for the short-circuit transfer process in CO2 arc welding
CN104538222A (en) * 2014-12-27 2015-04-22 中国西电电气股份有限公司 High-voltage switch phase selection controller and method based on artificial neural network
US20170220575A1 (en) * 2016-01-28 2017-08-03 Shutterstock, Inc. Identification of synthetic examples for improving search rankings
CN106651830A (en) * 2016-09-28 2017-05-10 华南理工大学 Image quality test method based on parallel convolutional neural network
CN106778999A (en) * 2016-12-02 2017-05-31 武汉网码云防伪技术有限公司 Two-dimensional arc-line graphic code and product anti-counterfeiting method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALEX KRIZHEVSKY等: ""ImageNet Classification with Deep Convolutional Neural Networks"", 《COMMUNICATIONS OF THE ACM》 *
CHRISTIAN SZEGEDY等: ""Rethinking the Inception Architecture for Computer Vision"", 《ARXIV》 *
YANG HENG: ""Research on Machine Vision Based Detection Methods for Pantograph-Catenary Dynamic Performance of High-Speed Trains"", 《China Master's Theses Full-text Database, Engineering Science and Technology II》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284863A (en) * 2018-09-04 2019-01-29 南京理工大学 Power equipment temperature prediction method based on deep neural network
CN109508627A (en) * 2018-09-21 2019-03-22 国网信息通信产业集团有限公司 UAV dynamic image recognition system and method based on intra-layer parameter-sharing CNN
CN109326299A (en) * 2018-11-14 2019-02-12 平安科技(深圳)有限公司 Speech enhancement method, device and storage medium based on fully convolutional neural network
CN109326299B (en) * 2018-11-14 2023-04-25 平安科技(深圳)有限公司 Speech enhancement method, device and storage medium based on fully convolutional neural network
CN109858495A (en) * 2019-01-16 2019-06-07 五邑大学 Feature extraction method and device based on improved convolution block, and storage medium thereof
CN109858495B (en) * 2019-01-16 2023-09-22 五邑大学 Feature extraction method and device based on improved convolution block and storage medium thereof
CN109740697A (en) * 2019-03-05 2019-05-10 重庆大学 Urinary sediment microscopic image visible component identification method based on deep learning
CN109740697B (en) * 2019-03-05 2023-04-14 重庆大学 Urinary sediment microscopic image visible component identification method based on deep learning
CN110197166A (en) * 2019-06-04 2019-09-03 西安建筑科技大学 Vehicle body loading state recognition device and method based on image recognition
CN110197166B (en) * 2019-06-04 2022-09-09 西安建筑科技大学 Vehicle body loading state recognition device and method based on image recognition
CN110443801A (en) * 2019-08-23 2019-11-12 电子科技大学 Salt dome recognition method based on improved AlexNet
CN110763958A (en) * 2019-09-23 2020-02-07 华为技术有限公司 Direct current arc detection method, device, equipment, system and storage medium
CN111461298A (en) * 2020-03-26 2020-07-28 广西电网有限责任公司电力科学研究院 Convolutional neural network and method for circuit breaker fault identification
CN111612733A (en) * 2020-04-02 2020-09-01 杭州电子科技大学 Convolutional neural network optimization method for medical image data analysis
CN111681215A (en) * 2020-05-29 2020-09-18 无锡赛睿科技有限公司 Convolutional neural network model training method, and workpiece defect detection method and device
CN113761983A (en) * 2020-06-05 2021-12-07 杭州海康威视数字技术股份有限公司 Method and device for updating human face living body detection model and image acquisition equipment
CN113761983B (en) * 2020-06-05 2023-08-22 杭州海康威视数字技术股份有限公司 Method and device for updating human face living body detection model and image acquisition equipment
CN111882535A (en) * 2020-07-21 2020-11-03 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN111882535B (en) * 2020-07-21 2023-06-27 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN111832581B (en) * 2020-09-21 2021-01-29 平安科技(深圳)有限公司 Lung feature recognition method and device, computer equipment and storage medium
CN111832581A (en) * 2020-09-21 2020-10-27 平安科技(深圳)有限公司 Lung feature recognition method and device, computer equipment and storage medium
CN114580487A (en) * 2020-11-30 2022-06-03 深圳市瑞图生物技术有限公司 Chromosome recognition method, device, equipment and storage medium based on deep learning
CN112906626A (en) * 2021-03-12 2021-06-04 李辉 Fault identification method based on artificial intelligence
CN113255729A (en) * 2021-04-27 2021-08-13 埃特曼(北京)半导体技术有限公司 Epitaxial layer growth state judgment method and device based on convolutional neural network
CN113343791A (en) * 2021-05-21 2021-09-03 浙江邦业科技股份有限公司 Kiln head fire-watching video brightness identification method and device based on convolutional neural network
CN113343791B (en) * 2021-05-21 2023-06-16 浙江邦业科技股份有限公司 Kiln head fire-watching video brightness identification method and device based on convolutional neural network
CN115049583A (en) * 2022-04-08 2022-09-13 上海电气集团股份有限公司 Pantograph arcing detection method and system, electronic device, and storage medium

Also Published As

Publication number Publication date
CN108334843B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN108334843B (en) Arcing identification method based on improved AlexNet
CN108764471B (en) Neural network cross-layer pruning method based on feature redundancy analysis
Dai et al. Compressing neural networks using the variational information bottleneck
CN106934042B (en) Knowledge graph representation system and implementation method thereof
CN110083838B (en) Biomedical semantic relation extraction method based on multilayer neural network and external knowledge base
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN108062756A (en) Image semantic segmentation method based on deep fully convolutional network and conditional random field
CN113221687B (en) Training method of pressing plate state recognition model and pressing plate state recognition method
CN111651762A (en) Convolutional neural network-based PE (provider edge) malicious software detection method
CN112199536A (en) Cross-modality-based rapid multi-label image classification method and system
CN111400494B (en) Emotion analysis method based on GCN-Attention
CN111160096A (en) Method, device and system for identifying poultry egg abnormality, storage medium and electronic device
CN110210347B (en) Intelligent color jacket paper-cut design method based on deep learning
CN112263224B (en) Medical information processing method based on FPGA edge computing
CN116152554A (en) Knowledge-guided small sample image recognition system
CN116485557A (en) Credit risk fusion prediction method and system based on knowledge graph
CN111079930A (en) Method and device for determining quality parameters of data set and electronic equipment
CN114359638A (en) Residual capsule network classification model for images, classification method, device, and storage medium
CN113688989B (en) Deep learning network acceleration method, device, equipment and storage medium
CN112801153B (en) Semi-supervised image classification method and system embedding LBP (local binary pattern) features
CN114299495A (en) Small sample image classification method based on dimension self-adaption
CN114365155A (en) Efficient inference with fast pointwise convolution
CN113688783A (en) Face feature extraction method, low-resolution face recognition method and device
CN113807370A (en) Data processing method, device, equipment, storage medium and computer program product
CN116645727B (en) Behavior capture and recognition method based on the OpenPose model algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant