CN116958662A - Steel belt defect classification method based on convolutional neural network - Google Patents


Info

Publication number
CN116958662A
CN116958662A
Authority
CN
China
Prior art keywords
model
feature
neural network
convolutional neural
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310840894.4A
Other languages
Chinese (zh)
Inventor
边琳 (Bian Lin)
叶飞 (Ye Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dianchi College of Yunnan University
Original Assignee
Dianchi College of Yunnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dianchi College of Yunnan University filed Critical Dianchi College of Yunnan University
Priority to CN202310840894.4A priority Critical patent/CN116958662A/en
Publication of CN116958662A publication Critical patent/CN116958662A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30136 - Metal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a steel belt defect classification method based on a convolutional neural network, and belongs to the field of image recognition. By improving the convolutional neural network training model, the method achieves stronger robustness and generalization, shows clear advantages over traditional recognition models, and achieves the highest recognition rate even on data sets that are difficult for traditional recognition models.

Description

Steel belt defect classification method based on convolutional neural network
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a steel belt defect classification method based on a convolutional neural network.
Background
Strip steel is widely used in a range of high-precision manufacturing industries, such as the aerospace and automotive industries, so improving its production efficiency and product quality is of great importance.
Defect classification methods based on deep learning mainly extract defect image information with a convolutional neural network, train on it, and then use the trained model to recognize similar defects. Compared with traditional classification methods, they offer fast recognition and high accuracy and can handle multi-class defect tasks. In 2017, Khumaidi classified raw welding images captured directly from a network camera, using a neural network trained with convolution and gradient descent on the extracted features; a Gaussian kernel was used to blur the images, and this filtering minimizes interference and noise. The method reached a classification accuracy of 95.83%, but filtering complex and weak defects at low resolution causes features to disappear. In related work, a convolutional autoencoder (CAE) was retained after training as a feature extractor and fed into a softmax layer to form a new semi-supervised classifier, SGAN, further improving generalization; however, training a generative adversarial network is time-consuming, and the required data volume is difficult to obtain in practical scenarios.
Disclosure of Invention
By improving the convolutional neural network training model, the method provided by the invention achieves stronger robustness and generalization. Compared with traditional recognition models it has clear advantages, and it achieves the highest recognition rate even on data sets that are difficult for traditional recognition models.
In order to achieve the above purpose, the present invention adopts the following technical scheme. The defect classification method comprises the following steps:
training data set selection: the NEU-CLS data set is adopted, which comprises 1800 grayscale images covering 6 typical surface defect classes, each image with an original resolution of 200x200 pixels;
convolutional neural network training model improvement: the first part is a backbone network built from Xception, DenseNet and a weight redistribution network; the second part is a classifier composed of a pooling layer and a fully connected layer;
evaluation of the improved model's training results: evaluation indexes are established, the model's accuracy, precision and recall are compared with existing models, and the effect of the number of training iterations is compared.
Further, the convolutional neural network training model improvement comprises the following steps:
S1, designing a model;
S2, improving a feature extraction network;
S3, improving a weight redistribution network;
S4, selecting a loss function.
Furthermore, in the S1 model design, the model uses two different branch networks in the feature extraction part and obtains two sets of feature information through different receptive fields; to enhance the representation capability of the model, a weight redistribution network is added after each of the two branch networks.
Further, in S2 the feature extraction network is improved. A depthwise separable convolution applies a spatial convolution to each channel of the input data independently and then a point-by-point convolution to the result, while the structure used here lies between an ordinary convolution and a depthwise separable convolution: the first step separates the channels with a 1x1 convolution, the second step independently processes the spatial correlation of each output channel with a 3x3 convolution, and finally the channel information is fused.
Further, in S3 the weight redistribution network is improved; the model's weight redistribution network is divided into 3 parts:
(1) after obtaining multiple feature maps, each feature map is compressed with a global average pooling operation so that it finally becomes a single real number in an array; global average pooling makes each channel carry global information and allows correlations between channels to be established;
(2) a fully connected neural network applies a nonlinear transformation to the result of the first step; a weight is generated for each feature channel through parameters that are used to establish the correlations between feature channels;
(3) the result obtained in step (2) is used as a weight and multiplied onto the input features; the weights output in step (2) represent the importance of each feature channel after feature selection and are applied channel by channel to the previous features through multiplication, completing the recalibration of the original features in the channel dimension.
Further, in S4 the loss function is selected; the model uses a Softmax function to convert the classifier output into predicted values following a standard probability distribution. It maps the outputs of multiple neurons into the interval (0, 1) so that multi-classification tasks can be performed. Here $V$ is the classifier output array, $V_i$ and $V_j$ its i-th and j-th elements, and $S_i$ the Softmax output for the i-th class. The model uses the multi-class Cross-Entropy function to measure the loss against the correct class labels. The Softmax function is as follows:

$S_i = \frac{e^{V_i}}{\sum_{j} e^{V_j}}$

The Loss function is as follows:

$Loss = -\log x_k$

where x is the vector of classification probabilities predicted by the model; the j-th dimension of x, i.e. $x_j$, is the probability that the input image is predicted as class j, and k is the true class index of the input image.
The invention has the beneficial effects that:
the method provided by the invention has stronger robustness and generalization by improving the convolutional neural network training model. Compared with the traditional identification model, the method has obvious advantages. The highest recognition rate is also achieved on the training set where the traditional recognition model is difficult.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of the overall structure of the model of the present invention;
FIG. 3 is a diagram of the main structure of Xception;
FIG. 4 is a diagram of the Dense structure;
FIG. 5 is a diagram of the weight redistribution network;
FIG. 6 is a comparison of effect curves on the NEU-CLS dataset;
FIG. 7 is a comparison of effect curves on the BS4-CLS dataset.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Exemplary embodiments of the present invention are illustrated in the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
As shown in FIG. 1, the defect classification method comprises the following steps:
The training dataset chosen is the NEU-CLS dataset, which comprises 1800 grayscale images covering 6 typical surface defect classes; each image has an original resolution of 200x200 pixels.
NEU-CLS is the Northeastern University surface defect dataset and contains 6 typical hot-rolled strip surface defects: Rolled-in Scale, Patches, Crazing, Pitted Surface, Inclusion and Scratches; the dataset comprises 1800 grayscale images across these 6 defect classes. Table 1 lists the number of training and test samples for each of the 6 typical surface defects in NEU-CLS. The original resolution of each image is 200x200 pixels, and defects within a class vary considerably in appearance. In addition, the gray values of intra-class defect images may change under the influence of illumination and material variations.
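A minimal data-loading sketch for the setup just described is given below; the folder layout (one sub-directory per defect class under NEU-CLS/), the conversion to three channels, and the batch size are assumptions for illustration, not details taken from the patent.

```python
# Data-loading sketch (assumption: images are stored one folder per defect
# class, e.g. NEU-CLS/Crazing/*.bmp; the patent does not specify a layout).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # grayscale images replicated to 3 channels
    transforms.Resize((200, 200)),                 # original resolution 200x200
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("NEU-CLS", transform=transform)

# 80/20 train/test split, as used later in the description
n_train = int(0.8 * len(dataset))
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train])

train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=16)
```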
TABLE 1 NEU-CLS dataset
Table 2 shows an image dataset of steel strip surface defects collected in an actual plant, covering 4 types of surface defects: Control, Dark Spots, Scratch and Oxidation; the original resolution of each image is 256x256 pixels. The dataset includes 1113 RGB defect images with a training-to-test ratio of 8:2, and the numbers of training and test samples for the 4 typical surface defect types in BS4-CLS are listed accordingly. Because the images were acquired with fixed brightness and angle, the pixel-value variation of the defect images is less pronounced than in NEU-CLS.
TABLE 2 BS4-CLS dataset
The convolutional neural network training model is improved: the first part is a backbone network built from Xception, DenseNet and a weight redistribution network; the second part is a classifier composed of a pooling layer and a fully connected layer.
the convolutional neural network training model improvement comprises the following steps:
S1, designing a model;
the convolutional neural network acquires different characteristic information through parameter learning, and the multi-model fusion and weight distribution network is used for extracting defect characteristic information with more attributes so as to improve the accuracy of steel belt defect classification. The model uses two different branch networks in the feature extraction part to obtain two sets of feature information through different receptive fields. In order to enhance the representation capability of the model, a weight redistribution network is added after two branch networks respectively, so that the calculation efficiency is improved, and the convergence performance of the model is accelerated. And finally, classifying tasks are carried out by adding a pooling layer and a full-connection layer. The overall network structure of the model design is shown in fig. 2.
S2, improving a feature extraction network;
xception consists mainly of a residual network with depth separable convolutions. It contains 36 convolutional layers, divided into 14 blocks, all with linear residual connections around them except the first and last. These convolution layers constitute a feature extraction portion of the network, and the model refers to the feature of depth-separable convolution, and performs spatial layer-by-layer convolution on each channel of the input data independently using the structure shown in fig. 3, and then performs point-by-point convolution on the result. The structure of the method is between the common convolution and the depth separable convolution, the first step is to separate channels through 1X 1 convolution, the second step is to independently draw the spatial correlation of each output channel, the spatial correlation is independently processed through 3X 3 convolution, and finally the channel information fusion is carried out.
When the defect classification network is trained, convolution and downsampling shrink the feature maps, and the loss of feature information during deep convolutional propagation weakens the model's classification ability and hurts its performance. To address this insufficient feature transfer in convolutional networks, Gao Huang et al. proposed the DenseNet network, which contains a more aggressive dense connection mechanism than ResNet: each layer accepts the outputs of all preceding convolutional layers as additional input. Layers are connected in a feed-forward manner so that every layer receives the concatenated feature maps of all preceding layers as its input; the nonlinear transformation applied to these concatenated feature maps is a composite operation consisting of BN (Batch Normalization), ReLU, pooling and convolution. This enables DenseNet to reduce gradient vanishing, strengthen feature propagation, encourage feature reuse and reduce the number of parameters.
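A minimal dense-block sketch illustrating this concatenation-based feature reuse is given below; the growth rate and layer count are illustrative assumptions.

```python
# DenseNet-style dense block: each layer receives the concatenation of all
# preceding feature maps, applies BN -> ReLU -> Conv, and its output is
# concatenated back into the running feature stack.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.layer(x)], dim=1)   # feature reuse via concatenation

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth_rate=32, num_layers=4):
        super().__init__()
        layers = [DenseLayer(in_ch + i * growth_rate, growth_rate)
                  for i in range(num_layers)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)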
S3, improving a weight redistribution network;
the weight redistribution network trains the importance degree of each channel in the learning network, promotes important characteristics according to the importance degree and suppresses other characteristics. The characteristic response coefficients of the channels are adaptively adjusted by establishing the interdependence relationship among the channels, the correlation among the channels is learned, and the characterization capability of the network model is improved. The model's weight redistribution network is divided into 3 parts: (1) After obtaining a plurality of feature graphs, compressing each feature graph by adopting global average pooling operation to enable each feature graph to finally become a real number array, and adopting global average pooling operation to enable each channel to contain global information and establish a correlation relationship among channels. (2) A fully connected neural network is used to perform a nonlinear transformation on the result of the first step. Weights are generated for each of the characteristic channels by parameters that are used to establish correlations between the characteristic channels. (3) And (3) taking the result obtained in the step (2) as a weight, and multiplying the weight to the input feature. And 2, taking the weight output in the step as an importance parameter of each feature channel after feature selection, and then weighting the importance parameter to the previous feature channel by channel through multiplication to finish recalibration of the original feature in the channel dimension.
S4, selecting a loss function.
In S4, the loss function is selected. The model uses a Softmax function to convert the classifier output into predicted values following a standard probability distribution; it maps the outputs of multiple neurons into the interval (0, 1) so that multi-classification tasks can be performed. Here $V$ is the classifier output array, $V_i$ and $V_j$ its i-th and j-th elements, and $S_i$ the Softmax output for the i-th class. The model uses the multi-class Cross-Entropy function to measure the loss against the correct class labels. The Softmax function is as follows:

$S_i = \frac{e^{V_i}}{\sum_{j} e^{V_j}}$

The Loss function is as follows:

$Loss = -\log x_k$

where x is the vector of classification probabilities predicted by the model; the j-th dimension of x, i.e. $x_j$, is the probability that the input image is predicted as class j, and k is the true class index of the input image.
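A short worked example of these two formulas, with made-up logits for the 6 defect classes, is given below.

```python
# Numeric example of the Softmax and cross-entropy loss defined above
# (the logit values are made up for illustration).
import numpy as np

logits = np.array([2.0, 0.5, -1.0, 0.0, 1.0, -0.5])    # raw scores V_i for 6 classes
probs = np.exp(logits) / np.exp(logits).sum()           # S_i = e^{V_i} / sum_j e^{V_j}
true_class = 0                                          # index k of the correct label
loss = -np.log(probs[true_class])                       # Loss = -log x_k

print(probs.round(3))   # probabilities sum to 1
print(round(loss, 4))   # smaller when the correct class gets higher probability
```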
The training results of the improved model are evaluated: evaluation indexes are established, the model's accuracy, precision and recall are compared with existing models, and the effect of the number of training iterations is compared.
The evaluation indexes of the defect classification convolutional network model are accuracy, precision and recall. Table 3 describes the classification confusion matrix: TP is the number of defects that are actually positive and predicted by the model as positive; FN is the number of defects that are actually positive but predicted as negative; FP is the number of defects that are actually negative but predicted as positive; TN is the number of defects that are actually negative and predicted as negative.
TABLE 3 Classification confusion matrix description
Recall is defined as the proportion of samples that are actually positive and predicted as positive among all samples that are actually positive. Recall focuses on the error of classifying positive samples as negative, and its calculation formula is as follows:

$Recall = \frac{TP}{TP + FN}$

Precision is defined as the proportion of samples that are actually positive and predicted as positive among all samples predicted as positive. It focuses on the error of misclassifying negative samples as positive, and its calculation formula is as follows:

$Precision = \frac{TP}{TP + FP}$
model accuracy, recall contrast
TABLE 4 Comparison of model effect on the NEU-CLS dataset
Training was performed with 80% of the NEU-CLS data and testing with the remaining 20%. To further verify the effectiveness and superiority of the proposed model, it is compared on the test set with existing mainstream framework models, namely the VGG19, DenseNet, ResNet and Xception convolutional network models. To further verify the robustness of the framework, the experiments also use the four classes of defect images in the BS4-CLS dataset for classification accuracy testing.
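A minimal training and evaluation sketch for such a comparison is given below; it reuses the TwoBranchNet and the data loaders sketched earlier, and the optimizer, learning rate and epoch count are illustrative assumptions rather than values taken from the patent.

```python
# Training / evaluation sketch: train on the 80% split, report test accuracy.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TwoBranchNet(num_classes=6).to(device)
criterion = nn.CrossEntropyLoss()                 # multi-class cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(30):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# test-set accuracy, as reported in Tables 4 and 5
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print("accuracy:", correct / total)
```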
TABLE 5 Comparison of model effect on the BS4-CLS dataset
Table 5 shows that the proposed method performs best among the five models studied, reaching 75.52% accuracy, 76.17% precision and 66.22% recall on the defect classification task.
As shown in Tables 4 and 5, the four defect types of BS4-CLS are difficult to classify: surface defects are usually small and hard to collect, the features of different defect classes are similar while differing greatly within a class, and ordinary convolutional models cannot handle this well. The experimental results show that the proposed algorithm is clearly superior to the ordinary convolutional network models.
FIG. 6 shows the effect curves of the EDESPNet algorithm on the NEU-CLS dataset and FIG. 7 on the BS4-CLS dataset, plotting the trend of each index as the number of iterations increases. The horizontal axis is the number of iterations and the vertical axis is the Accuracy, Precision, Recall or Loss value. As the number of iterations increases, the Loss value gradually decreases while the other three evaluation metrics gradually increase and eventually stabilize. Because limited GPU memory forces a small batch size, the gradient direction during training is noisy, so the curves oscillate with a certain amplitude.
The BS4-CLS dataset comprises four types of defect images collected in an actual plant; the features of the different classes are similar, and because the images were collected at different times the degree of damage varies considerably within a class, so the detection accuracy of the model on the BS4-CLS dataset is lower. Even so, EDESPNet still achieves the best classification result on the BS4-CLS dataset compared with the other algorithms.
The experimental results show that the proposed algorithm achieves the best performance on defect datasets of different resolutions and different types, demonstrating that the model has strong robustness and generalization.

Claims (6)

1. A steel belt defect classification method based on a convolutional neural network, characterized in that the defect classification method comprises the following steps:
training data set selection: the NEU-CLS data set is adopted, which comprises 1800 grayscale images covering 6 typical surface defect classes, each image with an original resolution of 200x200 pixels;
convolutional neural network training model improvement: the first part is a backbone network built from Xception, DenseNet and a weight redistribution network; the second part is a classifier composed of a pooling layer and a fully connected layer;
evaluation of the improved model's training results: evaluation indexes are established, the model's accuracy, precision and recall are compared with existing models, and the effect of the number of training iterations is compared.
2. The method for classifying defects of steel strip based on convolutional neural network as set forth in claim 1, wherein: the convolutional neural network training model improvement comprises the following steps:
S1, designing a model;
S2, improving a feature extraction network;
S3, improving a weight redistribution network;
S4, selecting a loss function.
3. The method for classifying defects of steel strip based on convolutional neural network according to claim 2, wherein: in the S1 model design, the model uses two different branch networks in the feature extraction part and obtains two sets of feature information through different receptive fields; to enhance the representation capability of the model, a weight redistribution network is added after each of the two branch networks.
4. The method for classifying defects of steel strip based on convolutional neural network according to claim 2, wherein: in S2 the feature extraction network is improved; a depthwise separable convolution applies a spatial convolution to each channel of the input data independently and then a point-by-point convolution to the result, while the structure used here lies between an ordinary convolution and a depthwise separable convolution: the first step separates the channels with a 1x1 convolution, the second step independently processes the spatial correlation of each output channel with a 3x3 convolution, and finally the channel information is fused.
5. The method for classifying defects of steel strip based on convolutional neural network according to claim 2, wherein: in S3 the weight redistribution network is improved; the model's weight redistribution network is divided into 3 parts:
(1) after obtaining multiple feature maps, each feature map is compressed with a global average pooling operation so that it finally becomes a single real number in an array; global average pooling makes each channel carry global information and allows correlations between channels to be established;
(2) a fully connected neural network applies a nonlinear transformation to the result of the first step; a weight is generated for each feature channel through parameters that are used to establish correlations between feature channels;
(3) the result obtained in step (2) is used as a weight and multiplied onto the input features; the weights output in step (2) represent the importance of each feature channel after feature selection and are applied channel by channel to the previous features through multiplication, completing the recalibration of the original features in the channel dimension.
6. The method for classifying defects of steel strip based on convolutional neural network according to claim 2, wherein: in S4 the loss function is selected; the model uses a Softmax function to convert the classifier output into predicted values following a standard probability distribution; it maps the outputs of multiple neurons into the interval (0, 1) so that multi-classification tasks can be performed; here $V$ is the classifier output array, $V_i$ and $V_j$ its i-th and j-th elements, and $S_i$ the Softmax output for the i-th class; the model uses the multi-class Cross-Entropy function to measure the loss against the correct class labels;
the Softmax function is as follows:

$S_i = \frac{e^{V_i}}{\sum_{j} e^{V_j}}$

the Loss function is as follows:

$Loss = -\log x_k$

where x is the vector of classification probabilities predicted by the model; the j-th dimension of x, i.e. $x_j$, is the probability that the input image is predicted as class j, and k is the true class index of the input image.
CN202310840894.4A 2023-07-07 2023-07-07 Steel belt defect classification method based on convolutional neural network Pending CN116958662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310840894.4A CN116958662A (en) 2023-07-07 2023-07-07 Steel belt defect classification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310840894.4A CN116958662A (en) 2023-07-07 2023-07-07 Steel belt defect classification method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN116958662A true CN116958662A (en) 2023-10-27

Family

ID=88461259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310840894.4A Pending CN116958662A (en) 2023-07-07 2023-07-07 Steel belt defect classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN116958662A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117732886A (en) * 2024-02-07 2024-03-22 东北大学 Hot rolling quality pre-control method based on cascading intelligent diagnosis
CN117732886B (en) * 2024-02-07 2024-04-30 东北大学 Hot rolling quality pre-control method based on cascading intelligent diagnosis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination