CN112132086A - Multi-scale martensite microstructure aging and damage grading method
- Publication number: CN112132086A
- Application number: CN202011053202.4A
- Authority: CN (China)
- Prior art keywords: model, training, data set, pictures, microstructure
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/698: Microscopic objects: matching; classification
- G06F18/214: Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045: Neural networks: combinations of networks
- G06N3/084: Learning methods: backpropagation, e.g. using gradient descent
- G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T7/0008: Industrial image inspection checking presence/absence
- G06V20/695: Microscopic objects: preprocessing, e.g. image segmentation
- G06T2207/10056: Microscopic image
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30136: Metal
Abstract
The application discloses a multi-scale martensite microstructure aging and damage grading method comprising the following steps: acquiring microstructure pictures of a high-chromium martensitic heat-resistant steel with a metallographic microscope at a specified magnification to build a data set, and labeling each picture with a grade representing its degree of aging and damage; downscaling all pictures to build a multi-scale data set containing different resolutions; initializing a neural network with pre-trained model parameters and constructing a multi-scale metallographic grading model based on a residual neural network with a feature pyramid; presetting the learning-rate and iteration-count hyperparameters, adopting cross entropy as the loss function, and fine-tuning the constructed model with stochastic gradient descent; and acquiring the high-chromium martensitic heat-resistant steel microstructure pictures to be identified with metallographic microscopes of different resolutions, cropping several small pictures of fixed size from them, and grading them with the trained model. A model trained by the method can be extended to pictures of various resolutions.
Description
Technical Field
The invention relates to the field of aging and damage identification for the microstructure of high-chromium martensitic heat-resistant steel, and in particular to a method for automatically grading a metallographic structure.
Background
The microstructure characteristics of a steel are among the important factors determining its properties. Under the influence of environment, temperature, pressure and other factors, the microstructure of a steel component ages and accumulates damage to varying degrees during service, creating serious hidden dangers for safe production. How to detect the degree of aging and damage of steel scientifically and efficiently has therefore become an urgent problem in both theory and practice. In recent years, in thermal power generation, high-chromium martensitic heat-resistant steels represented by P91 and P92 have been widely used for key pressure-bearing components such as main steam pipelines and reheat hot-section pipelines of (ultra-)supercritical units, and the aging and damage of their microstructure have received increasing attention. Current practice for this problem is to take replica (film) images on site, or to cut tube samples and observe and analyse them under a metallographic microscope in the laboratory. However, this approach depends heavily on the professional skill and practical experience of the examiner and is highly subjective. At the same time, the high cost and poor repeatability of manual observation and judgement often lead to relatively large errors in the results.
With the arrival of the artificial-intelligence era, machine-learning algorithms represented by deep learning have made great progress in image analysis and recognition. More and more researchers are turning to deep learning in materials research, which makes a scientific and efficient automatic identification method for material microstructures possible. Some researchers have already explored applying machine-learning methods to material microstructures; for example, Azimi et al. used deep learning to recast picture-based defect detection as a semantic segmentation problem and applied semantic segmentation algorithms to detect defects in metallic materials.
At present, the use of deep learning in materials research is still at an early stage. Existing work rarely addresses the classification of material microstructure states, and research on the aging characteristics and damage degree of microstructures in metallographic pictures at different scales is lacking. In real production environments, the scale of metallographic pictures often varies considerably because of differences in acquisition equipment. Studies that only target metallographic data acquired at a fixed magnification in a laboratory environment therefore struggle to analyse such data effectively. Using deep learning to study the automatic grading of the aging characteristics and damage degree of high-chromium martensite microstructures in metallographic pictures of different scales thus has clear academic value and practical urgency.
Disclosure of Invention
The invention aims to solve the above problems in the prior art and provides an automatic grading method for the damage and aging of the microstructure of high-chromium martensitic heat-resistant steel based on a deep residual network, so as to improve the accuracy and efficiency of identifying and evaluating steel microstructure aging and damage.
The object of the invention is achieved by the following technical solution:
a martensite microstructure aging and damage grading method based on deep learning comprises the following steps:
step one, determining standard resolution, namely the magnification a of the microstructure of the steel to be identified, wherein a is more than 50 and less than 1000, and collecting the microstructure pictures of the steel material with the same specification and size obtained by a metallographic microscope under the magnification and the microstructure pictures of the steel material with smaller magnification and different resolution. After expert grading assessment of aging damage, the assessment levels are bound with pictures to construct a data set.
Step two, performing the same pretreatment on all the pictures collected in the step one, wherein the pretreatment method comprises the following steps:
1) removing the text description part contained in the microstructure diagram obtained by the metallographic microscope to obtain an initial data set only containing a microstructure diagram body, taking the acquired standard resolution data as an initial training data set T0, and taking the acquired data of non-standard multiple resolution parts as a verification data set V0;
2) converting the three-channel gray-scale image in the initial training data set T0 into a single-channel gray-scale image to obtain a training data set T1;
3) reducing the image data in the T1 by using a bilinear interpolation method at different magnifications to obtain a training data set T2;
4) for each picture in the training data set T2, a size nxnx1 image is randomly taken, where 1< m <100,100< n <800, and if the original picture is smaller than the target resolution, the original picture is padded to nxn size in a padding manner. A new training data set T3 is obtained.
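The following is a minimal preprocessing sketch for sub-steps 2) to 4) using torchvision. The crop size N, the number of crops M per picture and the scale factors are illustrative values chosen within the stated ranges; they are not values prescribed by the method.

```python
# Illustrative preprocessing for sub-steps 2)-4); N, M and SCALES are assumed values.
from PIL import Image
import torchvision.transforms as T
import torchvision.transforms.functional as TF

N = 224                              # assumed crop size n (100 < n < 800)
M = 8                                # assumed number of crops m per picture (1 < m < 100)
SCALES = [1.0, 0.75, 0.5, 0.25]      # assumed reduction factors for the multi-scale set T2

to_gray = T.Grayscale(num_output_channels=1)          # sub-step 2): three channels -> one
crop = T.Compose([
    T.RandomCrop(N, pad_if_needed=True, fill=0),      # sub-step 4): random n x n crop, pad if smaller
    T.ToTensor(),                                     # 1 x n x n tensor
])

def build_multiscale_crops(img: Image.Image):
    """Return M random n x n crops of one metallographic picture at each scale."""
    gray = to_gray(img)
    w, h = gray.size
    samples = []
    for s in SCALES:                                  # sub-step 3): bilinear down-scaling
        scaled = TF.resize(gray, [max(1, int(h * s)), max(1, int(w * s))],
                           interpolation=T.InterpolationMode.BILINEAR)
        samples.extend(crop(scaled) for _ in range(M))
    return samples
```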
Step three: build a ResNet-FPN-MC model with the deep-learning framework PyTorch, where ResNet denotes a convolutional-neural-network architecture, the residual neural network; FPN denotes the Feature Pyramid Network; and MC (Martensite Classification) denotes martensite grading. The network input is an n×n×1 picture and the output is a 5-dimensional vector, each dimension representing the probability that the material belongs to the corresponding grade.
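Purely as an illustration, a ResNet-FPN classifier in the spirit of the ResNet-FPN-MC model could be assembled in PyTorch as sketched below (assuming a recent torchvision). The single-channel stem, the choice of pyramid level fed to the head and the 224×224 input size are assumptions; the channel and head sizes follow the figures given later in the claims.

```python
# Sketch of a ResNet-FPN grading model; details marked "assumed" are illustrative.
import torch.nn as nn
import torchvision
from torchvision.ops import FeaturePyramidNetwork

class ResNetFPNMC(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # pre-trained ResNet-50
        # Single-channel (grayscale) input instead of RGB (assumed adaptation).
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.fpn = FeaturePyramidNetwork([256, 512, 1024, 2048], out_channels=256)
        # Output head; 256 * 7 * 7 = 12544 for an assumed 224 x 224 input.
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(256 * 7 * 7, 1024), nn.ReLU(),
                                  nn.Linear(1024, 1024), nn.ReLU(),
                                  nn.Linear(1024, num_classes))

    def forward(self, x):
        x = self.stem(x)
        feats = {}
        for i, stage in enumerate(self.stages):
            x = stage(x)
            feats[f"c{i + 2}"] = x               # C2..C5 backbone feature maps
        pyramid = self.fpn(feats)                # P2..P5, each with 256 channels
        return self.head(pyramid["c5"])          # scores for the 5 aging/damage grades
```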
Step four: train the model on the training data set T3 using cross entropy as the loss function (no regularization term is added) and the stochastic gradient descent method (the batch size of the stochastic gradient descent algorithm is set to 24, 30 training epochs are preset, and the model parameters of every epoch are kept).
The specific training sub-steps are as follows (a minimal training-loop sketch is given after these sub-steps):
1) Initialize the parameters of the current model's backbone network with a pre-trained ResNet.
2) Randomly shuffle the order of the pictures in the training set.
3) Feed the neural network one batch of pictures at a time (a batch may contain 16, 32 or 64 pictures), record the output vectors of the neural network, compute the value of the cross entropy from the network output vectors and the class labels of the data, then back-propagate according to the value of the cross entropy and update the model parameters; once the loss function has been computed and back-propagation completed for every picture in the training data set T3, one epoch of training is recorded (an epoch means that every picture has passed forward through the network once).
4) Fine-tune the model (i.e. update the parameters of the pre-trained model with a smaller learning rate).
5) Save the model parameters of each epoch and check whether the maximum number of epochs has been reached; if so, go to sub-step 6), otherwise return to sub-step 3) and continue training.
6) Take out the model that achieved the highest accuracy on the validation set during training.
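A minimal sketch of the training loop of step four is given below. ResNetFPNMC is the model sketched earlier; train_loader and val_loader are assumed PyTorch DataLoaders over the T3 crops and the V0 validation pictures, and the learning rate is an illustrative value.

```python
# Minimal training-loop sketch for step four; hyperparameters follow the text where stated.
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=30, lr=1e-3, device="cuda"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()                       # cross entropy, no regularization term
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # stochastic gradient descent
    best_acc, best_state = 0.0, None
    for epoch in range(epochs):                             # one epoch = one pass over T3
        model.train()
        for images, labels in train_loader:                 # shuffled batches of crops
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                                  # back-propagate the cross entropy
            optimizer.step()                                 # update the model parameters
        # Keep the parameters of the epoch with the best validation accuracy.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        acc = correct / max(total, 1)
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model, best_acc
```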
Step five: judge whether the resolution of the high-chromium martensitic heat-resistant steel microstructure picture to be identified lies within the range the model can handle; if so, preprocess it according to sub-steps 1) and 2) of step two and then grade it automatically with the model trained in step four.
At present, martensite aging and damage grading relies on experts manually observing sample images and judging the damage grade. Such manual judgement depends on the professional knowledge and practical experience of the technician; samples lying between adjacent aging and damage grades may be judged ambiguously by technicians of different skill levels, so manual grading is inefficient and error-prone. Existing neural-network grading methods are usually trained only on picture data of one specific resolution and cannot easily be extended to pictures of different resolutions in real applications. To address these problems of martensite aging and damage grading, the invention combines the advantages of deep learning and provides an automatic identification method for the microstructure of high-chromium martensitic heat-resistant steel based on a residual neural network and a feature pyramid network, which not only improves identification accuracy but also extends readily to pictures of different resolutions.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a standard architecture of an 18-layer ResNet disclosed in an embodiment of the present application;
FIG. 2 is an example of training data at different scales disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method uses metallographic pictures of P91 and P92 materials with different degrees of aging, collected in the laboratory, as the training set and constructs a multi-scale data set with data augmentation. On this basis, a deep residual network is used to classify the degree of aging and damage of the material microstructure, which strengthens the model's recognition of pictures at different scales. Experimental results show that the method improves recognition across scales, with an identification accuracy of up to 60% on the multi-scale test set. This further advances the use of deep learning to study the aging and damage characteristics of material microstructures in complex, real production environments, makes it possible to build steel metallographic data sets oriented towards deep learning and to tackle microstructure characterization, and lays a data foundation for the automatic analysis of material microstructures.
Deep residual network
The deep residual network (ResNet) is a convolutional-neural-network architecture. By introducing residual connections, it effectively speeds up the training of deep models and also improves their performance [11]. On this basis, the ResNet50 model is adopted here as the base architecture for the picture-grading study; it serves both as the baseline grading model and as the backbone of the FPN.
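For illustration, the residual connection at the heart of ResNet can be sketched as a bottleneck block like the one below; torchvision's ResNet-50 is built from stacks of such blocks, and the channel arguments here are illustrative.

```python
# Minimal bottleneck residual block; the skip connection is what makes it "residual".
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut when input and output shapes differ.
        self.shortcut = (nn.Identity() if in_ch == out_ch and stride == 1 else
                         nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                       nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.branch(x) + self.shortcut(x))  # residual (skip) connection
```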
Feature pyramid network
In object detection in computer vision, the targets in the data often vary greatly in size. To address this, Tsung-Yi Lin et al. proposed the Feature Pyramid Network (FPN) in 2017. By extracting and fusing multi-scale feature information, the FPN improves detection accuracy, with an especially noticeable effect on small objects. The FPN is an add-on to generic feature-extraction networks such as ResNet or DenseNet and can be combined with these classical networks to improve them. Its key contribution is a top-down pathway with lateral connections that merges the low-resolution, semantically strong high-level features with the high-resolution, semantically weak low-level features, so that the features at every scale carry rich semantic information; the structure of the FPN is shown in FIG. 1.
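The top-down pathway with lateral connections described above can be sketched schematically as follows. The channel sizes mirror those stated in the claims, while the nearest-neighbour upsampling follows the original FPN design rather than anything specified in this document.

```python
# Schematic FPN: 1x1 lateral convs, top-down upsampling and addition, 3x3 smoothing convs.
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        laterals = [l(c) for l, c in zip(self.lateral, (c2, c3, c4, c5))]
        # Top-down: upsample the coarser, semantically richer map and add the lateral map.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(p) for s, p in zip(self.smooth, laterals)]   # P2..P5
```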
Example 1
This embodiment is an automatic grading method for martensite damage and aging based on a deep residual network; it builds the convolutional-neural-network model with the deep-learning framework PyTorch and preprocesses the images with the image-processing library torchvision.
In the following, 315 microstructure pictures of steel collected in the laboratory at 500× magnification are taken as an example to illustrate automatic recognition of the steel microstructure by deep learning.
The application discloses an automatic grading method for martensite damage and aging based on deep learning, which specifically comprises the following steps:
step one, determining standard resolution, namely the magnification a of the microstructure of the steel to be identified, wherein the magnification a is more than 50 and less than 1000, collecting the microstructure pictures of the steel material with the same specification and size obtained by a metallographic microscope under the magnification, and the microstructure pictures of the steel material with smaller magnification and different resolution, wherein the pictures are shown in figure 2. After expert grading assessment of aging damage, the assessment levels are bound with pictures to construct a data set.
Step two, performing the same pretreatment on all the pictures collected in the step one, wherein the pretreatment method comprises the following steps:
1) removing the text description part contained in the microstructure diagram obtained by the metallographic microscope to obtain an initial data set only containing a microstructure diagram body, taking the acquired standard resolution data as an initial training data set T0, and taking the acquired data of non-standard multiple resolution parts as a verification data set V0;
2) converting the three-channel gray-scale image in the initial training data set T0 into a single-channel gray-scale image to obtain a training data set T1;
3) reducing the image data in the T1 by using a bilinear interpolation method at different magnifications to obtain a training data set T2;
4) for each picture in the training data set T2, a size nxnx1 image is randomly taken, where 1< m <100,100< n <800, and if the original picture is smaller than the target resolution, the original picture is padded to nxn size in a padding manner. A new training data set T3 is obtained.
Step three: build a ResNet-FPN-MC model with the deep-learning framework PyTorch, where ResNet denotes a convolutional-neural-network architecture, the residual neural network; FPN denotes the Feature Pyramid Network; and MC denotes Martensite Classification. The network input is an n×n×1 picture and the output is a 5-dimensional vector, each dimension representing the probability that the material belongs to the corresponding grade.
Step four: train the model on the training data set T3 using cross entropy as the loss function (no regularization term is added) and the stochastic gradient descent method (the batch size of the stochastic gradient descent algorithm is set to 24, 30 training epochs are preset, and the model parameters of every epoch are kept).
The specific training sub-steps are as follows:
1) Initialize the parameters of the current model's backbone network with a pre-trained ResNet.
2) Randomly shuffle the order of the pictures in the training set.
3) Feed the neural network one batch of pictures at a time (a batch may contain 16, 32 or 64 pictures), record the output vectors of the neural network, compute the value of the cross entropy from the network output vectors and the class labels of the data, then back-propagate according to the value of the cross entropy and update the model parameters; once the loss function has been computed and back-propagation completed for every picture in the training data set T3, one epoch of training is recorded (an epoch means that every picture has passed forward through the network once).
4) Fine-tune the model (i.e. update the parameters of the pre-trained model with a smaller learning rate).
5) Save the model parameters of each epoch and check whether the maximum number of epochs has been reached; if so, go to sub-step 6), otherwise return to sub-step 3) and continue training.
6) Take out the model that achieved the highest accuracy on the validation set during training.
The accuracy and recall for the various steel microstructure images in the validation data set are computed under Best_Model; as can be seen from Table 1, the accuracy and recall reach 100% for all steel microstructure images.
Table 1: accuracy of the martensitic microstructure images at different scales under Best_Model
It can therefore be seen that the automatic grading method for martensite damage and aging based on the deep residual network grades martensite microstructure images automatically, and a model trained on the original single-resolution data set reaches a certain accuracy at several extended scales.
Step five: judge whether the resolution of the high-chromium martensitic heat-resistant steel microstructure picture to be identified lies within the range the model can handle; if so, preprocess it according to sub-steps 1) and 2) of step two and then grade it automatically with the model trained in step four (an inference sketch is given below).
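A sketch of step five as applied in this embodiment is given below: check that the magnification lies in the supported range, preprocess the picture, crop several sub-images and grade them with the trained model. Aggregating crop-level predictions by majority vote, and the crop count M, are assumptions made for illustration.

```python
# Illustrative inference for step five; majority vote over M random crops is an assumption.
from collections import Counter
import torch
import torchvision.transforms as T

N, M = 224, 8   # assumed crop size and number of crops, as in the preprocessing sketch
prep = T.Compose([T.Grayscale(1), T.RandomCrop(N, pad_if_needed=True), T.ToTensor()])

@torch.no_grad()
def grade_micrograph(model, img, magnification, device="cpu"):
    """Grade one metallographic picture (PIL image) with a trained ResNet-FPN-MC model."""
    if not (50 < magnification < 1000):
        raise ValueError("magnification outside the range covered by the training data")
    model.eval().to(device)
    batch = torch.stack([prep(img) for _ in range(M)]).to(device)
    preds = model(batch).argmax(dim=1).tolist()      # one aging/damage grade index per crop
    return Counter(preds).most_common(1)[0][0] + 1   # grades reported as 1..5 (assumed)
```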
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. A multi-scale martensite microstructure aging and damage grading method is characterized in that:
the method comprises the following steps:
step one, determining the standard resolution, namely the magnification a at which the steel microstructure is to be identified, wherein 50 < a < 1000; collecting steel microstructure pictures of the same specification and size obtained with a metallographic microscope at this magnification, together with steel microstructure pictures at lower magnifications and different resolutions; and, after the aging damage is graded by experts, binding the assessed grade to each picture to construct a data set;
step two, applying the same preprocessing to all pictures collected in step one, the preprocessing comprising the following steps:
1) removing the text annotation area from each microstructure image obtained with the metallographic microscope to obtain an initial data set containing only the microstructure itself, taking the data acquired at the standard resolution as the initial training data set T0 and the data acquired at non-standard resolutions as the validation data set V0;
2) converting the three-channel images in the initial training data set T0 into single-channel grayscale images to obtain a training data set T1;
3) downscaling the images in T1 by several factors using bilinear interpolation to obtain a training data set T2;
4) randomly cropping, from each picture in the training data set T2, m sub-images of size n×n×1, wherein 1 < m < 100 and 100 < n < 800, and, if the original picture is smaller than the target size, padding it to n×n, thereby obtaining a new training data set T3;
step three, constructing a ResNet-FPN-MC model with the deep-learning framework PyTorch, wherein ResNet denotes a convolutional-neural-network architecture, the residual neural network, FPN denotes the Feature Pyramid Network, and MC denotes Martensite Classification; the network input picture size is n×n×1 and the output is a 5-dimensional vector, each dimension representing the probability that the material belongs to the corresponding grade;
step four, training the model on the training data set T3 with the stochastic gradient descent method, using cross entropy as the loss function;
the specific training steps being as follows:
1) initializing the parameters of the current model's backbone network with a pre-trained ResNet;
2) randomly shuffling the order of the pictures in the training set;
3) feeding the neural network one batch of pictures at a time, recording the output vectors of the neural network, computing the value of the cross entropy from the network output vectors and the class labels of the data, then back-propagating according to the value of the cross entropy and updating the model parameters; once the loss function has been computed and back-propagation completed for every picture in the training data set T3, recording one epoch of training;
4) fine-tuning the model;
5) saving the model parameters of each epoch while judging whether the maximum number of epochs has been reached; if so, executing sub-step 6), and if not, returning to sub-step 3) to continue training;
6) taking out the model with the highest accuracy on the validation set during training;
and step five, judging whether the resolution of the high-chromium martensitic heat-resistant steel microstructure picture to be identified lies within the range that can be judged; if so, preprocessing it according to sub-steps 1) and 2) of step two and then grading it automatically with the model trained in step four.
2. The deep-learning-based martensite microstructure aging and damage grading method as claimed in claim 1, wherein in sub-step 3) of step two, all images in the training data set T1 are downscaled by interpolation to construct a multi-scale training set containing data of different resolutions.
3. The method of claim 1, wherein in step three, a ResNet-FPN-MC model is constructed, the model having a structure comprising an input layer, four convolution modules, an FPN network, and an output layer.
4. The method of claim 3, wherein in the third step, the input/output channels of the four convolution modules are (64, 256), (128, 512), (256, 1024), (512, 2048), and the four convolution modules are stacked with 3, 4, 6, and 3 residual blocks.
5. The method of claim 3, wherein in step three, the FPN network comprises four 1×1 convolution kernels with input and output channels (256, 256), (512, 256), (1024, 256), (2048, 256), and each 1×1 convolution kernel is followed by a 3×3 convolution kernel with input and output channels (256, 256).
6. The method of claim 3, wherein in step three, the output layer is composed of two parts: the first part has two multi-layer perceptron layers with input and output dimensions (12544, 1024) and (1024, 1024); the second part has only one multi-layer perceptron layer with input and output dimensions (1024, 5).
7. The method according to claim 1, wherein in step four, the number of training epochs is preset to 10-100; cross entropy is adopted as the loss function without adding a regularization term; and when the stochastic gradient descent method is applied to train the model, the model parameters of every epoch are kept.
8. The method as claimed in claim 1, wherein in sub-step 3) of step four, batches of 16, 32 or 64 pictures are input to the ResNet-FPN-MC model in sequence.
9. The method of claim 1, wherein the model is fine-tuned with a smaller learning rate on the pre-trained ResNet-FPN model to obtain the model that performs best on the current data set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011053202.4A CN112132086A (en) | 2020-09-29 | 2020-09-29 | Multi-scale martensite microstructure aging and damage grading method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011053202.4A CN112132086A (en) | 2020-09-29 | 2020-09-29 | Multi-scale martensite microstructure aging and damage grading method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112132086A true CN112132086A (en) | 2020-12-25 |
Family
ID=73843133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011053202.4A Pending CN112132086A (en) | 2020-09-29 | 2020-09-29 | Multi-scale martensite microstructure aging and damage grading method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112132086A (en) |
- 2020-09-29: application CN202011053202.4A filed in China (CN); published as CN112132086A; status: pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198620A (en) * | 2018-01-12 | 2018-06-22 | 洛阳飞来石软件开发有限公司 | A kind of skin disease intelligent auxiliary diagnosis system based on deep learning |
US20190287292A1 (en) * | 2018-03-15 | 2019-09-19 | Siemens Healthcare Gmbh | Deep reinforcement learning for recursive segmentation |
CN110619355A (en) * | 2019-08-28 | 2019-12-27 | 武汉科技大学 | Automatic steel material microstructure identification method based on deep learning |
CN111063023A (en) * | 2019-12-02 | 2020-04-24 | 西南石油大学 | Skull defect reconstruction method based on three-dimensional convolutional neural network |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112861665A (en) * | 2021-01-25 | 2021-05-28 | 中国石油天然气集团有限公司 | Oil casing heat treatment tissue inspection method based on deep learning |
CN112862763A (en) * | 2021-01-25 | 2021-05-28 | 中国石油天然气集团有限公司 | System and method for detecting heat treatment state of high-steel-grade thick-wall pipe fitting based on deep learning |
CN114324361A (en) * | 2021-12-31 | 2022-04-12 | 大连理工大学 | Metallographic structure degradation rating method and device |
CN114324361B (en) * | 2021-12-31 | 2024-03-15 | 大连理工大学 | Metallographic structure degradation rating method and device |
CN114663699A (en) * | 2022-03-08 | 2022-06-24 | 中南大学湘雅医院 | Method for identifying wound injured tissue type and predicting wound healing time with high precision |
CN114663699B (en) * | 2022-03-08 | 2024-08-06 | 中南大学湘雅医院 | Method for identifying wound damaged tissue type and predicting wound healing time with high precision |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |