CN115114967A - Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network - Google Patents

Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network

Info

Publication number
CN115114967A
Authority
CN
China
Prior art keywords
nodes
neural network
node
graph
data set
Prior art date
Legal status
Pending
Application number
CN202010995589.9A
Other languages
Chinese (zh)
Inventor
李维刚
甘平
谌竟成
谢璐
Current Assignee
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010995589.9A priority Critical patent/CN115114967A/en
Publication of CN115114967A publication Critical patent/CN115114967A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a steel microstructure automatic classification method integrating a self-organizing incremental learning neural network and a graph convolutional neural network. First, the types of steel microstructure to be identified are determined, a number of steel microstructure pictures obtained by a scanning electron microscope are collected to form a data set, and a category label is determined for each picture in the data set; second, transfer learning is used to obtain the feature vector set of the image sample data; third, the feature data are learned by a self-organizing incremental learning neural network with a connection weight strategy (WSOINN) to obtain their topological graph structure, and node win counts are introduced so that only a small number of nodes need to be labeled manually; fourth, a multi-layer graph convolutional neural network (GCN) is constructed to mine the latent connections between nodes in the graph, Dropout is used to improve the generalization ability of the network, and finally the remaining nodes are labeled automatically to obtain the classification result of the pictures. With this method, when the amount of image labeling is only 12% of that of a traditional deep learning network model, the classification accuracy can reach more than 91%, higher than that of network models such as VGG, MLP and SOINN.

Description

Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network
Technical Field
The invention belongs to the technical field of steel microstructure image classification, relates to a steel microstructure automatic classification method based on deep learning, and particularly relates to a steel microstructure automatic classification method integrating a self-organizing incremental learning neural network and a graph convolution neural network.
Background
Steel is still one of the most important and most widely used materials owing to its excellent mechanical properties and low cost. Steel has rich and varied microstructures, including ferrite, pearlite, bainite, martensite, austenite and the like, and characteristics such as the type, content, size, morphology and distribution of the microstructure determine the properties of the material, so research on the microstructure of steel is of great significance.
Conventionally, identification of the steel microstructure is completed manually; it depends heavily on personal professional experience, and even an experienced expert may make analysis errors because of image details that cannot be seen with the naked eye. Modern steel grades are increasingly numerous and their internal microstructures increasingly complex, so manual identification faces a huge challenge. With the further development of artificial intelligence, researchers at home and abroad have begun to use deep learning for automatic identification of steel microstructure images. Current research needs to divide the collected image data into a training set and a test set, and in the face of a huge data volume the workload of manually labeling the training set is enormous, which makes practical application difficult. These problems are also common in other research fields. Therefore, an efficient algorithm that can reduce the workload of manual data labeling has important theoretical and practical value for research on steel microstructures and on image data in other fields.
The concept of deep learning originates from research on artificial neural networks; it is a branch of machine learning and an algorithm that performs representation learning on data with artificial neural networks as its architecture. Research shows that deep learning can outperform humans on certain specific image recognition tasks, mainly because a deep learning model combines low-level features into more abstract high-level attribute categories or features, thereby discovering distributed feature representations of the data, and because of its strong noise resistance, complex function expression and generalization ability. Deep learning is applied not only to image recognition but also to image generation, machine translation, object detection, robotics and many other fields.
At present, for image data classification problems, the division into a training set and a test set generally cannot be avoided, and the latent correlation information between images cannot be fully exploited. In other fields, however, complex data relationships have long been expressed with a graph G = (V, E), which is extremely abstract and flexible, where V is the set of nodes and E is the set of connection relationships between nodes. Its powerful ability to process data has become a focus of research attention, and its wide applications include social networks, traffic networks, bioinformatics, knowledge graphs, distributed computing, multi-agent swarms and the like. The graph convolutional neural network (GCN) is a popular and effective method for processing graph data; it carries the process by which a CNN extracts features from two-dimensional images over to graph structures, so as to mine and analyze massive, sparse, high-dimensional associated data. It can process irregular data with a spatial topological graph structure and deeply explore its characteristics and regularities. The topological graph structure can be obtained through topology learning, which effectively describes the spatial distribution of the data, and various topology models have been proposed for data analysis. The Self-Organizing Map (SOM), Neural Gas (NG) and Topology Representing Network (TRN) randomly distribute a certain number of neurons in the data space, initialize and connect them, and input the data samples one by one for training based on a competitive learning mechanism. On the basis of NG, the Growing Neural Gas (GNG) was proposed; its neurons grow dynamically with the input data, so it is more dynamic than NG but less stable. The Self-Organizing Incremental Learning Neural Network (SOINN) enhances the plasticity of the network on the basis of SOM and GNG, while adding operations such as noise node deletion and adaptive threshold adjustment to stabilize the learning result.
With the method of the invention, when the labeling rate is only 30% the accuracy can reach 91%, the time consumed by model computation is greatly reduced, and the degree of manual labeling is lowered while high classification accuracy is guaranteed, saving labor and achieving fast classification.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a steel microstructure automatic classification method based on a self-organization increment-graph convolution neural network so as to improve the classification precision of the steel microstructure and reduce the workload of manual labeling.
The purpose of the invention is realized by the following technical scheme:
an automatic classification method for steel microstructures by fusing a self-organizing incremental learning neural network and a graph convolution neural network comprises the following steps:
step one, determining the type of a microstructure of steel to be identified, collecting microstructure pictures of different steel material samples shot by a scanning electron microscope, and forming a data set by sequentially using ferrite, pearlite, bainite, lower bainite, lath martensite and sheet martensite.
Step two, carrying out the same pretreatment on all the pictures collected in the step one, wherein the pretreatment method comprises the following steps:
1) removing the text description part contained in the microstructure image acquired by the scanning electron microscope to obtain an initial data set T0 only containing the microstructure image body;
2) equally dividing and cutting each image in the data set T0 according to equal step length to obtain a new data set T1;
3) and carrying out image normalization and then standardization processing on all images in the data set T1 to obtain a data set T2.
Step three, removing the fully connected layers from a VGG16 model pre-trained on the ImageNet data set, processing all images in the data set T2 with the remaining convolution modules, and performing global mean pooling on the 512 feature maps output for each image to obtain a 512-dimensional feature vector, thereby obtaining the feature vector data set Ft = {F1, F2, …, FM} of all steel microstructure images, where Fi is the i-th feature vector and M is the number of images.
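A minimal sketch of this feature-extraction step is given below, assuming the Keras/TensorFlow environment used in the embodiment; the function name extract_features and the batch size are illustrative assumptions rather than part of the invention.

```python
# Sketch of step three: VGG16 convolution modules (pre-trained on ImageNet, fully
# connected layers removed) followed by global mean pooling to get 512-dim vectors.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet", include_top=False)               # drop the fully connected layers
extractor = Model(inputs=base.input,
                  outputs=GlobalAveragePooling2D()(base.output))  # 512 feature maps -> 512-dim vector

def extract_features(images):
    """images: float array of shape (M, H, W, 3); returns Ft of shape (M, 512)."""
    return extractor.predict(preprocess_input(images), batch_size=32)
```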
Step four, building a self-organizing incremental learning neural network (WSOINN) that introduces connection weights and node win counts, and using the WSOINN to learn the feature vector data set to obtain its topological graph structure; the specific algorithm steps are as follows (a Python sketch of this learning loop is given after the list):
1) initializing the graph node set V = {v1, v2}, v1, v2 ∈ R^d; the node connection set E = ∅, an empty set; and the node win counts t_v1 = t_v2 = 0, where R^d denotes the d-dimensional vector space, N is the number of nodes, and t_v1, t_v2 denote the respective win counts of nodes v1 and v2;
2) inputting a new feature vector sample ξ ∈ R^d and searching V for the nearest node s1 and the second-nearest node s2 to ξ under the Euclidean norm, namely s1 = arg min ||ξ - v_n|| over v_n ∈ V and s2 = arg min ||ξ - v_n|| over v_n ∈ V\{s1}; the win count of s1 is increased by 1, i.e. t_s1 = t_s1 + 1, where arg min denotes the value of v_n at which ||ξ - v_n|| takes its minimum;
3) computing the similarity thresholds T_s1 and T_s2 of nodes s1 and s2: for any node v ∈ V, the set of nodes connected to v is denoted P; if P = ∅, then T_v = min ||v - v_n|| over v_n ∈ V\{v}; if P ≠ ∅, then T_v = max ||v - v_n|| over v_n ∈ P, where ∅ denotes the empty set and \ denotes the deletion operation;
4) if ||ξ - s1|| > T_s1 or ||ξ - s2|| > T_s2, inserting ξ into V as a new node, i.e. V = V ∪ {ξ}; otherwise, discarding the sample ξ and simultaneously adjusting the nodes s1 and s2: s1 = s1 + ε(t)(ξ - s1), s2 = s2 + ε'(t)(ξ - s2), where ε(t) and ε'(t) are learning rates that decay with the win count t of s1, e.g. ε(t) = 1/t and ε'(t) = 1/(100t);
5) if s1 and s2 are not connected, E = E ∪ {(s1, s2)} and the connection weight w(s1,s2) = 1; if they are already connected, w(s1,s2) = w(s1,s2) + 1; if w(s1,s2) > W_max, then E = E\{(s1, s2)}, where w(s1,s2) is the weight of the edge connecting s1 and s2 and W_max is a predefined connection weight threshold;
6) deleting the isolated nodes and the corresponding winning times when the learning of the percentage lambda of the total samples is finished;
7) if the sample input is not finished, returning to the step 2); otherwise, outputting the graph node set V, the connection set E and the winning times of each node.
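A minimal NumPy sketch of this learning loop is given below, following the literal reading of steps 1) to 7). The function name wsoinn, the random choice of the two initial nodes and the SOINN-style learning rates 1/t and 1/(100t) are assumptions made for illustration where the original formulas are reproduced only as images.

```python
# Sketch of the step-four WSOINN learning loop (weights on edges, win counts on nodes).
import numpy as np

def wsoinn(Ft, lam=0.10, w_max=2, seed=0):
    """Ft: (M, d) feature matrix. Returns node matrix V, edge-weight dict E and win counts."""
    rng = np.random.default_rng(seed)
    i0, i1 = rng.choice(len(Ft), size=2, replace=False)
    V = [Ft[i0].copy(), Ft[i1].copy()]              # 1) two initial nodes
    wins = [0, 0]                                   #    node win counts
    E = {}                                          #    edge (i, j) -> connection weight
    period = max(1, int(lam * len(Ft)))             # 6) cleanup period

    def neighbours(i):
        return [b if a == i else a for (a, b) in E if i in (a, b)]

    def threshold(i):                               # 3) similarity threshold of node i
        P = neighbours(i)
        if not P:
            return min(np.linalg.norm(V[i] - V[j]) for j in range(len(V)) if j != i)
        return max(np.linalg.norm(V[i] - V[j]) for j in P)

    for n, xi in enumerate(Ft, start=1):
        d = [np.linalg.norm(xi - v) for v in V]
        s1, s2 = (int(k) for k in np.argsort(d)[:2])     # 2) winner and second winner
        wins[s1] += 1
        if d[s1] > threshold(s1) or d[s2] > threshold(s2):
            V.append(xi.copy()); wins.append(0)          # 4) xi becomes a new node
        else:
            V[s1] = V[s1] + (xi - V[s1]) / wins[s1]              # adapt winner
            V[s2] = V[s2] + (xi - V[s2]) / (100.0 * wins[s1])    # adapt second winner
            e = (min(s1, s2), max(s1, s2))
            E[e] = E.get(e, 0) + 1                       # 5) create / strengthen the edge
            if E[e] > w_max:                             #    drop edges above the weight threshold
                del E[e]
        if n % period == 0:                              # 6) delete isolated nodes periodically
            keep = sorted({i for edge in E for i in edge})
            if len(keep) >= 2:
                remap = {old: new for new, old in enumerate(keep)}
                V = [V[i] for i in keep]
                wins = [wins[i] for i in keep]
                E = {(remap[a], remap[b]): w for (a, b), w in E.items()}
    return np.array(V), E, wins                          # 7) graph nodes, edges, win counts
```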
And step five, manually labeling the nodes in the topological graph at a certain labeling rate according to the win counts, specifically (a short selection sketch is given after these steps):
1) sorting the nodes in the set V according to the values of their win counts and the corresponding relation to obtain V_order;
2) selecting the top-ranked node set V_l from V_order and, according to the correspondence between V and Ft, manually labeling all nodes in V_l to obtain the label set L_labeled of V_l. The manual labeling proportion may be 30%.
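A small sketch of this win-count-based selection is shown below; it assumes the wins list returned by the WSOINN sketch above, and the default 0.3 matches the 30% labeling proportion mentioned in the text.

```python
# Sketch of step five: rank nodes by win count and pick the top fraction for manual labeling.
import numpy as np

def select_nodes_to_label(wins, label_fraction=0.3):
    order = np.argsort(wins)[::-1]                      # node indices, most wins first (V_order)
    n_label = int(np.ceil(label_fraction * len(wins)))
    return order[:n_label]                              # indices of the node set V_l to label manually
```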
Step six, training a graph convolutional neural network (GCN) based on the topological graph, automatically labeling the remaining unlabeled nodes with the trained GCN, and finally determining the class information of each image according to the correspondence between nodes and images, specifically:
1) selecting a regularization method according to the number of nodes and characteristic dimensions, and constructing a multi-layer graph convolutional neural network (GCN), wherein the final layer of the GCN is connected with a Softmax classification layer and is used for predicting node information;
2) training the GCN network with the topological graph G(V, E): according to V, V_l and L_labeled, computing the error on the labeled nodes, back-propagating the error, selecting a suitable algorithm to optimize the network parameters, and iterating repeatedly until the training error no longer decreases;
3) inputting the graph G (V, E) into the trained GCN to obtain a label set L of all nodes in V;
4) for any I_i ∈ Pictures, the label of I_i is taken as the label L_j of the node v_j ∈ V that corresponds to (i.e. is nearest to) the feature vector F_i of I_i.
The GCN network structure comprises 3 graph convolution layers and 1 Softmax classification layer, and each graph convolution layer is followed by a ReLU activation function; the network input dimension is N × 512, the first and second layers are used for feature integration and dimensionality reduction with output dimensions N × 512 and N × 256 respectively, and the third graph convolution layer is combined with a Softmax layer for classification with output dimension N × 6, where N is the number of nodes.
A Dropout regularization means is introduced during GCN model training, cross entropy is adopted as the loss function, the Adam algorithm is selected to optimize the parameters, the initial learning rate is 0.01, and the probability of Dropout neuron inactivation is 0.5.
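The following is a minimal Keras sketch of such a three-layer GCN with Kipf-and-Welling-style propagation H' = σ(Â H W), where Â is the symmetrically normalized adjacency matrix. The adjacency normalization, the custom GraphConv layer and the function names are common-practice assumptions rather than text of the patent; the layer sizes, Dropout rate and Adam settings follow the description above.

```python
# Sketch of the step-six GCN: 3 graph convolution layers (N x 512 -> N x 512 -> N x 256 -> N x 6),
# ReLU after the first two layers, Softmax on the last, Dropout 0.5, Adam with learning rate 0.01.
import numpy as np
import tensorflow as tf

def normalize_adjacency(A):
    """A_hat = D^-1/2 (A + I) D^-1/2 for a dense (N, N) weighted adjacency matrix."""
    A = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A.sum(axis=1))
    return (A * d[:, None]) * d[None, :]

class GraphConv(tf.keras.layers.Layer):
    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        feat_dim = int(input_shape[0][-1])
        self.w = self.add_weight(name="w", shape=(feat_dim, self.units),
                                 initializer="glorot_uniform")

    def call(self, inputs):
        h, a_hat = inputs                     # node features (N, F) and normalized adjacency (N, N)
        return self.activation(tf.matmul(a_hat, tf.matmul(h, self.w)))

def build_gcn(num_nodes, in_dim=512, num_classes=6):
    x = tf.keras.Input(shape=(in_dim,), batch_size=num_nodes)      # node feature matrix
    a = tf.keras.Input(shape=(num_nodes,), batch_size=num_nodes)   # normalized adjacency matrix
    h = GraphConv(512, "relu")([x, a])
    h = tf.keras.layers.Dropout(0.5)(h)
    h = GraphConv(256, "relu")([h, a])
    h = tf.keras.layers.Dropout(0.5)(h)
    out = GraphConv(num_classes, "softmax")([h, a])                # third layer combined with Softmax
    model = tf.keras.Model([x, a], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
                  loss="sparse_categorical_crossentropy")
    return model
```

In the semi-supervised setting the cross-entropy loss is restricted to the manually labeled nodes, which can be done with a per-node sample weight when the whole graph is fitted as a single batch; an end-to-end usage sketch is given in the embodiment below.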
The invention has the beneficial effects that:
aiming at the technical problems of more image data and extremely difficult labeling in deep learning, the topological learning and graph convolution theory is fused, and a novel semi-supervised classification method based on a self-organizing increment-graph convolution neural network (WSOINN-GCN) is provided. Compared with the existing deep learning model, the method provided by the invention effectively solves the problem of difficulty in manual labeling of the steel microstructure image training set data in practical application, and provides a new solution for classification of the steel microstructure image data and the like. When the image mark amount of the microstructure of the steel is only 12% of that of the traditional deep learning methods such as VGG (megasonic gas generator), the precision of the new method is higher than that of the traditional method, and the classification accuracy is up to 91%; when the same classification precision (90%) is achieved, the manual labeling amount is only 5.6% of that of the traditional method, and meanwhile, the efficiency advantage is guaranteed. The method has the characteristics of automatically extracting a data graph structure, implementing semi-supervised learning, dynamically adjusting a network structure and the like, and has high theoretical research value and wide application prospect in the fields of image data classification and the like.
Drawings
FIG. 1 is the model framework, proposed by the present invention, integrating a self-organizing incremental learning neural network and a graph convolutional neural network;
FIG. 2 is a sample view of an image data set of a microstructure of a steel material taken by a scanning electron microscope according to the present invention;
FIG. 3 is a network structure diagram of the image feature extraction of steel microstructure using a pre-trained VGG16 convolution module;
FIG. 4 is a block diagram of a graph convolution neural network (GCN) oriented to a steel microstructure image;
FIG. 5 shows the number of topological graph nodes obtained by the WSOINN network of the present invention for different values of λ and W_max (λ is the input sample percentage, W_max is the predefined connection weight threshold);
FIG. 6 shows the sparsity of the connection matrix obtained by the WSOINN network of the present invention for different values of λ and W_max.
Detailed Description
The invention will be further explained below with reference to the drawings and an embodiment, taking as an example steel microstructure pictures taken with a scanning electron microscope at the State Key Laboratory of Refractories and Metallurgy, Wuhan University of Science and Technology.
The embodiment is as follows:
a microstructure picture (gold phase picture for short) of steel, which is shot by a Scanning Electron Microscope (SEM) of a fire-resistant material from Wuhan university of science and technology and a key laboratory in metallurgical countries, is collected as a data set, 152 pictures are included in six categories of ferrite, pearlite, upper bainite, lower bainite, lath martensite and sheet martensite, and a part of the microstructure sample picture of the steel is shown in figure 2. And determining a category label for each picture in the data set, wherein the category information is only used when the accuracy rate is marked and verified manually and is not used for other purposes.
The text description part in the microstructure pictures obtained by the scanning electron microscope is removed to obtain an initial data set T0 that contains only the microstructure picture body, with a picture size of 884 × 884 × 3; each picture in the initial data set T0 is cut into several n × n patches with step length m, where m = 221 and n = 221, giving a new data set T1 of 152 × 16 = 2432 pictures; all image pixel values in the data set T1 are normalized and then standardized, and the set of 2432 processed pictures is Pictures = {I1, I2, …, IM}.
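A minimal OpenCV/NumPy sketch of this pre-processing is shown below, assuming the SEM pictures are 884 × 884 × 3 after the text banner has been removed; the function names and file-path handling are illustrative assumptions.

```python
# Sketch of the embodiment pre-processing: cut each picture into 16 patches of 221 x 221
# with stride 221, then normalize pixel values to [0, 1] and standardize per channel.
import cv2
import numpy as np

def crop_patches(img, patch=221, stride=221):
    h, w = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, stride)
            for c in range(0, w - patch + 1, stride)]

def preprocess(paths):
    patches = []
    for p in paths:
        img = cv2.imread(p)                        # text description already cropped away beforehand
        patches.extend(crop_patches(img))
    x = np.asarray(patches, dtype=np.float32) / 255.0                        # normalization
    return (x - x.mean(axis=(0, 1, 2))) / (x.std(axis=(0, 1, 2)) + 1e-8)     # standardization
```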
The experimental hardware support includes a CPU i5-7500 (4 cores, 4 threads, base frequency 3.41 GHz), 12 GB of memory and an NVIDIA GeForce GTX 1060 GPU with 6 GB of video memory; the operating system is Windows 10, the programming environment is Spyder with Python 3.7, models such as the WSOINN and the GCN are built with deep learning frameworks such as Keras and TensorFlow, and OpenCV (the open source computer vision library) is used to preprocess the images.
The steel microstructure automatic classification method fusing the self-organizing incremental learning neural network and the graph convolution neural network provided by the invention has the overall model frame as shown in figure 1.
Let the set of M pictures to be classified be Pictures = {I1, I2, …, IM}; the method specifically comprises the following steps (a short end-to-end code sketch combining the component sketches above is given after the list):
1) extracting features for all I_i ∈ Pictures to obtain the feature data set Ft = {F1, F2, …, FM}, which corresponds one-to-one with Pictures;
2) estimating the distribution of the original feature data and initializing the WSOINN hyper-parameters λ and W_max;
3) randomly selecting two feature vectors from Ft to initialize the WSOINN node set V = {v1, v2}, and initializing the node connection set E and the node win-count set win_times as empty sets;
4) inputting all F_i ∈ Ft into the WSOINN in sequence for learning to obtain the topological graph structure G, outputting V = {v1, v2, …, vm} and the corresponding E and win_times = {t1, t2, …, tm}; by the WSOINN algorithm, V corresponds one-to-one with win_times, and for any v_i ∈ V there exists an F_j ∈ Ft corresponding to it;
5) sorting all v_i ∈ V according to the values of the win counts t_i and the corresponding relation to obtain V_order;
6) selecting the top-ranked node set V_l from V_order, looking up the original pictures I_i according to the correspondence between V, Ft and Pictures, and manually annotating all v_i ∈ V_l to obtain the label set L_labeled of V_l;
7) And selecting a regularization method according to the number of nodes and the characteristic dimension, and constructing a proper multi-layer graph convolution network GCN, wherein the GCN is connected with Softmax at the last layer for predicting node information.
8) inputting the graph G(V, E) output by the WSOINN into the GCN to predict all node information; according to V, V_l and L_labeled, computing the error on the labeled nodes, back-propagating the error, selecting a suitable algorithm to optimize the network parameters, and iterating repeatedly until the training error no longer decreases;
9) inputting the graph G(V, E) into the trained GCN to obtain the label set L of V;
10) for any I_i ∈ Pictures, the label of I_i is taken as the label L_j of the node v_j ∈ V that corresponds to (i.e. is nearest to) the feature vector F_i of I_i.
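The following short end-to-end sketch chains the component sketches above (preprocess, extract_features, wsoinn, select_nodes_to_label, normalize_adjacency, build_gcn) on the embodiment data; image_paths, the manual label vector y and the number of epochs are illustrative assumptions.

```python
# Sketch of steps 1)-10): features -> WSOINN topology graph -> partial manual labels -> GCN.
import numpy as np

x = preprocess(image_paths)                          # 2432 patches of 221 x 221 x 3 (assumed paths)
Ft = extract_features(x)                             # (2432, 512) feature vectors
V, E, wins = wsoinn(Ft, lam=0.10, w_max=2)           # topology graph G(V, E) from WSOINN

A = np.zeros((len(V), len(V)))                       # weighted adjacency matrix built from E
for (i, j), w in E.items():
    A[i, j] = A[j, i] = w
A_hat = normalize_adjacency(A)

labelled = select_nodes_to_label(wins, 0.3)          # top 30% of nodes by win count (V_l)
y = np.zeros(len(V), dtype=int)                      # y[labelled] is filled in manually (L_labeled)
mask = np.zeros(len(V)); mask[labelled] = 1.0        # loss is counted only on labeled nodes

gcn = build_gcn(num_nodes=len(V))
gcn.fit([V, A_hat], y, sample_weight=mask, batch_size=len(V), epochs=200, shuffle=False)
node_labels = gcn.predict([V, A_hat], batch_size=len(V)).argmax(axis=1)

# Each picture inherits the label of the WSOINN node nearest to its feature vector.
nearest = np.array([np.argmin(np.linalg.norm(V - f, axis=1)) for f in Ft])
picture_labels = node_labels[nearest]
```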
The network structure of the GCN is shown in FIG. 4. Each graph convolution layer is followed by a ReLU activation function; the network input dimension is N × 512, the first and second layers are used for feature integration and dimensionality reduction with output dimensions N × 512 and N × 256 and parameter counts 512 × 512 and 512 × 256 respectively; the third graph convolution layer is combined with the Softmax layer for classification, with output dimension N × 6 and parameter count 256 × 6, where N represents the number of nodes in the graph structure. The GCN uses the Adam algorithm to optimize the parameters, the initial learning rate is 0.01, and the probability of Dropout neuron inactivation is 0.5.
Following the above steps, the WSOINN introduces edge connection weights on top of the original SOINN to express the similarity of two nodes, so that the GCN can mine the feature relationships between images; by introducing node win counts, a few representative and important nodes are selected for manual labeling, avoiding the model instability caused by random selection. The WSOINN and the GCN are thus organically combined, reducing manual labeling while achieving efficient classification.
Model accuracy index
Two indices, precision and recall, are counted: precision measures the exactness of the labeling, and recall reflects its completeness. For a given class A of samples, a confusion matrix is constructed as shown in Table 1.
TABLE 1 confusion matrix
                          Classified as class A    Classified as other classes
Belonging to class A               TP                          FN
Not belonging to class A           FP                          TN
For a given class A, samples belonging to class A that are correctly classified into class A are counted as TP; samples not belonging to class A that are wrongly classified into class A are counted as FP; samples belonging to class A that are wrongly classified into other classes are counted as FN; and samples not belonging to class A that are correctly classified into other classes are counted as TN. The precision is then:
Precision = TP / (TP + FP)
the recall ratio is as follows:
Recall = TP / (TP + FN)
For the overall evaluation of the model, micro-averaging is adopted; under micro-averaging the precision equals the recall, so only one of them needs to be counted.
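A small sketch of these evaluation indices is given below, assuming integer class labels; it illustrates why, with micro-averaging over a single-label multi-class problem, precision and recall coincide (both equal the overall accuracy).

```python
# Sketch of the per-class precision/recall and the micro-averaged index used for overall evaluation.
import numpy as np

def per_class_precision_recall(y_true, y_pred, cls):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def micro_average(y_true, y_pred):
    return float(np.mean(y_true == y_pred))     # micro precision == micro recall == accuracy
```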
Optimization analysis of model parameters
The WSOINN deletes isolated nodes after every input of a sample fraction λ, which affects the final number of output nodes. Too many nodes may include noise nodes, while too few nodes cannot fully reflect the distribution of all samples, indirectly affecting the automatic labeling accuracy. FIG. 5 lists the number of topological graph nodes obtained by the WSOINN under different values of λ and W_max; a darker color indicates fewer generated nodes, and the number of nodes ranges from a minimum of 256 to a maximum of 865. FIG. 6 represents the sparsity of the connection matrix (the proportion of non-zero elements); a darker color indicates a sparser connection matrix. It can be seen that as λ and W_max increase, the number of nodes increases and the connection matrix becomes denser.
In this experiment 2432 metallographic pictures were collected; to guarantee accuracy and speed up computation, WSOINN settings whose number of generated nodes is 1/10 to 1/6 of the original data amount are selected for further analysis.
Table 2 lists the automatic labeling accuracy of the remaining nodes when the labeling rate is 0.3. It can be seen that randomly selecting nodes for labeling makes the automatic labeling accuracy of the remaining nodes fluctuate and never exceed that of selection by node win count, which shows a stable advantage. Under either labeling scheme, the labeling accuracy of the remaining nodes tends to decrease as the number of nodes increases; at the same node scale, the sparser the connection matrix, the higher the node labeling accuracy.
Table 2. Automatic labeling accuracy of the remaining nodes under different values of λ and W_max, at a labeling rate of 0.3
Combining the results in Table 2, λ = 10% and W_max = 2, for which the number of nodes is 294 and the number of non-zero elements in the connection matrix is 324, are selected as the preferred WSOINN parameters.
Classification accuracy of models
Table 3 lists the precision and recall of the different types of metallographic images when the labeling rate is 0.3 and the automatic labeling accuracy of the remaining nodes is 93%. It can be seen that although the recall of high-carbon sheet martensite is high, its precision is as low as 74%, while bainite has high precision but low recall, and cross misjudgment exists between them; the reason may be that although the VGG16 convolution layers can extract features, they cannot further obtain deeper discriminative features for gray-scale images with similar average pixel intensity.
Table 3. Precision and recall (%) of the metallographic images when the node labeling accuracy is 93%

                                Precision (%)    Recall (%)
Lower bainite                        90              92
Low-carbon lath martensite           93              93
Pearlite                             99              92
Bainite                              93              73
Ferrite                             100             100
High-carbon sheet martensite         74              97
For the same data set, Table 4 lists the number of manual labels and the training time required by VGG-ICAM, SOINN, MLP and WSOINN-GCN to reach a classification accuracy of more than 90% on all pictures. It can be seen that the manual labeling amount required by the WSOINN-GCN is only 5.6% of that of VGG-ICAM and 5.2% of that of SOINN and MLP, and its training time is greatly reduced compared with VGG-ICAM.
Table 4. Labeling quantity and training time required by the different methods to reach a classification accuracy of 90%
Therefore, compared with existing deep learning algorithms, the new method effectively relieves the difficulty of manually labeling the training data of steel microstructure images in practical applications and provides a new solution for the classification of steel microstructure images and similar data. The WSOINN-GCN has the characteristics of automatically extracting the data graph structure, implementing semi-supervised learning and dynamically adjusting the network structure, and has broad application prospects in image data classification and related fields.
It should be understood by those skilled in the art that the above embodiments are for illustrative purposes only and are not intended to limit the present invention, and that changes and modifications to the above embodiments may fall within the scope of the appended claims.

Claims (8)

1. A steel microstructure automatic classification method based on a self-organizing increment-graph convolution neural network, characterized by comprising the following steps:
step one, determining the types of steel microstructure to be identified, collecting microstructure pictures of different steel samples taken by a scanning electron microscope, and forming a data set covering ferrite, pearlite, bainite, lower bainite, lath martensite and sheet martensite in sequence;
step two, carrying out the same pretreatment on all the pictures collected in the step one, wherein the pretreatment method comprises the following steps:
1) removing the text description part contained in the microstructure image acquired by the scanning electron microscope to obtain an initial data set T0 only containing the microstructure image body;
2) equally dividing and cutting each image in the data set T0 according to equal step length to obtain a new data set T1;
3) carrying out image normalization and then standardization processing on all images in the data set T1 to obtain a data set T2;
step three, removing the fully connected layers from a VGG16 model pre-trained on the ImageNet data set, processing all images in the data set T2 with the remaining convolution modules, and performing global mean pooling on the 512 feature maps output for each image to obtain a 512-dimensional feature vector, thereby obtaining the feature vector data set Ft = {F1, F2, …, FM} of all steel microstructure images, where Fi is the i-th feature vector and M is the number of images;
step four, building a self-organizing incremental learning neural network WSOINN introducing connection weight and node winning times, and learning the feature vector data set by using the WSOINN to obtain a topological graph structure of the feature vector data set;
step five, manually labeling the nodes in the topological graph at a certain labeling rate according to the win counts;
and step six, training a graph convolutional neural network (GCN) based on the topological graph, automatically labeling the remaining unlabeled nodes with the trained GCN, and finally determining the class information of each image according to the correspondence between nodes and images.
2. The steel product microstructure automatic classification method based on the self-organizing increment-graph convolution neural network as claimed in claim 1, wherein in the fourth step, the specific algorithm steps are as follows:
1) initializing the graph node set V = {v1, v2}, v1, v2 ∈ R^d; the node connection set E = ∅, an empty set; and the node win counts t_v1 = t_v2 = 0, where R^d denotes the d-dimensional vector space, N is the number of nodes, and t_v1, t_v2 denote the respective win counts of nodes v1 and v2;
2) inputting a new feature vector sample ξ ∈ R^d and searching V for the nearest node s1 and the second-nearest node s2 to ξ under the Euclidean norm, namely s1 = arg min ||ξ - v_n|| over v_n ∈ V and s2 = arg min ||ξ - v_n|| over v_n ∈ V\{s1}; the win count of s1 is increased by 1, i.e. t_s1 = t_s1 + 1, where arg min denotes the value of v_n at which ||ξ - v_n|| takes its minimum;
3) computing the similarity thresholds T_s1 and T_s2 of nodes s1 and s2: for any node v ∈ V, the set of nodes connected to v is denoted P; if P = ∅, then T_v = min ||v - v_n|| over v_n ∈ V\{v}; if P ≠ ∅, then T_v = max ||v - v_n|| over v_n ∈ P, where ∅ denotes the empty set and \ denotes the deletion operation;
4) if ||ξ - s1|| > T_s1 or ||ξ - s2|| > T_s2, inserting ξ into V as a new node, i.e. V = V ∪ {ξ}; otherwise, discarding the sample ξ and simultaneously adjusting the nodes s1 and s2: s1 = s1 + ε(t)(ξ - s1), s2 = s2 + ε'(t)(ξ - s2), where ε(t) and ε'(t) are learning rates that decay with the win count t of s1, e.g. ε(t) = 1/t and ε'(t) = 1/(100t);
5) if s1 and s2 are not connected, E = E ∪ {(s1, s2)} and the connection weight w(s1,s2) = 1; if they are already connected, w(s1,s2) = w(s1,s2) + 1; if w(s1,s2) > W_max, then E = E\{(s1, s2)}, where w(s1,s2) is the weight of the edge connecting s1 and s2 and W_max is a predefined connection weight threshold;
6) deleting the isolated nodes and the corresponding winning times when the learning of the percentage lambda of the total samples is finished;
7) if the sample input is not finished, returning to the step 2); otherwise, outputting the graph node set V, the connection set E and the winning times of each node.
3. The steel product microstructure automatic classification method based on the self-organizing increment-graph convolution neural network as claimed in claim 1, characterized in that in the fifth step, the concrete steps are as follows:
1) sorting the nodes in the set V according to the values of their win counts and the corresponding relation to obtain V_order;
2) selecting the top-ranked node set V_l from V_order and, according to the correspondence between V and Ft, manually labeling all nodes in V_l to obtain the label set L_labeled of V_l.
4. The method for automatically classifying a microstructure of a steel product according to claim 1, wherein in the sixth step,
1) selecting a regularization method according to the number of nodes and characteristic dimensions, and constructing a multi-layer graph convolutional neural network (GCN), wherein the final layer of the GCN is connected with a Softmax classification layer and is used for predicting node information;
2) training the GCN network with the topological graph G(V, E): according to V, V_l and L_labeled, computing the error on the labeled nodes, back-propagating the error, selecting a suitable algorithm to optimize the network parameters, and iterating repeatedly until the training error no longer decreases;
3) inputting the graph G (V, E) into the trained GCN to obtain a label set L of all nodes in V;
4) for any I_i ∈ Pictures, the label of I_i is taken as the label L_j of the node v_j ∈ V that corresponds to (i.e. is nearest to) the feature vector F_i of I_i.
5. The method for automatically classifying steel microstructures based on the self-organizing incremental-graph convolution neural network as claimed in claim 2, wherein in the fourth step, the feature vector dimension d is 512, λ is 10%, and W_max is 2.
6. The method for automatically classifying steel microstructures based on the self-organizing incremental-graph convolution neural network as claimed in claim 3, wherein in the fifth step, the nodes ranked at the top 30% are selected and manually labeled.
7. The steel product microstructure automatic classification method based on the self-organizing increment-graph convolution neural network as claimed in claim 4, wherein in the sixth step, the GCN comprises 3 graph convolution layers and 1 Softmax classification layer, and each graph convolution layer is followed by a ReLU activation function; the network input dimension is N × 512, the first and second layers are used for feature integration and dimensionality reduction with output dimensions N × 512 and N × 256 respectively, the third graph convolution layer is combined with a Softmax layer for classification with output dimension N × 6, and N is the number of nodes.
8. The steel product microstructure automatic classification method based on the self-organizing increment-graph convolution neural network as claimed in claim 4, wherein in the sixth step, a Dropout regularization means is introduced during GCN model training, cross entropy is used as the loss function, the Adam algorithm is selected to optimize the parameters, the initial learning rate is 0.01, and the probability of Dropout neuron inactivation is 0.5.
CN202010995589.9A 2020-09-21 2020-09-21 Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network Pending CN115114967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010995589.9A CN115114967A (en) 2020-09-21 2020-09-21 Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010995589.9A CN115114967A (en) 2020-09-21 2020-09-21 Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network

Publications (1)

Publication Number Publication Date
CN115114967A true CN115114967A (en) 2022-09-27

Family

ID=83322885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010995589.9A Pending CN115114967A (en) 2020-09-21 2020-09-21 Steel microstructure automatic classification method based on self-organization increment-graph convolution neural network

Country Status (1)

Country Link
CN (1) CN115114967A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104880389A (en) * 2015-04-01 2015-09-02 江苏大学 Mixed crystal degree automatic measurement and fine classification method for steel crystal grains, and system thereof
WO2019071754A1 (en) * 2017-10-09 2019-04-18 哈尔滨工业大学深圳研究生院 Method for sensing image privacy on the basis of deep learning
CA3061717A1 (en) * 2018-11-16 2020-05-16 Royal Bank Of Canada System and method for a convolutional neural network for multi-label classification with partial annotations
WO2020130513A1 (en) * 2018-12-18 2020-06-25 주식회사 포스코 System and method for predicting material properties using metal microstructure images based on deep learning
CN110490849A (en) * 2019-08-06 2019-11-22 桂林电子科技大学 Surface Defects in Steel Plate classification method and device based on depth convolutional neural networks
CN110619355A (en) * 2019-08-28 2019-12-27 武汉科技大学 Automatic steel material microstructure identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李维刚; 谌竟成; 范丽霞; 谢璐: "Automatic identification of the microstructure of steel materials based on convolutional neural networks" (基于卷积神经网络的钢铁材料微观组织自动辨识), Journal of Iron and Steel Research (钢铁研究学报), no. 01, 15 January 2020 (2020-01-15) *
李维刚 et al.: "Surface defect detection of steel strip based on an improved YOLOv3 algorithm" (基于改进YOLOv3算法的带钢表面缺陷检测), Acta Electronica Sinica (《电子学报》), vol. 48, no. 07, 31 July 2020 (2020-07-31) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination