CN111915602A - Iron and steel material microstructure quantification method combining EBSD and deep learning method - Google Patents

Iron and steel material microstructure quantification method combining EBSD and deep learning method

Info

Publication number
CN111915602A
CN111915602A (application CN202010816423.6A)
Authority
CN
China
Prior art keywords
ebsd
data
deep learning
sem
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010816423.6A
Other languages
Chinese (zh)
Other versions
CN111915602B (en)
Inventor
Xu Wei (徐伟)
Shen Chunguang (沈春光)
Wang Chenchong (王晨充)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202010816423.6A priority Critical patent/CN111915602B/en
Publication of CN111915602A publication Critical patent/CN111915602A/en
Application granted granted Critical
Publication of CN111915602B publication Critical patent/CN111915602B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G06T 2207/10061 Microscopic image from scanning electron microscope
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a steel material microstructure quantification method combining electron backscatter diffraction (EBSD) with deep learning, and relates to the technical fields of steel microstructure identification and deep learning applications. Compared with conventional deep-learning-based microstructure identification methods, the establishment of a high-quality data set improves the accuracy of the microstructure identification model, allows the deep-learning image recognition method to be applied to practical engineering steel grades with complex microstructures, and greatly increases its practical value. Because the EBSD method can accurately calibrate complex microstructures, the deep-learning identification method can be applied to practical engineering steels containing very complex microstructures. In addition, the image recognition model establishes an association between SEM and EBSD, so that an EBSD "phase diagram" can be obtained from a simple SEM image through the trained model.

Description

Iron and steel material microstructure quantification method combining EBSD and deep learning method
Technical Field
The invention relates to the technical fields of steel material microstructure identification and deep learning applications, and in particular to a steel material microstructure quantification method combining electron backscatter diffraction (EBSD) with deep learning.
Background
With the vigorous development of computer science, artificial intelligence has been widely applied across industries and has brought great convenience to society. As an important branch of artificial intelligence, image recognition technology is widely used in daily life, for example in electronic traffic police systems, unmanned supermarkets, and medical imaging. As materials research moves into the big-data era, advanced machine learning methods are also being applied extensively in materials science. Besides using machine learning to establish "composition/process-property" relationships, researchers have applied it to identify different microstructures in materials. Jessica Gola et al. at Saarland University, Germany, applied a support vector machine (SVM) to relate morphological feature values to phase classes in dual-phase steel; by feeding microstructure feature values into the model, phase classes such as martensite or ferrite could be successfully predicted. DeCost et al. at Carnegie Mellon University likewise applied the SVM algorithm to predict the phase classes of various materials from extracted microstructural features. Although microstructures of different materials have been successfully predicted with conventional machine learning algorithms, these studies relate manually extracted morphological features to microstructure classes. Manually extracted image features cannot comprehensively and objectively describe the true character of a microstructure, and feature engineering is often used to reduce the dimensionality of the input parameters, so the accuracy of such models is clearly limited by these manual operations. Moreover, the output gives only the phase class name; it cannot segment the phase from the metallographic image, so subsequent quantitative microstructure analysis is impossible.
In recent years, deep learning methods have developed rapidly and achieved notable results in materials science. Electron backscatter diffraction (EBSD) enables fully automatic acquisition of micro-area orientation information; it features relatively simple sample preparation, fast data acquisition, and high resolution, laying a foundation for rapid, quantitative statistics on the microstructure and texture of materials. It has become an effective analysis technique in materials research and is widely used across polycrystalline materials to study orientation relationships, phase transformation processes, interface properties, phase identification, and the like. Owing to its strong learning and recognition ability, deep-learning-based image recognition is also gradually being applied to microstructure identification. Seyed Majid Azimi et al. at the German Aerospace Center applied a fully connected neural network to segment different phases in SEM images with pixel-level precision, automatically marking the phases in different colours. DeCost et al. at Carnegie Mellon University used a PixelNet network to classify different microstructures in high-carbon steel, again with pixel-level accuracy. In these studies, in order for the model to learn the characteristics of different microstructures, the phases in the metallographic images were first identified by manual classification, and a labelled data set was then built for model training. The accuracy of the manual microstructure identification therefore directly determines the accuracy of the model. In practice, however, the classification results differ considerably between operators owing to factors such as experience, which significantly degrades the objective accuracy of the model; furthermore, manual identification is limited to materials with a simple microstructure composition. Most practical engineering steel grades, such as quenching and partitioning (QP) steel, have quite complex microstructures, so deep learning methods based on manual identification clearly cannot be applied to the microstructure analysis of such steels.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides a steel material microstructure quantification method combining electron backscatter diffraction (EBSD) with deep learning. The EBSD technique, which excels at distinguishing phase types, is introduced into the data set construction process and is applied to accurately identify and calibrate the microstructure, so that a high-precision data set is established and quantitative analysis of complex microstructures in steel materials is realized.
The technical scheme adopted by the invention is as follows:
a steel material organization quantification method combining an EBSD and a deep learning method comprises the following steps:
step 1: establishing an original data set of a target steel material;
establishing an original data set of the target steel material through scanning electron microscope (SEM) and electron backscatter diffraction (EBSD) experiments, the original data set comprising SEM images and in-situ EBSD data; the specific steps are as follows:
Step 1.1: performing an SEM experiment to acquire image data of the target steel material;
selecting N different areas on a metallographic sample for image acquisition, and acquiring one SEM image in each of areas 1 to N at magnification X; the magnification X must be high enough to clearly resolve the microstructural features, and the number of grains in each SEM image is not less than 100;
Step 1.2: performing EBSD experiments on areas 1 to N at magnification X, with an EBSD indexing rate of not less than 90%, and ensuring that the EBSD scan area coincides with the SEM imaging area; an original data set of the target steel material is thus established, comprising the N SEM images and the EBSD data at the corresponding positions.
Step 2: performing EBSD data preprocessing and establishing a training data set;
Step 2.1: processing the EBSD data with data analysis software to accurately distinguish the different microstructures in the SEM image; the classified phases are marked in different colours to form a "phase diagram", which serves as the output data of the deep learning model, while the SEM image serves as the input data of the deep learning model;
Step 2.2: resizing the EBSD "phase diagram" with deep learning software to the same pixel size as the SEM image, so that the pixels of the SEM image and of the EBSD "phase diagram" correspond one to one;
Step 2.3: cutting the SEM images and the EBSD "phase diagrams" into M sub-images, where M = N × E and E is the number of sub-images cut from each SEM image or EBSD result map; a deep learning training data set is established comprising the M SEM sub-images and the M EBSD "phase diagram" sub-images, and it is divided into a training set and a test set at a ratio of 6:4;
Step 2.4: using data augmentation, rotating all samples in the training set clockwise by 90°, 180° and 270°, and adding the rotated images to the original training set, thereby quadrupling the number of training samples.
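For illustration, the following is a minimal sketch of the preprocessing in steps 2.2 to 2.4, assuming the SEM image and the EBSD "phase diagram" have been exported as ordinary image files; the file handling, the 128 × 128 tile size and the function names are illustrative assumptions rather than part of the claimed method (Python with OpenCV/NumPy):

```python
import cv2
import numpy as np

def make_tiles(sem_path, phase_path, tile=128):
    sem = cv2.imread(sem_path, cv2.IMREAD_GRAYSCALE)       # SEM image (model input)
    phase = cv2.imread(phase_path, cv2.IMREAD_COLOR)       # EBSD "phase diagram" (model output)
    # Step 2.2: resize the phase diagram so its pixel grid matches the SEM image
    phase = cv2.resize(phase, (sem.shape[1], sem.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
    xs, ys = [], []
    # Step 2.3: cut both images into tile x tile sub-images
    for r in range(0, sem.shape[0] - tile + 1, tile):
        for c in range(0, sem.shape[1] - tile + 1, tile):
            xs.append(sem[r:r + tile, c:c + tile])
            ys.append(phase[r:r + tile, c:c + tile])
    return np.array(xs), np.array(ys)

def augment(x, y):
    # Step 2.4: add 90/180/270-degree rotations of every training sample
    xs, ys = [x], [y]
    for k in (1, 2, 3):
        xs.append(np.rot90(x, k, axes=(1, 2)))
        ys.append(np.rot90(y, k, axes=(1, 2)))
    return np.concatenate(xs), np.concatenate(ys)
```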
Step 3: establishing a U-Net deep learning model from the training data set of step 2;
Step 3.1: building the U-Net deep learning model, which consists of a compression path and an expansion path; the compression path contains 4 convolutional layers, each followed by a max-pooling operation; the expansion path contains 4 deconvolution layers, each followed by an up-convolution operation; each convolutional layer is connected to the corresponding deconvolution layer by a skip connection ("skip layer").
The convolutional layers use 3 × 3 convolution kernels, and ReLU is selected as the activation function:
f(x) = max(0, x)
where x is the input value. To improve the convergence speed and accuracy of the model, the data are processed by batch normalization (BN), whose pseudo-code is:
μ_B ← (1/m) Σ_{i=1..m} x_i
σ_B² ← (1/m) Σ_{i=1..m} (x_i − μ_B)²
m_i ← (x_i − μ_B) / √(σ_B² + ξ)
y_i ← γ·m_i + β ≡ BN_{γ,β}(x_i)
where x_i is the i-th data point and m is the number of data points in the batch; μ_B is the mean of the batch data; σ_B² is the variance of the batch data; m_i is the normalized batch data; ξ is a small constant; BN denotes the normalization transform network, with γ and β its training parameters; and y_i is the normalized data after the scaling and translation operations, i.e., the final output.
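For illustration, a minimal sketch of a U-Net of the kind described above (4 down-sampling stages with 3 × 3 convolutions, ReLU and batch normalization followed by max pooling; 4 up-sampling stages with transposed convolutions and skip connections) is given below. It is built with tf.keras as an assumed framework; the channel counts, input size and 3-class soft-max output head are illustrative choices, not values taken from the patent.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # two 3x3 convolutions, each followed by batch normalization and ReLU
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1), n_classes=3, base=16):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    # compression path: 4 convolution stages, each followed by max pooling
    for i in range(4):
        x = conv_block(x, base * 2 ** i)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 16)                          # bottleneck
    # expansion path: 4 up-convolutions, each concatenated with its skip connection
    for i in reversed(range(4)):
        x = layers.Conv2DTranspose(base * 2 ** i, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skips[i]])
        x = conv_block(x, base * 2 ** i)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)  # per-pixel phase map
    return Model(inputs, outputs)
```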
Step 3.2: feeding the input data and output data prepared in step 2 into the deep learning model, training the model with the AdaGrad optimizer, and optimizing the parameters; the AdaGrad update is:
w = w + Δw
r_n = r_{n−1} + g_n²
Δw = −(η / (σ + √r_n)) · g_n
where w is the parameter set being optimized and Δw is its increment; η is the learning rate; g_n is the mean gradient of the sample loss with respect to the parameters in the n-th training step; σ is a very small positive number; and r_n is the gradient accumulation variable of the n-th training step, initialized to 0;
during parameter optimization, the mean square error (MSE) is used as the evaluation index of model performance, and the number of training iterations is set to n.
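As a concrete illustration of the update rule above, a short NumPy sketch follows; the learning-rate and σ defaults are illustrative, and in practice an off-the-shelf implementation such as tf.keras.optimizers.Adagrad would normally be used.

```python
import numpy as np

def adagrad_step(w, grad, r, lr=1e-3, sigma=1e-7):
    """One AdaGrad update: r_n = r_{n-1} + g_n**2, w <- w - lr/(sigma + sqrt(r_n)) * g_n."""
    r = r + grad ** 2                          # accumulate squared gradients
    w = w - lr / (sigma + np.sqrt(r)) * grad   # scaled parameter update
    return w, r
```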
Step 3.3: after the deep learning model is trained, testing its predictive ability with the data in the test set; the accuracy of the prediction is evaluated with the MIoU index, which characterizes the overlap ratio of the different phases between the model prediction and the actual result:
MIoU = (1/n_cl) Σ_a [ n_aa / (t_a + Σ_b n_ba − n_aa) ]
where n_ab is the number of pixels of class a predicted as class b; n_cl is the number of classes in the classification task; and t_a = Σ_b n_ab is the total number of pixels in class a. If the mean MIoU over the test-set predictions is greater than 75%, the model accuracy meets the application requirement; otherwise, return to step 3.2 and retrain the deep learning model by adjusting the optimizer learning rate, changing the evaluation function of the training process, or adjusting the number of iterations, until the requirement is met.
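For illustration, a minimal MIoU computation matching the formula above is sketched here, assuming the predicted and ground-truth phase maps are integer class-label arrays of the same shape (a hypothetical helper, not part of the claimed method):

```python
import numpy as np

def mean_iou(pred, truth, n_cl):
    total = 0.0
    for a in range(n_cl):
        inter = np.sum((pred == a) & (truth == a))              # n_aa
        union = np.sum(truth == a) + np.sum(pred == a) - inter  # t_a + sum_b n_ba - n_aa
        total += inter / union if union > 0 else 0.0
    return total / n_cl
```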
Step 4: calculating the phase contents by counting the proportions of pixels in the predicted image, thereby quantifying the microstructure;
an SEM image is input to the U-Net deep learning model, and the corresponding phase diagram is output by semantic segmentation, with the different phases marked in different colours; the content of each phase is then calculated with deep learning software by counting the proportion of the pixels of that phase in the total number of pixels:
C_f = N_f / N
where C_f is the content of phase f, N_f is the number of pixels of phase f, and N is the total number of pixels in the image.
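For illustration, the phase-content calculation can be sketched as follows, assuming the model output has already been converted to an integer label map (e.g. by taking the arg-max over the per-pixel classes); the phase names are illustrative assumptions:

```python
import numpy as np

def phase_fractions(label_map, phase_names=("ferrite", "martensite", "austenite")):
    total = label_map.size                               # N: total number of pixels
    return {name: np.sum(label_map == f) / total         # C_f = N_f / N
            for f, name in enumerate(phase_names)}
```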
The beneficial effects of the above technical solution are as follows:
the invention provides a steel material microstructure quantification method combining EBSD with deep learning, in which the EBSD technique is used to identify the microstructure and to establish a high-precision data set, and the deep learning method is used to learn the SEM morphological features of the microstructure and to establish a high-dimensional association between the SEM image and the phase diagram. Compared with conventional deep-learning-based microstructure identification methods, the high-quality data set improves the accuracy of the microstructure identification model and allows the deep-learning image recognition method to be applied to practical engineering steel grades with complex microstructures, greatly increasing its practical value. Because the EBSD method can accurately calibrate complex microstructures, the deep-learning identification method can be applied to practical engineering steels containing very complex microstructures. In addition, the current image recognition model establishes an association between SEM and EBSD, so that an EBSD "phase diagram" can be obtained from a simple SEM image through the trained model. Compared with a conventional EBSD experiment, this data-driven approach reconstructs the EBSD result with extremely high efficiency and provides a feasible way to accelerate EBSD experiments.
Drawings
FIG. 1 is a general flow chart of a method for quantifying a structure of a ferrous material according to the present invention;
FIG. 2 is a schematic structural diagram of a U-Net model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the identification result of the current QP steel microstructure according to the embodiment of the invention;
in which (a) is the SEM image, (b) is the EBSD phase diagram, and (c) is the identification result of the current model.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
A steel material microstructure quantification method combining EBSD and deep learning, as shown in FIG. 1, comprises the following steps:
step 1: establishing an original data set of a target steel material;
establishing an original data set of the target steel material through scanning electron microscope (SEM) and electron backscatter diffraction (EBSD) experiments, the original data set comprising SEM images and in-situ EBSD data; in this embodiment the SEM images are at least 1024 × 768 pixels; the specific steps are as follows:
step 1.1: performing SEM experiment to acquire image data of the target steel material;
selecting N different areas on a metallographic sample for image acquisition, and acquiring one SEM image in each of areas 1 to N at magnification X, with a resolution of not less than 1024 × 768 pixels;
the magnification X must be high enough to clearly resolve the microstructural features, and the number of grains in each SEM image is not less than 100;
in this embodiment, Q&P steel with a complex microstructure is used; the magnification X selected for the SEM experiment is 2000×, the number N of selected regions is 3, and the image resolution is 1900 × 1350 pixels.
Step 1.2: performing in-situ EBSD experiments on areas 1 to N at magnification X, with an EBSD indexing rate of not less than 90%, and ensuring that the EBSD scan area coincides with the SEM imaging area; an original data set of the target steel material is thus established, comprising the N SEM images and the EBSD data at the corresponding positions.
In this example, in-situ EBSD experiments were performed on the N = 3 regions at a magnification of X = 2000; the EBSD indexing rates were all greater than 95%, and the SEM images agreed well with the EBSD results.
Step 2: performing EBSD data preprocessing and establishing a training data set;
Step 2.1: the EBSD data are processed with the data analysis software Channel 5 to accurately distinguish the different microstructures in the SEM images; for example, the BCC and FCC crystal-structure phases can be distinguished with the Phase map function, and the different BCC crystal structures can be further subdivided with the Band Scope function. The classified phases are marked in different colours to form a "phase diagram", which serves as the output data of the deep learning model, while the SEM image serves as the input data of the deep learning model;
in the present example, the Q&P steel contains three types of phases: ferrite, martensite, and austenite. The FCC-structured austenite is first separated from the other two phases using the Phase map function in the EBSD software, and martensite is then distinguished from ferrite using the Band Scope function, where a Band Scope value greater than 90 is considered ferrite and a value less than 90 is considered martensite. Based on this EBSD analysis, the three-phase structure of the QP steel is accurately distinguished to form the "phase diagram".
Step 2.2: performing pixel size processing on a 'phase image' formed by the EBSD by using a deep learning software Open CV toolbox to form an image with the same pixel size as the SEM image, so as to achieve the correspondence of pixel points between the SEM image and the EBSD 'phase image';
the resulting EBSD "phase map" was processed by the OpenCV toolkit in this example as a 1900 × 1350 pixel image, consistent with the SEM image.
Step 2.3: clipping the SEM pictures and EBSD "phase diagrams" into M128 × 128 pixel sub-graphs, where M ═ N × E, E denotes the number of 128 × 128 pixel sub-graphs sliced per SEM image or EBSD result graph, creating a deep learning model training dataset comprising M128 × 128 pixel SEM sub-graphs and M128 × 128 pixel EBSD "phase diagrams" sub-graphs, using 6: 4, dividing the established training data set into a training set and a test set;
in this example, E is 140, so that after clipping, 420 sub-images of 128 × 128 pixels are obtained. Three original images were used for the training set (280 subgraphs) and 1 for the test set (140 subgraphs).
Step 2.4: and respectively clockwise turning all samples in the training set by 90 degrees, 180 degrees and 270 degrees by adopting a data enhancement method, and then adding the turned images into the original training set to increase the number of the samples in the original training set by 3 times.
The number of samples in the training set was increased to 1120 after the data enhancement method in this example.
Step 3: establishing a U-Net deep learning model from the training data set of step 2; because the current training data set contains a limited amount of image data, the U-Net model, which is well suited to small-sample problems, is selected, as shown in FIG. 2;
Step 3.1: the U-Net deep learning model is built from a compression path and an expansion path; the compression path contains 4 convolutional layers, each followed by a max-pooling operation; the expansion path contains 4 deconvolution layers, each followed by an up-convolution operation; each convolutional layer is connected to the corresponding deconvolution layer by a skip connection ("skip layer"). The deconvolution parameters are consistent with the convolution operation, the deconvolution kernel being the transpose of the corresponding convolution kernel. Connecting the convolution and deconvolution processes with skip layers allows the features extracted during convolution to be superimposed directly on the image reconstruction during deconvolution, preserving a great deal of image detail and making full use of the small-sample data.
The convolutional layers use 3 × 3 convolution kernels, and ReLU is selected as the activation function:
f(x) = max(0, x)
where x is the input value. To improve the convergence speed and accuracy of the model, the data are processed by batch normalization (BN), whose pseudo-code is:
μ_B ← (1/m) Σ_{i=1..m} x_i
σ_B² ← (1/m) Σ_{i=1..m} (x_i − μ_B)²
m_i ← (x_i − μ_B) / √(σ_B² + ξ)
y_i ← γ·m_i + β ≡ BN_{γ,β}(x_i)
where x_i is the i-th data point and m is the number of data points in the batch; μ_B is the mean of the batch data; σ_B² is the variance of the batch data; m_i is the normalized batch data; ξ is a small constant; BN denotes the normalization transform network, with γ and β its training parameters; and y_i is the normalized data after the scaling and translation operations, i.e., the final output.
Step 3.2: the input data and output data prepared in step 2 are fed into the deep learning model, which is trained with the AdaGrad optimizer for parameter optimization; the AdaGrad update is:
w = w + Δw
r_n = r_{n−1} + g_n²
Δw = −(η / (σ + √r_n)) · g_n
where w is the parameter set being optimized and Δw is its increment; η is the learning rate; g_n is the mean gradient of the sample loss with respect to the parameters in the n-th training step; σ is a very small positive number, taken here as 10^-7; and r_n is the gradient accumulation variable of the n-th training step, initialized to 0;
during parameter optimization, the mean square error (MSE) is used as the evaluation index of model performance, and the number of training iterations is set to n.
In this example, the learning rate η is set to 10^-3; to ensure model convergence, the maximum number of training iterations n is set to 8000. MSE is used as the evaluation index of model performance during training.
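For illustration, a minimal training sketch under this configuration (AdaGrad with learning rate 1e-3, MSE loss, up to 8000 iterations) is shown below. tf.keras is an assumed framework; build_unet refers to the earlier sketch, and x_train / y_train are assumed to be float arrays of shape (n, 128, 128, 1) and (n, 128, 128, 3) holding the SEM sub-images and the colour-coded/one-hot phase maps scaled to [0, 1]. The batch size is an arbitrary choice.

```python
import tensorflow as tf

model = build_unet(input_shape=(128, 128, 1), n_classes=3)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=1e-3),
              loss="mse")                       # MSE used as the training criterion
model.fit(x_train, y_train, epochs=8000, batch_size=16)
```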
Step 3.3: after the deep learning model is trained, its predictive ability is tested with the data in the test set; the accuracy of the prediction is evaluated with the MIoU index, which characterizes the overlap ratio of the different phases between the model prediction and the actual result:
MIoU = (1/n_cl) Σ_a [ n_aa / (t_a + Σ_b n_ba − n_aa) ]
where n_ab is the number of pixels of class a predicted as class b; n_cl is the number of classes in the classification task; and t_a = Σ_b n_ab is the total number of pixels in class a. If the mean MIoU over the test-set predictions is greater than 75%, the model accuracy meets the application requirement; otherwise, return to step 3.2 and retrain the deep learning model by adjusting the optimizer learning rate, changing the evaluation function of the training process, or adjusting the number of iterations, until the requirement is met.
In this example, the MIoU of the trained model on the test-set data is 80.4%, indicating high prediction accuracy; the model can accurately identify the QP steel microstructure, and the specific prediction result is shown in FIG. 3.
Step 4: the phase contents are calculated by counting the proportions of pixels in the predicted image, thereby quantifying the microstructure;
an SEM image is input to the U-Net deep learning model, and the corresponding phase diagram is output by semantic segmentation, with the different phases marked in different colours; the content of each phase is then calculated with the OpenCV toolbox by counting the proportion of the pixels of that phase in the total number of pixels:
C_f = N_f / N
where C_f is the content of phase f, N_f is the number of pixels of phase f, and N is the total number of pixels in the image.
In this example, the ferrite, martensite and austenite contents calculated from the test-set images are 72.7%, 22.2% and 5.2%, respectively; the EBSD quantitative analysis of the same image gives 79.2%, 16.6% and 4.2% for the three phases. The quantitative results of the current method are thus very close to those of the EBSD analysis, demonstrating excellent accuracy and practicability.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (3)

1. A steel material microstructure quantification method combining EBSD and a deep learning method, characterized in that it comprises the following steps:
step 1: establishing an original data set of a target steel material;
establishing an original data set of the target steel material through scanning electron microscope (SEM) and electron backscatter diffraction (EBSD) experiments, the original data set comprising SEM images and in-situ EBSD data;
step 1.1: performing SEM experiment to acquire image data of the target steel material;
selecting N different areas on a metallographic sample for image acquisition, and acquiring one SEM image in each of areas 1 to N at magnification X; the magnification X must be high enough to clearly resolve the microstructural features, and the number of grains in each SEM image is not less than 100;
Step 1.2: performing EBSD experiments on areas 1 to N at magnification X, with an EBSD indexing rate of not less than 90%, and ensuring that the EBSD scan area coincides with the SEM imaging area; an original data set of the target steel material is thus established, comprising the N SEM images and the EBSD data at the corresponding positions;
step 2: performing EBSD data preprocessing and establishing a training data set;
Step 3: establishing a U-Net deep learning model according to the training data set of step 2;
Step 4: calculating the phase contents by counting the proportions of pixels in the predicted image, thereby quantifying the microstructure;
an SEM image is input to the U-Net deep learning model, and the corresponding phase diagram is output by semantic segmentation, with the different phases marked in different colours; the content of each phase is then calculated with deep learning software by counting the proportion of the pixels of that phase in the total number of pixels:
C_f = N_f / N
where C_f is the content of phase f, N_f is the number of pixels of phase f, and N is the total number of pixels in the image.
2. The steel material microstructure quantification method combining EBSD and deep learning according to claim 1, wherein step 2 comprises:
Step 2.1: processing the EBSD data with data analysis software to accurately distinguish the different microstructures in the SEM image; the classified phases are marked in different colours to form a "phase diagram", which serves as the output data of the deep learning model, while the SEM image serves as the input data of the deep learning model;
Step 2.2: resizing the EBSD "phase diagram" with deep learning software to the same pixel size as the SEM image, so that the pixels of the SEM image and of the EBSD "phase diagram" correspond one to one;
Step 2.3: cutting the SEM images and the EBSD "phase diagrams" into M sub-images, where M = N × E and E is the number of sub-images cut from each SEM image or EBSD result map; a deep learning training data set is established comprising the M SEM sub-images and the M EBSD "phase diagram" sub-images, and it is divided into a training set and a test set at a ratio of 6:4;
Step 2.4: using data augmentation, rotating all samples in the training set clockwise by 90°, 180° and 270°, and adding the rotated images to the original training set, thereby quadrupling the number of training samples.
3. The steel material microstructure quantification method combining EBSD and deep learning according to claim 1, wherein step 3 comprises:
Step 3.1: building the U-Net deep learning model, which consists of a compression path and an expansion path; the compression path contains 4 convolutional layers, each followed by a max-pooling operation; the expansion path contains 4 deconvolution layers, each followed by an up-convolution operation; each convolutional layer is connected to the corresponding deconvolution layer by a skip connection ("skip layer");
the convolutional layers use 3 × 3 convolution kernels, and ReLU is selected as the activation function:
f(x) = max(0, x)
where x is the input value. To improve the convergence speed and accuracy of the model, the data are processed by batch normalization (BN), whose pseudo-code is:
μ_B ← (1/m) Σ_{i=1..m} x_i
σ_B² ← (1/m) Σ_{i=1..m} (x_i − μ_B)²
m_i ← (x_i − μ_B) / √(σ_B² + ξ)
y_i ← γ·m_i + β ≡ BN_{γ,β}(x_i)
where x_i is the i-th data point and m is the number of data points in the batch; μ_B is the mean of the batch data; σ_B² is the variance of the batch data; m_i is the normalized batch data; ξ is a small constant; BN denotes the normalization transform network, with γ and β its training parameters; and y_i is the normalized data after the scaling and translation operations, i.e., the final output;
Step 3.2: feeding the input data and output data prepared in step 2 into the deep learning model, training the model with the AdaGrad optimizer, and optimizing the parameters; the AdaGrad update is:
w = w + Δw
r_n = r_{n−1} + g_n²
Δw = −(η / (σ + √r_n)) · g_n
where w is the parameter set being optimized and Δw is its increment; η is the learning rate; g_n is the mean gradient of the sample loss with respect to the parameters in the n-th training step; σ is a very small positive number; and r_n is the gradient accumulation variable of the n-th training step, initialized to 0;
during parameter optimization, the mean square error (MSE) is used as the evaluation index of model performance, and the number of training iterations is set to n;
Step 3.3: after the deep learning model is trained, testing its predictive ability with the data in the test set; the accuracy of the prediction is evaluated with the MIoU index, which characterizes the overlap ratio of the different phases between the model prediction and the actual result:
MIoU = (1/n_cl) Σ_a [ n_aa / (t_a + Σ_b n_ba − n_aa) ]
where n_ab is the number of pixels of class a predicted as class b; n_cl is the number of classes in the classification task; and t_a = Σ_b n_ab is the total number of pixels in class a. If the mean MIoU over the test-set predictions is greater than 75%, the model accuracy meets the application requirement; otherwise, return to step 3.2 and retrain the deep learning model by adjusting the optimizer learning rate, changing the evaluation function of the training process, or adjusting the number of iterations, until the requirement is met.
CN202010816423.6A 2020-08-14 2020-08-14 Steel material microstructure quantification method combining EBSD and deep learning method Active CN111915602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010816423.6A CN111915602B (en) 2020-08-14 2020-08-14 Steel material microstructure quantification method combining EBSD and deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010816423.6A CN111915602B (en) 2020-08-14 2020-08-14 Steel material microstructure quantification method combining EBSD and deep learning method

Publications (2)

Publication Number Publication Date
CN111915602A true CN111915602A (en) 2020-11-10
CN111915602B CN111915602B (en) 2023-07-11

Family

ID=73283047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010816423.6A Active CN111915602B (en) Steel material microstructure quantification method combining EBSD and deep learning method

Country Status (1)

Country Link
CN (1) CN111915602B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033106A (en) * 2021-04-06 2021-06-25 东北大学 Steel material performance prediction method based on EBSD and deep learning method
CN113256582A (en) * 2021-05-21 2021-08-13 兰州兰石检测技术有限公司 Method for identifying original austenite grain boundary in martensite metallographic phase based on U-net network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019135052A1 (en) * 2018-01-05 2019-07-11 Technologies De France System and method for quantifying a metal of interest among a plurality of materials in a heterogeneous sample
US20190287761A1 (en) * 2017-12-18 2019-09-19 Fei Company Method, device and system for remote deep learning for microscopic image reconstruction and segmentation
US20200025696A1 (en) * 2018-07-19 2020-01-23 Fei Company Adaptive specimen image acquisition using an artificial neural network
US20200111219A1 (en) * 2018-10-03 2020-04-09 Fei Company Object tracking using image segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190287761A1 (en) * 2017-12-18 2019-09-19 Fei Company Method, device and system for remote deep learning for microscopic image reconstruction and segmentation
WO2019135052A1 (en) * 2018-01-05 2019-07-11 Technologies De France System and method for quantifying a metal of interest among a plurality of materials in a heterogeneous sample
US20200025696A1 (en) * 2018-07-19 2020-01-23 Fei Company Adaptive specimen image acquisition using an artificial neural network
US20200111219A1 (en) * 2018-10-03 2020-04-09 Fei Company Object tracking using image segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEYED MAJID AZIMI et al.: "Advanced Steel Microstructural Classification by Deep Learning Methods", Scientific Reports, pages 1-14 *
WANG Runhan et al.: "FIB-SEM image segmentation algorithm for rock cores based on convolutional neural networks" (基于卷积神经网络的岩心FIB-SEM图像分割算法), Computer Engineering (计算机工程), vol. 47, no. 1, pages 1-14 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033106A (en) * 2021-04-06 2021-06-25 东北大学 Steel material performance prediction method based on EBSD and deep learning method
CN113033106B (en) * 2021-04-06 2023-09-19 东北大学 Steel material performance prediction method based on EBSD and deep learning method
CN113256582A (en) * 2021-05-21 2021-08-13 兰州兰石检测技术有限公司 Method for identifying original austenite grain boundary in martensite metallographic phase based on U-net network

Also Published As

Publication number Publication date
CN111915602B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
Li et al. Automated defect analysis in electron microscopic images
CN106709421B (en) Cell image identification and classification method based on transform domain features and CNN
CN111915602B (en) Steel material microstructure quantification method combining EBSD and deep learning method
EP1646964B1 (en) Method and arrangement for determining an object contour
Pardo et al. Semantic segmentation of mFISH images using convolutional networks
JP7147974B2 METHOD FOR CLASSIFYING PHASES OF A METALLOGRAPHIC STRUCTURE, DEVICE FOR CLASSIFYING PHASES OF A METALLOGRAPHIC STRUCTURE, METHOD FOR PREDICTING MATERIAL PROPERTIES OF A METALLIC MATERIAL, AND DEVICE FOR PREDICTING MATERIAL PROPERTIES OF A METALLIC MATERIAL
CN111915603A (en) Artificial intelligence prediction method for noise-free phase diagram in noise-containing EBSD data
CN111008650B (en) Metallographic structure automatic grading method based on deep convolution antagonistic neural network
CN113096096A (en) Microscopic image bone marrow cell counting method and system fusing morphological characteristics
Pei et al. Robustness of machine learning to color, size change, normalization, and image enhancement on micrograph datasets with large sample differences
Gupta et al. Grain boundary detection and phase segmentation of SEM ferrite–pearlite microstructure using SLIC and skeletonization
CN115587985A (en) Method for dividing cell nucleus of histopathology image and normalizing dyeing style
Zhao et al. A new method for classifying and segmenting material microstructure based on machine learning
Dennler et al. Learning-based defect recognition for quasi-periodic HRSTEM images
CN113241154A (en) Artificial intelligent blood smear cell labeling system and method
CN116597275A (en) High-speed moving target recognition method based on data enhancement
CN113033106B (en) Steel material performance prediction method based on EBSD and deep learning method
CN113177574B (en) Visual model for material characterization image analysis and analysis method thereof
CN113989567A (en) Garbage picture classification method and device
CN113222114A (en) Image data augmentation method and device
CN113962928A (en) Defect detection method, device and medium based on multi-scale feature distillation
Abrol et al. An automated segmentation of leukocytes using modified watershed algorithm on peripheral blood smear images
Zanotelli et al. A flexible image segmentation pipeline for heterogeneous multiplexed tissue images based on pixel classification
CN114708269B (en) Method for predicting maximum size of second phase particles of bearing steel based on image recognition
Zhang et al. A deep learning-based approach for the automatic measurement of laser-cladding coating sizes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant