CN116953006A - Casting material scanning electron microscope image defect identification and quantification method - Google Patents

Casting material scanning electron microscope image defect identification and quantification method

Info

Publication number
CN116953006A
CN116953006A (application number CN202310818185.6A)
Authority
CN
China
Prior art keywords
image
defects
defect
model
electron microscope
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310818185.6A
Other languages
Chinese (zh)
Inventor
黄诗尧
丁佳俊
蒲亮兮
杨雨童
王秋锋
赵海龙
黄理
陈秋任
包祖国
韩维建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Advanced Materials Research Institute
Original Assignee
Yangtze River Delta Advanced Materials Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Advanced Materials Research Institute
Priority to CN202310818185.6A
Publication of CN116953006A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/22Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
    • G01N23/225Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion
    • G01N23/2251Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion using incident electron beams, e.g. scanning electron microscopy [SEM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/1444Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/40Imaging
    • G01N2223/401Imaging image processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/40Imaging
    • G01N2223/421Imaging digitised image, analysed in real time (recognition algorithms)
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention discloses a method for identifying and quantifying defects in scanning electron microscope (SEM) images of casting materials. The method first detects and identifies the scale bar in an SEM image of a tensile fracture of the casting material; defects are then detected and identified with a digital image processing method or a deep learning image recognition method selected according to the scale size; the identified defects are post-processed by noise reduction, dilation and similar operations; and the processed defects are quantitatively counted according to the identified scale. The method can accurately and rapidly identify defects in SEM images of casting materials in a fully automated manner; the deep learning model is improved and optimized specifically for the characteristics of the defects to be identified, which raises model performance; and the defects can be accurately quantified, with a processing speed far greater than manual annotation and calculation, effectively solving the quantitative statistics problem.

Description

Casting material scanning electron microscope image defect identification and quantification method
Technical Field
The invention belongs to the field of defect segmentation and quantification, and in particular relates to a method for identifying and quantifying defects in scanning electron microscope images of casting materials based on image processing.
Background
The casting process can produce parts with complex shapes and offers advantages such as high efficiency and simple procedures, so it is widely used in industries such as automotive, electronics, aerospace and sports equipment. However, hole-type defects caused by the casting process, such as gas pores, shrinkage cavities and shrinkage porosity, significantly reduce the mechanical and fatigue properties of the cast material. To quantitatively describe the effect of defects on performance, quantitative analysis of fracture defects of cast materials is required. Defects in casting fractures typically have the following characteristics: varying sizes, irregular morphology, uneven distribution and unpredictable numbers. These features make defect identification a significant challenge.
Conventionally, defect size is quantified manually: an SEM first scans the fracture to obtain defect images; the images are then imported into image processing software such as Image-Pro or Image-J; the area occupied by each defect is delineated by threshold segmentation or manual edge tracing; the length of the scale bar in the image and the actual size it represents must also be specified; finally, the software outputs the defect quantification data. This procedure is cumbersome and unsuitable for batch processing, and the data obtained by different operators, or by the same operator in repeated measurements, fluctuate greatly under the influence of subjective factors, so the accuracy of the quantitative data is low.
Therefore, identifying and counting the fracture defects of casting materials quickly and efficiently, and thereby assisting defect control and process optimization for casting materials, has not only important academic research value but also very significant industrial application potential. In recent years, deep learning models represented by convolutional neural networks (CNN) have been successfully applied in many computer vision (CV) fields, such as face recognition, pedestrian re-identification, scene text detection, object tracking and automatic driving. Advanced deep learning algorithms, powerful parallel computing architectures and large amounts of training data have brought tremendous success in image processing. However, the accuracy and speed of existing image recognition methods on microscopic images still need to be improved. How to improve these methods so as to realize automatic identification of metal fracture SEM images and quantification of their defects has therefore attracted widespread attention in industry.
For fracture defect detection, the prior art provides some related solutions, mainly of two kinds: object detection and image segmentation. Most existing algorithms belong to object detection, which can rapidly and accurately detect whether defects exist and where they are located. For example, CN202110259892.7 discloses a method, apparatus and device for identifying microscopic image defects of materials based on deep learning, in which defect prediction is performed on microscopic images through an object detection deep learning model, solving the defect labeling problem for microscopic images of alloy structures. Because object detection can only localize the position of a defect and cannot obtain its specific shape, object detection models are not suitable for defect identification in SEM images of casting materials. Image segmentation, the technique and process of dividing an image into several specific regions with unique properties and extracting the objects of interest, is another direction of computer vision. Image segmentation algorithms based on digital image processing can be applied to most pictures, but they cannot be automated, produce large segmentation errors, and fail on pictures whose pixel values are close to one another; image segmentation algorithms based on deep learning, when applied to SEM images of casting materials, generally do not reach the expected IoU and run slowly. For the statistical quantification of defects, no related art has been found that solves this problem.
In view of the foregoing, there is a need for a method for identifying and quantifying defects in SEM images of casting materials based on image processing to solve the above problems. After recalibration, the method can also be applied to the quantitative statistics of defects in SEM images of additively manufactured materials.
Disclosure of Invention
To remedy the deficiencies of the prior art, the invention provides a method for identifying and quantifying defects in SEM images of casting materials which combines a deep-learning-based image segmentation algorithm, data augmentation, text detection and digital image processing. It can accurately and rapidly identify defects in SEM images of casting materials in a fully automated manner; it can quantitatively analyze the defects, effectively solving the problems of poor segmentation quality and slow running speed; and compared with manual labeling the processing speed is greatly improved, effectively solving the quantitative statistics problem.
The technical scheme adopted by the invention is as follows:
a method for identifying and quantifying the defects of a scanning electron microscope image of a casting material comprises the following steps:
obtaining an image of the casting material to be identified;
inputting the image to be identified into a defect identification and quantification model to obtain basic information of the defects in the image; the basic information of a defect comprises defect type information, defect position information, defect count information and defect area information;
the defect identification and quantification model comprises:
the scale detection and identification model is used for detecting and identifying scales in the image;
the image magnification classifier is used for classifying the image magnification categories;
the image segmentation model, which comprises a digital image segmentation model and a deep learning image segmentation model, is selected according to the image magnification output by the image magnification classifier and is used for segmenting the actual positions of defects in the image to be identified;
the image post-processing model, used for post-processing the mask output by the image segmentation model;
and the quantitative analysis model, used for quantitatively characterizing the processed defects.
Further, the scale detection and identification model is specifically as follows:
s1, detecting the approximate position of a scale through scale characteristic positioning;
s2, identifying the size of the detected scale picture and the occupied pixel length through OCR (Optical character recognition) technology, and using the detected scale picture for reference and defect quantitative statistics of an image segmentation model.
Further, the image magnification classifier is specifically as follows:
s1, determining a pixel length threshold according to the actual condition of an image dataset to be detected;
s2, comparing the pixel length calculated by the scale detection and identification model with a threshold value in each detection, and automatically completing classification.
Further, the digital image segmentation module in the image segmentation model comprises the following steps:
s1, performing pixel point threshold segmentation on a global image by adopting an image processing algorithm of global threshold segmentation according to pixel values of pixel points without any size adjustment and preprocessing, wherein the defect area can be segmented;
further, the construction and segmentation process of the deep learning image segmentation model in the image segmentation model comprises the following steps:
s1, constructing a model improved based on a traditional U-net neural network, replacing a convolution pooling structure with a cavity convolution pooling pyramid (ASPP) network structure at a coding downsampling layer and a decoding output layer, and adding an attention mechanism after each convolution layer of feature extraction;
s2, firstly, carrying out multi-step long feature extraction on the picture to obtain a plurality of feature images; carrying out respective convolution calculation on the feature map obtained in each step from the multi-scale cavity convolution branches by using an ASPP network; the weighted feature fusion is carried out on the feature graphs and the convolved feature graphs through the channel weight and the position weight provided by the attention module, so that a final feature result is obtained; and up-sampling the final feature map to the same size as the original map, and then identifying defective pixel points, thereby realizing image segmentation.
Further, the training method of the deep learning image segmentation model in the image segmentation model comprises the following steps:
s1, randomly initializing parameters of a deep learning image segmentation model;
s2, inputting the training set into the constructed deep learning image segmentation model;
s3, extracting a plurality of groups of feature images with different dimensions after each image is input into a deep learning image segmentation model; dividing the defects after fusing the feature graphs with different dimensions to obtain masks of the defects; comparing the pixel point set with the true defect pixel point set, and calculating a loss function loss value; and finally, returning the loss value to the deep learning image segmentation model for back propagation, adjusting the parameters according to the loss value and the learning rate, and repeating the steps until convergence or training is finished.
Further, the training data processing of the deep learning image segmentation module in the image segmentation model comprises the following steps:
s1, obtaining a scanning electron microscope image of a casting material, marking the types and positions of defects in the scanning electron microscope image, and performing amplification treatment on the marked pictures to obtain a defect data set;
s2, dividing the picture set to obtain a training set, a testing set and a verification set for training a deep learning image segmentation model; model training and learning are carried out based on the training set and the verification set data, parameters of a deep learning image segmentation model are obtained, and the deep learning image segmentation model based on the existing training set/verification set learning and training is output.
Further, the loss function adopted for training the defect identification and quantification model is expressed as:
Loss = L_mask + L_Dice
where L_mask is the mask loss and L_Dice is the term based on the Dice evaluation index for semantic segmentation, which evaluates the similarity of two samples.
Further, the Dice evaluation index is expressed as:
Dice = 2|X∩Y| / (|X| + |Y|)
where X is the set of true defect pixels, Y is the set of defect pixels returned by the model, X∩Y is the intersection of X and Y, |X| and |Y| are the numbers of elements in X and Y, and the coefficient 2 in the numerator is used because the denominator counts the elements common to X and Y twice.
Further, the mask loss is expressed as follows:
for each pixel in the image the corresponding cross-entropy loss is calculated, and the cross-entropy losses of all pixels are then averaged to obtain the mask loss L_mask, i.e.
L_mask = -(1/m) Σ_{i=1..m} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
where ŷ_i is the predicted value of pixel i, y_i is its true label, and m is the total number of pixels included in the calculation. An illustrative implementation follows.
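A direct implementation of the Dice index and the averaged per-pixel cross-entropy mask loss defined above; combining them as mask loss plus (1 − Dice) is an assumption about how the two terms are summed into the training loss.

```python
import torch
import torch.nn.functional as F


def dice_index(pred, target, eps=1e-6):
    """Dice = 2|X∩Y| / (|X| + |Y|), with pred as probabilities and target as a binary mask."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


def mask_loss(pred, target):
    """Per-pixel binary cross-entropy averaged over all m pixels."""
    return F.binary_cross_entropy(pred, target)


def total_loss(pred, target):
    # combined training loss: mask loss plus a Dice-based term (1 - Dice, so that lower is better)
    return mask_loss(pred, target) + (1.0 - dice_index(pred, target))
```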
Further, the image post-processing model includes the steps of:
s1, inputting a mask after image segmentation, performing median filtering, and expanding a defect area to reduce loss, wherein after the operation is finished, a processed mask image is obtained;
s2, fusing the processed mask image obtained in the step S1 with the original image, and fusing the two images with each other through an image mixing technology so as to mark the defect on the original image, wherein the fused image is the final effect image.
Further, the quantitative analysis model includes the steps of:
s1, calculating the occupied area of each pixel according to the obtained scale and the occupied pixel length;
s2, inputting mask images processed in the image post-processing model, quantitatively counting the number of defects, and carrying out area, area occupation ratio and grade classification on each defect; the classification of the grades is classified according to the area ratio of each defect, and the classification is divided into three types, namely small defects below 0.01%, medium defects between 0.01% and 1%, and large defects above 1%.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the above method for identifying and quantifying defects in scanning electron microscope images of casting materials based on image processing.
The invention has the beneficial effects that:
1. The image-processing-based method for identifying and quantifying defects in SEM images of casting materials combines a deep learning image segmentation algorithm, data augmentation, text detection and digital image processing, so that defects in casting material SEM images can be accurately and rapidly identified with full automation; the defects can also be accurately quantified, so the processing speed is greatly improved compared with manual labeling and the quantitative statistics problem is effectively solved.
2. The invention provides an image-processing-based method for identifying and quantifying defects in SEM images of casting materials whose defect identification and quantification model comprises several cooperating sub-models. The scale detection and identification model automatically detects and identifies the scale in the image; the image magnification classifier classifies the image magnification and selects the subsequent model; the image segmentation model, comprising a digital image segmentation model and a deep learning image segmentation model, segments the actual positions of defects in the image to be identified; the image post-processing model post-processes the mask output by the image segmentation model; and the quantitative analysis model quantitatively characterizes the processed defects. The invention can automatically classify defect images of different magnifications, which makes the segmentation models more targeted and prevents a model from failing on images of a non-corresponding magnification. Meanwhile, the invention works end to end and pixel to pixel; compared with traditional segmentation networks based on convolutional neural networks it is more efficient, because the repeated storage and repeated convolution computation caused by using pixel blocks are avoided, which markedly reduces model complexity and running time; the defects can be accurately quantified and the quantitative statistics problem is effectively solved.
3. The invention provides an image-processing-based method for identifying and quantifying defects in SEM images of casting materials that includes an improved deep learning model based on the U-net neural network. Replacing the convolution-pooling structure with an atrous spatial pyramid pooling structure at the encoder downsampling layers and the decoder output layer enlarges the receptive field and reduces the resolution loss of downsampling, a targeted optimization for the requirements of accurate defect localization and high edge precision in this problem; and adding an attention module after each feature-extraction convolution layer gives the model attention capability, helping it focus on image channels and feature regions with different weights and improving the computation speed and prediction accuracy of the model.
Drawings
FIG. 1 is a graph of the marking of defect locations in a scanning electron microscope image of a casting material in an example.
Fig. 2 is a flowchart of the training method of the deep learning image segmentation model according to the present invention.
Fig. 3 is a loop iteration flowchart of a training method of the deep learning image segmentation model of the present invention.
FIG. 4 is a flowchart of a method for identifying and quantifying image defects of a cast material scanning electron microscope based on image processing according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be further noted that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or identification steps closely related to aspects of the present invention are shown in the specific embodiments, and other details not greatly related to the present invention are omitted.
In addition, it should be further noted that the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus.
The method for identifying the image defects of the scanning electron microscope of the casting material can be applied to an application environment which can comprise a terminal and a server, wherein the terminal is communicated with the server through a network. The method can be applied to the terminal and the server. The terminal may be, but not limited to, various industrial computers, personal computers, notebook computers, smart phones, tablet computers. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
A method for identifying and quantifying the defects of a scanning electron microscope image of a casting material is characterized by comprising the following steps:
obtaining an image of the casting material to be identified;
inputting the image to be identified into a defect identification and quantization model to obtain basic information of defects in the image; the basic information of the defect comprises defect type information, defect position information, defect quantity information and defect area information;
the defect identification model comprises:
the scale detection and identification model is used for detecting and identifying scales in the image;
the image magnification classifier is used for classifying the image magnification categories and selecting a subsequent model;
the image segmentation model comprises a digital image segmentation model and a deep learning image segmentation model and is used for segmenting the actual position of the defect of the image to be identified;
the image post-processing model, used for post-processing the mask output by the image segmentation model;
and the quantitative analysis model, used for quantitatively characterizing the processed defects.
Specifically, the step of detecting and identifying the model by the scale comprises the following steps:
s1, positioning according to the characteristics of a scale, and detecting the approximate position of the scale;
s2, identifying the size of the detected scale picture and the occupied pixel length through OCR (Optical character recognition) technology, and using the detected scale picture for reference and defect quantitative statistics of a segmentation model.
Specifically, the image magnification classifier is specifically as follows:
s1, determining a pixel length threshold according to the actual condition of an image dataset to be detected;
s2, in each detection, referring to FIG. 4, comparing the actual length of the pixel calculated by the scale detection recognition model with a threshold value, judging that the actual length of the pixel is larger as a low-magnification image and selecting a digital image segmentation model, judging that the actual length of the pixel is smaller as a high-magnification image and selecting a deep learning image segmentation model.
In this way, the method uses an automatic classifier to sort defect images of different magnifications, which makes the segmentation models more targeted and prevents a model from failing on images of a non-corresponding magnification, thereby improving the accuracy and stability of the overall defect detection results.
Specifically, referring to fig. 2, the training method of the deep learning image segmentation model includes the following steps:
s1, obtaining a plurality of scanning electron microscope images, and respectively marking the defect types and positions of the scanning electron microscope images;
the scanning electron microscope image is an image formed by exciting various physical information through the interaction between a light beam and a substance, and collecting, amplifying and re-imaging the information to achieve the aim of representing the microscopic morphology of the substance.
Referring to FIG. 1, the type of cast material scanning electron microscope image defect and the relative picture position are marked.
The previously labeled pictures are augmented to a certain number by data augmentation (including distortion, rotation, scaling, random erasing and similar operations). Because the available material is limited, the obtained SEM images are limited in number and have different magnifications, so data augmentation expands the data set and plays an important role in the rationality and authenticity of the experiment.
specifically, the step of performing segmentation labeling on the defects of the scanning electron microscope image in the S1 includes:
1) Acquiring a plurality of scanning electron microscope images;
2) Marking the defect positions and categories of the scanning electron microscope images;
3) Outputting the marked defect position and category information to a json file;
in the actual operation process, firstly, a scanning electron microscope image is imported, the positions and the types of the defects are manually marked in the scanning electron microscope image, meanwhile, affine transformation can be utilized to enrich the content of the picture set, the software can automatically record the position range of each defect, and the position information is stored in a json file. When the defect position is marked, the corresponding category of the defect can be set through software and is also stored in the json file.
In this way, the method uses category labeling to annotate the defect images, realizing high-precision instance segmentation, improving the accuracy of defect detection results, promoting the intelligent progress of the alloy industry and accelerating the application of the fourth industrial revolution in the metal casting industry.
Specifically, in some embodiments, for step S1, after a plurality of scanning electron microscope images of the casting material are acquired, blank images are removed and then marked.
S2, training a defect recognition model based on the data set obtained in the S1.
The augmented picture set is divided into data sets at a ratio of 7:2:1 for training the defect identification model; model training and learning are carried out on the training set and validation set data to obtain the model parameters, the model trained on the existing training/validation sets is output, and the model prediction performance is quantitatively evaluated with the mIoU metric, completing the establishment of the model. A sketch of the mIoU computation follows.
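The mIoU metric used to evaluate prediction performance can be computed as in this sketch, assuming binary defect/background masks:

```python
import numpy as np


def mean_iou(pred_mask, true_mask, eps=1e-6):
    """Mean IoU over the background and defect classes for binary masks with values 0/1."""
    ious = []
    for cls in (0, 1):
        p, t = pred_mask == cls, true_mask == cls
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```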
Specifically, the training method for the deep learning image segmentation model comprises the following steps:
s1, randomly initializing parameters of a deep learning image segmentation model;
s2, inputting the training set into the constructed deep learning image segmentation model;
s3, extracting a plurality of groups of feature images with different dimensions after each image is input into a deep learning image segmentation model; dividing the defects after fusing the feature graphs with different dimensions to obtain masks of the defects; comparing the pixel point set with the true defect pixel point set, and calculating a loss function loss value; finally, returning the loss value to the deep learning image segmentation model for back propagation, adjusting parameters according to the loss value and the learning rate, and repeating the steps until convergence or training is finished
Specifically, the loss function adopted for training the defect identification model is expressed as:
Loss = L_mask + L_Dice
where L_mask is the mask loss and L_Dice is the term based on the Dice evaluation index for semantic segmentation, which evaluates the similarity of two samples.
Specifically, the Dice evaluation index is expressed as:
Dice = 2|X∩Y| / (|X| + |Y|)
where X is the set of true defect pixels, Y is the set of defect pixels returned by the model, X∩Y is the intersection of X and Y, |X| and |Y| are the numbers of elements in X and Y, and the coefficient 2 in the numerator is used because the denominator counts the elements common to X and Y twice.
Specifically, the mask loss is expressed as follows: the cross-entropy loss corresponding to each pixel in the image is calculated, and the cross-entropy losses of all pixels are then averaged to obtain the final mask loss of the network:
L_mask = -(1/m) Σ_{i=1..m} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
where ŷ_i is the predicted value of pixel i and y_i is its true label.
Specifically, the image post-processing model includes the steps of:
s1, inputting a mask after image segmentation, performing median filtering, and expanding a defect area to reduce loss, wherein after the operation is finished, a processed mask image is obtained;
s2, fusing the processed mask image obtained in the step S1 with the original image, and fusing the two images with each other through an image mixing technology so as to mark the defect on the original image, wherein the fused image is the final effect image.
Specifically, the quantitative analysis model includes the steps of:
s1, calculating the occupied area of each pixel according to the size of the scale and the length of the occupied pixel;
s2, inputting mask images processed in the image post-processing model, quantitatively counting the number of defects, and carrying out area, area occupation ratio and grade classification on each defect; the classification of the grades is classified according to the area ratio of each defect, and the classification is divided into three types, namely small defects below 0.01%, medium defects between 0.01% and 1%, and large defects above 1%.
Referring to fig. 4, based on the above image defect recognition method, the present invention further provides a system for recognizing and quantifying image defects of a casting material scanning electron microscope based on image processing, comprising:
the scale detection and identification module is used for detecting and identifying scales in the image;
the image segmentation module comprises a digital image segmentation model and a deep learning image segmentation model and is used for segmenting the actual position of the defect of the image to be identified;
the image post-processing module is used for carrying out post-processing on the mask outputted by the image segmentation model;
and the quantitative analysis module is used for quantitatively characterizing the processed defects.
Based on the above image defect identification method, the present invention further provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The above embodiments are merely intended to illustrate the design concept and features of the present invention and to enable those skilled in the art to understand and implement it; the scope of the present invention is not limited to these embodiments. Therefore, all equivalent changes or modifications made according to the principles and design ideas of the present invention fall within the scope of the present invention.

Claims (13)

1. A method for identifying and quantifying the defects of a scanning electron microscope image of a casting material is characterized by comprising the following steps:
obtaining an image of the casting material to be identified;
inputting the image to be identified into a defect identification and quantization model to obtain basic information of defects in the image; the basic information of the defect comprises defect type information, defect position information, defect quantity information and defect area information;
the defect identification and quantification model comprises:
the scale detection and identification model is used for detecting and identifying scales in the image;
the image magnification classifier is used for classifying the image magnification categories;
the image segmentation model comprises a digital image segmentation model and a deep learning image segmentation model, and is selected according to the image magnification factor output by the image magnification factor classifier and used for segmenting the actual position of the defect in the image to be identified;
the image post-processing model, used for post-processing the mask output by the image segmentation model;
and the quantitative analysis model, used for quantitatively characterizing the processed defects.
2. The method for identifying and quantifying image defects of a casting material scanning electron microscope according to claim 1, wherein the scale detection and identification model is specifically as follows:
s1, detecting the position of a scale through scale characteristic positioning;
s2, identifying the size of the detected scale picture and the occupied pixel length through OCR technology, and using the detected scale picture for reference and defect quantitative statistics of an image segmentation model.
3. The method for identifying and quantifying image defects of a cast material scanning electron microscope according to claim 1, wherein the image magnification classifier is specifically as follows:
s1, determining a pixel length threshold according to the actual condition of an image dataset to be detected;
s2, comparing the pixel length calculated by the scale detection and identification model with a threshold value in each detection, and automatically completing classification.
4. The method for identifying and quantifying image defects of a cast material scanning electron microscope according to claim 1, wherein the segmentation method of the digital image segmentation model comprises the following step:
adopting a global threshold segmentation image processing algorithm, setting a threshold according to the pixel values, performing pixel-level threshold segmentation on the whole image, and segmenting out the regions where the defects are located.
5. The method for identifying and quantifying defects of a cast material scanning electron microscope image according to claim 1, wherein the process of constructing and segmenting the deep learning image segmentation model comprises the following steps:
s1, constructing a model improved based on a traditional U-net neural network, replacing a convolution pooling structure with a cavity convolution pooling pyramid network structure at a coding downsampling layer and a decoding output layer, and adding an attention mechanism after each convolution layer is extracted from features;
s2, firstly, carrying out multi-step long feature extraction on the picture to obtain a plurality of feature images; carrying out respective convolution calculation on the feature map obtained in each step from the multi-scale cavity convolution branches by using an ASPP network; the weighted feature fusion is carried out on the feature graphs and the convolved feature graphs through the channel weight and the position weight provided by the attention module, so that a final feature result is obtained; and up-sampling the final feature map to the same size as the original map, and then identifying defective pixel points, thereby realizing image segmentation.
6. The method for identifying and quantifying image defects of a cast material scanning electron microscope according to claim 4, wherein the training method for a deep learning image segmentation model in the image segmentation model comprises the steps of:
s1, randomly initializing parameters of a deep learning image segmentation model;
s2, inputting the training set into the constructed deep learning image segmentation model;
s3, extracting a plurality of groups of feature images with different dimensions after each image is input into a deep learning image segmentation model; dividing the defects after fusing the feature graphs with different dimensions to obtain masks of the defects; comparing the pixel point set with the true defect pixel point set, and calculating a loss function loss value; and finally, returning the loss value to the deep learning image segmentation model for back propagation, adjusting the parameters according to the loss value and the learning rate, and repeating the steps until convergence or training is finished.
7. The method for identifying and quantifying defects in a cast material scanning electron microscope image according to claim 5, wherein the training data processing of the deep learning image segmentation module comprises the steps of:
s1, obtaining a scanning electron microscope image of a casting material, marking the types and positions of defects in the scanning electron microscope image, and performing amplification treatment on the marked pictures to obtain a defect data set;
s2, dividing the picture set and the corresponding defect data set to obtain a training set, a testing set and a verification set for training a deep learning image segmentation model; model training and learning are carried out based on the training set and the verification set data, parameters of a deep learning image segmentation model are obtained, and the deep learning image segmentation model based on the existing training set/verification set learning and training is output.
8. A method for identifying defects in a scanning electron microscope image of a casting material according to any one of claims 1 to 6, wherein the loss function used for training the defect identification and quantification model is expressed as:
Loss = L_mask + L_Dice
where L_Dice is the term based on the Dice evaluation index for semantic segmentation, representing the evaluation of the similarity of two samples, and L_mask is the mask loss.
9. The method for identifying defects in a scanning electron microscope image of a casting material according to claim 7, wherein the Dice evaluation index is expressed as:
Dice = 2|X∩Y| / (|X| + |Y|)
where X is the set of true defect pixels, Y is the set of defect pixels returned by the model, X∩Y is the intersection of X and Y, |X| and |Y| are the numbers of elements in X and Y, and the coefficient 2 in the numerator is used because the denominator counts the elements common to X and Y twice.
10. The method for identifying defects in a cast material scanning electron microscope image according to claim 7, wherein the mask loss is expressed as follows: the cross-entropy loss corresponding to each pixel in the image is calculated, and the cross-entropy losses of all pixels are then averaged to obtain the mask loss L_mask:
L_mask = -(1/m) Σ_{i=1..m} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
where ŷ_i is the predicted value of pixel i, y_i is its true label, and m is the total number of pixels included in the calculation.
11. The method for identifying and quantifying image defects of a cast material scanning electron microscope according to claim 1, wherein the image post-processing model comprises the steps of:
s1, inputting a mask after image segmentation, performing median filtering, and expanding a defect area to reduce loss, and obtaining a processed mask image after completion;
s2, fusing the processed mask image obtained in the step S1 with the original image, and fusing the two images with each other through an image mixing technology so as to mark the defect on the original image, wherein the fused image is the final effect image.
12. The method for identifying and quantifying image defects of a cast material scanning electron microscope according to claim 1, wherein the quantitative analysis model comprises the steps of:
s1, calculating the occupied area of each pixel according to the obtained scale and the occupied pixel length;
s2, inputting mask images processed in an image post-processing model, quantitatively counting the number of defects, and carrying out area, area occupation ratio and grade classification on each defect; the classification of the grades is classified according to the area ratio of each defect, and the classification is divided into three types, namely small defects below 0.01%, medium defects between 0.01% and 1%, and large defects above 1%.
13. A computer device comprising a memory and a processor, the memory storing a computer program, the processor executing the computer program to perform a method of identifying and quantifying cast material scanning electron microscope image defects as defined in claim 1.
CN202310818185.6A 2023-07-05 2023-07-05 Casting material scanning electron microscope image defect identification and quantification method Pending CN116953006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310818185.6A CN116953006A (en) 2023-07-05 2023-07-05 Casting material scanning electron microscope image defect identification and quantification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310818185.6A CN116953006A (en) 2023-07-05 2023-07-05 Casting material scanning electron microscope image defect identification and quantification method

Publications (1)

Publication Number Publication Date
CN116953006A true CN116953006A (en) 2023-10-27

Family

ID=88452144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310818185.6A Pending CN116953006A (en) 2023-07-05 2023-07-05 Casting material scanning electron microscope image defect identification and quantification method

Country Status (1)

Country Link
CN (1) CN116953006A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635694A (en) * 2024-01-24 2024-03-01 中南大学 Method, device and equipment for measuring secondary sphere size of electron microscope image
CN117635694B (en) * 2024-01-24 2024-04-19 中南大学 Method, device and equipment for measuring secondary sphere size of electron microscope image
CN117710377A (en) * 2024-02-06 2024-03-15 中国科学院长春光学精密机械与物理研究所 Deep learning algorithm-based CMOS defect detection method
CN117710377B (en) * 2024-02-06 2024-05-24 中国科学院长春光学精密机械与物理研究所 Deep learning algorithm-based CMOS defect detection method
CN117994786A (en) * 2024-03-06 2024-05-07 大连理工大学 Metal fracture type identification method based on deep learning

Similar Documents

Publication Publication Date Title
CN116953006A (en) Casting material scanning electron microscope image defect identification and quantification method
CN115082467B (en) Building material welding surface defect detection method based on computer vision
CN110660052A (en) Hot-rolled strip steel surface defect detection method based on deep learning
CN111860596B (en) Unsupervised pavement crack classification method and model building method based on deep learning
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
CN110322445B (en) Semantic segmentation method based on maximum prediction and inter-label correlation loss function
CN110532946B (en) Method for identifying axle type of green-traffic vehicle based on convolutional neural network
CN115880298B (en) Glass surface defect detection system based on unsupervised pre-training
CN112862811A (en) Material microscopic image defect identification method, equipment and device based on deep learning
CN111369526B (en) Multi-type old bridge crack identification method based on semi-supervised deep learning
CN114372955A (en) Casting defect X-ray diagram automatic identification method based on improved neural network
CN113052185A (en) Small sample target detection method based on fast R-CNN
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN116883393B (en) Metal surface defect detection method based on anchor frame-free target detection algorithm
CN117036243A (en) Method, device, equipment and storage medium for detecting surface defects of shaving board
CN114897802A (en) Metal surface defect detection method based on improved fast RCNN algorithm
CN113496260B (en) Grain depot personnel non-standard operation detection method based on improved YOLOv3 algorithm
CN116129280B (en) Method for detecting snow in remote sensing image
CN111832463A (en) Deep learning-based traffic sign detection method
CN116597275A (en) High-speed moving target recognition method based on data enhancement
CN113989567A (en) Garbage picture classification method and device
CN113920391A (en) Target counting method based on generated scale self-adaptive true value graph
CN112862767A (en) Measurement learning-based surface defect detection method for solving difficult-to-differentiate unbalanced samples
CN112949614A (en) Face detection method and device for automatically allocating candidate areas and electronic equipment
CN111723223B (en) Multi-label image retrieval method based on subject inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination