CN116188560A - Automatic grain identification and accurate quantization characterization method based on metallographic pictures - Google Patents
- Publication number
- CN116188560A CN116188560A CN202211625549.0A CN202211625549A CN116188560A CN 116188560 A CN116188560 A CN 116188560A CN 202211625549 A CN202211625549 A CN 202211625549A CN 116188560 A CN116188560 A CN 116188560A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- gray
- grain
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000012512 characterization method Methods 0.000 title claims abstract description 8
- 238000013139 quantization Methods 0.000 title claims description 6
- 238000000034 method Methods 0.000 claims abstract description 42
- 239000013078 crystal Substances 0.000 claims abstract description 34
- 230000011218 segmentation Effects 0.000 claims abstract description 20
- 230000009466 transformation Effects 0.000 claims abstract description 6
- 238000006243 chemical reaction Methods 0.000 claims description 21
- 239000011159 matrix material Substances 0.000 claims description 21
- 238000004364 calculation method Methods 0.000 claims description 16
- 230000000694 effects Effects 0.000 claims description 10
- 238000000605 extraction Methods 0.000 claims description 6
- 230000000877 morphologic effect Effects 0.000 claims description 6
- 238000004445 quantitative analysis Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 4
- 230000002093 peripheral effect Effects 0.000 claims description 3
- 230000008569 process Effects 0.000 claims description 3
- 238000004891 communication Methods 0.000 claims description 2
- 230000001131 transforming effect Effects 0.000 claims description 2
- 238000005260 corrosion Methods 0.000 claims 1
- 230000007797 corrosion Effects 0.000 claims 1
- 239000000463 material Substances 0.000 abstract description 10
- 239000007769 metal material Substances 0.000 abstract description 9
- 238000004458 analytical method Methods 0.000 abstract description 5
- 239000002184 metal Substances 0.000 abstract description 3
- 238000007431 microscopic evaluation Methods 0.000 abstract description 3
- 238000003672 processing method Methods 0.000 abstract description 3
- 238000011002 quantification Methods 0.000 abstract 2
- 230000007774 longterm Effects 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 4
- 230000007547 defect Effects 0.000 description 2
- 238000007689 inspection Methods 0.000 description 2
- 238000007619 statistical method Methods 0.000 description 2
- 238000010835 comparative analysis Methods 0.000 description 1
- 238000005336 cracking Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010791 quenching Methods 0.000 description 1
- 230000000171 quenching effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an automatic grain identification and accurate quantitative characterization method based on metallographic pictures, belonging to the technical field of material microscopic analysis. The method reduces noise and extracts the image with image processing methods such as the closing operation, the bottom-hat transform and thresholding, and segments the grains of the image with a marker-based watershed segmentation algorithm. The grain quantification data can serve as a reference for mechanical property analysis of metal materials and support tracking quantitative analysis of grain growth in long-term overtemperature metal components.
Description
Technical Field
The invention belongs to the technical field of material microscopic analysis, and particularly relates to an automatic grain identification and accurate quantitative characterization method based on metallographic pictures.
Background
Grain statistics and analysis of metal materials are essential in research such as microscopic analysis and performance evaluation of materials. In general, the grain size of a metal material in a steady state and the yield strength of the material obey the Hall-Petch relation: the smaller the grain size, the higher the yield strength and the better the plasticity and toughness; conversely, the coarser the grains, the poorer the mechanical properties of the metal material, and quenching deformation and cracking occur more easily.
Currently, the grains of metal materials are generally analyzed quantitatively by manual statistics. Manual counting is time-consuming, its error grows as personnel fatigue during long counting sessions, and it can only count the number of grains: it can neither calculate grain areas nor analyze the size and distribution of grain areas in depth. Because of these defects of manual statistical analysis and quantification of grain images, the invention provides an image processing method for automatic identification and accurate quantitative characterization of grains in grain images of metal materials.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method that exploits the strengths of image processing in image segmentation and quantitative analysis to perform automatic grain identification and accurate quantitative characterization (namely maximum area, minimum area, average area, grain-area size distribution and grain size analysis) on metal-material grain images.
The invention solves the problems by adopting the following technical scheme: the automatic grain identification and accurate quantitative characterization method based on metallographic pictures is characterized by comprising the following steps of:
S1, gray level conversion: the imported metallographic picture is denoted I; it is converted by gray conversion into a gray image with pixel values 0-255, denoted I1;
S2, bottom-hat transform: a morphological bottom-hat transform is applied to the gray-converted picture I1; the transformed image is denoted I2;
S3, grain-boundary threshold extraction: based on the gray-level distribution of the bottom-hat-transformed image I2, a suitable threshold is selected manually and the grain boundaries are extracted from I2; grain-boundary pixels are set to 255 and grain pixels to 0, and the resulting binary image is denoted I3;
S4, image closing operation: because image I3 contains many noise points, each grain of the binary image I3 is processed with an image closing operation to remove in-grain noise as far as possible, so that the grains can be better segmented; the closed image is denoted I4;
S5, marker-based watershed segmentation: first, a distance transform is applied to the closed image I4, and the resulting image is denoted I5; image I5 is then binarized, giving image I6, and the connected regions of I6 are classified and labeled by the 8-neighborhood connectivity check, giving the marker image mark1; subtracting the binarized image I6 from the closed image I4 gives the grain foreground region I7; pixels of mark1 at positions where I7 equals 255 are assigned 0, pixels of mark1 at positions where I4 equals 0 are assigned -1, and watershed segmentation of the grain image I combined with the marker image mark1 then gives image mark2;
S6, checking and improving the grain segmentation effect: the grain image mark2 after watershed segmentation is checked; if the segmentation effect is good, the next grain quantization operation is performed; if it is poor, the process returns to step S3, the grain-boundary threshold is extracted again, and steps S4 and S5 are repeated;
S7, quantitative grain analysis: the area of each grain in image mark2 is counted by image traversal, the average and maximum grain areas are calculated, the grain-area distribution curve is drawn, and the grain size is calculated according to the definition of grain size.
Further, step S1 is implemented as follows: the gray-conversion formula for image I1 is Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the pixel value of the gray image I1 and R, G, B are the pixel values of image I.
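The gray-conversion formula of step S1 can be sketched in NumPy (a minimal illustration; the patent names no implementation, and the function name to_gray is hypothetical):

```python
import numpy as np

def to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to an 8-bit gray image I1
    using the weights Gray = R*0.299 + G*0.587 + B*0.114 from step S1."""
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    gray = r * 0.299 + g * 0.587 + b * 0.114
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```

The result has one channel per pixel, ready for the bottom-hat transform of step S2.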
Further, step S2 comprises the sub-steps of:
S21, the gray image I1 is represented as a three-dimensional matrix A(x, y, z), where x and y are the plane position of a pixel in I1 and z is the gray value of that pixel; dilation followed by erosion (the closing operation) is applied to matrix A, giving matrix B;
S22, matrix A is subtracted from matrix B, giving matrix C (the bottom-hat transform), and matrix C is converted to the gray image I2.
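Steps S21-S22 can be sketched with a 3×3 max/min filter in NumPy (an assumption for illustration: the patent fixes neither the structuring-element size nor the border handling, and the helper names are hypothetical):

```python
import numpy as np

def _filter3(img, fn):
    """Apply fn (np.max or np.min) over each 3x3 neighborhood, edges replicated."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    views = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return fn(np.stack(views), axis=0)

def closing(img):
    """Closing operation of step S21: dilation (max) followed by erosion (min)."""
    return _filter3(_filter3(img, np.max), np.min)

def bottom_hat(img):
    """Bottom-hat transform of step S22: closing minus the original image;
    dark details such as etched grain boundaries come out bright."""
    return closing(img).astype(int) - img.astype(int)
```

On a bright field with a single dark pixel, the closing fills the dark spot, so the bottom-hat image is bright exactly there.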
Further, step S3 is implemented as follows: from the gray-level distribution of image I2, suitable gray values gray_min and gray_max are selected as the threshold interval for grain-boundary extraction in I2; that is, pixels of I2 whose gray value satisfies gray_min ≤ value ≤ gray_max are set to 255, all other pixels are set to 0, and the resulting binary image is denoted I3.
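The interval thresholding of step S3 reduces to a single mask in NumPy (a minimal sketch; the function name extract_boundaries is hypothetical):

```python
import numpy as np

def extract_boundaries(img, gray_min, gray_max):
    """Step S3: pixels whose gray value lies in [gray_min, gray_max] become
    255 (grain boundary); all others become 0 (grain interior)."""
    inside = (img >= gray_min) & (img <= gray_max)
    return np.where(inside, 255, 0).astype(np.uint8)
```

With the limits gray_min = 16, gray_max = 255 used later in the embodiment, every pixel at or above 16 is marked as boundary.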
Further, step S4 comprises the sub-steps of:
S41, an image I4 of the same size as I3 with all pixel values 0 is created;
S42, an 8-neighborhood connectivity check is performed on the regions of the binary image I3 whose pixels are 0, and the pixels of I4 corresponding to the disconnected regions of I3 are labeled 1, 2, 3, 4, ...;
S43, the regions of I4 with pixel values 1, 2, 3, 4, ... are each given the image closing operation of step S21; to preserve the integrity of the grain structure, pixel values connected to the background region are kept unchanged while each grain region is closed;
S44, pixels of I4 equal to 0 are assigned 1, pixels of I4 equal to 255 are assigned 0, and pixels of I4 equal to 1 are then assigned 255.
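The 8-neighborhood connectivity check of step S42 (used again in S55 and S71) can be sketched as a breadth-first flood fill; a stdlib/NumPy sketch under that assumption, with the hypothetical name label8:

```python
import numpy as np
from collections import deque

def label8(mask):
    """Label the 8-connected regions of True pixels 1, 2, 3, ... (step S42)."""
    labels = np.zeros(mask.shape, dtype=int)
    nxt = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # already swallowed by an earlier region
        nxt += 1
        labels[sy, sx] = nxt
        q = deque([(sy, sx)])
        while q:                          # breadth-first flood fill
            y, x = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = nxt
                        q.append((ny, nx))
    return labels
```

Each label value then identifies one candidate grain region, which can be closed individually as in step S43.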
Further, step S5 comprises the sub-steps of:
S51, the regions of image I4 with pixel value 255 are stored as coordinates in subset z1, and the regions with pixel value 0 are stored as coordinates in subset z2;
S52, for each coordinate in subset z1 the minimum Euclidean distance to the coordinates of subset z2 is calculated, forming a set of coordinates and distances denoted z3; the minimum distance value in z3 is denoted z_min and the maximum distance value z_max;
S53, an image I5 of the same size as I4 with all pixel values 0 is created; the pixels of I5 at the coordinate positions of set z3 are assigned according to G(x, y) = 255 × |z3(x, y) - z_min| / |z_max - z_min|;
S54, image I5 is traversed to find the maximum pixel value G_max; pixels of I5 with gray value greater than 0.05 × G_max are assigned 255 and pixels with gray value smaller than 0.05 × G_max are assigned 0; the converted image is denoted I6;
S55, an image mark1 of the same size as I6 with all pixel values 0 is created; the 8-connected-neighborhood check is applied to the regions of I6 with pixel value 255, and the pixels of mark1 corresponding to the disconnected regions of I6 are labeled 1, 2, 3, ...;
S56, the peripheral border of image mark1 is assigned -1;
S57, the regions of the mark1 image with pixel value 0 are traversed; if any of the upper, lower, left or right neighbors of a 0-valued pixel is a non-zero marked pixel, the gray gradient at the corresponding pixel position of the original image I is calculated from the RGB values of the target pixel and its four neighbors, where R, G, B are the RGB values of the target pixel, RL, GL, BL those of its left-neighbor pixel, RR, GR, BR those of its right-neighbor pixel, RT, GT, BT those of its upper-neighbor pixel, and RB, GB, BB those of its lower-neighbor pixel; the pixels whose gray gradient has been calculated are placed into the queue of set q as coordinate positions;
S58, the set q is scanned from left to right in order of gradient magnitude; when exactly one marked pixel exists among the four-neighborhood coordinates of the pixel of the mark1 image corresponding to a coordinate position in set q, the pixel at that position of mark1 is assigned the marker value of that four-neighborhood coordinate, the coordinate position is deleted from the queue of set q, the four-neighborhood of that pixel position in mark1 is searched for regions with value 0, and if any exist their gradients are calculated by the gray-gradient method of step S57 and they are placed into the queue of set q as coordinate positions before the scan exits; when two or more different marked pixels are found among the four-neighborhood pixels of the pixel of mark1 corresponding to a coordinate position in set q, the pixel at that position of mark1 is assigned -1, the coordinate position is deleted from the queue of set q, the four-neighborhood of that pixel position in mark1 is searched for regions with value 0, and if any exist their gradients are calculated by the gray-gradient method of step S57 and they are placed into the queue of set q as coordinate positions before the scan exits;
S59, step S58 is repeated until no region with pixel value 0 remains in the mark1 map; the segmented image mark1 is denoted mark2.
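The flooding of steps S56-S58 can be approximated with a priority queue ordered by gradient (Python's heapq standing in for the scan of set q by gradient magnitude; a simplified sketch under that assumption, not the patent's exact procedure, and the function name flood is hypothetical):

```python
import heapq
import numpy as np

def flood(gradient, markers):
    """Simplified marker flooding per steps S56-S58: unlabeled pixels (0) are
    claimed in order of ascending gradient; a pixel whose 4-neighborhood
    already carries two different labels becomes a watershed line (-1)."""
    h, w = markers.shape
    labels = markers.copy()
    heap, queued = [], np.zeros((h, w), dtype=bool)

    def push_neighbours(y, x):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 and not queued[ny, nx]:
                queued[ny, nx] = True
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))

    for y, x in zip(*np.nonzero(labels > 0)):   # seed the queue around markers
        push_neighbours(y, x)
    while heap:
        _, y, x = heapq.heappop(heap)
        neigh = {labels[ny, nx] for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                 if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] > 0}
        labels[y, x] = neigh.pop() if len(neigh) == 1 else -1
        push_neighbours(y, x)
    return labels
```

On a one-row image with two seed markers and a gradient peak between them, the peak pixel becomes the -1 watershed line separating the two grains.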
Further, step S6 is implemented as follows: if two or more grains in image mark2 are actually independent yet share the same target pixel value, and a large number of such grains exist in the mark2 map, steps S4 and S5 must be repeated.
Further, step S7 includes the sub-steps of:
s71, traversing pixels in the image mark2, counting the number of pixels with different marks, namely the areas of different crystal grains, and quantitatively analyzing the areas of the crystal grains;
S72, the grain size is calculated as G = 3.321928 × lg(N_A) - 2.954, where N_A is the number of grains per square millimeter; the grain count of the metallographic picture is known from the steps above, N_A follows from the scale of the metallographic picture, and the grain size G of the metallographic picture is then obtained from the grain size calculation formula.
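Steps S71-S72 amount to a label histogram plus the grain-size formula; a minimal NumPy sketch (function names hypothetical, and N_A assumed to be already converted to grains per square millimeter via the picture's scale):

```python
import numpy as np

def grain_areas(mark2):
    """Step S71: pixel count per grain label (labels > 0 are grains;
    -1 marks the watershed lines between them)."""
    labels, counts = np.unique(mark2[mark2 > 0], return_counts=True)
    return dict(zip(labels.tolist(), counts.tolist()))

def grain_size_number(n_a):
    """Step S72: grain size number G = 3.321928 * lg(N_A) - 2.954,
    with N_A the number of grains per square millimetre."""
    return 3.321928 * np.log10(n_a) - 2.954
```

Since 3.321928 ≈ 1/lg 2, the formula is equivalent to G = log2(N_A) - 2.954, so doubling the grain density raises G by one.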
Compared with the prior art, the invention has the following advantages and effects: segmenting the metal-material grain image by image methods saves time, improves efficiency, and maintains higher accuracy than distinguishing grains by manual statistics. Quantitative analysis of the grain images mines the metallographic picture information deeply, reflects the structure-property relation of the metal more fully, and provides inspection staff with more material reference information.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is the original grain image I in an embodiment of the invention;
FIG. 3 is the gray-converted grain image I1 in an embodiment of the invention;
FIG. 4 is the bottom-hat-transformed grain image I2 in an embodiment of the invention;
FIG. 5 is the binary image I3 of grain-boundary threshold extraction in an embodiment of the invention;
FIG. 6 is the grain image I4 after the image closing operation in an embodiment of the invention;
FIG. 7 is the distance-transformed grain image I5 in an embodiment of the invention;
FIG. 8 is the binary image I6 obtained by threshold extraction from the grain image I5 in an embodiment of the invention;
FIG. 9 is the grain foreground region image I7 in an embodiment of the invention;
FIG. 10 is a grain image mark2 after watershed segmentation in an embodiment of the present invention;
FIG. 11 is a schematic diagram of a set q in an embodiment of the present invention;
fig. 12 is a diagram illustrating a quantization of grain identification according to an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below by way of an example with reference to the accompanying drawings; the example is illustrative of the invention, which is not limited to it.
Examples
As shown in fig. 1, a method for automatically identifying and precisely quantifying and characterizing crystal grains based on metallographic pictures comprises the following steps:
S1, gray level conversion: the metallographic picture imported here is shown in fig. 2 and denoted I. The gray value calculation formula is Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the gray-converted value of image I and R, G, B are the RGB pixel values of image I. Image I is gray-converted by this formula, and the resulting gray image is denoted I1, as shown in fig. 3.
S2, bottom-hat transform: a morphological bottom-hat transform is applied to the gray-converted grain image I1; the transformed image is denoted I2. The step comprises the following substeps:
S21, the gray image I1 is represented as a three-dimensional matrix A(x, y, z), where x and y are the plane position of a pixel in I1 and z is its pixel value. Dilation followed by erosion (the closing operation) is applied to matrix A, giving matrix B;
S22, matrix A is subtracted from matrix B, giving matrix C (the bottom-hat transform), and matrix C is converted to the gray image I2, as shown in fig. 4.
S3, grain-boundary threshold extraction: from the gray-level distribution of image I2, the lower gray limit gray_min = 16 and the upper gray limit gray_max = 255 of I2 are selected manually; that is, pixels of I2 whose gray value satisfies gray_min ≤ value ≤ gray_max are set to 255 and all other pixels to 0. The resulting binary image is denoted I3; the regions of I3 with pixel value 0 are grains and the regions with pixel value 255 are grain boundaries, as shown in fig. 5.
S4, image closing operation: because the grain image I3 contains noise points, and in order to better segment the grains in the image, each grain of the binary image I3 is given an image closing operation to remove in-grain noise as far as possible; the closed image is denoted I4, as shown in fig. 6. The step comprises the following substeps:
S41, an image I4 of the same size as I3 with all pixel values 0 is created;
S42, an 8-neighborhood connectivity check is performed on the regions of the binary image I3 whose pixels are 0, and the pixels of I4 corresponding to the disconnected regions of I3 are labeled 1, 2, 3, 4, ...;
S43, the regions of I4 with pixel values 1, 2, 3, 4, ... are each given the image closing operation of step S21; to preserve the integrity of the grain structure, pixel values connected to the background region are kept unchanged while each grain region is closed;
S44, in the closed image I4, pixels equal to 0 are assigned 1, pixels equal to 255 are assigned 0, and pixels equal to 1 are then assigned 255.
S5, marker-based watershed segmentation: first a distance transform is applied to image I4; the resulting image is I5, as shown in fig. 7. Image I5 is then binarized, giving image I6, as shown in fig. 8. The connected regions of I6 are classified and labeled with the eight-neighborhood connectivity check, giving the marker image mark1. Subtracting the binarized image I6 from image I4 gives the grain foreground region I7, as shown in fig. 9. Pixels of mark1 at positions where I7 equals 255 are assigned 0, pixels of mark1 at positions where I4 equals 0 are assigned -1, and watershed segmentation of the grain image I combined with the marker image mark1 then gives image mark2, as shown in fig. 10. The step comprises the following substeps:
S51, the regions of image I4 with pixel value 255 are stored as coordinates in subset z1, and the regions with pixel value 0 are stored as coordinates in subset z2;
S52, for each coordinate in subset z1 the minimum Euclidean distance to subset z2 is calculated, forming a set of coordinates and distances denoted z3; the minimum distance value in z3 is denoted z_min and the maximum distance value z_max;
S53, an image I5 of the same size as I4 with all pixel values 0 is created; the pixels of I5 at the coordinate positions of set z3 are assigned according to G(x, y) = 255 × |z3(x, y) - z_min| / |z_max - z_min|;
S54, image I5 is traversed to find the maximum gray value G_max; pixels of I5 with gray value greater than 0.15 × G_max are assigned 255 and pixels with gray value smaller than 0.15 × G_max are assigned 0; the converted image is denoted I6;
S55, an image mark1 of the same size as I6 with all pixel values 0 is created; the eight-connected-neighborhood check is applied to the regions of I6 with pixel value 255, and the pixels of mark1 corresponding to the disconnected regions of I6 are labeled 1, 2, 3, ...;
S56, the peripheral border of image mark1 is assigned -1;
S57, the regions of the mark1 image with pixel value 0 are traversed; if any of the upper, lower, left or right neighbors of a 0-valued pixel of mark1 is a non-zero marked pixel, the gray gradient at the corresponding pixel position of the original image I is calculated from the RGB values of the target pixel and its four neighbors, where R, G, B are the RGB values of the target pixel, RL, GL, BL those of its left-neighbor pixel, RR, GR, BR those of its right-neighbor pixel, RT, GT, BT those of its upper-neighbor pixel, and RB, GB, BB those of its lower-neighbor pixel; the pixels whose gray gradient has been calculated are placed into the queue of set q as coordinate positions, as shown in fig. 11;
S58, the set q is scanned from left to right and top to bottom in order of gradient magnitude; when exactly one marked pixel exists among the four-neighborhood coordinates of the pixel of the mark1 image corresponding to a coordinate position in set q, the pixel at that position of mark1 is assigned the marker value of that four-neighborhood coordinate, the coordinate position is deleted from the queue of set q, the four-neighborhood of that pixel position in mark1 is searched for regions with value 0, and if any exist their gradients are calculated by the gray-gradient method of step S57 and they are placed into the queue of set q as coordinate positions before the scan exits; when two or more different marked pixels are found among the four-neighborhood pixels of the pixel of mark1 corresponding to a coordinate position in set q, the pixel at that position of mark1 is assigned -1, the coordinate position is deleted from the queue of set q, the four-neighborhood of that pixel position in mark1 is searched for regions with value 0, and if any exist their gradients are calculated by the gray-gradient method of step S57 and they are placed into the queue of set q as coordinate positions before the scan exits;
S59, step S58 is repeated until no region with pixel value 0 remains in the mark1 map; the segmented image mark1 is denoted mark2.
S6, checking and improving the grain segmentation effect: the grain image mark2 after watershed segmentation is checked; if the segmentation effect is good, the next grain quantization operation is performed. If the effect is poor, that is, if mark2 contains a large number of grains that are actually independent yet share the same target pixel value, the process returns to step S3, the grain boundaries are extracted again, and steps S4 and S5 are performed.
S7, grain quantification: the area of each grain in the segmented image mark2 is counted by image traversal, the average and maximum grain areas are calculated, and the grain-area distribution curve is drawn, as shown in fig. 12. The grain size is then calculated according to the definition of grain size. The step comprises the following substeps:
s71, traversing pixels in the image mark2, counting the number of pixels with different marks, namely the areas of different crystal grains, and quantitatively analyzing the areas of the crystal grains;
S72, the grain size is calculated by the formula G = 3.321928 × lg(N_A) - 2.954, where N_A is the number of grains per square millimeter. The grain count of the grain image is known from the steps above, N_A follows from the scale of the grain image, and the formula gives a grain size G of grade 5.5 for this grain image; the standard rating of the metallographic image is also grade 5.5, so the method has high accuracy.
The invention performs noise-reduction preprocessing on the grain image by image-processing means, segments the grains in the metallographic picture with a marker-based watershed method, and labels each segmented grain using an eight-neighbourhood connectivity check.
The invention can quantitatively analyse the grains in metallographic pictures, so that an experimenter can rapidly and accurately obtain the grain size of a grain image and quantitatively analyse the area of each grain and its distribution, as shown in FIG. 12. This makes it convenient for materials engineers to compare the performance of metal-part materials and to assist in tracking the microstructure evolution of the material.
What is not described in detail in this specification is all that is known to those skilled in the art.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited to the embodiments described above, but is capable of modification and variation without departing from the spirit and scope of the present invention.
Claims (2)
1. The automatic grain identification and accurate quantitative characterization method based on metallographic pictures is characterized by comprising the following steps of:
S1, gray-level conversion: the imported metallographic picture is recorded as I and converted by gray-scale conversion into a gray image with pixel values 0-255, recorded as I1;
S2, bottom-hat transformation: perform a morphological bottom-hat transformation on the gray-converted metallographic picture I1, and record the transformed image as I2;
S3, grain-boundary threshold extraction: according to the gray distribution of the bottom-hat-transformed image I2, manually select a suitable threshold to extract the grain boundaries from image I2; that is, grain-boundary pixels are set to 255 and grain pixels to 0, and the resulting binary image is recorded as I3;
S4, image closing operation: because many noise points exist in image I3, and in order to better segment the grains in image I3, perform an image closing operation on each grain of the binary image I3 to eliminate noise points inside the grains as far as possible; the image after the closing operation is recorded as I4;
S5, marker-based watershed segmentation: first perform a distance transformation on the image I4 obtained after the closing operation, and record the resulting image as I5; then binarize image I5 to obtain image I6; check the connected regions of image I6 with the 8-neighbourhood connectivity check method and label them by class to obtain image mark1; subtract the binarized image I6 from the closed image I4 to obtain the grain foreground region I7; assign 0 to the pixels of image mark1 at the positions where image I7 has pixel value 255, assign -1 to the pixels of image mark1 at the positions where image I4 has pixel value 0, and perform watershed segmentation on the grain image I combined with the marker image mark1 to obtain image mark2;
S6, checking and improving the grain segmentation effect: inspect the grain image mark2 after watershed segmentation; if the segmentation effect is good, perform the next grain quantification operation; if the segmentation effect is poor, return to step S3, perform threshold extraction of the grain boundaries again, and carry out steps S4 and S5;
s7, quantitative analysis of crystal grains: counting the area of each crystal grain in the image mark2 by using an image traversing method, calculating the average area and the maximum area of the crystal grains, drawing a distribution curve of the area of the crystal grains, and calculating the grain size according to the definition of the grain size.
2. The automatic grain identification and accurate quantitative characterization method based on metallographic pictures according to claim 1, characterized in that:
The specific implementation method of step S1 is as follows: the gray conversion calculation formula for image I1 is Gray = R×0.299 + G×0.587 + B×0.114, where Gray is the pixel value of the gray image I1 and R, G, B are the channel values of the corresponding pixel of image I;
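A minimal sketch of this weighted conversion, assuming an 8-bit RGB input stored as a NumPy array (the function name is illustrative):

```python
import numpy as np

def rgb_to_gray(img):
    """Step S1 sketch: map an RGB image of shape (H, W, 3) to a 0-255
    gray image using the weights given in the claim (0.299, 0.587, 0.114)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (img[..., :3].astype(float) @ weights).round().astype(np.uint8)
```

A pure-red pixel (255, 0, 0) maps to round(255×0.299) = 76, and white maps to 255, matching the 0-255 range stated in the step.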
step S2 comprises the following sub-steps:
S21, convert the gray image I1 into a three-dimensional matrix A(x, y, z), where x and y are the plane position of the corresponding pixel in image I1 and z is the gray value of that pixel; perform dilation and erosion operations on the three-dimensional matrix A to obtain matrix B;
S22, subtract matrix B from matrix A to obtain matrix C, and convert matrix C into the gray image I2;
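Steps S21–S22 describe a flat grayscale closing (dilation then erosion) followed by a subtraction. A minimal sketch, under the assumption that the standard bottom-hat orientation (closing minus original) is intended, since the literal "subtract B from A" wording would go negative for a closing; the naive sliding-window morphology here is for illustration only:

```python
import numpy as np

def dilate(img, k=3):
    """Grayscale dilation with a k*k flat structuring element (window max)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def erode(img, k=3):
    """Grayscale erosion with a k*k flat structuring element (window min)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.full_like(img, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def bottom_hat(img, k=3):
    """Steps S21-S22 sketch: closing (dilate then erode) minus the original.
    Assumption: closing-minus-original, the standard bottom-hat orientation."""
    return erode(dilate(img, k), k) - img
```

On a bright field with one dark pit, the closing fills the pit, so the bottom-hat highlights exactly the pit (dark valleys such as grain boundaries).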
The specific implementation method of step S3 is as follows: according to the gray distribution of image I2, select suitable gray values gray_min and gray_max as the threshold interval for grain-boundary extraction from image I2; that is, when the gray value of a pixel in image I2 satisfies gray_min ≤ gray value ≤ gray_max, the pixel value of that region is set to 255, all other pixels outside the gray interval are set to 0, and the resulting binary image is recorded as I3;
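A minimal sketch of this band thresholding, assuming a NumPy gray image (names are illustrative):

```python
import numpy as np

def band_threshold(img, gray_min, gray_max):
    """Step S3 sketch: pixels inside [gray_min, gray_max] become 255
    (grain boundary), all others 0 (grain interior)."""
    mask = (img >= gray_min) & (img <= gray_max)
    return np.where(mask, 255, 0).astype(np.uint8)
```

For example, with the interval [40, 100], a row of gray values 10, 50, 200 yields 0, 255, 0.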
Step S4 comprises the following sub-steps:
S41, create an image I4 equal in size to I3 with all pixel values 0;
S42, perform an 8-neighbourhood connectivity check on the regions of binary image I3 whose pixels are 0, and label the pixels of image I4 corresponding to the non-adjacent regions of I3 as 1, 2, 3, 4, ... respectively;
S43, perform the image closing operation of step S21 on the regions of image I4 with pixel values 1, 2, 3, 4, ... respectively; to ensure the integrity of the grain structure, when the closing operation is performed on the grain regions of image I4 with pixel values 1, 2, 3, 4, ..., the pixel values connected to the background region are kept unchanged;
S44, assign 1 to the pixel regions of image I4 with value 0, assign 0 to the regions of image I4 with pixel value 255, and then assign 255 to the pixel regions of image I4 with value 1;
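The 8-neighbourhood labelling of step S42 can be sketched as a stack-based flood fill (plain Python with illustrative names; production code would use a labelling routine from an image library):

```python
def label_8(img):
    """Step S42 sketch: label 8-connected regions of zero-valued pixels
    with 1, 2, 3, ... using a simple stack-based flood fill."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] == 0 and labels[sy][sx] == 0:
                next_label += 1          # new region found
                labels[sy][sx] = next_label
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and img[ny][nx] == 0
                                    and labels[ny][nx] == 0):
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
    return labels
```

With 8-connectivity, diagonally touching zero pixels merge into one region, while zero pixels separated by a full row or column of 255 receive distinct labels.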
step S5 comprises the following sub-steps:
S51, store the region of image I4 with pixel value 255 into subset z1 in the form of coordinates, and store the region of image I4 with pixel value 0 into subset z2 in the form of coordinates;
S52, calculate by the Euclidean distance formula the minimum distance from each coordinate in subset z1 to the coordinates in subset z2, forming a set containing the coordinates and distances, recorded as z3; record the minimum distance value in set z3 as z_min and the maximum distance value as z_max;
S53, create an image I5 equal in size to image I4 with all pixel values 0; the pixel values of image I5 at the coordinate positions of set z3 are assigned according to the formula G(x, y) = 255 × |z3(x, y) − z_min| / |z_max − z_min|;
S54, traverse image I5 to find the maximum pixel value G_max; traverse the pixels of image I5 again, assigning 255 to pixels with gray value greater than 0.05 × G_max and 0 to pixels with gray value less than 0.05 × G_max; the converted image is recorded as I6;
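Steps S51–S54 can be sketched together as a brute-force Euclidean distance transform followed by the min-max rescaling of S53 and the 5% threshold of S54 (NumPy, illustrative names; quadratic in pixel count, so for tiny images only — real code would use a library distance transform):

```python
import numpy as np

def normalized_distance(binary):
    """Steps S51-S54 sketch: distance of each 255-pixel to the nearest
    0-pixel, rescaled with G = 255*|d - d_min|/|d_max - d_min|,
    then thresholded at 5% of the maximum."""
    fg = np.argwhere(binary == 255)   # subset z1 of the claim
    bg = np.argwhere(binary == 0)     # subset z2
    # minimum Euclidean distance from every fg coordinate to bg (set z3)
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    g = 255 * (d - d.min()) / (d.max() - d.min())
    out = np.zeros_like(binary, dtype=np.uint8)
    out[tuple(fg.T)] = np.where(g > 0.05 * g.max(), 255, 0)
    return out
```

On a 1x5 strip 0,255,255,255,0 the distances are 1, 2, 1, so after rescaling only the centre pixel survives the threshold; these surviving ridges serve as the watershed seed markers.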
S55, create an image mark1 equal in size to I6 with all pixel values 0; perform a connectivity check on the regions of image I6 with pixel value 255 using the 8-connected-neighbourhood check method, and label the regions of mark1 corresponding to the unconnected regions of I6 as 1, 2, 3, ...;
S56, assign -1 to the peripheral frame of the image mark1;
S57, traverse the region of the mark1 image with pixel value 0; if any one of the upper, lower, left, or right neighbours of a zero-valued pixel is non-zero, calculate the gray gradient of the corresponding pixel position in the original image I. In the gray-gradient calculation formula, R, G, B are the RGB values of the target pixel; RL, GL, BL are the RGB values of its left-neighbour pixel; RR, GR, BR the RGB values of its right-neighbour pixel; RT, GT, BT the RGB values of its upper-neighbour pixel; and RB, GB, BB the RGB values of its lower-neighbour pixel; the pixels for which the gray gradient is calculated are put into the queue of the set q in the form of coordinate positions;
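The gradient formula itself does not survive in the text; only the names of its operands do. One plausible central-difference form consistent with those names is sketched below — a labelled assumption, not the patent's actual formula:

```python
import math

def gray_gradient(left, right, top, bottom):
    """Hypothetical reconstruction of the step-S57 gradient over the
    4-neighbour RGB values. ASSUMPTION: the formula is not reproduced
    in the source, so this central-difference magnitude is only one
    plausible reading of the named variables."""
    (RL, GL, BL), (RR, GR, BR) = left, right
    (RT, GT, BT), (RB, GB, BB) = top, bottom
    gx = (RR - RL) ** 2 + (GR - GL) ** 2 + (BR - BL) ** 2  # horizontal term
    gy = (RB - RT) ** 2 + (GB - GT) ** 2 + (BB - BT) ** 2  # vertical term
    return math.sqrt(gx + gy)
```

Any monotone combination of the horizontal and vertical colour differences would serve the same role of ordering the flooding queue.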
S58, scanning the set q in ascending order of gradient value: when exactly one marked pixel exists among the four-neighbour coordinates of the pixel in the mark1 image corresponding to the coordinate position in the set q, assigning that pixel in the mark1 image the mark value of its marked four-neighbour, deleting the coordinate position from the queue of the set q, searching the four-neighbour pixels of that pixel position in the mark1 image for a region with value 0, and, if one exists, calculating the gradient by the gray-gradient method of step S57, putting it into the queue of the set q in the form of a coordinate position, and resuming the scan; when two or more different marked pixels exist among the four-neighbour pixels of the pixel in the mark1 image corresponding to the coordinate position in the set q, assigning that pixel in the mark1 image the value -1, deleting the coordinate position from the queue of the set q, searching the four-neighbour pixels of that pixel position in the mark1 image for a region with value 0, and, if one exists, calculating the gradient by the gray-gradient method of step S57, putting it into the queue of the set q in the form of a coordinate position, and resuming the scan;
S59, repeat step S58 until no region with pixel value 0 remains in the mark1 image, and record the segmented image mark1 as mark2;
The specific implementation method of step S6 is as follows: if two or more grains that should be independent exist in the image mark2 but carry the same target pixel value, and such grains appear in large numbers in the mark2 image, repeat steps S4 and S5;
step S7 comprises the following sub-steps:
s71, traversing pixels in the image mark2, counting the number of pixels with different marks, namely the areas of different crystal grains, and quantitatively analyzing the areas of the crystal grains;
S72, the grain size calculation formula is G = 3.321928·lg(N_A) − 2.954, where N_A is the number of grains per square millimetre and G is the grain size; from the scale bar of the metallographic picture, the number of grains per square millimetre N_A can be determined, and the grain size G of the metallographic picture then follows from the grain size calculation formula.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211625549.0A CN116188560A (en) | 2022-12-16 | 2022-12-16 | Automatic grain identification and accurate quantization characterization method based on metallographic pictures |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116188560A true CN116188560A (en) | 2023-05-30 |
Family
ID=86451472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211625549.0A Pending CN116188560A (en) | 2022-12-16 | 2022-12-16 | Automatic grain identification and accurate quantization characterization method based on metallographic pictures |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116188560A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117871416A (en) * | 2024-03-11 | 2024-04-12 | 视睿(杭州)信息科技有限公司 | Grain coordinate sorting method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113160192B (en) | Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background | |
CN112365494B (en) | Ore material image segmentation method based on deep learning prediction edge | |
CN107424142B (en) | Weld joint identification method based on image significance detection | |
CN106940889B (en) | Lymph node HE staining pathological image segmentation method based on pixel neighborhood feature clustering | |
CN110838126B (en) | Cell image segmentation method, cell image segmentation device, computer equipment and storage medium | |
CN109598681B (en) | No-reference quality evaluation method for image after repairing of symmetrical Thangka | |
CN111539330B (en) | Transformer substation digital display instrument identification method based on double-SVM multi-classifier | |
CN103440629B (en) | Laser labelling is from the digital image processing method of the Video Extensometer of motion tracking | |
CN112651440B (en) | Soil effective aggregate classification and identification method based on deep convolutional neural network | |
CN109584253A (en) | Oil liquid abrasive grain image partition method | |
CN114910480A (en) | Wafer surface defect detection method based on machine vision | |
CN116188560A (en) | Automatic grain identification and accurate quantization characterization method based on metallographic pictures | |
CN117853722A (en) | Steel metallographic structure segmentation method integrating superpixel information | |
CN110544262B (en) | Cervical cell image segmentation method based on machine vision | |
CN107239761B (en) | Fruit tree branch pulling effect evaluation method based on skeleton angular point detection | |
CN116718599B (en) | Apparent crack length measurement method based on three-dimensional point cloud data | |
CN116402822B (en) | Concrete structure image detection method and device, electronic equipment and storage medium | |
CN115619799B (en) | Grain image segmentation method and system based on transfer learning | |
CN110689553B (en) | Automatic segmentation method of RGB-D image | |
CN112184696A (en) | Method and system for counting cell nucleus and cell organelle and calculating area of cell nucleus and cell organelle | |
CN110097533B (en) | Method for accurately testing overall dimension and position of light spot | |
CN110322466B (en) | Supervised image segmentation method based on multi-layer region limitation | |
CN116434054A (en) | Intensive remote sensing ground object extraction method based on line-plane combination | |
CN113643290B (en) | Straw counting method and device based on image processing and storage medium | |
CN109615630A (en) | Semi-continuous casting alusil alloy Analysis on Microstructure method based on image processing techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20230918 Address after: 310030 No. 10 West Garden Road, West Lake science and technology economic Park, Xihu District, Hangzhou, Zhejiang Applicant after: HUADIAN ELECTRIC POWER RESEARCH INSTITUTE Co.,Ltd. Applicant after: FUJIAN HUADIAN YONGAN POWER GENERATION CO.,LTD. Address before: 310030 No. 10 West Garden Road, West Lake science and technology economic Park, Xihu District, Hangzhou, Zhejiang Applicant before: HUADIAN ELECTRIC POWER RESEARCH INSTITUTE Co.,Ltd. |