CN116342614B - Waste silicon wafer shape detection method based on image recognition - Google Patents
- Publication number: CN116342614B (application CN202310637672.2A)
- Authority: CN (China)
- Prior art keywords: highlight, region, filled, silicon wafer, line
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general
- G06T7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G06T5/70
- G06T7/11 — Image analysis; segmentation; edge detection; region-based segmentation
- G06T7/13 — Image analysis; segmentation; edge detection
- G06T2207/30108 — Indexing scheme for image analysis or image enhancement; subject or context of image processing; industrial image inspection
- G06T2207/30148 — Semiconductor; IC; wafer
Abstract
The invention relates to the technical field of image processing, in particular to a waste silicon wafer shape detection method based on image recognition. Starting from the basic Hough transform, the method obtains highlight regions in Hough space, determines the fitting rate of each highlight region from the distribution of the intersection point and the sub-intersection points within it, and then derives a line supplement parameter for the region. The line supplement parameters are used to divide the highlight regions into regions to be filled and regions not to be filled; the regions to be filled are then distinguished from noise regions. An enhanced waste silicon wafer edge image is obtained from the filled highlight regions and the regions not to be filled, and the shape of the waste silicon wafer is detected using this enhanced edge image. Because the width characteristics of the line regions are taken into account, the enhanced waste silicon wafer edge image obtained through the Hough transform contains lines that are neither broken nor noisy, so the shape detection accuracy is higher.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a waste silicon wafer shape detection method based on image recognition.
Background
Silicon wafers are widely used as production materials in electronic equipment manufacturing. During production, wafers can be damaged to a certain extent by unreasonable cutting, the external environment, human error and other causes; such wafers are identified as waste silicon wafers and cannot be used in subsequent equipment production. For waste silicon wafers, shape information is important reference data in the production process. For example, if the shape is regular and the edges are neat, a waste wafer can be recycled after an operation flow such as polishing; and by counting the shape information of the waste wafers in the waste accumulation area, the cause of problems arising in the wafer processing flow can be analyzed. Therefore, in order to optimize the production process, it is necessary to perform shape detection on the waste silicon wafers in the silicon wafer waste accumulation area.
In the prior art, the Hough transform can detect the edge lines in an image of the waste silicon wafers in a waste accumulation area, and the shape information of the wafers can then be obtained from the positions and shapes of those edge lines. However, during the Hough transform, image-quality problems distort the voting results in Hough space. When the erroneous voting results are mapped back to the original image, the detected lines are broken and inconsistent in thickness; they cannot accurately represent the edge distribution and edge width of the wafers, which reduces the accuracy of waste silicon wafer shape detection.
Disclosure of Invention
In order to solve the technical problem that, because of the basic voting mechanism in the Hough transform, the detected lines cannot accurately represent the edge distribution and edge width of the silicon wafer and thereby impair waste silicon wafer shape detection, the invention provides a waste silicon wafer shape detection method based on image recognition, adopting the following technical scheme:
the invention provides a waste silicon wafer shape detection method based on image recognition, which comprises the following steps:
acquiring a waste silicon wafer image and a waste silicon wafer edge image of a silicon wafer waste accumulation area; mapping the waste silicon wafer edge image into Hough space to obtain voting points in Hough space; and screening out highlight points according to the voting values of the voting points;
obtaining a range neighborhood of a preset size for each highlight point, and merging adjacent range neighborhoods to obtain highlight regions; in each highlight region, the highlight point with the largest voting value is the intersection point, and the other highlight points are sub-intersection points;
obtaining the relative coordinate distance and the relative included angle between the intersection point and each sub-intersection point in a highlight region; obtaining the fitting rate of the highlight region according to the distributions of the relative coordinate distances and the relative included angles within it; obtaining a line supplement parameter according to the fitting rate of the highlight region and the voting value of the intersection point; dividing the highlight regions into regions to be filled and regions not to be filled according to the line supplement parameters, filling the regions to be filled, screening out noise regions according to the line supplement parameters corresponding to the filling results, and obtaining filled highlight regions; mapping the filled highlight regions and the regions not to be filled back to the image coordinate system to obtain an enhanced waste silicon wafer edge image;
and detecting the shape of the waste silicon wafer in the silicon wafer waste accumulation area according to the shape of the edge line in the enhanced waste silicon wafer edge image.
Further, the method for acquiring the highlight region comprises:
acquiring the intersection-over-union ratio of adjacent range neighborhoods, and if the ratio is larger than a preset threshold, merging the corresponding range neighborhoods into one range neighborhood, until no range neighborhoods can be merged, thereby obtaining the highlight region.
Further, the method for acquiring the relative coordinate distance comprises the following steps:
taking the ratio of the maximum abscissa absolute value of the highlight points in the highlight region to the abscissa absolute value of the intersection points as an abscissa weight; taking the ratio of the minimum ordinate absolute value of the highlight points in the highlight region to the ordinate absolute value of the intersection points as ordinate weight;
multiplying the abscissa weight by the difference of the horizontal coordinates in the Euclidean distance formula, and multiplying the ordinate weight by the difference of the vertical coordinates in the Euclidean distance formula to obtain a relative coordinate distance formula; and obtaining the relative coordinate distance between each sub-intersection point and the intersection point according to the relative coordinate distance formula.
Further, the method for acquiring the relative included angle comprises the following steps:
and obtaining a connecting line of each sub-intersection point and the intersection point in the Hough space, and taking an acute angle included angle between the connecting line and the transverse axis of the Hough space as a relative included angle between the corresponding sub-intersection point and the intersection point.
Further, the obtaining the fitting rate of the highlight region includes:
obtaining the average relative coordinate distance and the average relative included angle of all sub-intersection points in the highlight region; mapping the average relative coordinate distance through a negative correlation and normalizing it to obtain the data point concentration degree; and taking the product of the data point concentration degree and the average relative included angle as the fitting rate.
Further, the obtaining the line supplement parameter according to the fitting rate of the highlight region and the voting value of the intersection point includes:
mapping the fitting rate through a negative correlation to obtain a first mapping value; mapping the voting value of the intersection point through a negative correlation to obtain a second mapping value; multiplying the first mapping value by the second mapping value to obtain a supplement judgment index; if the supplement judgment index is smaller than a preset judgment threshold, the line supplement parameter is a preset first label value; if the supplement judgment index is not smaller than the judgment threshold, the line supplement parameter is a preset second label value.
Further, the screening the area to be filled in the highlight area according to the line supplementing parameter includes:
if the line supplementing parameter is the second label value, the corresponding highlight region is the region to be filled; and if the line supplementing parameter is the first label value, the corresponding highlight region is the non-to-be-filled region.
Further, the screening out the noise region according to the line supplement parameter corresponding to the filling result and obtaining the filled highlight region includes:
in the region to be filled, taking the slope of the corresponding intersection point as the slope of the points to be filled, randomly filling points into the region, and updating the line supplement parameter of the region after each filling to obtain an updated line supplement parameter; if the updated line supplement parameter is still the second label value after a preset number of fillings, taking the corresponding region to be filled as a noise region; if the updated line supplement parameter changes to the first label value before the preset number of fillings is reached, stopping the filling and taking the filled region as the filled highlight region.
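The fill-and-recheck loop described in the claim above can be sketched as follows. This is a minimal Python sketch under stated assumptions: `recompute_parameter` is a hypothetical callback standing in for re-deriving the line supplement parameter from the updated region, the label strings `"first"`/`"second"` stand for the two preset label values, and the candidate points are abstract placeholders rather than the patent's slope-constrained Hough-space points.

```python
import random

def fill_or_reject(region_pts, fill_candidates, recompute_parameter,
                   max_fills=10, seed=0):
    """Randomly add candidate points to a to-be-filled region.

    After each fill the line supplement parameter is recomputed.  If it
    flips to the first label value, the region becomes a filled highlight
    region; if it is still the second label value after the preset number
    of fills, the region is classified as a noise region.
    """
    rng = random.Random(seed)
    pts = list(region_pts)
    candidates = list(fill_candidates)
    for _ in range(max_fills):
        if not candidates:
            break
        # fill one randomly chosen candidate point into the region
        pts.append(candidates.pop(rng.randrange(len(candidates))))
        if recompute_parameter(pts) == "first":   # no further filling needed
            return "filled", pts
    return "noise", pts

# toy parameter rule: a region with at least 3 points no longer needs filling
status, pts = fill_or_reject([1], [2, 3, 4],
                             lambda p: "first" if len(p) >= 3 else "second")
# a region whose parameter never improves is rejected as noise
noise_status, _ = fill_or_reject([1], [2, 3, 4], lambda p: "second")
```

The early stop mirrors the claim: filling halts as soon as the updated parameter reaches the first label value, so well-behaved regions are never over-filled.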
The invention has the following beneficial effects:
according to the invention, the highlight points are screened out based on the Hough transformation principle, and the distribution and the width of the waste silicon wafer edge lines cannot be accurately represented only by the lines obtained according to the highlight points in consideration of the quality influence of the edge information in the image by the basic voting mechanism, so that the highlight areas are obtained, all the highlight points in the highlight areas are analyzed, and corresponding line supplementing parameters are obtained by analyzing the distribution characteristics of the highlight points in the highlight areas. The line supplementing parameters can represent whether the corresponding highlight region needs to be filled or not, the filled highlight region corresponds to the edge region in the original image, the characteristics of the edge region are more obvious, and the edge lines are clearer and more complete; and the noise area can be screened out through the filling result, so that the influence of noise lines on the subsequent shape detection is avoided, namely, in the obtained enhanced waste silicon wafer edge image, the line information is clear and complete, the influence of the noise lines is not included, and the line width can be clearly represented, so that the edge detection of the waste silicon wafer based on the enhanced waste silicon wafer edge image can obtain a more accurate shape detection result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting the shape of a waste silicon wafer based on image recognition according to an embodiment of the invention;
FIG. 2 is an image of waste silicon wafers according to one embodiment of the present invention;
FIG. 3 is a waste silicon wafer edge image according to one embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of the method for detecting the shape of the waste silicon wafer based on image recognition according to the invention with reference to the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a specific scheme of a waste silicon wafer shape detection method based on image recognition, which is specifically described below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for detecting a shape of a waste silicon wafer based on image recognition according to an embodiment of the present invention is shown, where the method includes:
step S1: acquiring a waste silicon wafer image and a waste silicon wafer edge image in a silicon wafer waste accumulation area; mapping the waste silicon wafer edge image into a Hough space to obtain voting points in the Hough space; and screening out the highlight points according to the voting values of the voting points.
A large number of waste silicon wafers are piled in the silicon wafer waste accumulation area, and the shape information of several waste wafers can be detected at once by an image processing method. Referring to fig. 2, an image of waste silicon wafers according to one embodiment of the present invention is shown. To facilitate subsequent image processing, the waste silicon wafer image is enhanced by image preprocessing and then converted to grayscale, and the resulting grayscale image is used for subsequent edge detection and straight-line detection. It should be noted that image preprocessing and graying are technical means well known to those skilled in the art, so the specific implementation steps are not repeated; in the embodiment of the present invention, the preprocessing mainly uses a filtering and denoising algorithm, and in other embodiments other preprocessing means can be selected according to the implementation scenario. It should also be noted that the noise removed during image preprocessing is mainly noise points introduced during image acquisition or by environmental influences, whereas the noise regions and noise lines discussed below are formed by dirt or foreign matter on the wafer surface; because such information cannot be used for wafer shape detection, it cannot be handled in the image preprocessing stage.
In one embodiment of the invention, edge pixel points in the waste silicon wafer image are extracted with the Canny operator to obtain the waste silicon wafer edge image, which is a binary image containing only background pixels and edge pixels. Referring to fig. 3, a waste silicon wafer edge image according to one embodiment of the present invention is shown. Comparing fig. 2 and fig. 3, in the waste silicon wafer edge image the edges of a wafer are broken by interference from the image information of other waste wafers, and noise edge pixels formed by dirt, foreign matter, scratches and other causes exist on the wafer surfaces. Broken edge lines and the noise lines formed by noise edge pixels affect subsequent shape detection. Therefore, to further detect the shape lines of the waste silicon wafers, the edge pixel points of the waste silicon wafer edge image need to be mapped into Hough space: an edge pixel point in the original image space becomes a line in Hough space, and a straight line in the original image space becomes an intersection point through which lines pass in Hough space. In Hough space, each time a line passes through an intersection point, the voting value of that point is increased by one, and the point is recorded as a voting point. That is, every voting point in Hough space carries a voting value; the larger the voting value of a voting point, the more lines pass through it, and the more edge pixel points lie on its corresponding straight line in the original image space.
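The mapping of edge pixels into Hough space and the accumulation of voting values can be sketched as follows. This is a minimal NumPy accumulator for the standard rho–theta parameterization; the image size, the one-degree angle quantization, and the example horizontal line are illustrative assumptions, not values from the patent.

```python
import numpy as np

def hough_accumulate(edge_img, n_theta=180):
    """Map each edge pixel of a binary image into (rho, theta) Hough space.

    Every edge pixel votes along one sinusoid; accumulator cells crossed
    by many sinusoids (voting points with large voting values) correspond
    to straight lines in the original image space.
    """
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))               # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))           # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                     # edge pixel coordinates
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1     # one vote per theta
    return acc, diag

# a horizontal line of 20 edge pixels votes strongly at rho = 10, theta = 90 deg
img = np.zeros((20, 20), dtype=np.uint8)
img[10, :] = 1
acc, diag = hough_accumulate(img)
```

Because of quantization, neighboring (rho, theta) cells also collect many votes, which is exactly why the patent analyzes highlight *regions* rather than single highlight points.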
In conventional Hough line detection, the prior art directly screens voting points by voting value during the Hough transform to obtain highlight points; the straight lines corresponding to the highlight points are the detected lines. In an actual waste silicon wafer edge image, however, there are two reasons why the wafer shape cannot be detected directly from the straight lines corresponding to the highlight points: 1. the width of the edge lines in the edge image is not considered, so the lines corresponding to the highlight points appear broken in the edge image, and one edge line may be detected as two or more lines for shape detection; 2. noise lines produced by dirt, foreign matter and similar factors on the wafer surface are not considered: because highlight points are screened only by voting value, a clear noise line on the wafer surface causes its voting points to be selected as highlight points and used for shape detection. Therefore, after the highlight points are screened out by the prior art, the analysis must continue in order to obtain an optimal detection result.
It should be noted that, hough transform and screening of highlight points in the hough transform process are all technical means well known to those skilled in the art, and specific implementation steps are not described herein.
Step S2: obtaining a range neighborhood of each highlight point under a preset size, combining the adjacent range neighborhood, and obtaining a highlight region; the highlight point with the largest voting value in the highlight region is an intersection point, and other highlight points are sub-intersection points.
A line in the waste silicon wafer edge image has a width, and a line with width can be regarded as an arrangement of several straight lines with the same slope and different intercepts. In Hough space, such a line appears as a highlight region composed of highlight points with similar slopes and different intercepts, i.e. one line with width appears in Hough space as a long, narrow region. Therefore, to detect lines accurately, the neighborhoods of the highlight points in Hough space need to be divided: a range neighborhood of a preset size is obtained for each highlight point, where the preset size characterizes the tolerance for line width and line skew. Because adjacent highlight points are more likely to belong to one line region, adjacent range neighborhoods are merged to obtain the highlight regions.
In one embodiment of the present invention, considering that the highlight region corresponding to a line region is long and narrow in Hough space, the range neighborhood is set to be a rectangle centered on the highlight point, i.e. there are two preset sizes: the length and the width of the range neighborhood. The length of the range neighborhood is set to the maximum line width in the waste silicon wafer edge image, which can be set in combination with the camera field of view and the shooting parameters; it is set to 5 in one embodiment of the invention. The width of the range neighborhood is set to 3; it represents the tolerance for line skew within the highlight region, i.e. taking the slope of the corresponding highlight point as the center, other highlight points whose slope is larger or smaller by 1 are regarded as lying in the same line region as that highlight point.
It should be noted that, because a line region in the waste silicon wafer edge image necessarily has a width, a highlight region necessarily contains several highlight points. If a highlight point is isolated, i.e. its highlight region contains only that one point, the corresponding line in the waste silicon wafer edge image may be a noise line, or a very fine boundary line caused by the imaging conditions. Such highlight points are few and can be analyzed directly in the subsequent shape detection process: if the line is a noise line, it can be removed by the subsequent screening of noise before regions are formed for shape detection; if it is a normal, very fine boundary line, it can be treated as a region not to be filled in the subsequent process and can directly form geometric regions together with other clear lines for shape detection, so the detection accuracy is not affected.
Preferably, the method for acquiring a highlight region in one embodiment of the present invention includes:
and acquiring the intersection ratio of adjacent range neighborhood, if the intersection ratio is larger than a preset intersection ratio threshold, merging the corresponding range neighborhood into one range neighborhood until the range neighborhood cannot be merged, and acquiring the highlight region. The overlap ratio shows the overlapping degree of the two range neighborhoods, and the larger the overlapping degree is, the more likely the two range neighborhoods are a line area. In one embodiment of the invention, the cross ratio threshold is set to 0.8. It should be noted that, the method for calculating the cross-over ratio is a technical means well known to those skilled in the art, and will not be described herein.
In a highlight region, the highlight point with the largest voting value is the intersection point, and the other highlight points are sub-intersection points. The intersection point means that, within the highlight region, the corresponding straight line is the clearest in the waste silicon wafer edge image, and the line information it represents is the most reliable reference; if several highlight points share the maximum voting value, one of them is randomly selected as the intersection point.
Step S3: obtaining a relative coordinate distance and a relative included angle between an intersection point and a sub-intersection point in the highlight region; obtaining the fitting rate of the highlight region according to the relative coordinate distance distribution and the relative included angle distribution in the highlight region; obtaining line supplementary parameters according to the fitting rate of the highlight region and the voting value of the intersection point; dividing the highlight region into a region to be filled and a region not to be filled according to the line supplementing parameters, filling the region to be filled, screening out a noise region according to the line supplementing parameters corresponding to the filling result, and obtaining a filled highlight region; and mapping the filled highlight region and the non-filled region to an image coordinate system to obtain an enhanced waste silicon wafer edge image.
In a highlight region, if the highlight points are densely gathered and the sub-intersection points are vertically distributed relative to the intersection point, the highlight region presents the characteristics of a clear line region in Hough space, i.e. the corresponding line region is not broken, its lines are regular, and it is not a noise line region. Therefore, the relative coordinate distance and the relative included angle between the intersection point and each sub-intersection point in the highlight region can be obtained, and the fitting rate of the highlight region can then be obtained from the distributions of the relative coordinate distances and the relative included angles. The fitting rate characterizes the degree of regularity of the corresponding highlight region: the larger the fitting rate, the more regular the highlight region and the more directly the corresponding line region can be used for shape detection; the smaller the fitting rate, the more chaotic the straight lines in the highlight region and the more likely the corresponding line region is a broken region or a noise line region.
Preferably, in one embodiment of the present invention, in consideration of the fact that the slope information between the highlights is more important in evaluating the degree of the rules of the highlight region, the method for acquiring the relative coordinate distance includes:
the ratio of the absolute value of the maximum abscissa of the highlight points in the highlight region to the absolute value of the abscissa of the intersection point is taken as the abscissa weight. The ratio of the minimum ordinate absolute value of the highlight points in the highlight region to the ordinate absolute value of the intersection points is taken as the ordinate weight.
Multiplying the abscissa weight by the difference of the horizontal coordinates in the Euclidean distance formula, and multiplying the ordinate weight by the difference of the vertical coordinates, gives the relative coordinate distance formula, from which the relative coordinate distance between each sub-intersection point and the intersection point is obtained:

$$D_i = \sqrt{\left(\frac{|x|_{\max}}{|x_0|}\,(x_i - x_0)\right)^2 + \left(\frac{|y|_{\min}}{|y_0|}\,(y_i - y_0)\right)^2}$$

wherein $D_i$ is the relative coordinate distance between the $i$-th sub-intersection point and the intersection point, $|x|_{\max}$ is the maximum abscissa absolute value of the highlight points in the highlight region, $|x_0|$ is the absolute value of the abscissa of the intersection point, $|y|_{\min}$ is the minimum ordinate absolute value of the highlight points in the highlight region, $|y_0|$ is the absolute value of the ordinate of the intersection point, $(x_0, y_0)$ are the coordinates of the intersection point in Hough space, and $(x_i, y_i)$ are the coordinates of the $i$-th sub-intersection point in Hough space.
According to a relative coordinate distance formula, on the basis of calculation of an original Euclidean distance, an abscissa weight and an ordinate weight are added, the abscissa difference is amplified by using the abscissa weight, and the ordinate difference is reduced by using the ordinate weight, so that the finally obtained relative coordinate distance can more effectively reflect the distribution between the sub-intersection point and the intersection point in the corresponding highlight region. I.e. the smaller the relative coordinate distance, the closer the sub-intersection is to the intersection, the more regular the corresponding highlight region.
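The relative coordinate distance described in this section can be sketched as follows. This is a minimal Python sketch; the function name is an assumption, and it assumes the weight extrema are taken over all highlight points of the region (sub-intersection points plus the intersection point), which the surrounding text implies but does not state outright.

```python
import math

def relative_coordinate_distance(sub_pts, inter_pt):
    """Weighted Euclidean distance between each sub-intersection point and
    the intersection point of a highlight region.

    The abscissa weight (max |x| over the highlight points / |x| of the
    intersection point) enlarges horizontal differences, while the ordinate
    weight (min |y| over the highlight points / |y| of the intersection
    point) shrinks vertical differences, so slope disagreement dominates
    the resulting distance.
    """
    x0, y0 = inter_pt
    all_pts = sub_pts + [inter_pt]                    # all highlight points
    wx = max(abs(x) for x, _ in all_pts) / abs(x0)    # abscissa weight
    wy = min(abs(y) for _, y in all_pts) / abs(y0)    # ordinate weight
    return [math.hypot(wx * (x - x0), wy * (y - y0)) for x, y in sub_pts]

# intersection at (2, 4); one sub-point offset in x, one offset in y
dists = relative_coordinate_distance([(3, 4), (2, 6)], (2, 4))
```

In the example, the weights are $w_x = 3/2$ and $w_y = 4/4 = 1$, so the purely horizontal offset of 1 is stretched to 1.5 while the vertical offset of 2 stays 2.0.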
Preferably, in one embodiment of the present invention, the method for acquiring the relative angle includes:

In the Hough space, the connecting line between each sub-intersection point and the intersection point is obtained, and the acute angle between this connecting line and the horizontal axis of the Hough space is taken as the relative angle between the corresponding sub-intersection point and the intersection point. Expressed as a formula:

$$\theta_i=\arctan\left(\frac{|y_i-y_0|}{|x_i-x_0|}\right)$$

wherein $\theta_i$ is the relative angle between the $i$-th sub-intersection point and the intersection point; $(x_0,y_0)$ are the coordinates of the intersection point in Hough space; $(x_i,y_i)$ are the coordinates of the $i$-th sub-intersection point in Hough space; and $\arctan$ is the arctangent function.
In the highlight region, the closer the relative angle between a sub-intersection point and the intersection point is to $\pi/2$, the more vertically the sub-intersection points are distributed relative to the intersection point, i.e. the more the corresponding highlight region is a regular elongated area.
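The acute-angle computation above can be sketched as follows (the function name is hypothetical; the vertical case is handled explicitly so the result stays in [0, π/2]):

```python
import math

def relative_angle(sub_pt, inter_pt):
    """Acute angle between the sub-intersection-to-intersection connecting
    line and the horizontal axis of Hough space, in [0, pi/2]."""
    dx = abs(sub_pt[0] - inter_pt[0])
    dy = abs(sub_pt[1] - inter_pt[1])
    if dx == 0:
        return math.pi / 2  # vertical connecting line
    return math.atan(dy / dx)
```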
Preferably, in one embodiment of the present invention, obtaining the fitting rate of the highlight region includes:

obtaining the average relative coordinate distance and the average relative angle of all sub-intersection points in the highlight region; mapping the average relative coordinate distance by negative correlation and normalizing it to obtain the data point concentration degree; and taking the product of the data point concentration degree and the average relative angle as the fitting rate. In one embodiment of the invention, the fitting rate is formulated as:

$$R=\bar{\theta}\cdot e^{-\bar{d}}$$

wherein $R$ is the fitting rate, $\bar{\theta}$ is the average relative angle, $e$ is the natural constant, and $\bar{d}$ is the average relative coordinate distance. That is, in the embodiment of the invention, the average relative coordinate distance is negatively mapped and normalized using an exponential function with the natural constant as its base.
The larger the average relative coordinate distance, the more chaotic the distribution between the intersection point and the sub-intersection points in the highlight region, the more likely the corresponding highlight region is a noise line region or a line fracture region, and the smaller the fitting rate. Because the relative angle lies in the range from 0 to $\pi/2$, the larger the average relative angle, the more vertically the sub-intersection points are distributed about the intersection point, the more regular the corresponding highlight region, and the larger the fitting rate.
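A minimal sketch of the fitting-rate step, assuming the exponential negative mapping described above (the function name is hypothetical):

```python
import math

def fitting_rate(rel_distances, rel_angles):
    """Fitting rate of a highlight region: the average relative angle
    multiplied by the concentration degree exp(-average relative distance)."""
    d_avg = sum(rel_distances) / len(rel_distances)
    a_avg = sum(rel_angles) / len(rel_angles)
    concentration = math.exp(-d_avg)  # negative-correlation mapping into (0, 1]
    return a_avg * concentration
```

Small average distances and near-vertical angles push the rate toward its maximum of π/2; chaotic regions are pulled toward 0 by the exponential term.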
The higher the fitting rate, the more regular the corresponding highlight region and the more likely it is a normal line region in the waste silicon wafer edge image; the smaller the fitting rate, the more irregular the corresponding highlight region and the more likely it is a line fracture region or a noise line region. Therefore, the line supplementing parameter is obtained by further combining the voting value of the intersection point in the highlight region: the larger the voting value, the clearer and more complete the straight line corresponding to the intersection point, and the less the corresponding highlight region needs to be filled in subsequent judgment; conversely, the smaller the voting value, the less obvious the straight-line information in the corresponding highlight region, and the more it needs to be filled or further judged as a possible noise line region.
Preferably, obtaining the line supplementing parameter from the fitting rate of the highlight region and the voting value of the intersection point includes:

performing negative correlation mapping on the fitting rate to obtain a first mapping value; performing negative correlation mapping on the voting value of the intersection point to obtain a second mapping value; multiplying the first mapping value and the second mapping value to obtain a supplement judgment index; if the supplement judgment index is smaller than a preset judgment threshold, the line supplementing parameter is a preset first label value; and if the supplement judgment index is not smaller than the judgment threshold, the line supplementing parameter is a preset second label value. In one embodiment of the invention, the line supplementing parameter is formulated as:

$$F=\begin{cases}F_1, & e^{-R}\cdot\dfrac{1}{\arctan(V)}<T_0\\[4pt] F_2, & e^{-R}\cdot\dfrac{1}{\arctan(V)}\ge T_0\end{cases}$$

wherein $R$ is the fitting rate, $F$ is the line supplementing parameter, $F_1$ is the first label value, $F_2$ is the second label value, $V$ is the voting value of the intersection point, $T_0$ is the judgment threshold, and $\arctan$ is the arctangent function; that is, $e^{-R}$ is the first mapping value and $1/\arctan(V)$ is the second mapping value. In the embodiment of the present invention, the first label value is set to 0, the second label value is set to 1, and the judgment threshold is set to 0.307. It should be noted that other negative correlation mapping methods may be used in other embodiments, and other values may be chosen as the label values or the threshold, which is not limited or described further here.
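A sketch of this thresholding step. The exact mapping functions are not fully recoverable from the translated text, so `math.exp(-fit_rate)` for the first mapping value and `1.0 / math.atan(vote)` for the second are assumptions, as are the function and argument names:

```python
import math

def line_supplement_param(fit_rate, vote, threshold=0.307):
    """Return the line supplementing parameter: 0 (first label, no filling
    needed) or 1 (second label, candidate for filling)."""
    first = math.exp(-fit_rate)     # assumed negative mapping of the fitting rate
    second = 1.0 / math.atan(vote)  # assumed inverse mapping of the voting value
    index = first * second          # supplement judgment index
    return 0 if index < threshold else 1
```

With these mappings, a region with a high fitting rate and a high vote count yields a small index and label 0 (clear, complete line), while a low fitting rate or weak vote pushes the index above the threshold and flags the region for filling.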
The line supplementing parameter characterizes whether the current highlight region needs to be filled, so the highlight regions can be divided into regions to be filled and regions not to be filled according to the line supplementing parameter. A region to be filled goes through the subsequent filling process and noise region judgment, while a region not to be filled is considered to already contain clear and complete line information and therefore does not participate in the subsequent filling operation. Specifically, screening the regions to be filled among the highlight regions according to the line supplementing parameter includes: if the line supplementing parameter is the second label value, the corresponding highlight region is a region to be filled; if the line supplementing parameter is the first label value, the corresponding highlight region is a region not to be filled.
If a region to be filled is a line fracture region, it becomes a regularly distributed region after filling, and its line supplementing parameter changes toward that of a normal line region; if a region to be filled is a noise line region, the line information in it remains chaotic or unclear after filling, and its line supplementing parameter stays unchanged or changes toward that of an irregular line region. Therefore, noise regions can be screened out and filled highlight regions obtained according to the line supplementing parameter of the filling result, which includes the following steps:

in the region to be filled, taking the slope of the corresponding intersection point as the slope of the points to be filled, randomly filling points into the region, and updating the line supplementing parameter of the region after each filling to obtain an updated line supplementing parameter; if the line supplementing parameter is still the second label value after the preset number of fillings, the corresponding region to be filled is taken as a noise region; if the updated line supplementing parameter of the region to be filled becomes the first label value before the preset number of fillings is reached, filling stops and the filled region is taken as a filled highlight region.
In the embodiment of the invention, the number of fillings is set to 10; that is, if after 10 rounds of filling the updated line supplementing parameter is still the second label value, the line information in the corresponding region to be filled is chaotic, meaning it is not a regular silicon wafer boundary line, and the region is taken as a noise region.
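The fill-and-recheck loop described above can be sketched as follows. Here `update_param` stands in for re-computing the line supplementing parameter over the region's points and is passed in as a callback; all names are hypothetical:

```python
import random

def screen_region(points, inter_pt, slope, update_param, max_fills=10):
    """Randomly add points along the line through inter_pt with the given
    slope; after each fill, re-evaluate the region via update_param.
    Returns ("filled", pts) if the parameter flips to the first label (0)
    within max_fills fills, otherwise ("noise", pts)."""
    pts = list(points)
    for _ in range(max_fills):
        x = inter_pt[0] + random.uniform(-1.0, 1.0)  # random position on the line
        y = inter_pt[1] + slope * (x - inter_pt[0])
        pts.append((x, y))
        if update_param(pts) == 0:  # region became regular: stop filling
            return "filled", pts
    return "noise", pts             # still irregular after max_fills fills
```

A fracture region's callback would flip to 0 once enough points bridge the gap; a noise region's callback keeps returning 1 until the fill budget is exhausted.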
The filled highlight regions and the regions not to be filled are regions with clear and complete line information, so mapping them back to the image coordinate system yields the enhanced waste silicon wafer edge image with clear line information.
Step S4: and detecting the shape of the waste silicon wafer in the silicon wafer waste accumulation area according to the shape of the edge line in the enhanced waste silicon wafer edge image.
Because the enhanced waste silicon wafer edge image contains clear and complete edge lines, the shape of the waste silicon wafer in the silicon wafer waste accumulation area is detected from the geometric shape formed by the edge lines; the shape information of the corresponding waste silicon wafer can be obtained from the regularity and the area of that geometric shape.
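For instance, once the corner points of the closed edge contour have been extracted (corner extraction itself is outside this sketch), the area and a rough shape label could be computed as below; the tolerance value and the labels are illustrative choices, not taken from the patent:

```python
import math

def polygon_area(corners):
    """Shoelace area of the geometric shape formed by the edge lines."""
    total = 0.0
    n = len(corners)
    for i in range(n):
        x0, y0 = corners[i]
        x1, y1 = corners[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0

def classify_wafer(corners, square_tol=0.05):
    """Rough shape label from a 4-corner outline: compare side lengths."""
    if len(corners) != 4:
        return "irregular"
    sides = [math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    return "square" if max(sides) - min(sides) <= square_tol * max(sides) else "rectangular"
```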
In summary, the embodiment of the present invention obtains highlight regions in Hough space based on the basic Hough transform and determines the fitting rate of each highlight region from the distribution between its intersection point and sub-intersection points, thereby obtaining the line supplementing parameter of the highlight region. Regions to be filled and regions not to be filled are screened according to the line supplementing parameter, the regions to be filled are filled and judged for noise, and the enhanced waste silicon wafer edge image is then obtained from the filled highlight regions and the regions not to be filled and used to detect the shape of the waste silicon wafer. Because the embodiment of the invention considers the width characteristics of line regions and obtains, based on the Hough transform, an enhanced waste silicon wafer edge image free of line breakage and noise, the shape detection precision is higher.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (5)
1. The method for detecting the shape of the waste silicon wafer based on image recognition is characterized by comprising the following steps of:
acquiring a waste silicon wafer image and a waste silicon wafer edge image in a silicon wafer waste accumulation area; mapping the waste silicon wafer edge image into a Hough space to obtain voting points in the Hough space; screening out highlight points according to the voting values of the voting points;
obtaining a range neighborhood of each highlight point under a preset size, merging the adjacent range neighborhood to obtain a highlight region; the highlight points with the largest voting values in the highlight areas are intersecting points, and other highlight points are sub-intersecting points;
obtaining a relative coordinate distance and a relative included angle between the intersection point and the sub-intersection point in the highlight region; obtaining the fitting rate of the highlight region according to the relative coordinate distance distribution and the relative included angle distribution in the highlight region; obtaining line supplementing parameters according to the fitting rate of the highlight region and the voting value of the intersection point; dividing the highlight region into a region to be filled and a region not to be filled according to the line supplementing parameters, filling the region to be filled, screening out a noise region according to the line supplementing parameters corresponding to a filling result, and obtaining a filled highlight region; mapping the filled highlight region and the non-filled region to an image coordinate system to obtain an enhanced waste silicon wafer edge image;
detecting the shape of the waste silicon wafer in the silicon wafer waste accumulation area according to the shape of the edge line in the enhanced waste silicon wafer edge image;
the obtaining line supplementary parameters according to the fitting rate of the highlight region and the voting value of the intersection point comprises:
performing negative correlation mapping on the fitting rate to obtain a first mapping value; inversely related mapping the voting value of the intersection point to obtain a second mapping value; multiplying the first mapping value and the second mapping value to obtain a supplementary judgment index; if the supplementing judging index is smaller than a preset judging threshold value, the line supplementing parameter is a preset first label value; if the supplementing judging index is not smaller than the judging threshold, the line supplementing parameter is a preset second label value;
the step of screening the area to be filled in the highlight area according to the line supplementing parameters comprises the following steps:
if the line supplementing parameter is the second label value, the corresponding highlight region is the region to be filled; if the line supplementing parameter is the first label value, the corresponding highlight region is the non-to-be-filled region;
the step of screening out the noise area and obtaining the filled highlight area according to the line supplementing parameter corresponding to the filling result comprises the following steps:
in the region to be filled, taking the slope of the corresponding intersection point as the slope of the point to be filled, randomly filling the point to be filled in the region to be filled, and updating the line supplementing parameters of the region to be filled after each filling to obtain updated line supplementing parameters; if the updated line supplement parameter is still the second label value after filling for the preset filling times, the corresponding region to be filled is taken as a noise region; and if the updating line supplementing parameter of the area to be filled is updated to the first tag value before the preset filling times, stopping filling, and taking the filled area to be filled as the filling highlight area.
2. The method for detecting the shape of the waste silicon wafer based on image recognition according to claim 1, wherein the method for acquiring the highlight region comprises the following steps:
and acquiring the cross-over ratio of the adjacent range neighborhood, and merging the corresponding range neighborhood into a range neighborhood if the cross-over ratio is larger than a preset cross-over ratio threshold value until the range neighborhood cannot be merged, so as to acquire the highlight region.
3. The method for detecting the shape of the waste silicon wafer based on image recognition according to claim 1, wherein the method for acquiring the relative coordinate distance comprises the following steps:
taking the ratio of the maximum abscissa absolute value of the highlight points in the highlight region to the abscissa absolute value of the intersection points as an abscissa weight; taking the ratio of the minimum ordinate absolute value of the highlight points in the highlight region to the ordinate absolute value of the intersection points as ordinate weight;
multiplying the abscissa weight by the difference of the horizontal coordinates in the Euclidean distance formula, and multiplying the ordinate weight by the difference of the vertical coordinates in the Euclidean distance formula to obtain a relative coordinate distance formula; and obtaining the relative coordinate distance between each sub-intersection point and the intersection point according to the relative coordinate distance formula.
4. The method for detecting the shape of the waste silicon wafer based on image recognition according to claim 1, wherein the method for acquiring the relative included angle comprises the following steps:
and obtaining a connecting line of each sub-intersection point and the intersection point in the Hough space, and taking an acute angle included angle between the connecting line and the transverse axis of the Hough space as a relative included angle between the corresponding sub-intersection point and the intersection point.
5. The method for detecting the shape of the waste silicon wafer based on image recognition according to claim 1, wherein the obtaining the fitting rate of the highlight region comprises:
obtaining average relative coordinate distances and average relative included angles of all sub-intersection points under the highlight region; mapping and normalizing the average relative coordinate distance negative correlation to obtain the concentration degree of the data points; and taking the product of the data point concentration degree and the average relative included angle as the fitting rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310637672.2A CN116342614B (en) | 2023-06-01 | 2023-06-01 | Waste silicon wafer shape detection method based on image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116342614A CN116342614A (en) | 2023-06-27 |
CN116342614B true CN116342614B (en) | 2023-08-08 |
Family
ID=86880850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310637672.2A Active CN116342614B (en) | 2023-06-01 | 2023-06-01 | Waste silicon wafer shape detection method based on image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116342614B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012124142A1 (en) * | 2011-03-11 | 2012-09-20 | オムロン株式会社 | Image processing device and image processing method |
WO2016055031A1 (en) * | 2014-10-09 | 2016-04-14 | 北京配天技术有限公司 | Straight line detection and image processing method and relevant device |
CN113724258A (en) * | 2021-11-02 | 2021-11-30 | 山东中都机器有限公司 | Conveyor belt tearing detection method and system based on image processing |
WO2022148192A1 (en) * | 2021-01-07 | 2022-07-14 | 新东方教育科技集团有限公司 | Image processing method, image processing apparatus, and non-transitory storage medium |
CN115100214A (en) * | 2022-08-29 | 2022-09-23 | 南通市昊逸阁纺织品有限公司 | Textile quality detection method based on image processing |
CN115994904A (en) * | 2023-03-22 | 2023-04-21 | 山东万重山电子有限公司 | Garment steamer panel production quality detection method based on computer vision |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006059663B4 (en) * | 2006-12-18 | 2008-07-24 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus, method and computer program for identifying a traffic sign in an image |
- 2023-06-01: CN202310637672.2A (CN116342614B), status: Active
Non-Patent Citations (1)
Title |
---|
Yu Shupan; Han Yanfang. Research on cable detection based on multi-feature fusion and improved Hough transform. 软件导刊 (Software Guide). 2017, (11), full text. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115311292B (en) | Strip steel surface defect detection method and system based on image processing | |
CN116168026B (en) | Water quality detection method and system based on computer vision | |
CN115861291B (en) | Chip circuit board production defect detection method based on machine vision | |
CN114723681B (en) | Concrete crack defect detection method based on machine vision | |
CN115100203B (en) | Method for detecting quality of steel bar polishing and rust removal | |
Li et al. | Defect inspection in low-contrast LCD images using Hough transform-based nonstationary line detection | |
WO2015096535A1 (en) | Method for correcting fragmentary or deformed quadrangular image | |
CN115035122B (en) | Artificial intelligence-based integrated circuit wafer surface defect detection method | |
CN112233116B (en) | Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description | |
CN116703907B (en) | Machine vision-based method for detecting surface defects of automobile castings | |
CN110569857B (en) | Image contour corner detection method based on centroid distance calculation | |
JP2001133418A (en) | Method and apparatus for defect detection based on shape feature | |
CN115311267B (en) | Method for detecting abnormity of check fabric | |
CN115861320B (en) | Intelligent detection method for automobile part machining information | |
CN115063430B (en) | Electric pipeline crack detection method based on image processing | |
CN116309565B (en) | High-strength conveyor belt deviation detection method based on computer vision | |
CN116523923B (en) | Battery case defect identification method | |
CN112288693A (en) | Round hole detection method and device, electronic equipment and storage medium | |
CN112330678A (en) | Product edge defect detection method | |
CN114332026A (en) | Visual detection method and device for scratch defects on surface of nameplate | |
CN112950540A (en) | Bar code identification method and equipment | |
CN115311279A (en) | Machine vision identification method for warp and weft defects of fabric | |
CN115358983A (en) | Tool defect detection method, tool defect detection apparatus, and computer-readable storage medium | |
CN112396618B (en) | Grain boundary extraction and grain size measurement method based on image processing | |
CN116342614B (en) | Waste silicon wafer shape detection method based on image recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||