CN112884708A - Method for detecting burrs of circular injection molding piece - Google Patents
- Publication number: CN112884708A
- Application number: CN202110053584.9A
- Authority: CN (China)
- Prior art keywords: image information, target area, value, circular target, burrs
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 — Industrial image inspection
- G01N21/8851 — Scan or image signal processing for detecting defects
- G06T5/70 — Denoising; smoothing
- G06T7/11 — Region-based segmentation
- G06T7/12 — Edge-based segmentation
- G06T7/13 — Edge detection
- G06T7/136 — Segmentation involving thresholding
- G01N2021/8887 — Scan or image signal processing based on image processing techniques
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/20104 — Interactive definition of region of interest [ROI]
- G06T2207/20192 — Edge enhancement; edge preservation
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; machine component
Abstract
The invention discloses a method for detecting burrs on a circular injection-molded part, comprising the following steps. S1: collecting image information of a product and preprocessing it; S2: performing threshold segmentation on the image information preprocessed in S1 to obtain image information of a circular target area; S3: determining the shapes and positions of a plurality of burr contours on the periphery of the circular target area from the image information obtained in S2; S4: extracting, outputting, and storing the feature information of each burr contour. The method first preprocesses the collected product image to enhance image quality, then applies threshold segmentation to the target feature region, determines the shape and position of the burrs with a region-morphology detection method and a sub-pixel edge detection method, and finally extracts, outputs, and stores the burr information.
Description
Technical Field
The invention relates to the technical field of product burr detection, and in particular to a method for detecting burrs on a circular injection-molded part.
Background
As plastic output increases and its fields of application expand, higher requirements are placed on the quality of plastic molds. The plastic-processing industry therefore routinely inspects molded workpieces for defects such as burrs and depressions, and once a defect is found, determines its size and position on the edge. At present, such defects are detected mostly by manual inspection, which requires a large amount of labor, is costly, and yields a low detection rate.
Disclosure of Invention
The invention aims to provide a method for detecting burrs on a circular injection-molded part. Image information of the product is first collected by a camera and subjected to gray-level transformation and filtering/denoising to enhance image quality. The target feature region is then threshold-segmented, using a histogram bimodal method and the maximum between-class variance method for the different sensitive regions. The shape and position of the burrs are determined by a region-morphology detection method and a sub-pixel edge detection method, and finally the burrs are extracted, output, and stored.
To achieve this aim, the following technical scheme is adopted:
a method for detecting burrs on a circular injection-molded part, comprising the following steps:
S1: collecting image information of a product and preprocessing it;
S2: performing threshold segmentation on the image information preprocessed in S1 to obtain image information of a circular target area;
S3: determining the shapes and positions of a plurality of burr contours on the periphery of the circular target area from the image information obtained in S2;
S4: extracting, outputting, and storing the feature information of each burr contour.
Further, S1 comprises the following steps:
S11: acquiring image information of the product with a camera;
S12: performing gray-level transformation on the product image information acquired in S11;
S13: filtering and denoising the product image information after the gray-level transformation of S12.
Further, S13 comprises the following steps:
S131: selecting a plurality of pixels adjacent to the pixel to be processed to form a pixel point cloud;
S132: computing the mean of the gray values of the pixels in the pixel point cloud;
S133: setting a first threshold and computing the difference between the gray value of the pixel to be processed and the mean obtained in S132; if the difference is greater than the first threshold, the mean is assigned to the pixel selected in S131, and if the difference is less than or equal to the first threshold, the pixel's gray value is kept unchanged.
Further, S2 comprises the following steps:
S21: determining a second threshold with the histogram bimodal method and preliminarily segmenting the contour of the circular target area in the product image information according to it;
S22: determining a third threshold with the maximum between-class variance method and performing the final segmentation of the circular target area's contour according to it, obtaining the image information of the circular target area.
Further, S3 comprises the following steps:
S31: determining the start and end points where each burr contour intersects the boundary of the circular target area, using a region-morphology detection method;
S32: from the start and end points obtained in S31, computing the angle between the line joining the start point to the center of the circular target area and the line joining the end point to that center;
S33: extracting the contour curves of the burrs with a sub-pixel edge detection method and computing the distances from all contour points on the curves to the center of the circular target area.
By adopting this scheme, the invention provides the following beneficial effects:
image information of the product is first collected by a camera and subjected to gray-level transformation and filtering/denoising to enhance image quality; the target feature region is then threshold-segmented using a histogram bimodal method and the maximum between-class variance method for the different sensitive regions; the shape and position of the burrs are determined by a region-morphology detection method and a sub-pixel edge detection method; and finally the burrs are extracted, output, and stored.
Drawings
FIG. 1 is a flow chart of the method of the invention;
FIG. 2 shows the product image information after threshold segmentation according to an embodiment of the invention;
FIG. 3 shows the determination of the burr positions in FIG. 2.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Referring to figs. 1 to 3, the invention provides a method for detecting burrs on a circular injection-molded part, comprising the following steps:
S1: collecting image information of a product and preprocessing it;
S2: performing threshold segmentation on the image information preprocessed in S1 to obtain image information of a circular target area;
S3: determining the shapes and positions of a plurality of burr contours on the periphery of the circular target area from the image information obtained in S2;
S4: extracting, outputting, and storing the feature information of each burr contour.
Wherein S1 comprises the following steps:
S11: acquiring image information of the product with a camera;
S12: performing gray-level transformation on the product image information acquired in S11;
S13: filtering and denoising the product image information after the gray-level transformation of S12.
S13 comprises the following steps:
S131: selecting a plurality of pixels adjacent to the pixel to be processed to form a pixel point cloud;
S132: computing the mean of the gray values of the pixels in the pixel point cloud;
S133: setting a first threshold and computing the difference between the gray value of the pixel to be processed and the mean obtained in S132; if the difference is greater than the first threshold, the mean is assigned to the pixel selected in S131, and if the difference is less than or equal to the first threshold, the pixel's gray value is kept unchanged.
S2 comprises the following steps:
S21: determining a second threshold with the histogram bimodal method and preliminarily segmenting the contour of the circular target area in the product image information according to it;
S22: determining a third threshold with the maximum between-class variance method and performing the final segmentation of the circular target area's contour according to it, obtaining the image information of the circular target area.
S3 comprises the following steps:
S31: determining the start and end points where each burr contour intersects the boundary of the circular target area, using a region-morphology detection method;
S32: from the start and end points obtained in S31, computing the angle between the line joining the start point to the center of the circular target area and the line joining the end point to that center;
S33: extracting the contour curves of the burrs with a sub-pixel edge detection method and computing the distances from all contour points on the curves to the center of the circular target area.
The working principle of the invention is as follows:
Referring to figs. 1 to 3, the method can rapidly inspect burrs, depressions, and other defects on a molded workpiece and determine their size and position on the edge, improving both detection efficiency and detection precision. In operation, the product is photographed by a camera to obtain its image information. The acquired image may be blurry, and running contour feature extraction and similar operations directly on it would be disturbed by many interference factors, so the image is preprocessed first. In this method, a gray-level transformation is applied first to enhance the contrast of the image, improve its quality, and bring out its detail features. Specifically:
let the gray value of a pixel in the original image be D = f(x, y) and the gray value of the same pixel after transformation be D' = g(x, y). The gray-level enhancement can then be written as D' = T(D), where both D and D' must lie within the gray-scale range of the image. T is called the gray-level transformation function and specifies the conversion between input and output gray values. During the transformation, the target image is converted point by point, according to this relation, from the gray value of each pixel in the acquired product image; this improves the quality of the initial image and the clarity of the picture.
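The pointwise mapping D' = T(D) can be sketched as a lookup table over the 256 gray levels of an 8-bit image. The function names below and the linear contrast stretch are illustrative choices only; the patent does not fix a specific T:

```python
import numpy as np

def gray_transform(img, t):
    """Apply a pointwise gray-level transformation D' = T(D) to an 8-bit image
    via a 256-entry lookup table, clipping results to the valid gray range."""
    lut = np.array([t(d) for d in range(256)], dtype=np.float64)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

def make_stretch(dmin, dmax):
    """Linear contrast stretch mapping [dmin, dmax] onto [0, 255]."""
    return lambda d: (d - dmin) * 255.0 / (dmax - dmin)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
stretched = gray_transform(img, make_stretch(50, 200))
```

Because T is applied per gray level rather than per pixel, the lookup table makes the point-by-point conversion cheap even for large images.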
Then, the gray-transformed image is filtered and denoised; the aim is to suppress the noise in the target image while preserving as much of its detail as possible, without destroying important information such as contours and edges, so that the image stays clear and visually clean. A number of adjacent pixels (a neighborhood) are selected around each pixel to be processed to form a pixel point cloud, and the mean of the gray values of the pixels in the point cloud is computed as

f(x, y) = (1/M) Σ_{(i,j)∈S} g(i, j),

where S is the neighborhood (pixel point cloud) of the pixel to be processed, M is the total number of pixels in it, g(i, j) is the pixel value of the image at (i, j), and f(x, y) is the pixel value after processing.
Choosing different neighborhoods in the formula above suppresses different amounts of noise, but enlarging the neighborhood also blurs the image. To mitigate this, a first threshold R (a non-negative value) is set, and the difference between the gray value of the pixel to be processed and the neighborhood mean from the formula above is computed: if the difference exceeds R, the neighborhood mean is assigned to the pixel, and if it is less than or equal to R, the pixel's gray value is kept unchanged. This selective replacement reduces the blurring of the image.
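A minimal sketch of this selective neighborhood-mean filter (steps S131–S133) in plain NumPy; the function name, the 3×3 neighborhood radius, and the value R = 20 are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def selective_mean_filter(img, radius=1, R=20):
    """Neighborhood-mean denoising with a switch threshold R: a pixel is
    replaced by its neighborhood mean only when it deviates from that mean
    by more than R, which suppresses spikes while limiting blur."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            # Clamp the neighborhood window at the image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            mean = img[y0:y1, x0:x1].mean()
            if abs(img[y, x] - mean) > R:
                out[y, x] = mean
    return out.astype(np.uint8)

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255                      # isolated noise spike
clean = selective_mean_filter(noisy, radius=1, R=20)
```

The spike at (2, 2) is pulled toward its neighborhood mean, while flat regions, whose pixels already sit within R of their mean, pass through untouched.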
After preprocessing, a fairly clear, high-quality image is obtained. Threshold segmentation is then performed on it to obtain the image information of the (circular) target area. Threshold segmentation exploits the difference in gray-level characteristics between the target area to be extracted and the background: the digital image is treated as a combination of a target region and a background region with different gray levels, a suitable threshold is chosen to decide which region each pixel belongs to, and a corresponding binary image is generated. In practice, a threshold is determined first and the gray value of every pixel is compared against it; pixels below the threshold are set to one gray value, usually black (0), and pixels above it to another, usually white (255). This separates the circular target area from the background. The basic segmentation model is

b(i, j) = 255 if g(i, j) ≥ T, and b(i, j) = 0 otherwise,

where g(i, j) is the gray value of each pixel and T is the chosen threshold.
As the above shows, accurate separation of the target and background regions hinges on the choice of threshold. In this method, a threshold (the second threshold) is first determined by the histogram bimodal method, which preliminarily separates the circular target area from the background according to how pixels of different gray levels are distributed on the histogram. Since one part of the pixels lies in a low-gray region and another part in a high-gray region, some point between the two histogram peaks can be found that divides the image into a high-gray portion and a low-gray portion. Concretely, four rectangles are taken from the corners of the image and the mean of their pixels is used as an initial division threshold; the image is split into target and background with it, the mean gray value of the background region and the mean gray value of the target region are computed, and the average of those two means is taken as the second threshold.
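A sketch of the corner-rectangle procedure just described; the patch size k, the function name, and the synthetic test image are assumptions for illustration (the corners are assumed to be background, as the text implies):

```python
import numpy as np

def corner_bimodal_threshold(img, k=8):
    """Initial threshold from the four k-by-k corner patches (taken as
    background), refined once by averaging the background and target means.
    Assumes both classes are non-empty after the initial split."""
    corners = np.concatenate([
        img[:k, :k].ravel(), img[:k, -k:].ravel(),
        img[-k:, :k].ravel(), img[-k:, -k:].ravel(),
    ]).astype(np.float64)
    t0 = corners.mean()                        # initial division threshold
    bg, fg = img[img <= t0], img[img > t0]     # background / target split
    return 0.5 * (bg.mean() + fg.mean())       # the second threshold

# Synthetic image: dark background (20) with a bright central square (220).
img = np.full((32, 32), 20, dtype=np.uint8)
img[8:24, 8:24] = 220
t = corner_bimodal_threshold(img, k=8)
```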
The preliminary segmentation may still misclassify part of the target area as background, or part of the background as target, blurring the distinction between the two. A third threshold is therefore determined by the maximum between-class variance method to refine the segmentation. Let f(i, j) denote the gray value of pixel (i, j) and let the number of gray levels be n. The frequency p_k of pixels with gray value k is

p_k = n_k / N,

where n_k is the number of pixels with gray value k and N is the total number of pixels. If gray value t is taken as the threshold, the pixels split into a background class (gray values 0 … t) with probability ω₀ = Σ_{k≤t} p_k and mean μ₀, and a target class (gray values t+1 … n−1) with probability ω₁ = 1 − ω₀ and mean μ₁; the between-class variance is

σ²(t) = ω₀ ω₁ (μ₀ − μ₁)²,

and the third threshold is the t that maximizes σ²(t).
The maximum between-class variance method takes the minimum misclassification probability as the criterion for selecting the optimal threshold and can be evaluated from the zero-order and first-order cumulative moments of the gray histogram. The resulting segmentation of the defects is clean, with clear levels and few noise points.
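A direct (unoptimized) sketch of the maximum between-class variance criterion, searching all 256 candidate thresholds of an 8-bit image; for a cleanly bimodal image, any threshold between the two modes maximizes the criterion:

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (Otsu): pick the gray level t that
    maximizes w0 * w1 * (mu0 - mu1)^2 over the normalized histogram p_k."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue                     # one class empty: skip
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: background gray 20, target square gray 220.
img = np.full((32, 32), 20, dtype=np.uint8)
img[8:24, 8:24] = 220
t = otsu_threshold(img)
```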
As shown in fig. 2, the threshold segmentation accurately extracts the circular target area, with the burrs appearing as protrusions on its boundary. The burrs are then isolated by applying a morphological opening to the target area and subtracting the result from the original image. The opening uses a circle as the structuring element, sized close to the inspected object (the circular molded part) but with a radius smaller than that of the part's circle; this guarantees that even very large burrs are removed by the opening, so the difference between the original image and the opened image is exactly the burrs. A distance transform is then applied to the opened region to obtain a distance image, which holds, for each point of the background, the nearest distance to the reference shape (the background being the complement of that shape); the maximum distance of a burr is the local maximum of this distance image within the burr region. To compute the angle of each segmented burr relative to the center of the target area (as shown in fig. 3), the start and end points where each burr intersects the boundary of the target area must be known, and these are obtained by the region-morphology detection method:
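The opening-and-subtract step can be sketched with SciPy's binary morphology; the part geometry, the structuring-element radius, and the function names below are synthetic illustration values, not the patent's:

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Circular (disk-shaped) structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def extract_burrs(mask, r):
    """Opening with a disk slightly smaller than the part removes the burrs;
    subtracting the opened mask from the original leaves only the burrs."""
    opened = ndimage.binary_opening(mask, structure=disk(r))
    return mask & ~opened

# Synthetic part: filled circle of radius 10 with a one-pixel-wide burr.
h = w = 40
yy, xx = np.ogrid[:h, :w]
part = ((xx - 20) ** 2 + (yy - 20) ** 2) <= 10 ** 2
part[20, 30:35] = True               # burr sticking out to the right
burrs = extract_burrs(part, r=8)
```

The thin burr cannot contain any placement of the radius-8 disk, so the opening erases it, while the circular body survives; the difference image is therefore nonzero only at the burr.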
first, the one-pixel-wide boundary of the opened target area is computed; every connected burr region intersects this boundary. For each burr, the one-pixel-wide region where it meets the detected boundary is obtained, which finally yields a region containing the end points of the intersection. Next, the distance from each burr contour to the center of the target area is needed. The burr contour is extracted by sub-pixel edge detection: the boundary is widened by morphology to obtain a band-shaped region of interest for sub-pixel boundary extraction, adjacent boundary segments are merged into a continuous contour, a fitted circle of the contour is extracted using the circular shape of the product as reference, outlier points are fully suppressed with a Tukey weighting function, and the distance from every contour point to the center of the target area is computed. The contour can be converted directly into a sub-pixel-precision contour. For a contour with control points (r_i, c_i), i = 1, …, n, one can compute the contour's minimum enclosing parallel axes or its convex hull, and from these derive the minimum enclosing circle and the minimum enclosing rectangle of arbitrary orientation; a closed contour satisfies (r_1, c_1) = (r_n, c_n). Let R denote the sub-pixel-precision region enclosed by the contour; its (p, q)-order moment is then defined as

m_{p,q} = Σ_{(r,c)∈R} r^p c^q.
Normalized moments and central moments are defined analogously to those of a region, and all of them can be computed from the contour's control points alone. The area and center of gravity of the contour are computed as

a = m_{0,0},   (r̄, c̄) = (m_{1,0} / m_{0,0}, m_{0,1} / m_{0,0}).
Similar formulas hold for the second moments, from which the major-axis, minor-axis, and orientation parameters are computed. Extracting salient edges requires threshold segmentation on the gradient magnitude; compared with edges defined by gradient-magnitude maxima, edges defined by the zero crossings of the Laplacian operator require more partial derivatives, which are computed by finite differences:

∂²f/∂r² ≈ f(r+1, c) − 2 f(r, c) + f(r−1, c),
∂²f/∂c² ≈ f(r, c+1) − 2 f(r, c) + f(r, c−1),
∇²f = ∂²f/∂r² + ∂²f/∂c².
the edge detection of the image is realized, and a discretization gradient approximation function is used for searching the gray level jump position of the image gray level matrix according to the gradient vector of the two-dimensional gray level matrix; calculating the starting point and the end point of the contour interval with all the distances exceeding a preset threshold value, defining a function, wherein the independent variable is the index of the contour point, and the function value is the distance value corrected by the threshold value, so that the starting point and the end point are the intersection points of the corresponding function zero, then two continuous zero-crossing points are used as a group, and the corresponding angle range can be calculated by the circle center and the second contour point at the position of the zero-crossing point; and finally, accurately confirming the positions and the shapes of the burrs according to the maximum distance between the burrs and the center of the target area, the angles of the starting point and the end point of the intersection point of the burrs and the boundary of the target area and the corresponding angle range.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (5)
1. A method for detecting burrs on a circular injection-molded part, characterized by comprising the following steps:
S1: collecting image information of a product and preprocessing it;
S2: performing threshold segmentation on the image information preprocessed in S1 to obtain image information of a circular target area;
S3: determining the shapes and positions of a plurality of burr contours on the periphery of the circular target area from the image information obtained in S2;
S4: extracting, outputting, and storing the feature information of each burr contour.
2. The method for detecting burrs on a circular injection-molded part according to claim 1, wherein S1 comprises the following steps:
S11: acquiring image information of the product with a camera;
S12: performing gray-level transformation on the product image information acquired in S11;
S13: filtering and denoising the product image information after the gray-level transformation of S12.
3. The method for detecting burrs on a circular injection-molded part according to claim 2, wherein S13 comprises the following steps:
S131: selecting a plurality of pixels adjacent to the pixel to be processed to form a pixel point cloud;
S132: computing the mean of the gray values of the pixels in the pixel point cloud;
S133: setting a first threshold and computing the difference between the gray value of the pixel to be processed and the mean obtained in S132; if the difference is greater than the first threshold, assigning the mean to the pixel selected in S131, and if the difference is less than or equal to the first threshold, keeping the pixel's gray value unchanged.
4. The method for detecting burrs on a circular injection-molded part according to claim 1, wherein S2 comprises the following steps:
S21: determining a second threshold with the histogram bimodal method and preliminarily segmenting the contour of the circular target area in the product image information according to it;
S22: determining a third threshold with the maximum between-class variance method and performing the final segmentation of the circular target area's contour according to it, obtaining the image information of the circular target area.
5. The method for detecting burrs of a circular injection molded part according to claim 1, wherein step S3 comprises the steps of:
s31: determining, based on a regional morphology detection method, the start point and the end point at which each burr outline intersects the boundary of the circular target area;
s32: based on the start point and end point obtained in step S31, calculating the included angle between the line connecting the start point to the center of the circular target area and the line connecting the end point to the center of the circular target area;
s33: extracting the contour curve of each burr based on a sub-pixel edge detection method, and calculating the distance from every contour point on the contour curve to the center of the circular target area.
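The geometry of steps S31–S33 reduces to two computations on the extracted points: the central angle subtended by a burr's start and end points (its angular extent on the rim), and the radial distance of each contour point from the center, whose excess over the nominal radius gives the local burr height. A sketch under the assumption that points are (x, y) coordinates and the center and radius are already known:

```python
import numpy as np

def burr_arc_angle(start, end, center):
    """Step S32: included angle (radians) between the lines
    center->start and center->end."""
    v1 = np.asarray(start, dtype=float) - np.asarray(center, dtype=float)
    v2 = np.asarray(end, dtype=float) - np.asarray(center, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against rounding drift just outside [-1, 1]
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def burr_heights(contour_pts, center, radius):
    """Step S33: radial distance of each contour point from the center,
    and its excess over the nominal radius (local burr height)."""
    pts = np.asarray(contour_pts, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(pts, axis=1)
    return dist, dist - radius
```

Together, the angle bounds the burr's width along the rim and the height profile bounds how far it protrudes, which is enough to grade each burr against a tolerance.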
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110053584.9A CN112884708A (en) | 2021-01-15 | 2021-01-15 | Method for detecting burrs of circular injection molding piece |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112884708A true CN112884708A (en) | 2021-06-01 |
Family
ID=76048007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110053584.9A Withdrawn CN112884708A (en) | 2021-01-15 | 2021-01-15 | Method for detecting burrs of circular injection molding piece |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884708A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113935962A (en) * | 2021-09-29 | 2022-01-14 | 常州市新创智能科技有限公司 | Method for detecting wool ball of glass fiber cloth |
CN113916893A (en) * | 2021-09-29 | 2022-01-11 | 逸美德科技股份有限公司 | Method for detecting die-cutting product defects |
CN114279357B (en) * | 2021-12-23 | 2024-05-03 | 杭州电子科技大学 | Die casting burr size measurement method and system based on machine vision |
CN114279357A (en) * | 2021-12-23 | 2022-04-05 | 杭州电子科技大学 | Die casting burr size measurement method and system based on machine vision |
CN114022483A (en) * | 2022-01-08 | 2022-02-08 | 南通欣斯特机械制造有限公司 | Injection molding flash area identification method based on edge characteristics |
CN114022483B (en) * | 2022-01-08 | 2022-03-25 | 南通欣斯特机械制造有限公司 | Injection molding flash area identification method based on edge characteristics |
CN115082441A (en) * | 2022-07-22 | 2022-09-20 | 山东微山湖酒业有限公司 | Retort material tiling method in wine brewing distillation process based on computer vision |
CN115082441B (en) * | 2022-07-22 | 2022-11-11 | 山东微山湖酒业有限公司 | Retort material tiling method in wine brewing distillation process based on computer vision |
CN115115632A (en) * | 2022-08-29 | 2022-09-27 | 海门市新亚镍丝网有限公司 | Analysis method for accompanying phenomenon of textile seam slippage detection |
CN115115632B (en) * | 2022-08-29 | 2023-04-07 | 海门市新亚镍丝网有限公司 | Analysis method for accompanying phenomenon of textile seam slippage detection |
CN115330791A (en) * | 2022-10-13 | 2022-11-11 | 江苏东晨机械科技有限公司 | Part burr detection method |
CN115760782A (en) * | 2022-11-16 | 2023-03-07 | 华南理工大学 | In-mold labeling offset defect identification method based on machine vision |
CN115760782B (en) * | 2022-11-16 | 2023-06-16 | 华南理工大学 | Machine vision-based in-mold labeling offset defect identification method |
CN117974663A (en) * | 2024-04-01 | 2024-05-03 | 瑞纳智绝缘材料(苏州)有限公司 | Glass fiber sleeve quality detection method based on image characteristics |
CN117974663B (en) * | 2024-04-01 | 2024-06-07 | 瑞纳智绝缘材料(苏州)有限公司 | Glass fiber sleeve quality detection method based on image characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112884708A (en) | Method for detecting burrs of circular injection molding piece | |
CN107808378B (en) | Method for detecting potential defects of complex-structure casting based on vertical longitudinal and transverse line profile features | |
CN115049835B (en) | Data preprocessing method based on die-casting die defect identification | |
CN108921176B (en) | Pointer instrument positioning and identifying method based on machine vision | |
CN111310558A (en) | Pavement disease intelligent extraction method based on deep learning and image processing method | |
CN103886589B (en) | Object-oriented automated high-precision edge extracting method | |
CN114399522A (en) | High-low threshold-based Canny operator edge detection method | |
CN105405138B (en) | Waterborne target tracking based on conspicuousness detection | |
CN111882561A (en) | Cancer cell identification and diagnosis system | |
CN109410147A (en) | A kind of supercavity image enchancing method | |
CN109544571A (en) | A kind of metallic phase image edge detection method based on mathematical morphology | |
CN111311618A (en) | Circular arc workpiece matching and positioning method based on high-precision geometric primitive extraction | |
CN105787912B (en) | Classification-based step type edge sub-pixel positioning method | |
CN109359653B (en) | Cotton leaf adhesion lesion image segmentation method and system | |
CN115147448A (en) | Image enhancement and feature extraction method for automatic welding | |
CN112435272A (en) | High-voltage transmission line connected domain removing method based on image contour analysis | |
CN111524156A (en) | Overlapped citrus segmentation method based on distance transformation and angular point detection | |
CN108898148A (en) | A kind of digital picture angular-point detection method, system and computer readable storage medium | |
CN114037657A (en) | Lithium battery tab defect detection method combining region growth and annular correction | |
CN113129323A (en) | Remote sensing ridge boundary detection method and system based on artificial intelligence, computer equipment and storage medium | |
CN111027474B (en) | Face region acquisition method and device, terminal equipment and storage medium | |
CN110765887A (en) | Automatic identification technology and detection method for tunnel lining cracks | |
CN113838077A (en) | Improved Canny operator-based sub-pixel edge extraction method | |
CN113763279A (en) | Accurate correction processing method for image with rectangular frame | |
CN110930330B (en) | Image segmentation and region growth based salt and pepper noise reduction algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 2021-06-01 ||