CN115760782B - Machine vision-based in-mold labeling offset defect identification method - Google Patents


Info

Publication number
CN115760782B
CN115760782B CN202211459884.8A
Authority
CN
China
Prior art keywords
circle
edge
image
pixel
gradient
Prior art date
Legal status
Active
Application number
CN202211459884.8A
Other languages
Chinese (zh)
Other versions
CN115760782A (en)
Inventor
宋建
周维星
苏楚鹏
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202211459884.8A
Publication of CN115760782A
Application granted
Publication of CN115760782B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a machine vision-based in-mold labeling offset defect identification method comprising the following steps. S1: after calibration and correction of an industrial camera, acquire an image of the in-mold labeled product, draw a circle with a diameter smaller than that of circle B, and divide the image into a label part and an outer edge part. S2: acquire regions of interest using threshold segmentation and morphological processing, and extract sub-pixel edges on the regions of interest with a Canny operator and Zernike moments, realizing edge extraction of target circles A and C. S3: perform weighted edge fitting, fitting the edges of circle A and circle C with a weight function to obtain their radii and centers, and calculate the minimum distance g_min between circle B and circle C; when g_min is smaller than the qualified spacing, the product is judged to be a label-offset defective. The invention solves the problem of detecting offset defects in in-mold labeling and brings flexibility to machine vision defect detection.

Description

Machine vision-based in-mold labeling offset defect identification method
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a machine vision-based in-mold labeling offset defect identification method.
Background
The current production of in-mold labeled products exhibits the characteristics of typical flexible manufacturing: the products produced by each injection molding machine in a workshop have a consistent shape, but their labels, colors, and batches differ, and products may be changed irregularly according to customer requirements. To improve production efficiency and make full use of the rapid detection capability of a vision system, products from the conveyor belts of several injection molding machines are merged onto one main conveyor belt and inspected centrally by a single machine vision system. Label offset is the most difficult defect to detect in in-mold labeled products: the narrow gap is hard to judge directly by eye, while complex measuring tools consume a great deal of time and effort and give poor accuracy, so the inspection task is difficult to complete in batch production.
At present, there are few studies on visual inspection of in-mold label offset defects in injection molded products, and the problem is difficult to solve. First, because of the flexible production conditions, the inspected products vary in label and color, so traditional image segmentation algorithms are ineffective. Second, some products have compact structures and complex image content, which makes the offset feature difficult to extract; this poses a great challenge to detecting label offset defects in such products.
Disclosure of Invention
The invention mainly aims to overcome the defects and shortcomings of the prior art and provides a machine vision-based in-mold labeling offset defect identification method.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the in-mold labeling offset defect identification method based on machine vision is characterized in that the outer circle of a labeling product is set as a circle A, the convex line circle of the labeling product is set as a circle B, and a label is regarded as a standard circle and is set as a circle C; the method comprises the following steps:
s1, after calibration and correction of an industrial camera, acquiring an image of a labeled product in a mold, making a circle with a diameter smaller than that of a circle B, and dividing the image into a label part and an outer edge part;
s2, acquiring an interested region by using threshold segmentation and morphological processing, and extracting sub-pixel edges on the interested region by using a Canny operator and a Zernike moment to realize edge extraction of a target circle A and a target circle C;
s3, performing weight type edge fitting, namely performing edge fitting on the circle A and the circle C by using a weight function to obtain parameters of the radius and the circle center of the circle A and the circle C, and calculating the minimum distance g between the circle B and the circle C min ,g min And judging that the labels are offset defective products when the labels are smaller than the qualified spacing.
Further, when the industrial camera collects the image of the in-mold labeling product, the annular light source is tilted at a certain angle so that the features of the region of interest are highlighted.
Further, the step S1 specifically includes:
A positioning algorithm is used to obtain the positioning coordinates (x, y) of the in-mold labeling product; a circle with a diameter smaller than that of circle B is drawn with these coordinates as its center, dividing the image into circle C and its outer edge part.
Further, in step S2, the region of interest is acquired specifically as follows:
the circle C and the outer edge part are respectively divided by threshold values, and the expression of threshold value division is as follows:
g(x, y) = 255, if T_1 ≤ f(x, y) ≤ T_2; g(x, y) = 0, otherwise
where f(x, y) and g(x, y) are the gray values of the image before and after segmentation, and T_1, T_2 are the thresholds;
the thresholded image is then processed with a closing operation using a circular structuring element of suitable radius, which removes the interference of low-gray noise inside the region without breaking the edge of the target circle; the two regions obtained after the closing operation are taken as the regions of interest for target circle edge extraction.
Further, during threshold segmentation, different thresholds are selected for different labeling products for segmentation.
Further, extracting the subpixel edges using the Canny operator and the Zernike moments includes:
pixel-level edge positioning is performed on the region of interest by using a Canny operator;
on the pixel-level edges detected by the Canny operator, a Zernike moment sub-pixel edge localization algorithm is used.
Further, performing pixel-level edge positioning on the region of interest by using a Canny operator specifically includes:
step one, performing Gaussian smoothing filtering on an image;
step two, calculating a gradient amplitude diagram and a gradient direction; the calculation formula of the gradient amplitude is:
G = √(G_x² + G_y²)
the calculation formula of the gradient angle is as follows:
θ = arctan(G_y / G_x)
where G_x and G_y are the gradients in the x direction and y direction, respectively;
step three, non-maximum suppression is applied to the gradient amplitude image;
step four, detecting pixel-level edges by using double-threshold processing and edge connection; the processing method of the double threshold value specifically comprises the following steps:
A suitable high threshold and a low threshold are selected. If a pixel's gradient value is above the high threshold, the pixel is kept; if it is below the low threshold, the pixel is discarded; if it lies between the two thresholds, the pixel is kept only when a pixel in its 8-connected neighborhood has a gradient above the high threshold, and is discarded otherwise.
Further, the Zernike moment sub-pixel edge positioning algorithm specifically comprises the following steps:
step one, calculating Zernike moment of an image; the calculation formula of the Zernike moment of the gray image n-order m times is as follows:
Z_nm = ((n + 1) / π) ∬_{x² + y² ≤ 1} f(x, y) V*_nm(ρ, θ) dx dy
where f(x, y) represents the gray value at image point (x, y), and V*_nm(ρ, θ) is the complex conjugate of the Zernike polynomial V_nm(ρ, θ);
step two, using the rotation invariance of the Zernike moments, calculate the four edge parameters: the background gray level h, the step height k, the perpendicular distance l from the disc center to the edge, and the angle ω between that perpendicular and the x axis;
step three, locate the sub-pixel edge points from these edge parameters.
Further, in step S3, the edge fitting is specifically:
the center and radius of the fitted circle are computed with a least-squares circle fitting algorithm that minimizes the sum of squared algebraic distances from the edge points of the target circle to the fitted circle; the least-squares objective is:
min_(α, β, R) Σ_i [ (X_i - α)² + (Y_i - β)² - R² ]²
where (α, β) is the center of the fitted circle, R is its radius, and (X_i, Y_i) are the edge points;
a Tukey weight function is then introduced, which assigns a weight ω to each sub-pixel edge point:
ω(δ) = [1 - (δ / τ)²]², if |δ| ≤ τ; ω(δ) = 0, if |δ| > τ
where δ is the distance from the edge point to the fitted circle, and τ is a clipping factor derived from the standard deviation of the edge-point residuals; edge points with δ greater than τ are ignored when fitting the circle, while points with δ less than τ receive a weight that varies smoothly between 1 and 0.
Further, in step S3, g_min is calculated as follows:
Given the center (x_A, y_A) and radius r_A of circle A and the center (x_C, y_C) and radius r_C of circle C, g_min is calculated according to the following formula:
g_min = r_A - r_C - l - s
where l is the fixed machining distance between circle A and circle B, and s is the distance between the centers of circle A and circle C.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. For different in-mold labeling products, the method provided by the invention can accurately and effectively judge whether the label is offset, meeting the requirement of flexible detection.
2. Unlike conventional methods that process the whole image, the invention adopts a divide-and-conquer idea. When the spacing between targets is small or the targets partially overlap, processing the image as a whole can cause the target regions to merge; the method instead separates the two closely spaced or partially overlapping target circles, performs edge extraction and edge fitting on each separated circle independently, and finally integrates the results.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the structure of an in-mold labeling article;
FIG. 3 is a diagram of the relationship between the position parameters of circle A and circle C and g_min;
FIG. 4 is a diagram of a target circle edge extraction process;
fig. 5 is a graph of the results after edge fitting of the target circle.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
In this embodiment, a labeled plastic product is taken as an example. The outer circle of the product is set as circle A, the convex line circle of the product as circle B, and the distance between circle A and circle B is a fixed value l; the label is not a standard circle, but its main body is circular, so it is treated as a standard circle, referred to as circle C.
As shown in fig. 1 and 2, the method for identifying offset defects of in-mold labeling based on machine vision comprises the following steps:
s1, after calibration and correction of an industrial camera, acquiring an image of a labeled product in a mold, making a circle with a diameter smaller than that of a circle B, and dividing the image into a label part and an outer edge part; when the industrial camera collects the image of the in-mold labeling product, the annular light source is inclined at a certain angle to highlight the characteristics of the region of interest, and in the embodiment, 70-degree annular light source is adopted for inclined lighting.
A positioning algorithm is used to obtain the positioning coordinates (x, y) of the in-mold labeling product; for example, a YOLO detector provides a bounding box of the product, the center point of the box is taken as the positioning coordinates (x, y), and the background outside the box is removed. A circle with a diameter smaller than that of circle B is then drawn with these coordinates as its center, dividing the image into circle C and its outer edge part.
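As an illustration of this splitting step, the sketch below masks the captured image with a filled circle of the chosen radius; the helper name and the OpenCV-based approach are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch only: split the product image around the positioning
# coordinates (cx, cy) with a circle whose radius is slightly smaller than
# that of circle B. Function name and OpenCV usage are assumptions.
import cv2
import numpy as np

def split_by_circle(image, cx, cy, radius):
    """Return (label_part, outer_part); radius should be a little below circle B's radius."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (int(cx), int(cy)), int(radius), 255, thickness=-1)   # filled disc
    label_part = cv2.bitwise_and(image, image, mask=mask)                  # region containing circle C
    outer_part = cv2.bitwise_and(image, image, mask=cv2.bitwise_not(mask)) # region containing circle A
    return label_part, outer_part
```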
S2, regions of interest are acquired using threshold segmentation and morphological processing, and sub-pixel edges are extracted on the regions of interest using the Canny operator and Zernike moments, realizing edge extraction of target circle A and target circle C; FIG. 4 shows the target circle edge extraction process.
In this embodiment, the region of interest acquisition is specifically:
the circle C and the outer edge part are respectively divided by using thresholds, and different labeling products are respectively divided by using different thresholds; the expression of the threshold segmentation is as follows:
g(x, y) = 255, if T_1 ≤ f(x, y) ≤ T_2; g(x, y) = 0, otherwise
where f(x, y) and g(x, y) are the gray values of the image before and after segmentation, and T_1, T_2 are the thresholds;
the thresholded image is then processed with a closing operation using a circular structuring element of suitable radius, which removes the interference of low-gray noise inside the region without breaking the edge of the target circle; the two regions obtained after the closing operation are taken as the regions of interest for target circle edge extraction.
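A minimal sketch of this region-of-interest step is given below, assuming a grayscale input; the band thresholds (T_1, T_2) and the structuring-element radius are product-specific values left open by the patent, and the helper name is hypothetical.

```python
# Band thresholding with (T1, T2) followed by morphological closing with a
# circular structuring element; threshold values and radius are illustrative.
import cv2
import numpy as np

def region_of_interest(gray, t1, t2, close_radius=5):
    # keep pixels whose gray value falls inside [t1, t2]
    binary = np.where((gray >= t1) & (gray <= t2), 255, 0).astype(np.uint8)
    k = 2 * close_radius + 1
    element = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    # closing fills small dark gaps and suppresses low-gray noise inside the region
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, element)
    return closed
```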
In this embodiment, extracting the subpixel edges using the Canny operator and the Zernike moments includes:
pixel-level edge localization using Canny operators at the region of interest, specifically includes:
step one, performing Gaussian smoothing filtering on an image;
step two, calculating a gradient amplitude diagram and a gradient direction; the calculation formula of the gradient amplitude is:
G = √(G_x² + G_y²)
the calculation formula of the gradient angle is as follows:
θ = arctan(G_y / G_x)
where G_x and G_y are the gradients in the x direction and y direction, respectively;
step three, non-maximum suppression is applied to the gradient amplitude image;
step four, detecting pixel-level edges by using double-threshold processing and edge connection; the processing method of the double threshold value specifically comprises the following steps:
A suitable high threshold and a low threshold are selected (the high threshold is roughly 2 to 3 times the low threshold). If a pixel's gradient value is above the high threshold, the pixel is kept; if it is below the low threshold, the pixel is discarded; if it lies between the two thresholds, the pixel is kept only when a pixel in its 8-connected neighborhood has a gradient above the high threshold, and is discarded otherwise.
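The pixel-level stage can be sketched with standard OpenCV routines as below; the Gaussian kernel size and the example thresholds (with the high threshold roughly 2 to 3 times the low one, as noted above) are illustrative choices rather than values prescribed by the patent.

```python
# Gaussian smoothing, Sobel gradients (magnitude and direction), then Canny,
# whose internals cover non-maximum suppression, double thresholding and edge linking.
import cv2
import numpy as np

def pixel_level_edges(gray, low=50, high=150):
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.0)       # step 1: Gaussian smoothing
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)  # G_x
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)  # G_y
    magnitude = np.hypot(gx, gy)                         # step 2: G = sqrt(G_x^2 + G_y^2)
    direction = np.arctan2(gy, gx)                       # gradient angle
    edges = cv2.Canny(smoothed, low, high)               # steps 3-4: NMS, double threshold, linking
    return edges, magnitude, direction
```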
On the pixel-level edge detected by the Canny operator, a Zernike moment sub-pixel edge positioning algorithm is used, and specifically:
step one, calculating Zernike moment of an image; the calculation formula of the Zernike moment of the gray image n-order m times is as follows:
Z_nm = ((n + 1) / π) ∬_{x² + y² ≤ 1} f(x, y) V*_nm(ρ, θ) dx dy
where f(x, y) represents the gray value at image point (x, y), and V*_nm(ρ, θ) is the complex conjugate of the Zernike polynomial V_nm(ρ, θ);
step two, using the rotation invariance of the Zernike moments, calculate the four edge parameters: the background gray level h, the step height k, the perpendicular distance l from the disc center to the edge, and the angle ω between that perpendicular and the x axis;
step three, locate the sub-pixel edge points from these edge parameters.
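The sketch below illustrates one common way to carry out this refinement on an N x N window around each Canny edge pixel, using unnormalized projections onto the first- and second-order Zernike polynomials; it is an assumed implementation of the general technique rather than the patent's exact algorithm, and it omits the background gray level h and step height k because only the distance l and the normal angle are needed to place the sub-pixel point.

```python
import numpy as np

def zernike_subpixel(gray, edge_points, N=7):
    """Refine (row, col) pixel-level edge points to sub-pixel positions (assumed helper)."""
    half = N // 2
    coords = (np.arange(N) - half) / float(half)        # coarse mapping of the window onto the unit disc
    xs, ys = np.meshgrid(coords, coords)                # xs: column direction, ys: row direction
    inside = (xs**2 + ys**2) <= 1.0
    first = (xs + 1j * ys) * inside                     # first-order polynomial x + i*y
    second = (2.0 * (xs**2 + ys**2) - 1.0) * inside     # second-order polynomial 2*rho^2 - 1
    h, w = gray.shape
    refined = []
    for r, c in edge_points:
        if r < half or c < half or r >= h - half or c >= w - half:
            continue                                    # skip points whose window leaves the image
        win = gray[r - half:r + half + 1, c - half:c + half + 1].astype(float)
        p1 = np.sum(win * first)                        # complex projection: angle gives the edge-normal direction
        p2 = np.sum(win * second)                       # rotation-invariant projection (Z20 up to a constant)
        if abs(p1) < 1e-9:
            continue
        phi = np.angle(p1)                              # angle between the edge normal and the x axis
        l = p2 / abs(p1)                                # perpendicular distance to the edge in unit-disc units
        if abs(l) >= 1.0:
            continue                                    # step-edge model not valid inside this window
        refined.append((r + half * l * np.sin(phi),     # step back to pixels along the normal
                        c + half * l * np.cos(phi)))
    return np.array(refined)
```

Here edge_points can be taken, for example, as np.argwhere(edges > 0) applied to the Canny result of the previous step.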
S3, weighted edge fitting is performed: the edges of circle A and circle C are fitted with a weight function to obtain the radius and center of each circle, and the minimum distance g_min between circle B and circle C is calculated; when g_min is smaller than the qualified spacing, the product is judged to be a label-offset defective.
In this embodiment, the edge fitting is specifically:
the center and radius of the fitted circle are computed with a least-squares circle fitting algorithm that minimizes the sum of squared algebraic distances from the edge points of the target circle to the fitted circle; the least-squares objective is:
min_(α, β, R) Σ_i [ (X_i - α)² + (Y_i - β)² - R² ]²
where (α, β) is the center of the fitted circle, R is its radius, and (X_i, Y_i) are the edge points;
a Tukey weight function is then introduced, which assigns a weight ω to each sub-pixel edge point:
ω(δ) = [1 - (δ / τ)²]², if |δ| ≤ τ; ω(δ) = 0, if |δ| > τ
where δ is the distance from the edge point to the fitted circle, and τ is a clipping factor derived from the standard deviation of the edge-point residuals; edge points with δ greater than τ are ignored when fitting the circle, while points with δ less than τ receive a weight that varies smoothly between 1 and 0.
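The weighted fit can be sketched as an iteratively re-weighted algebraic (Kasa-style) circle fit, as below; the number of re-weighting iterations and the rule that derives the clipping factor τ from the residual spread are illustrative assumptions rather than the patent's exact choices.

```python
import numpy as np

def tukey_circle_fit(points, iterations=5, tau_scale=3.0):
    """Fit a circle to (x, y) edge points with Tukey re-weighting; returns (cx, cy, r)."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    w = np.ones_like(x)
    cx = cy = r = 0.0
    for _ in range(iterations):
        sw = np.sqrt(w)
        # algebraic least squares: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
        A = np.column_stack([x, y, np.ones_like(x)]) * sw[:, None]
        b = (x**2 + y**2) * sw
        a1, a2, a3 = np.linalg.lstsq(A, b, rcond=None)[0]
        cx, cy = a1 / 2.0, a2 / 2.0
        r = np.sqrt(a3 + cx**2 + cy**2)
        delta = np.abs(np.hypot(x - cx, y - cy) - r)   # distance of each point to the fitted circle
        tau = tau_scale * np.std(delta) + 1e-12        # clipping factor from the residual spread
        u = delta / tau
        w = np.where(u < 1.0, (1.0 - u**2) ** 2, 0.0)  # Tukey biweight: smooth in [0, 1], zero beyond tau
    return cx, cy, r
```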
The fitted circles are compared with the target circles in FIG. 5, which shows the results of fitting the target circle edges; the fitting effect is close to ideal.
In this embodiment, g_min is calculated as shown in FIG. 3:
Given the center (x_A, y_A) and radius r_A of circle A and the center (x_C, y_C) and radius r_C of circle C, g_min is calculated according to the following formula:
g_min = r_A - r_C - l - s
where l is the fixed machining distance between circle A and circle B, and s is the distance between the centers of circle A and circle C.
In this example, when g_min is smaller than 3 ± 0.3 mm, the product is judged to be a label-offset defective.
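Putting the decision rule together, a small assumed helper computes g_min from the fitted parameters and compares it with the qualified spacing; all quantities are taken to be in calibrated physical units (for example millimetres), and the 3 mm figure is only the example value quoted in this embodiment.

```python
import math

def label_offset_defective(xa, ya, ra, xc, yc, rc, l, qualified_spacing=3.0):
    """Return (is_defective, g_min); l is the machining distance between circle A and circle B."""
    s = math.hypot(xa - xc, ya - yc)   # distance between the centres of circle A and circle C
    g_min = ra - rc - l - s            # minimum gap between circle B and circle C
    return g_min < qualified_spacing, g_min
```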
It should also be noted that in this specification, terms such as "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The in-mold labeling offset defect identification method based on machine vision is characterized in that the outer circle of a labeling product is set as a circle A, the convex line circle of the labeling product is set as a circle B, and a label is regarded as a standard circle and is set as a circle C; the method comprises the following steps:
s1, after calibration and correction of an industrial camera, acquiring an image of a labeled product in a mold, making a circle with a diameter smaller than that of a circle B, and dividing the image into a label part and an outer edge part;
s2, acquiring an interested region by using threshold segmentation and morphological processing, and extracting sub-pixel edges on the interested region by using a Canny operator and a Zernike moment to realize edge extraction of a target circle A and a target circle C;
s3, performing weight type edge fitting, namely performing edge fitting on the circle A and the circle C by using a weight function to obtain parameters of the radius and the circle center of the circle A and the circle C, and calculating the minimum distance g between the circle B and the circle C min ,g min Judging that the label is not good when the qualified space is smaller than the qualified space; g min Is calculated as follows:
acquisition circle ACenter of circle (x) A ,y A ) And radius r A Center of circle C (x) C ,y C ) And radius r C G is calculated according to the following formula min
g min =r A -r C -l-s
Wherein l is the processing distance from circle A to circle B, and s is the center distance of circle A to circle C.
2. The machine vision-based in-mold labeling offset defect identification method of claim 1, wherein when the industrial camera captures the image of the in-mold labeling product, the annular light source is tilted at an angle to highlight the features of the region of interest.
3. The method for identifying offset defects of in-mold labeling based on machine vision according to claim 1, wherein step S1 is specifically:
a positioning algorithm is used to obtain positioning coordinates (x, y) of the in-mold labeling product; a circle with a diameter smaller than that of circle B is drawn with the coordinates as its center, dividing the image into circle C and its outer edge part.
4. The method for identifying offset defects of in-mold labeling based on machine vision according to claim 1, wherein in step S2, the region of interest is obtained specifically as follows:
circle C and the outer edge part are segmented separately by thresholding, the threshold segmentation expression being as follows:
g(x, y) = 255, if T_1 ≤ f(x, y) ≤ T_2; g(x, y) = 0, otherwise
where f(x, y) and g(x, y) are the gray values of the image before and after segmentation, and T_1, T_2 are the thresholds;
the thresholded image is then processed with a closing operation using a circular structuring element of suitable radius, which removes the interference of low-gray noise inside the region without breaking the edge of the target circle; the two regions obtained after the closing operation are taken as the regions of interest for target circle edge extraction.
5. The machine vision based in-mold labeling offset defect identification method of claim 4, wherein different thresholds are selected for different labeling products for segmentation.
6. The machine vision based in-mold labeling offset defect identification method of claim 1, wherein extracting sub-pixel edges using Canny operators and Zernike moments comprises:
pixel-level edge positioning is performed on the region of interest by using a Canny operator;
on the pixel-level edges detected by the Canny operator, a Zernike moment sub-pixel edge localization algorithm is used.
7. The machine vision based in-mold labeling offset defect identification method of claim 6, wherein performing pixel-level edge localization using Canny operator on the region of interest specifically comprises:
step one, performing Gaussian smoothing filtering on an image;
step two, calculating a gradient amplitude diagram and a gradient direction; the calculation formula of the gradient amplitude is:
G = √(G_x² + G_y²)
the calculation formula of the gradient angle is as follows:
θ = arctan(G_y / G_x)
where G_x and G_y are the gradients in the x direction and y direction, respectively;
step three, non-maximum suppression is applied to the gradient amplitude image;
step four, detecting pixel-level edges by using double-threshold processing and edge connection; the processing method of the double threshold value specifically comprises the following steps:
a high threshold and a low threshold are selected; if a pixel's gradient value is above the high threshold, the pixel is kept; if it is below the low threshold, the pixel is discarded; if it lies between the two thresholds, the pixel is kept only when a pixel in its 8-connected neighborhood has a gradient above the high threshold, and is discarded otherwise.
8. The method for identifying offset defects of in-mold labeling based on machine vision according to claim 6, wherein the Zernike moment subpixel edge positioning algorithm is specifically:
step one, calculating Zernike moment of an image; the calculation formula of the Zernike moment of the gray image n-order m times is as follows:
Z_nm = ((n + 1) / π) ∬_{x² + y² ≤ 1} f(x, y) V*_nm(ρ, θ) dx dy
where f(x, y) represents the gray value at image point (x, y), and V*_nm(ρ, θ) is the complex conjugate of the Zernike polynomial V_nm(ρ, θ);
step two, using the rotation invariance of the Zernike moments, calculate the four edge parameters: the background gray level h, the step height k, the perpendicular distance l from the disc center to the edge, and the angle ω between that perpendicular and the x axis;
step three, locate the sub-pixel edge points from these edge parameters.
9. The method for identifying offset defects of in-mold labeling based on machine vision according to claim 1, wherein in step S3, the edge fitting is specifically:
the center and radius of the fitted circle are computed with a least-squares circle fitting algorithm that minimizes the sum of squared algebraic distances from the edge points of the target circle to the fitted circle; the least-squares objective is:
min_(α, β, R) Σ_i [ (X_i - α)² + (Y_i - β)² - R² ]²
where (α, β) is the center of the fitted circle, R is its radius, and (X_i, Y_i) are the edge points;
a Tukey weight function is then introduced, which assigns a weight ω to each sub-pixel edge point:
ω(δ) = [1 - (δ / τ)²]², if |δ| ≤ τ; ω(δ) = 0, if |δ| > τ
where δ is the distance from the edge point to the fitted circle, and τ is a clipping factor derived from the standard deviation of the edge-point residuals; edge points with δ greater than τ are ignored when fitting the circle, while points with δ less than τ receive a weight that varies smoothly between 1 and 0.
CN202211459884.8A 2022-11-16 2022-11-16 Machine vision-based in-mold labeling offset defect identification method Active CN115760782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211459884.8A CN115760782B (en) 2022-11-16 2022-11-16 Machine vision-based in-mold labeling offset defect identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211459884.8A CN115760782B (en) 2022-11-16 2022-11-16 Machine vision-based in-mold labeling offset defect identification method

Publications (2)

Publication Number Publication Date
CN115760782A CN115760782A (en) 2023-03-07
CN115760782B (en) 2023-06-16

Family

ID=85334185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211459884.8A Active CN115760782B (en) 2022-11-16 2022-11-16 Machine vision-based in-mold labeling offset defect identification method

Country Status (1)

Country Link
CN (1) CN115760782B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309510B (en) * 2023-03-29 2024-03-22 清华大学 Numerical control machining surface defect positioning method and device
CN116612116A (en) * 2023-07-19 2023-08-18 天津伍嘉联创科技发展股份有限公司 Crystal appearance defect detection method based on deep learning image segmentation

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127253A (en) * 2016-06-27 2016-11-16 北京航空航天大学 A kind of method for detecting infrared puniness target utilizing sample characteristics learning classification
CN109658391A (en) * 2018-12-04 2019-04-19 东北大学 A kind of radius of circle measurement method being fitted based on contour mergence and convex closure
CN109900711A (en) * 2019-04-02 2019-06-18 天津工业大学 Workpiece, defect detection method based on machine vision
CN111879241A (en) * 2020-06-24 2020-11-03 西安交通大学 Mobile phone battery size measuring method based on machine vision
CN112884708A (en) * 2021-01-15 2021-06-01 深圳市悦创进科技有限公司 Method for detecting burrs of circular injection molding piece
CN113129322A (en) * 2021-04-22 2021-07-16 中煤科工集团重庆研究院有限公司 Sub-pixel edge detection method
CN113610917A (en) * 2021-08-09 2021-11-05 河南工业大学 Circular array target center image point positioning method based on blanking points
CN114239378A (en) * 2021-11-09 2022-03-25 华南理工大学 Injection molding product size prediction method based on custom LightGBM model loss
CN115018833A (en) * 2022-08-05 2022-09-06 山东鲁芯之光半导体制造有限公司 Processing defect detection method of semiconductor device

Also Published As

Publication number Publication date
CN115760782A (en) 2023-03-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant