CN113920090B - Prefabricated rod appearance defect automatic detection method based on deep learning - Google Patents

Prefabricated rod appearance defect automatic detection method based on deep learning

Info

Publication number
CN113920090B
CN113920090B (application CN202111191037.3A)
Authority
CN
China
Prior art keywords
defect
shooting
image
slice
outputting
Prior art date
Legal status
Active
Application number
CN202111191037.3A
Other languages
Chinese (zh)
Other versions
CN113920090A (en)
Inventor
丁发展
王峰
Current Assignee
Wuxi Xuelang Shuzhi Technology Co ltd
Original Assignee
Wuxi Xuelang Shuzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Xuelang Shuzhi Technology Co ltd
Priority to CN202111191037.3A
Publication of CN113920090A
Application granted
Publication of CN113920090B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G01N 21/8851: Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G01N 21/95: Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a deep-learning-based method for automatically detecting appearance defects of a preform, relating to the technical field of defect detection. The method comprises the following steps: performing section-by-section surface defect detection and internal defect detection on the preform rod body, using a focal plane defect extraction method and halo filtering to remove haze interference and defects far from the focal plane from the images; merging the backlight-group surface/internal defect images with the light-transmission-group surface/internal defect images by shooting direction to obtain a surface defect integrated image for each shooting direction and internal defect integrated images for each shooting direction at the remaining shooting depths; and performing defect point spatial matching between adjacent shooting depths on the internal defect integrated images, attributing defect points that appear at two adjacent shooting depths to the nearer focal plane. With this method, contact between personnel and the preform is reduced, and the precision and speed of defect detection are improved.

Description

Preform appearance defect automatic detection method based on deep learning
Technical Field
The invention relates to the technical field of defect detection, in particular to an automatic detection method for appearance defects of a preform based on deep learning.
Background
Before an optical fiber preform leaves the factory, the internal and external defects of the preform body must be detected. These defects include surface defects (gouges and scratches) and internal defects (bubbles, impurities and gas lines).
At present, optical fiber preform inspection relies mainly on manual visual inspection. On the one hand, manual inspection struggles to meet high-precision detection standards: errors are large, and some tiny defects are hard for the human eye to find. On the other hand, an inspector's detection speed can hardly keep up with the production speed of the machines, forcing factories to enlarge their quality-control staff and raising detection costs.
Disclosure of Invention
In view of these problems and technical requirements, the invention provides a deep-learning-based method for automatically detecting the appearance defects of a preform. The technical scheme of the invention is as follows:
A deep-learning-based method for automatically detecting preform appearance defects comprises the following steps, performed while the parallel portion of the preform moves toward the transmission light source into the machine vision module:

sequentially shooting a backlight-group image and a light-transmission-group image at different shooting depths for each section of the preform rod body, wherein the sections are divided along the length of the preform, the outer surface of the preform is set as the camera's first shooting depth, the remaining shooting depths advance layer by layer along the radial direction toward the center of the rod body, and the tail handle of the preform faces the transmission light source during the preform's movement;

acquiring surface defect integrated images of the preform for all shooting directions and internal defect integrated images for all shooting directions at the remaining shooting depths, which comprises:

extracting the backlight-group image and the light-transmission-group image corresponding to the first shooting depth; performing surface defect detection on the backlight-group images with a focal plane defect extraction method and filtering out defects far from the focal plane in each backlight image to obtain the backlight-group surface defect images corresponding to the first shooting depth; performing halo filtering on the light-transmission-group images and then carrying out surface defect detection with the focal plane defect extraction method, filtering out haze interference and defects far from the focal plane in each light-transmission image to obtain the light-transmission-group surface defect images corresponding to the first shooting depth;

merging the backlight-group surface defect images and the light-transmission-group surface defect images by shooting direction to obtain a surface defect integrated image for each shooting direction; and

taking the internal defect integrated images in the same shooting direction corresponding to adjacent shooting depths among the remaining shooting depths, performing defect point spatial matching, and attributing defect points that appear at both adjacent shooting depths to the nearer focal plane.
The beneficial technical effects of the invention are as follows:
the method can be used for the sectional type surface defect detection and the internal defect detection of the preform rod body, through a focal plane defect extraction method and halo filtering processing, fog interference and flaws far away from a focusing plane in the image are filtered, then the surface/internal defect images of the backlight group and the surface/internal defect images of the light transmission group are combined according to the same shooting direction to obtain surface defect integrated images of all shooting directions and internal defect integrated images of all shooting directions of the rest shooting depths, by carrying out the defect point space matching of adjacent shooting depths on the internal defect integrated image, the defect points which simultaneously appear at two adjacent shooting depths are attributed to planes with closer distances, the preform appearance defect automatic detection method based on deep learning can reduce contact between personnel and the preform and improve the precision and speed of defect detection.
Drawings
Fig. 1 is a structural diagram of a machine vision module according to the present application.
Fig. 2 is a positional relationship diagram of an industrial camera and a backlight provided by the present application.
FIG. 3 is a flow chart of segmented imaging of a preform rod provided herein.
Fig. 4 is a partial group of backlight images at the first shooting depth, where (1) is a focal-plane partial view of the BG_E original image and (2) is a non-focal-plane partial view of the BG_E original image.
Fig. 5 is a set of partial images before and after filtering of the light-transmission image at the first shooting depth, where (1) and (2) are two partial views of the IN_E original image and (3) and (4) are two partial views of the IN_E filtered image.
Fig. 6 is a partial image set after filtering of the light-transmission image at the first shooting depth, in which (1) and (2) are focal-plane partial views of the IN_E filtered image and (3) and (4) are non-focal-plane partial views of the IN_E filtered image.
Fig. 7 is a schematic defect merging diagram of a surface defect image of a backlight group and a surface defect image of a light transmission group corresponding to a first shooting depth provided by the present application.
Fig. 8 is a partial group of backlight images at the remaining shooting depths, in which (1) is a non-focal-plane partial view of the BG_E original image and (2) is a focal-plane partial view of the BG_E original image.
Fig. 9 is a flowchart of a focal plane defect extraction method provided herein.
FIG. 10 is a flow chart of the adaptive contour extension operation provided herein.
Fig. 11 is a schematic diagram of the slice focal plane binary-classification convolutional neural network provided by the present application.
Fig. 12 is a flow chart of halo filtering process provided herein.
Fig. 13 is a flowchart of the defect point spatial matching method for the internal defect integrated image for the remaining shooting depths provided in the present application.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
The application provides a deep-learning-based method for automatically detecting preform appearance defects, implemented on the machine vision module of an automatic preform appearance defect detection mechanism. Referring to figs. 1 and 2, the machine vision module comprises five industrial cameras A-E, five backlight sources A′-E′ and a transmission light source 1. Each industrial camera is fixed on a mounting plate 4 through a corresponding industrial camera bracket 2; an appearance inspection opening 5 is provided in the mounting plate 4; the five industrial cameras A-E are arranged around the appearance inspection opening 5, and the first inscribed circle a of the polygon formed by connecting the camera center points is concentric with the preform rod 6 passing through the appearance inspection opening 5. Each backlight source is fixed on the mounting plate 4 through a corresponding backlight source bracket 3; the five backlight sources A′-E′ are arranged around the appearance inspection opening 5, and the second inscribed circle b of the polygon formed by connecting the backlight center points is concentric with the preform rod 6 passing through the appearance inspection opening 5.
The first inscribed circle a lies outside the second inscribed circle b, and each industrial camera faces its corresponding backlight source. Specifically, the imaging surface of industrial camera A is parallel to the light-emitting surface of backlight source A′, and likewise the imaging surfaces of the other industrial cameras B-E are parallel to the light-emitting surfaces of backlight sources B′-E′. The center directions of the imaging surfaces of the five industrial cameras A-E are separated pairwise by 72 degrees, as are the center directions of the light-emitting surfaces of the five backlight sources A′-E′.
The transmission light source 1 is disposed outside the mounting plate 4, and the center point of the transmission light source is concentric with the preform rod 6 passing through the appearance detection port 5.
When the preform parallel portion is moved to the machine vision module in a direction towards the transmission light source 1, the method comprises the steps of:
step 1: and shooting the backlight group image and the light transmission group image in sequence at different shooting depths of each section of the preform rod body. Wherein, the section position is segmented along the prefabricated stick length direction, and the surface of establishing prefabricated stick is the first shooting degree of depth of camera, and other shooting degrees of depth go forward one by one to the central successive layer of barred body along prefabricated stick radial direction, prefabricated stick caudal peduncle orientation transmission light source among the prefabricated stick motion process. The specific steps are shown in fig. 3:
step 11: and determining the current shooting depth for the current section position of the preform rod body.
Calculating a shot depth array DeepSet using the following formula:
[Formula computing the shooting depth array DeepSet from Orgw and Dia; published only as an image.]
wherein DeepSet_1 is the first shooting depth and DeepSet_2 to DeepSet_n are the remaining shooting depths inside the rod body; n is the shooting depth index, with a value range of [2, 4]; Orgw is the distance from the lens center point of the industrial camera to the center of the second inscribed circle; and Dia is the diameter of the preform.

The initial shooting depth is taken as DeepSet_1.
Step 12: calculate the lens target focal length of the industrial camera and drive the lens to zoom.
The calculation formula is as follows:
[Formula computing f_n from H_i, H_o and DeepSet_n; published only as an image.]
wherein f_n is the lens target focal length at the nth shooting depth, H_i is the height of the camera target surface, H_o is the height of the field of view, and DeepSet_n is the nth radial shooting depth of the preform, with n in the range [2, 4];
Calculating the focal length difference required to be adjusted by the lens according to the target focal length of the lens and the current focal length of the lens:
f_d = f_n - f_o
wherein f_o is the current focal length of the lens.

The control system in the lens drives the motor gear of the lens focusing ring according to this focal length difference, so that the current focal length of the lens matches the target focal length.
Step 13: the backlight sources strobe in sequence in the preset order A′ → E′; during each strobe, the corresponding industrial camera shoots a backlight image of the current section at the current shooting depth, forming the backlight-group image.
Step 14: close all the backlight sources A′-E′ and open the transmission light source 1; the transmission light source 1 uses a progressive lighting mode, and all the industrial cameras A-E simultaneously shoot light-transmission images of the current section at the current shooting depth, forming the light-transmission-group image. In the progressive lighting mode, the intensity of the transmission light source is gradually increased as successive sections are shot.
Step 15: judge whether imaging at all shooting depths is finished. If so, proceed to imaging the next section of the preform rod body, i.e. restart from step 11; otherwise, advance the current shooting depth by one layer, i.e. take the next value of the shooting depth (DeepSet_2, then DeepSet_3, and so on), and perform step 12.
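As a concrete illustration of the focal-length update in steps 11 and 12 above, a minimal Python sketch follows. It assumes the standard magnification relation f_n = (H_i / H_o) × DeepSet_n suggested by the variables defined above; since the published formulas survive only as images, both the relation and the numeric values below are assumptions rather than the patent's exact figures.

```python
def target_focal_length(h_target_mm: float, h_fov_mm: float, depth_mm: float) -> float:
    # Assumed magnification relation: focal length = working distance x (sensor height / FOV height).
    return depth_mm * h_target_mm / h_fov_mm

def focal_adjustment(f_target_mm: float, f_current_mm: float) -> float:
    # f_d = f_n - f_o: the difference the focusing-ring motor must travel.
    return f_target_mm - f_current_mm

# Hypothetical values for illustration only (not from the patent).
f_n = target_focal_length(h_target_mm=8.8, h_fov_mm=60.0, depth_mm=350.0)
f_d = focal_adjustment(f_n, f_current_mm=50.0)
print(f"target focal length {f_n:.1f} mm, adjust by {f_d:+.1f} mm")
```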
Step 2: acquire surface defect integrated images of the preform for all shooting directions and internal defect integrated images of the preform for all shooting directions at the remaining shooting depths.
<1> As shown in fig. 4, since the preform itself is transparent, when the industrial camera images it, flaws in the body at the focal plane appear clearly (fig. 4-1), while flaws far from the focal plane are blurred (fig. 4-2). As shown in figs. 5-1 and 5-2, when imaging in transmission mode, because the thickness of the preform's cladding layer is not perfectly uniform, light scatters irregularly as it penetrates the rod body and appears as haze interference in the image.
The method for acquiring the surface defect integrated image specifically comprises the following steps:
step 2 a: extraction of Deepset 1 The corresponding backlight group image BgSet ═ { BG _ a, BG _ B, BG _ C, BG _ D, BG _ E } and the light transmission group image InSet ═ { IN _ a, IN _ B, IN _ C, IN _ D, IN _ E }.
Step 2b: perform surface defect detection on the backlight-group images BgSet using the focal plane defect extraction method, extracting all defects located at the focal plane (fig. 4-1) and filtering out the defects far from the focal plane (fig. 4-2) in each backlight image, to obtain the backlight-group surface defect images corresponding to DeepSet_1.
Step 2c: with reference to figs. 5 and 6, perform halo filtering on the light-transmission-group images InSet and then carry out surface defect detection using the focal plane defect extraction method, extracting all defects located at the focal plane (figs. 6-1 and 6-2) and filtering out the haze interference (figs. 5-3 and 5-4) and the defects far from the focal plane (figs. 6-3 and 6-4) in each light-transmission image, to obtain the light-transmission-group surface defect images corresponding to DeepSet_1.
Step 2d: merge the backlight-group surface defect images and the light-transmission-group surface defect images corresponding to DeepSet_1 by shooting direction, as shown in fig. 7, to obtain the surface defect integrated image for each shooting direction; the surface defects include gouges and scratches.
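As a sketch of the merging in step 2d: if each defect image is held as a binary mask, merging by shooting direction can be realized as a per-pixel union. Treating the merge as a bitwise OR is an assumption; the description does not name the operator.

```python
import numpy as np

def merge_defects(bg_mask: np.ndarray, in_mask: np.ndarray) -> np.ndarray:
    # Union of the backlight-group and light-transmission-group defect masks
    # shot from the same direction (assumed pixelwise OR).
    return np.bitwise_or(bg_mask, in_mask)

# One integrated image per shooting direction A..E, following the description.
directions = "ABCDE"
bg = {d: np.zeros((1024, 1024), np.uint8) for d in directions}  # stand-in masks
tr = {d: np.zeros((1024, 1024), np.uint8) for d in directions}
surface_integrated = {d: merge_defects(bg[d], tr[d]) for d in directions}
```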
<2> Internal defects inside the solid preform rod: bubbles, impurities and gas lines are detected using the data sets imaged at DeepSet_2 to DeepSet_n. As shown in fig. 8, at DeepSet_2 to DeepSet_n the focal plane is pushed into the preform, so flaws on the preform surface are blurred (fig. 8-1) while flaws at the focal plane inside the preform appear clearly (fig. 8-2).
The method for acquiring the internal defect integrated image specifically comprises the following steps:
step 2 e: extraction of Deepset 2 To deep set n The corresponding BgSet ═ BG _ a, BG _ B, BG _ C, BG _ D, BG _ E } and the photomontage image InSet ═ IN _ a, IN _ B, IN _ C, IN _ D, IN _ E }.
Step 2f: perform internal defect detection on the backlight-group images BgSet at each shooting depth using the focal plane defect extraction method, extracting all defects located at the focal plane (BG_E partial view 2 in fig. 8) and filtering out the defects far from the focal plane (BG_E partial view 1 in fig. 8) in each backlight image, to obtain the backlight-group internal defect images corresponding to DeepSet_2 to DeepSet_n.
Step 2g: perform halo filtering on the light-transmission-group images InSet at each shooting depth and then carry out internal defect detection using the focal plane defect extraction method, filtering out the haze interference and the defects far from the focal plane in each light-transmission image, to obtain the light-transmission-group internal defect images corresponding to DeepSet_2 to DeepSet_n.
Step 2h: merge the backlight-group internal defect images and the light-transmission-group internal defect images of the same shooting depth by shooting direction, to obtain the internal defect integrated images for each shooting direction at DeepSet_2 to DeepSet_n; the internal defects include bubbles, impurities and gas lines.

The results are organized into the data set DeepLayerSet:

[Structure of the DeepLayerSet data set; published only as an image.]
it should be noted that, the steps <1> and <2> of acquiring the surface or internal defect integrated image, and the processing of the backlight group image BgSet and the light transmission group image InSet in the steps <1> and <2> do not distinguish the sequence.
In step 2, the processing of both the backlight-group images BgSet and the light-transmission-group images InSet includes a focal plane defect extraction operation; as shown in fig. 9, the specific processing steps are:
(1) Binarize the backlight-group images BgSet and output the binarization result map corresponding to each backlight image, denoted Frame_Threshold.
(2) Perform the first and second convolution processing operations on Frame_Threshold using the first convolution kernel, and output the flaw contour closed map corresponding to each result map, denoted Frame_Close.
The structure of the first convolution kernel rk1 is:
[Matrix of the first convolution kernel rk1; published only as an image.]
the first convolution processing operation R1 is:
[Definition of R1 in terms of rk1, the anchor point (x, y) and the offsets (x′, y′); published only as an image.]
wherein (x, y) is the pixel coordinate of the anchor point of the first convolution kernel, (x′, y′) is the coordinate offset of a pixel around the anchor point in the first convolution kernel relative to the anchor point, FrameThreshold(·) represents the binarization result map, and FrameCloseR1 represents the flaw contour closed map after the first convolution processing;
the second convolution processing operation R2 is:
[Definition of R2, applied to the output of R1; published only as an image.]
where FrameCloseR2 represents the defect contour closed graph after the second convolution process.
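Read concretely, steps (1) and (2) behave like binarization followed by a morphological closing: R1, a maximum over the kernel footprint, dilates, and R2, a minimum, erodes, sealing broken defect outlines. The OpenCV sketch below rests on that reading; the 3 × 3 structuring element stands in for the published kernel rk1, whose matrix appears only as an image.

```python
import cv2
import numpy as np

# Assumed 3x3 structuring element; the published rk1 matrix is shown only as an image.
rk1 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def frame_close(backlight_image: np.ndarray) -> np.ndarray:
    # (1) binarize, then (2) close: R1 dilation (max) followed by R2 erosion (min).
    _, frame_threshold = cv2.threshold(backlight_image, 0, 255,
                                       cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    r1 = cv2.dilate(frame_threshold, rk1)   # first convolution processing operation R1
    r2 = cv2.erode(r1, rk1)                 # second convolution processing operation R2
    return r2
```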
(3) Perform contour detection on Frame_Close and output the flaw contour information list, denoted Is_contours.
(4) Using Is_contours, calculate and output the flaw contour bounding box list, denoted Is_boxes.
(5) Using Is_boxes, cut a defect slice for each defect point from the backlight-group image BgSet and output the defect slice set, denoted org_point_rois.
(6) For each defect slice in org_point_rois, perform reverse selection of the defect area using the corresponding contour point array in Is_contours, remove the other defects captured in the current defect slice, and output the single-defect slice set, denoted clear_point_rois.
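Steps (3) to (6) map naturally onto OpenCV contour primitives. The sketch below assumes that the "reverse selection" of step (6) means blanking every pixel outside the current defect's own contour.

```python
import cv2
import numpy as np

def single_defect_slices(frame_close: np.ndarray, bg_image: np.ndarray):
    # (3) contour detection -> Is_contours; (4) bounding boxes -> Is_boxes.
    is_contours, _ = cv2.findContours(frame_close, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
    is_boxes = [cv2.boundingRect(c) for c in is_contours]
    clear_point_rois = []
    for contour, (x, y, w, h) in zip(is_contours, is_boxes):
        roi = bg_image[y:y + h, x:x + w].copy()          # (5) org_point_rois entry
        mask = np.zeros((h, w), np.uint8)
        cv2.drawContours(mask, [contour - np.array([x, y])], -1, 255, cv2.FILLED)
        roi[mask == 0] = 0                               # (6) drop other defects in the slice
        clear_point_rois.append(roi)
    return is_contours, is_boxes, clear_point_rois
```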
(7) According to Is_contours, perform an adaptive contour extension operation on each slice in clear_point_rois, expanding or cropping all slices to a size of 224 × 224, and output the size-normalized defect slice set, denoted fine_point_rois; as shown in fig. 10, this specifically includes:
(7.1) Extract clear_point_rois and Is_contours, and calculate the width w, height h and center point (ctx, cty) of each slice.

Depending on how w and h compare with 224, proceed to step (7.2) or step (7.5).

(7.2) If w ≤ 224 and h ≤ 224, create a 224 × 224 canvas matrix, denoted fine_point_rois, with its elements initialized to zero.

(7.3) Copy clear_point_rois to the center of fine_point_rois, centered on the corresponding slice's center point (ctx, cty).

(7.4) Fill the zero elements in fine_point_rois with the edge pixels of clear_point_rois, then perform step (7.9).

(7.5) If w > 224 and h > 224, calculate the center of gravity (xg, yg) of the flaw contour.

(7.6) Expand the canvas of clear_point_rois around the corresponding slice's center point (ctx, cty); the expanded clear_point_rois has width and height (w + 224, h + 224) and center of gravity (xg + 112, yg + 112).

(7.7) Fill the zero elements in the expanded clear_point_rois with the edge pixels of the pre-expansion clear_point_rois.

(7.8) Crop the expanded clear_point_rois around the expanded center of gravity (xg + 112, yg + 112) to obtain a defect slice of size 224 × 224.

(7.9) Output the size-normalized defect slice set, denoted fine_point_rois.
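A compact sketch of the adaptive contour extension of steps (7.1) to (7.9): small slices are padded onto a 224 × 224 canvas, and large slices are padded by 112 pixels per side and then cropped around the defect's center of gravity. OpenCV's BORDER_REPLICATE stands in for "fill the zero elements with edge pixels", which is an assumption about the fill rule.

```python
import cv2
import numpy as np

SIZE = 224

def normalize_slice(roi: np.ndarray, gravity_xy: tuple) -> np.ndarray:
    h, w = roi.shape[:2]
    if w <= SIZE and h <= SIZE:
        # (7.2)-(7.4): center the slice on a 224x224 canvas, fill with edge pixels.
        top, left = (SIZE - h) // 2, (SIZE - w) // 2
        return cv2.copyMakeBorder(roi, top, SIZE - h - top, left, SIZE - w - left,
                                  cv2.BORDER_REPLICATE)
    # (7.5)-(7.8): expand by 112 per side, then crop 224x224 around the shifted centroid.
    expanded = cv2.copyMakeBorder(roi, SIZE // 2, SIZE // 2, SIZE // 2, SIZE // 2,
                                  cv2.BORDER_REPLICATE)
    xg = int(gravity_xy[0]) + SIZE // 2
    yg = int(gravity_xy[1]) + SIZE // 2
    return expanded[yg - SIZE // 2:yg + SIZE // 2, xg - SIZE // 2:xg + SIZE // 2]
```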
(8) Input fine_point_rois into the slice focal plane binary-classification convolutional neural network for batch image classification inference. The network outputs a binary classification result value for each slice: a flaw at the current focal plane corresponding to DeepSet_1 is recorded as 0 and a flaw not at the current focal plane as 1. The focal plane class set of the defect slices is output, denoted point_label.
As shown in fig. 11, the slice focal plane binary-classification convolutional neural network uses a deep residual learning block structure as its network layer pattern; at the end, a max-pooling layer is connected to a fully connected (full-connection-2d) layer whose output feeds a softmax layer, realizing the binary classification of focal plane versus non-focal plane. The network is trained with a large number of fine_point_rois samples; once trained, it effectively judges whether a flaw point lies on the current focal plane.
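A minimal PyTorch sketch of such a network is given below: residual blocks, then a max-pooling layer, a fully connected layer and softmax, matching the structure just described. The channel widths and block count are illustrative assumptions; the patent does not publish them.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # identity shortcut: deep residual learning

class FocalPlaneNet(nn.Module):
    """Binary focal-plane classifier: residual blocks -> max pooling -> FC -> softmax."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3),
                                  nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.pool = nn.AdaptiveMaxPool2d(1)   # the max-pooling layer
        self.fc = nn.Linear(32, 2)            # the fully connected (full-connection-2d) layer

    def forward(self, x):                     # x: (B, 1, 224, 224) normalized defect slices
        z = self.pool(self.blocks(self.stem(x))).flatten(1)
        return torch.softmax(self.fc(z), dim=1)

probs = FocalPlaneNet()(torch.randn(4, 1, 224, 224))
point_label = probs.argmax(dim=1)             # 0 = current focal plane, 1 = not
```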
(9) According to point_label and Is_contours, remove the flaws judged not to be at the current focal plane from the backlight-group images, and output the backlight-group surface defect images corresponding to the current focal plane.
Obtaining the light-transmission-group surface defect images corresponding to DeepSet_1 follows steps (1) to (9) for obtaining the backlight-group surface defect images, with the following differences: in (1), binarize the light-transmission-group images InSet and invert the result, outputting the binarization result map corresponding to each image; in (5), use Is_boxes to cut a defect slice for each defect point from the light-transmission-group images InSet and output org_point_rois; and in (9), finally output the light-transmission-group surface defect images corresponding to the current focal plane.
In step 2, the processing of the light-transmission-group images InSet also includes halo filtering; as shown in fig. 12, the specific processing steps are:
(1) Apply edge-preserving filtering to the light-transmission-group images InSet using a bilateral filter, and output the filtering result map corresponding to each light-transmission image, denoted Frame_BF.
(2) Perform the third convolution processing operation on Frame_BF using the second convolution kernel, and output the flaw morphology map corresponding to each result map, denoted Frame_grd.
The structure of the second convolution kernel rk2 is:
[Matrix of the second convolution kernel rk2; published only as an image.]
the third convolution processing operation R3 is:
[Definition of R3 in terms of rk2 and FrameBF; published only as an image.]
where FrameBF(·) represents the filtering result map and FramegrdR3(·) represents the flaw morphology map.
(3) Binarize Frame_grd and output the binarization result map corresponding to each morphology map, denoted Frame_Thd.
(4) Perform the R1 and R2 operations on Frame_Thd using rk1, and output the flaw contour closed map corresponding to each result map, denoted Frame_Cls.
(5) Perform contour detection on Frame_Cls and output the flaw contour mask, denoted contours_mask.
(6) Invert the values of contours_mask, filter the background halo out of the light-transmission-group images InSet using the inverted contours_mask, and output the light-transmission-group flaw-filtered images.
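Under the same caveat that the published kernels and formulas appear only as images, the halo filtering chain of steps (1) to (6) can be sketched as follows; the Laplacian-style kernel standing in for rk2 and the Otsu threshold are assumptions.

```python
import cv2
import numpy as np

rk1 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))          # assumed, as above
rk2 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], np.float32)   # assumed Laplacian-style kernel

def halo_filter(in_image: np.ndarray) -> np.ndarray:
    frame_bf = cv2.bilateralFilter(in_image, d=9, sigmaColor=75, sigmaSpace=75)  # (1)
    frame_grd = cv2.convertScaleAbs(cv2.filter2D(frame_bf, cv2.CV_32F, rk2))     # (2) R3
    _, frame_thd = cv2.threshold(frame_grd, 0, 255,
                                 cv2.THRESH_BINARY | cv2.THRESH_OTSU)            # (3)
    frame_cls = cv2.erode(cv2.dilate(frame_thd, rk1), rk1)                       # (4) R1 + R2
    contours, _ = cv2.findContours(frame_cls, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)                      # (5)
    contours_mask = np.zeros_like(in_image)
    cv2.drawContours(contours_mask, contours, -1, 255, cv2.FILLED)
    halo_mask = cv2.bitwise_not(contours_mask)        # (6) negated mask = background
    filtered = in_image.copy()
    filtered[halo_mask > 0] = 0                       # wipe the background halo
    return filtered
```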
Step 3: take the internal defect integrated images in the same shooting direction corresponding to adjacent shooting depths among DeepSet_2 to DeepSet_n, perform defect point spatial matching, and attribute defect points that appear at both adjacent shooting depths to the nearer focal plane. As shown in fig. 13, step 3 specifically includes:
step 3.1: extraction of Deepset n-1 And deep set n Corresponding internal defect integrated images org _ n-1 and org _ n in the same shooting direction, wherein deep set n-1 Deepset, the shot depth of layer n-1 n The value range of n is [3,4] when the defect points are matched in space for the shooting depth of the nth layer]。
Step 3.2: r1 and R2 are respectively carried out on org _ n-1 and org _ n by using rk1, and then deep set is output n-1 And deep set n Is marked as Frame _ Close _ n-1 and Frame _ Close _ n, steps 3.3 and 3.5 are entered.
Step 3.3: calculate the overlapping area of the flaw contours in Frame_Close_n-1 and Frame_Close_n, and output the defect overlap mask map, denoted Fcommon.

Step 3.4: perform contour detection on Fcommon, output the overlapping flaw contour information list, denoted contours_common, and proceed to step 3.6.

Step 3.5: perform contour detection on Frame_Close_n-1 and Frame_Close_n respectively, output the flaw contour information lists of DeepSet_n-1 and DeepSet_n, denoted contours_n-1 and contours_n, and proceed to step 3.6.

Step 3.6: for each contour in contours_common, calculate its center point (cx, cy), and match the flaw contours containing the center point (cx, cy) in contours_n-1 and contours_n, denoted (n-1)c and nc respectively.
Step 3.7: perform contour approximation calculation on (n-1)c and nc, and output the flaw contour approximation value, denoted Apo.

Step 3.8: compare Apo with the threshold; if it is smaller than the threshold, go to step 3.14, and if it is larger than the threshold, go to step 3.9.

Step 3.9: use (n-1)c to extract a defect slice from org_n-1, denoted p_n-1, and use nc to extract a defect slice from org_n, denoted p_n; perform slice sharpness calculation on p_n-1 and p_n respectively, and output the slice sharpness values, denoted cp_n-1 and cp_n.

Step 3.10: compare cp_n-1 and cp_n; if cp_n-1 ≤ cp_n, go to step 3.11, and if cp_n-1 > cp_n, go to step 3.12.

Step 3.11: let the N-1 focal plane flaw-removal list be N-1_separate, add (n-1)c to N-1_separate, and go to step 3.13.

Step 3.12: let the N focal plane flaw-removal list be N_separate, and add nc to N_separate.

Step 3.13: generate a contour filtering mask from N-1_separate, filter out the flaws in org_n-1 that do not belong to the current focal plane, and output the flaw attribution map space_n-1 of the N-1 focal plane in the current shooting direction; generate a contour filtering mask from N_separate, filter out the flaws in org_n that do not belong to the current focal plane, and output the flaw attribution map space_n of the N focal plane in the current shooting direction.
Step 3.14: the flaw profile is preserved.
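The attribution decision of steps 3.7 to 3.12 reduces to "the defect belongs to the layer where its slice images more sharply". A sketch follows; cv2.matchShapes for the contour approximation value Apo and Laplacian variance for the slice sharpness are assumed stand-ins, since neither computation is specified in the published text.

```python
import cv2
import numpy as np

def contour_approximation(c_prev, c_curr) -> float:
    # Apo: shape similarity of the two matched contours (assumed cv2.matchShapes).
    return cv2.matchShapes(c_prev, c_curr, cv2.CONTOURS_MATCH_I1, 0.0)

def sharpness(defect_slice: np.ndarray) -> float:
    # Slice sharpness as variance of the Laplacian (assumed stand-in for the
    # unspecified 'slice sharpness calculation').
    return float(cv2.Laplacian(defect_slice, cv2.CV_64F).var())

def attribute_depth(p_prev: np.ndarray, p_curr: np.ndarray) -> str:
    # Steps 3.9-3.12: a defect seen at both adjacent depths belongs to the plane
    # where it images more sharply; the other layer's contour goes to a removal list.
    return "layer n-1" if sharpness(p_prev) > sharpness(p_curr) else "layer n"
```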
What has been described above is only a preferred embodiment of the present application, and the invention is not limited to this embodiment. It should be understood that modifications and variations directly derived or suggested to those skilled in the art without departing from the spirit and scope of the invention all fall within the protection scope of the invention.

Claims (10)

1. A deep-learning-based method for automatically detecting preform appearance defects, characterized in that the method is implemented on a machine vision module of an automatic preform appearance defect detection mechanism, the machine vision module comprising a plurality of industrial cameras, a plurality of backlight sources and a transmission light source; each industrial camera is fixed on a mounting plate through a corresponding industrial camera bracket, an appearance inspection opening is provided in the mounting plate, the industrial cameras are arranged around the appearance inspection opening, and a first inscribed circle of the polygon formed by connecting the camera center points is concentric with the preform rod body passing through the appearance inspection opening; each backlight source is fixed on the mounting plate through a corresponding backlight source bracket, the backlight sources are arranged around the appearance inspection opening, a second inscribed circle of the polygon formed by connecting the backlight center points is concentric with the preform rod body passing through the appearance inspection opening, the first inscribed circle lies outside the second inscribed circle, and each industrial camera faces its corresponding backlight source; the transmission light source is arranged outside the mounting plate, and its center point is concentric with the preform rod body passing through the appearance inspection opening;
when the preform parallel portion is moved to the machine vision module in a direction toward the transmission light source, the method includes:
sequentially shooting a backlight-group image and a light-transmission-group image at different shooting depths for each section of the preform rod body, wherein the sections are divided along the length of the preform, the outer surface of the preform is set as the camera's first shooting depth, the remaining shooting depths advance layer by layer along the radial direction toward the center of the rod body, and the tail handle of the preform faces the transmission light source during the preform's movement;

acquiring surface defect integrated images of the preform for all shooting directions and internal defect integrated images for all shooting directions at the remaining shooting depths, which comprises:

extracting the backlight-group image and the light-transmission-group image corresponding to the first shooting depth; performing surface defect detection on the backlight-group images with a focal plane defect extraction method and filtering out defects far from the focal plane in each backlight image to obtain the backlight-group surface defect images corresponding to the first shooting depth; performing halo filtering on the light-transmission-group images and then carrying out surface defect detection with the focal plane defect extraction method, filtering out haze interference and defects far from the focal plane in each light-transmission image to obtain the light-transmission-group surface defect images corresponding to the first shooting depth;

merging the backlight-group surface defect images and the light-transmission-group surface defect images by shooting direction to obtain a surface defect integrated image for each shooting direction; and

taking the internal defect integrated images in the same shooting direction corresponding to adjacent shooting depths among the remaining shooting depths, performing defect point spatial matching, and attributing defect points that appear at both adjacent shooting depths to the nearer focal plane.
2. The method of claim 1, wherein sequentially shooting the backlight-group image and the light-transmission-group image at different shooting depths for each section of the preform rod body comprises:
determining the current shooting depth of the current section position of the preform rod body, calculating the lens target focal length of the industrial camera, and driving the lens to zoom;
the backlight sources are strobed sequentially according to a preset sequence, and the corresponding industrial cameras shoot backlight images of the current section and the current shooting depth during strobing to form a backlight group image;
closing all the backlight sources and opening the transmission light source, the transmission light source using a progressive lighting mode while all the industrial cameras simultaneously shoot light-transmission images of the current section at the current shooting depth, forming the light-transmission-group image;
and judging whether all shooting depth imaging is finished, if so, entering the next section of imaging of the preform rod body, otherwise, advancing the current shooting depth by one layer, and executing the step of calculating the lens target focal length of the industrial camera.
3. The method for automatically detecting preform appearance defects based on deep learning according to claim 1, wherein performing surface defect detection on the backlight-group images by the focal plane defect extraction method and filtering out the defects far from the focal plane in each backlight image to obtain the backlight-group surface defect images corresponding to the first shooting depth, which is similar to obtaining the light-transmission-group surface defect images corresponding to the first shooting depth by the focal plane defect extraction method, comprises:
carrying out binarization processing on the backlight-group images and outputting the binarization result map corresponding to each backlight image; performing the first and second convolution processing operations on the binarization result maps using the first convolution kernel and outputting the flaw contour closed map corresponding to each result map; carrying out contour detection on the flaw contour closed maps and outputting a flaw contour information list; calculating and outputting a flaw contour bounding box list from the flaw contour information list; cutting a defect slice for each defect point from the backlight-group image using the flaw contour bounding box list and outputting a defect slice set; for each defect slice in the defect slice set, performing reverse selection of the defect area using the corresponding contour point array in the flaw contour information list, removing the other defects captured in the current defect slice, and outputting a single-defect slice set; performing an adaptive contour extension operation on each slice in the single-defect slice set according to the flaw contour information list, expanding or cropping all slices to a size of 224 × 224, and outputting a size-normalized defect slice set; inputting the size-normalized defect slice set into the slice focal plane binary-classification convolutional neural network for batch image classification inference, the network outputting binary classification result values that comprise current-focal-plane flaws and non-current-focal-plane flaws for the first shooting depth, and outputting a defect slice focal plane class set; and removing the flaws judged not to be at the current focal plane from the backlight-group images according to the defect slice focal plane class set and the flaw contour information list, and outputting the backlight-group surface defect images corresponding to the current focal plane;
the differences for obtaining the light-transmission-group surface defect images corresponding to the first shooting depth are: carrying out binarization processing on the light-transmission-group images and inverting the result, outputting the binarization result map corresponding to each image; cutting the defect slice of each defect point from the light-transmission-group image using the flaw contour bounding box list and outputting the defect slice set; and finally outputting the light-transmission-group surface defect images corresponding to the current focal plane.
4. The method for automatically detecting preform appearance defects based on deep learning according to claim 1, wherein halo filtering is first performed on the light-transmission-group images to filter out the haze interference in each light-transmission image, comprising:
performing edge-preserving filtering on the light-transmission-group images using a bilateral filter and outputting the filtering result map corresponding to each light-transmission image; performing the third convolution processing operation on the filtering result maps using the second convolution kernel and outputting the flaw morphology map corresponding to each result map; carrying out binarization processing on the flaw morphology maps and outputting the binarization result map corresponding to each morphology map; performing the first and second convolution processing operations on the binarization result maps using the first convolution kernel and outputting the flaw contour closed map corresponding to each result map; carrying out contour detection on the flaw contour closed maps and outputting a flaw contour mask; and carrying out numerical negation on the flaw contour mask, filtering the background halo out of the light-transmission-group images using the negated flaw contour mask, and outputting the light-transmission-group flaw-filtered images.
5. The method for automatically detecting preform appearance defects based on deep learning according to claim 1, wherein taking the internal defect integrated images in the same shooting direction corresponding to adjacent shooting depths among the remaining shooting depths and performing defect point spatial matching comprises:
extracting internal defect integrated images in the same shooting direction corresponding to the shooting depth of the (n-1) th layer and the shooting depth of the nth layer, wherein n is the shooting depth, and the value range of n is [3,4] when the defect points are spatially matched; respectively carrying out first convolution processing operation and second convolution processing operation on the internal defect integrated images of the two shooting depths by using a first convolution kernel, and outputting defect outline closed graphs of the (n-1) th layer and the nth layer of shooting depths; calculating a defect contour overlapping area of the defect contour closed graph corresponding to the two shooting depths, and outputting a defect overlapping mask graph; carrying out contour detection on the defect overlapping mask image, and outputting a defect overlapping flaw contour information list; respectively carrying out contour detection on the flaw contour closed graphs of the two shooting depths, and outputting flaw contour information lists of the shooting depths of the (n-1) th layer and the nth layer; calculating a central point for each profile in the defect overlapping defect profile information list, and respectively matching the defect profiles containing the central points in the defect profile information lists of the two shooting depths; carrying out profile approximation calculation on the defect profiles of the shooting depths of the (n-1) th layer and the nth layer, and outputting a defect profile approximation value;
judging the flaw contour approximation value against the threshold: if it is smaller than the threshold, keeping the flaw contour; if it is larger than the threshold, extracting a defect slice from the corresponding internal defect integrated image according to the flaw contour of the (n-1)th-layer shooting depth, extracting a defect slice from the corresponding internal defect integrated image according to the flaw contour of the nth-layer shooting depth, performing slice sharpness calculation on the two defect slices respectively, and outputting the slice sharpness values of the (n-1)th-layer and nth-layer shooting depths; comparing the two slice sharpness values: if the sharpness value of the (n-1)th-layer shooting depth is less than or equal to that of the nth-layer shooting depth, adding the flaw contour of the (n-1)th-layer shooting depth to the N-1 focal plane flaw-removal list; otherwise, adding the flaw contour of the nth-layer shooting depth to the N focal plane flaw-removal list; generating a contour filtering mask from the N-1 focal plane flaw-removal list, filtering out the flaws in the internal defect integrated image of the (n-1)th-layer shooting depth that do not belong to the current focal plane, and outputting the flaw attribution map of the N-1 focal plane in the current shooting direction; and generating a contour filtering mask from the N focal plane flaw-removal list, filtering out the flaws in the internal defect integrated image of the nth-layer shooting depth that do not belong to the current focal plane, and outputting the flaw attribution map of the N focal plane in the current shooting direction.
6. The method of claim 2, wherein determining the current shooting depth comprises:
calculating a shot depth array DeepSet using the following formula:
[Formula computing the shooting depth array DeepSet from Orgw and Dia; published only as an image.]
wherein DeepSet_1 is the first shooting depth and DeepSet_2 to DeepSet_n are the remaining shooting depths inside the rod body; n is the shooting depth index, with a value range of [2, 4]; Orgw is the distance from the lens center point of the industrial camera to the center of the second inscribed circle; and Dia is the diameter of the preform.
7. The method of claim 2, wherein calculating the lens target focal length of the industrial camera and driving the lens to zoom comprises:
the calculation formula is as follows:
[Formula computing f_n from H_i, H_o and DeepSet_n; published only as an image.]
wherein f_n is the lens target focal length at the nth shooting depth, H_i is the height of the camera target surface, H_o is the height of the field of view, and DeepSet_n is the nth radial shooting depth of the preform, with n in the range [2, 4];
Calculating the focal length difference required to be adjusted by the lens according to the target focal length of the lens and the current focal length of the lens:
f_d = f_n - f_o
wherein f_o is the current focal length of the lens;
and driving a motor gear of a lens focusing ring by a control system in the lens according to the focal length difference to ensure that the current focal length of the lens is consistent with the target focal length of the lens.
8. The method of claim 3, wherein said performing an adaptive profile extending operation on each slice in said single defect slice set according to said defect profile information list to expand or crop all slices to 224 x 224 size and output a defect slice set with normalized dimensions comprises:
extracting the single defect slice collection and the defect outline information list, and calculating the width, the height and the center point of each slice;
if the width and the height are both less than or equal to 224, creating a 224 × 224 canvas matrix with its elements initialized to zero; copying the single-defect slice to the center of the canvas matrix, centered on the corresponding slice's center point; filling the zero elements in the canvas matrix with the edge pixels of the single-defect slice, and outputting the size-normalized defect slice set;

if the width and the height are both greater than 224, calculating the center of gravity of the flaw contour; expanding the canvas of the single-defect slice around the corresponding slice's center point, the width and height of the expanded slice each increasing by 224 and the horizontal and vertical coordinates of the center of gravity each increasing by 112; filling the zero elements in the expanded slice with the edge pixels of the pre-expansion slice; and cropping the expanded slice around the shifted center of gravity to obtain a defect slice of size 224 × 224, outputting the size-normalized defect slice set.
9. The method for automatically detecting preform appearance defects based on deep learning according to claim 3, wherein the slice focal plane binary-classification convolutional neural network uses a deep residual learning block structure as its network layer pattern, and at the end a max-pooling layer is connected to a fully connected (full-connection-2d) layer whose output is fed into a softmax layer to realize the binary classification of focal plane versus non-focal plane; the slice focal plane binary-classification convolutional neural network is trained with the size-normalized defect slice sets as samples, and the trained network effectively judges whether a flaw point lies on the current focal plane.
10. The method of claim 4, wherein the first convolution kernel rk1 has the following structure:
[Matrix of the first convolution kernel rk1; published only as an image.]
the first convolution processing operation R1 is:
[Definition of the first convolution processing operation R1; published only as an image.]
wherein (x, y) is the pixel coordinate of the anchor point of the first convolution kernel, (x′, y′) is the coordinate offset of a pixel around the anchor point in the first convolution kernel relative to the anchor point, FrameThreshold(·) represents the binarization result map, and FrameCloseR1 represents the flaw contour closed map after the first convolution processing;
the second convolution processing operation R2 is:
[Definition of the second convolution processing operation R2; published only as an image.]
wherein FrameCloseR2 represents the defect contour closure map after the second convolution processing;
the structure of the second convolution kernel rk2 is:
[Matrix of the second convolution kernel rk2; published only as an image.]
the third convolution processing operation R3 is:
[Definition of the third convolution processing operation R3; published only as an image.]
where FrameBF(·) represents the filtering result map and FramegrdR3(·) represents the flaw morphology map.
CN202111191037.3A 2021-10-13 2021-10-13 Prefabricated rod appearance defect automatic detection method based on deep learning Active CN113920090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111191037.3A CN113920090B (en) 2021-10-13 2021-10-13 Prefabricated rod appearance defect automatic detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111191037.3A CN113920090B (en) 2021-10-13 2021-10-13 Prefabricated rod appearance defect automatic detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN113920090A CN113920090A (en) 2022-01-11
CN113920090B (en) 2022-08-30

Family

ID=79239955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111191037.3A Active CN113920090B (en) 2021-10-13 2021-10-13 Prefabricated rod appearance defect automatic detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN113920090B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114152626B (en) * 2022-02-07 2022-05-24 盛吉盛(宁波)半导体科技有限公司 Method and device applied to defect height measurement
CN114519792B (en) * 2022-02-16 2023-04-07 无锡雪浪数制科技有限公司 Welding seam ultrasonic image defect identification method based on machine and depth vision fusion
CN114638807B (en) * 2022-03-22 2023-10-20 无锡雪浪数制科技有限公司 Metal plate surface defect detection method based on deep learning
CN116542967B (en) * 2023-06-29 2023-10-03 厦门微图软件科技有限公司 Method, device and equipment for detecting defects of lithium battery pole
CN117576088B (en) * 2024-01-15 2024-04-05 平方和(北京)科技有限公司 Intelligent liquid impurity filtering visual detection method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4492463A (en) * 1982-03-29 1985-01-08 At&T Bell Laboratories Method for inspecting multilayer transparent rods
US5406374A (en) * 1992-08-27 1995-04-11 Shin-Etsu Chemical Co., Ltd. Method for detecting bubbles and inclusions present in optical fiber preform and apparatus for detecting same
CN102706884A (en) * 2012-05-10 2012-10-03 江苏光迅达光纤科技有限公司 Device and method for detecting optical fibers
CN103698342A (en) * 2014-01-09 2014-04-02 浙江师范大学 Laser scattering-based optical-fiber prefabricated rod defect detection method
CN111103309A (en) * 2018-10-26 2020-05-05 苏州乐佰图信息技术有限公司 Method for detecting flaws of transparent material object
CN111175306A (en) * 2020-01-31 2020-05-19 武汉大学 Automatic bubble detection system and method for optical fiber preform based on machine vision
CN111289540A (en) * 2020-03-12 2020-06-16 华侨大学 Optical glass flaw detection device and thickness calculation method thereof
CN112824881A (en) * 2020-04-28 2021-05-21 奕目(上海)科技有限公司 System and method for detecting defects of transparent or semitransparent medium based on light field camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3014238B1 (en) * 2013-06-25 2018-03-07 Prysmian S.p.A. Method for detecting defects in a rod-shaped transparent object

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4492463A (en) * 1982-03-29 1985-01-08 At&T Bell Laboratories Method for inspecting multilayer transparent rods
US5406374A (en) * 1992-08-27 1995-04-11 Shin-Etsu Chemical Co., Ltd. Method for detecting bubbles and inclusions present in optical fiber preform and apparatus for detecting same
CN102706884A (en) * 2012-05-10 2012-10-03 江苏光迅达光纤科技有限公司 Device and method for detecting optical fibers
CN103698342A (en) * 2014-01-09 2014-04-02 浙江师范大学 Laser scattering-based optical-fiber prefabricated rod defect detection method
CN111103309A (en) * 2018-10-26 2020-05-05 苏州乐佰图信息技术有限公司 Method for detecting flaws of transparent material object
CN111175306A (en) * 2020-01-31 2020-05-19 武汉大学 Automatic bubble detection system and method for optical fiber preform based on machine vision
CN111289540A (en) * 2020-03-12 2020-06-16 华侨大学 Optical glass flaw detection device and thickness calculation method thereof
CN112824881A (en) * 2020-04-28 2021-05-21 奕目(上海)科技有限公司 System and method for detecting defects of transparent or semitransparent medium based on light field camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Machine-vision-based internal defect detection for optical fiber preforms; 王飞舟; China Master's Theses Full-text Database, Information Science and Technology; 2015-02-15 (No. 02); I138-988 *
Research and application of machine-vision-based preform flaw detection; 李杭; China Master's Theses Full-text Database, Engineering Science and Technology I; 2020-07-15 (No. 07); A005-88 *

Also Published As

Publication number Publication date
CN113920090A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN113920090B (en) Prefabricated rod appearance defect automatic detection method based on deep learning
CN111007073B (en) Method and system for online detection of part defects in additive manufacturing process
CN111650210B (en) Burr detection method and detection system for high-speed high-precision lithium ion battery pole piece
CN110018167B (en) Method and system for rapidly detecting appearance defects of curved screen
CN111612737B (en) Artificial board surface flaw detection device and detection method
CN103186887B (en) Image demister and image haze removal method
CN110706224B (en) Optical element weak scratch detection method, system and device based on dark field image
CN116990323B (en) High-precision printing plate visual detection system
CN112419237B (en) Deep learning-based automobile clutch master cylinder groove surface defect detection method
CN114120317B (en) Optical element surface damage identification method based on deep learning and image processing
CN112614105B (en) Depth network-based 3D point cloud welding spot defect detection method
CN114926407A (en) Steel surface defect detection system based on deep learning
CN111474179A (en) Lens surface cleanliness detection device and method
CN112200790B (en) Cloth defect detection method, device and medium
CN111062961A (en) Contact lens edge defect detection method based on deep learning
CN112529893A (en) Hub surface flaw online detection method and system based on deep neural network
CN111338051B (en) Automatic focusing method and system based on TFT liquid crystal panel
CN113240647A (en) Mobile phone shell rear cover defect detection method and system based on deep learning
CN117309892B (en) Defect detection method, device and system for blue film of battery and light source controller
JP2010130549A (en) Contamination detection device and method of detecting contamination of photographic device
CN112750113B (en) Glass bottle defect detection method and device based on deep learning and linear detection
CN117761060A (en) Visual detection system and detection method thereof
CN115797314B (en) Method, system, equipment and storage medium for detecting surface defects of parts
CN117197108A (en) Optical zoom image quality evaluation method, system, computer device and medium
CN115791801A (en) 3D glass on-line monitoring platform based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant