US20190057498A1 - Method and system for detecting line defects on surface of object - Google Patents
- Publication number: US20190057498A1 (application US 15/678,310)
- Authority: US (United States)
- Legal status: Granted (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06V10/764—Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06K9/6256
- G06K9/6267
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/443—Local feature extraction by analysis of parts of the pattern, by matching or filtering
- G06T2207/10152—Varying illumination
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20032—Median filtering
- G06T2207/20036—Morphological image processing
- G06T2207/20061—Hough transform
- G06T2207/20076—Probabilistic image processing
- G06T2207/20081—Training; Learning
- G06T2207/20132—Image cropping
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- This invention relates generally to a method and a system for detecting a line defect on a surface of an object.
- Defect detection on a surface of an object is an important aspect of the quality assurance process in industrial production and may provide an important guarantee of product quality.
- Defects may include line defects on a surface of an object.
- Methods of line defect detection may include applying edge detection on an image of the surface of the object.
- the edges detected in the image may include a plurality of false detections; for example, borders of the image may be falsely detected as line defects.
- aspects of the present invention relate to a method and a system for detecting a line defect on a surface of an object.
- a method for detecting a line defect on a surface of an object comprises supporting the object on a platform.
- the method comprises illuminating the surface of the object with a plurality of illumination sources comprising at least one ambient illumination source and at least one dark field illumination source.
- the method comprises capturing images of the surface of the object under illumination conditions with the illumination sources using an imaging device.
- the method comprises processing the captured images with a plurality of image operations using an image processor to detect areas of potential defects at locations on the surface of the object.
- the method comprises cutting the areas of the potential defects from the processed images into sub images using the image processor.
- the method comprises stitching the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object using the image processor.
- the method comprises classifying the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects using the image processor.
- the classifier is trained with training data having characteristics of the true defects.
- the detected true defects are discrete true defects at the locations on the surface of the object.
- the method comprises determining whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion.
- the method comprises generating an output comprising the line defect on the surface of the object.
- a system for detecting a line defect on a surface of an object comprises a platform for supporting the object.
- the system comprises a plurality of illumination sources comprising at least one ambient illumination source and at least one dark field illumination source for illuminating the surface of the object.
- the system comprises an imaging device for capturing images of the surface of the object under illumination conditions with the illumination sources.
- the system comprises an image processor.
- the image processor processes the captured images with a plurality of image operations to detect areas of potential defects at locations on the surface of the object.
- the image processor cuts the areas of the potential defects from the processed images into sub images.
- the image processor stitches the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object.
- the image processor classifies the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects.
- the classifier is trained with training data having characteristics of the true defects.
- the detected true defects are discrete true defects at the locations on the surface of the object.
- the image processor determines whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion.
- the image processor generates an output comprising the line defect on the surface of the object.
- a computer program executable in a computer for performing a method of detecting a line defect on a surface of an object stores images of the surface of the object captured under illumination conditions with illumination sources comprising at least one ambient illumination source and at least one dark field illumination source.
- the method comprises a step of processing the images with a plurality of image operations to detect areas of potential defects at locations on the surface of the object.
- the method comprises a step of cutting the areas of the potential defects from the processed images into sub images.
- the method comprises a step of stitching the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object.
- the method comprises a step of classifying the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects.
- the classifier is trained with training data having characteristics of the true defects.
- the detected true defects are discrete true defects at the locations on the surface of the object.
- the method comprises a step of determining whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion.
- the method comprises a step of generating an output comprising the line defect on the surface of the object.
- FIG. 1 illustrates a schematic side view of a system for detecting a defect at a surface of an object according to an embodiment of the invention
- FIG. 2 illustrates a schematic top view of a system for detecting a defect at a surface of an object according to an embodiment of the invention
- FIG. 3 illustrates a schematic diagram of a pattern consisting of a bright region and a shadow region of a defect on a surface under a dark field illumination source according to an embodiment of the invention
- FIG. 4 illustrates a schematic flow chart of a method for detecting a line defect at a surface of an object according to an embodiment of the invention.
- FIG. 5 illustrates a schematic flow chart of a step for processing images of the method as illustrated in FIG. 4 according to an embodiment of the invention.
- FIGS. 1 and 2 respectively illustrate a schematic side view and top view of a system 100 for detecting a defect at a surface 112 of an object 110 according to an embodiment of the invention.
- the system 100 may include a platform 120 that supports the object 110 .
- the system 100 may include a motor 122 .
- the platform 120 may be movable along at least one direction by the motor 122 .
- the motor 122 may have a motor controller 124 that controls a movement of the platform 120 .
- the system 100 may include a hood 130 arranged above the platform 120 .
- the hood 130 may have a hollow cone shape.
- the system 100 may have an imaging device 140 .
- the imaging device 140 may be arranged inside the hood 130 .
- the imaging device 140 may be located at top of the hood 130 .
- the imaging device 140 may include, for example, a camera.
- the imaging device 140 may include a lens 142 .
- the lens 142 may be arranged at a location relative to the surface 112 of the object 110 such that the imaging device 140 may have a desirable field of view 144 on the surface 112 of the object 110 .
- the imaging device 140 may pan and tilt relative to the surface 112 of the object 110 to achieve a desired field of view 144 on the surface 112 of the object 110 .
- the motor 122 may move the platform 120 along with the object 110 so that the field of view 144 of the imaging device 140 may cover different areas of the surface 112 of the object 110 .
- the system 100 may have at least one ambient illumination source 150 .
- the ambient illumination source 150 may be arranged inside the hood 130 .
- the ambient illumination source 150 may be located at top of the hood 130 .
- the ambient illumination source 150 may provide an ambient illumination condition at the surface 112 of the object 110 .
- the ambient illumination source 150 may include, for example, a light emitting diode (LED) strobe light.
- the ambient illumination source 150 may have a ring shape. According to an exemplary embodiment as illustrated in FIG. 1 , the lens 142 may extend through the ring shaped ambient illumination source 150 .
- the system 100 may include at least one dark field illumination source 160 .
- the dark field illumination source 160 may be arranged at bottom of the hood 130 .
- the dark field illumination source 160 may provide a dark field illumination condition on the surface 112 of the object 110 .
- the dark field illumination source 160 may include, for example, a LED strobe light.
- the dark field illumination source 160 may be oriented at a location relative to the surface 112 of the object 110 such that a dark field illumination condition may be provided within the field of view 144 of the lens 142 at the surface 112 of the object 110 . Under a dark field illumination condition, a defect may have a predictable pattern consisting of a bright region that is illuminated by the dark field illumination source 160 and a dark or shadow region that is not illuminated by the dark field illumination source 160 .
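The bright-over-shadow signature described above can be sketched as a simple mask operation. The thresholds `hi` and `lo`, the vertical orientation of the pattern, and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def defect_pattern_mask(img, hi=0.7, lo=0.2):
    """Flag pixels where a bright region sits directly above a shadow region,
    the signature described for a defect under one-sided dark field light.
    Thresholds hi/lo are illustrative, not from the patent."""
    bright = img > hi
    shadow = img < lo
    # a bright pixel whose immediate neighbor below is in shadow
    return bright[:-1, :] & shadow[1:, :]
```

In practice the matched patterns depend on which dark field illumination source is lit, which is what the kernels defined later in the description encode.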
- FIG. 2 illustrates a schematic top view of the system 100
- four dark field illumination sources 160 are arranged at bottom of the hood 130 .
- Two dark field illumination sources 160 are arranged along an x-axis in a plane of the field of view 144 .
- the two dark field illumination sources 160 are located at two sides of the field of view 144 respectively, denoted x-positive and x-negative.
- Two other dark field illumination sources 160 are arranged along a y-axis in the plane of the field of view 144 .
- the two other dark field illumination sources 160 are located at two sides of the field of view 144 respectively, denoted y-positive and y-negative.
- Different numbers of dark field illumination sources 160 may be arranged at bottom of the hood 130 .
- the system 100 may include a trigger controller 170 .
- the trigger controller 170 may functionally connect to the imaging device 140 , the ambient illumination source 150 and the dark field illumination sources 160 .
- the trigger controller 170 may trigger the ambient illumination source 150 and the dark field illumination sources 160 in a defined pattern or sequence, or simultaneously.
- the trigger controller 170 may trigger the imaging device 140 to capture images of the surface 112 of the object 110 under the triggered illumination conditions respectively.
- the trigger controller 170 may control configurations of the dark field illumination sources 160 .
- the configurations of the dark field illumination sources 160 may include orientations of the dark field illumination sources 160 relative to the surface 112 of the object 110 , illumination intensities of the dark field illumination sources 160 , etc.
- the trigger controller 170 may be a computer having a computer program implemented.
- the system 100 may include an image processor 180 .
- the image processor 180 may functionally connect to the imaging device 140 .
- the image processor 180 may process the images captured by the imaging device 140 to detect defects on the surface 112 of the object 110 .
- the image processor 180 may be a computer having a computer program implemented.
- the trigger controller 170 and the image processor 180 may be integrated parts of one computer.
- the system 100 may include a display device 190 functionally connected to the image processor 180 .
- the display device 190 may display the captured images.
- the display device 190 may display the processed images.
- the display device 190 may display an output including information of a detected defect.
- the display device 190 may be a monitor.
- the display device 190 may be an integrated part of the image processor 180 .
- a shape of a potential defect may have a specific and predictable pattern consisting of bright region and shadow region in an image under different configurations of the dark field illumination sources 160 .
- FIG. 3 illustrates a schematic diagram of a pattern consisting of a bright region 115 and a shadow region 117 of a defect 113 on a surface 112 under a dark field illumination source 160 according to an embodiment.
- a surface 112 may have a V-shaped defect 113 .
- a first portion 114 of the V-shaped defect 113 is illuminated by the dark field illumination source 160 , which forms a bright region 115 in an image.
- a second portion 116 of the V-shaped defect 113 is not illuminated by the dark field illumination source 160 , which forms a shadow region 117 in the image.
- the bright region 115 and the shadow region 117 are within a field of view 144 of an imaging device 140 .
- the V-shaped defect 113 may be a micro defect; for example, the scale of the V-shaped defect 113 may be as small as micrometers.
- FIG. 4 illustrates a schematic flow chart of a method 200 for detecting a line defect at a surface 112 of an object 110 using an image processor 180 according to an embodiment of the invention.
- the imaging device 140 may be calibrated relative to the surface 112 of the object 110 to be inspected.
- the calibration may estimate parameters of the lens 142 of the imaging device 140 to correct distortion of the lens 142 of the imaging device 140 when capturing images of the surface 112 of the object 110 .
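Lens distortion correction of this kind is commonly modeled as first-order radial distortion. The patent does not name a lens model, so the following sketch assumes the standard form p_u = c + (p_d − c)(1 + k1·r²) with a hypothetical coefficient `k1` and distortion center `(cx, cy)`:

```python
import numpy as np

def undistort_points(points, k1, cx, cy):
    """First-order radial distortion correction (a common lens model; the
    patent does not specify which lens parameters are estimated).
    p_u = c + (p_d - c) * (1 + k1 * r^2), r = distance from center c."""
    p = np.asarray(points, dtype=float) - [cx, cy]
    r2 = (p ** 2).sum(axis=1, keepdims=True)
    return p * (1.0 + k1 * r2) + [cx, cy]
```

With `k1 = 0` the mapping is the identity; a calibration procedure would estimate `k1` (and typically higher-order terms) from images of a known target.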
- the ambient illumination source 150 and the dark field illumination sources 160 may be triggered by the trigger controller 170 to illuminate the surface 112 of the object 110 with an ambient illumination condition and dark field illumination conditions.
- the imaging device 140 may be triggered by the trigger controller 170 to capture images of the surface 112 of the object 110 under the ambient illumination condition and the dark field illumination conditions respectively. Each image captures a field of view 144 of the surface 112 of the object 110 under different illumination conditions.
- Each image may contain potential defects having specific shapes on the surface 112 of the object 110 . Shapes of the potential defects have predictable patterns consisting of bright region and shadow region based on configurations of the dark field illumination sources 160 .
- the captured images of the surface 112 of the object 110 are processed by the image processor 180 .
- the processing step 230 may implement a plurality of image operations to the captured images to detect areas of potential defects at locations on the surface 112 of the object 110 .
- a plurality of areas may be detected having potential defects. Each area may have a potential defect.
- the locations of the plurality of areas may be represented by, such as x, y locations on a plane of the surface 112 .
- the potential defects may be detected by patterns consisting of bright region and shadow region in the processed images.
- the plurality of image operations may enhance shapes of potential defects and reduce false detection rate.
- At step 240 , the areas showing the potential defects at locations on the surface 112 of the object 110 are cut from the processed images into sub images. The size of each area to be cut may be small enough to detect a micro defect. For example, the size of each area may be less than 100 by 100 pixels, depending on the resolution of the image. Areas that do not show indications of potential defects may be pruned out and do not need further processing.
- At step 250 , sub images having potential defects at the same location on the surface 112 of the object 110 are stitched together to generate a set of hypotheses of the potential defects at the locations on the surface 112 of the object 110 .
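The cutting and stitching steps can be sketched with plain array slicing and stacking. The clamping behavior at image borders and the side-by-side stitching layout are assumptions; the patent specifies only the sub image size bound and that co-located sub images are combined:

```python
import numpy as np

def cut_area(img, cx, cy, size=100):
    """Cut a size x size sub image centered on a potential defect at (cx, cy),
    clamped to the image borders (size per the <100x100 pixel example)."""
    h, w = img.shape
    x0 = min(max(cx - size // 2, 0), max(w - size, 0))
    y0 = min(max(cy - size // 2, 0), max(h - size, 0))
    return img[y0:y0 + size, x0:x0 + size]

def stitch(sub_images):
    """Stitch co-located sub images from different illumination conditions
    side by side into one hypothesis image (layout is an assumption)."""
    return np.hstack(sub_images)
```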
- the stitched images are classified with a classifier to determine whether the hypotheses of the potential defects are true defects on the surface 112 of the object 110 .
- the classifier may be trained with a training data having characteristics of a true defect.
- the classification outputs a plurality of discrete true defects at the locations on the surface 112 of the object 110 .
- a random forest classifier may be used to classify the potential defects.
- the random forest classifier may classify hypotheses with high efficiency and scalability in large scale applications.
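A random forest classification of hypotheses might look like the following sketch, using scikit-learn. The three features (mean intensity, elongation, contrast) and their values are hypothetical; the patent does not specify the feature set extracted from the stitched images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-hypothesis features: [mean intensity, elongation, contrast].
# In the real system these would be computed from the stitched sub images.
X_true = rng.normal([0.8, 5.0, 0.6], 0.05, size=(50, 3))   # true-defect examples
X_false = rng.normal([0.3, 1.0, 0.1], 0.05, size=(50, 3))  # false-alarm examples
X = np.vstack([X_true, X_false])
y = np.array([1] * 50 + [0] * 50)  # 1 = true defect

# Train on data having characteristics of the true defects, as the text states
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

Ensembles of shallow trees scale well to many hypotheses, which matches the efficiency and scalability claim above.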
- whether the discrete true defects consist of a line defect on the surface 112 of the object 110 is determined by refining line segments detected on one of the processed images.
- the line segments are refined based on certain criteria.
- Line segments may be detected on one of the processed images by step 242 and step 244 .
- edges are detected on one of the processed images by applying edge detection.
- the edges may be detected, for example, by a Canny edge detector.
- line segments may be detected from the edges by applying Hough transform.
- A probabilistic Hough transform may be used for detecting the line segments.
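The Hough voting idea can be illustrated with a minimal, dependency-free accumulator over a binary edge map (such as a Canny output). This finds the single strongest line; a production system would use a probabilistic variant such as OpenCV's `HoughLinesP` to recover finite segments:

```python
import numpy as np

def hough_strongest_line(edges, n_theta=180):
    """Minimal Hough transform: vote in (rho, theta) space over a binary edge
    map and return the strongest line as (rho, theta), where
    rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(edges)
    diag = int(np.ceil(np.hypot(*edges.shape)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    # shift rho by +diag so accumulator indices are non-negative
    rho = np.round(xs[:, None] * np.cos(thetas)
                   + ys[:, None] * np.sin(thetas)).astype(int) + diag
    for j in range(n_theta):
        np.add.at(acc[:, j], rho[:, j], 1)  # one vote per edge pixel per angle
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]
```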
- a criterion for refining the line segments may include that a distance from each of the discrete true defects to each of the line segments is less than a threshold value.
- the threshold value may be, for example, 5 pixels, or 8 pixels, or 10 pixels.
- the distance may be a perpendicular distance from a center of the discrete true defect to each of the line segments.
- a criterion for refining the line segments may include that a difference between a slope of each of the discrete true defects and a slope of each of the line segments is less than a threshold value.
- the threshold value may be, for example, in a range of −7 to 7 degrees, or in a range of −5 to 5 degrees, or in a range of −3 to 3 degrees.
- the slope of each of the discrete true defects may be obtained by Hough transform.
- the threshold values for distance and slope may be defined depending on the resolution of the image.
- the line defect is detected by iteratively removing the line segments that do not satisfy the criteria.
- the line defect may consist of a plurality of connected line segments.
- the line defect may have a curved line shape.
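The two refinement criteria (perpendicular distance below a pixel threshold, slope difference below a degree threshold) can be sketched as a filter over candidate segments. The tuple representations, the default thresholds, and the single-pass removal are assumptions for illustration; the patent refines iteratively:

```python
import numpy as np

def seg_angle(x1, y1, x2, y2):
    """Orientation of a segment, folded into [0, pi)."""
    return np.arctan2(y2 - y1, x2 - x1) % np.pi

def perp_dist(px, py, x1, y1, x2, y2):
    """Perpendicular distance from a point to the segment's supporting line."""
    dx, dy = x2 - x1, y2 - y1
    return abs(dx * (y1 - py) - dy * (x1 - px)) / np.hypot(dx, dy)

def refine_segments(segments, defects, dist_t=5.0, slope_t=np.deg2rad(5.0)):
    """Keep only segments supported by at least one discrete true defect:
    defect center within dist_t pixels of the line and orientation within
    slope_t radians. segments: (x1, y1, x2, y2); defects: (cx, cy, angle)."""
    kept = []
    for x1, y1, x2, y2 in segments:
        a = seg_angle(x1, y1, x2, y2)
        for cx, cy, da in defects:
            d_angle = abs(a - da) % np.pi
            d_angle = min(d_angle, np.pi - d_angle)  # wrap angular difference
            if perp_dist(cx, cy, x1, y1, x2, y2) < dist_t and d_angle < slope_t:
                kept.append((x1, y1, x2, y2))
                break
    return kept
```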
- an output is generated.
- the output may include the detected line defect on the surface 112 of the object 110 .
- the output may be a report form.
- the output may be an image with the detected true defects marked at the locations.
- the image may be one of the captured images or one of the processed images.
- the output may be stored in the image processor 180 , or displayed on the display device 190 , or printed out by a printer.
- FIG. 5 illustrates a schematic flow chart of a step 230 for processing images of the method 200 as illustrated in FIG. 4 according to an embodiment of the invention.
- the trigger controller 170 may sequentially turn the ambient illumination source 150 and the dark field illumination sources 160 on and off.
- the trigger controller 170 may trigger the imaging device 140 to sequentially capture images of the surface 112 of the object 110 under ambient illumination condition and dark field illumination conditions respectively.
- the dark field illumination sources 160 may be sequentially turned on and off by the trigger controller 170 so that images are sequentially captured by the imaging device 140 under sequential dark field illumination conditions.
- The image captured with the ambient illumination source 150 turned on is denoted as image_amb.
- Image captured with the dark field illumination source 160 located on x-positive position turned on is denoted as image_xpos.
- Image captured with the dark field illumination source 160 located on x-negative position turned on is denoted as image_xneg.
- Image captured with the dark field illumination source 160 located on y-positive location turned on is denoted as image_ypos.
- Image captured with the dark field illumination source 160 located on y-negative location turned on is denoted as image_yneg.
- convolution operations are implemented to the captured images with corresponding kernels.
- Convolution operations may filter noise in the captured images.
- Kernels are defined corresponding to predefined configurations of the dark field illumination sources 160 to enhance detecting a potential defect in the captured images based on a pattern consisting of bright region and shadow region under the predefined configurations of the dark field illumination sources 160 .
- The shape of a potential defect, such as the size or length of the potential defect, may have a specific and predictable pattern consisting of bright region and shadow region in an image under different configurations of the dark field illumination sources 160 .
- kernels for the five images captured with certain predefined configurations of the dark field illumination sources 160 may be defined as follows:
- kernel_xpos = [−1 −1 −1 −1 −1 −1 1 1 0 0 0] is the convolution operator for image_xpos
- kernel_xneg = [0 0 0 0 1 1 −1 −1 −1 −1] is the convolution operator for image_xneg
- kernel_ambx = [1 1 1 −1 −1 −1 −1 −1 1 1 1] is a convolution operator for image_amb
- kernel_ypos is the convolution operator for image_ypos
- kernel_yneg is the convolution operator for image_yneg
- kernel_amby is a convolution operator for image_amb
- kernel_ambx_y is a convolution operator for image_amb
- the kernels may be redefined to detect potential defects in the captured images once configurations of the dark field illumination sources 160 changes, such as orientation, intensities, etc.
- the kernels are redefined based on different patterns of potential defects consisting of bright region and shadow region in the captured images under different configurations of the dark field illumination sources 160 .
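Applying one of these kernels might look like the following sketch. The original kernel shapes are not fully recoverable from the text, so the coefficient list is treated here as a one-column (vertical) kernel; this is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

# The coefficient list from the text, interpreted as a vertical kernel:
# negative taps over one side of the defect pattern, positive over the other.
kernel_xpos = np.array([-1, -1, -1, -1, -1, -1, 1, 1, 0, 0, 0],
                       dtype=float).reshape(-1, 1)

def enhance(image, kernel):
    """Convolve a captured image with its illumination-specific kernel to
    accentuate the bright/shadow transition of a potential defect."""
    return convolve(image.astype(float), kernel, mode="nearest")
```

A signed transition in the image (bright region meeting shadow region) produces a strong response where the kernel's positive and negative taps line up with it.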
- dilation operations are implemented to the images filtered by the convolutions.
- Dilation operations are morphological operations that may probe and expand shapes of potential defects in the filtered images using structuring elements.
- three-by-three flat structuring elements may be used in the dilation operations at step 232 .
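The three-by-three flat dilation can be sketched with `scipy.ndimage`; each response is expanded to its 3×3 neighborhood, which makes the subsequent multiply step tolerant to small misalignments between the differently illuminated images:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def dilate3x3(img):
    """Dilation with a three-by-three flat structuring element: each pixel
    takes the maximum of its 3x3 neighborhood."""
    return grey_dilation(img, size=(3, 3))
```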
- At step 233 , multiply operations are implemented to the convoluted and dilated images.
- the multiply operations may further filter noise in the images.
- images captured with the dark field illumination sources 160 located on the x-axis, including image_xpos and image_xneg after convolution using kernel_xpos and kernel_xneg respectively and dilation, are multiplied with image_amb, after convolution using kernel_ambx and dilation, into one image.
- images captured with the dark field illumination sources 160 located on the y-axis, including image_ypos and image_yneg after convolution using kernel_ypos and kernel_yneg respectively and dilation, are multiplied with image_amb, after convolution using kernel_amby and dilation, into another image.
- At step 234 , median filtering operations are implemented to the multiplied images for further filtering.
- Median filtering operations may preserve potential defects in the image while removing noise.
- the output images after the median filtering operations may be denoted as image_x and image_y and may be output as two output images of the image processing step 230 .
- a magnitude operation is implemented to the two images image_x and image_y after the median filtering operations. The magnitude operation may maximize signal-to-noise ratio.
- the output image after the magnitude operation is multiplied with image_amb, after convolution using kernel_ambx_y and dilation, into one image denoted as image_xy.
- the image processing step 230 outputs three processed images.
- the three output processed images may include image_x, image_y, and image_xy.
- the three output processed images are enhanced from the captured images to detect areas of potential defects at locations on the surface 112 of the object 110 . Areas of potential defects in each of the three output processed images are cut to sub images in step 240 .
- Image_xy may be used in step 242 for edge detection and followed by line segment detection in step 244 .
- the proposed system 100 and method use a plurality of image processing techniques, such as image enhancement, morphological operation and machine learning tools including hypothesis generation and classification to accurately detect and quantify defects on any type of surfaces 112 of any objects 110 without relying on strong assumptions on characteristics of defects.
- the proposed system 100 and method iteratively prune false defects and detect true defect by focusing on smaller areas on a surface 112 of an object 110 .
- Micro defects may be detected on a surface 112 of an object 110 .
- the micro defects may be a micro crack.
- the micro defects may be as small as micrometers.
- the micro defects are further processed to detect significant line defects on a surface 112 of an object 110 by iteratively refining line segments detected on the enhanced image.
- the proposed system 100 and method may be used in power generation industry to accurately detect and quantify line defects on surfaces of generator wedges.
- the proposed system 100 and method may be automatically operated by a computer to detect line defects on a surface 112 of an object 110 .
- the proposed system 100 and method may provide efficient automated line defect detection on a surface 112 of an object 110 .
- the proposed system 100 and method may provide a plurality of advantages in detecting line defects on a surface 112 of an object 110 , such as higher detection accuracy, cost reduction, and consistent detection performance, etc.
Abstract
Description
- This invention relates generally to a method and a system for detecting a line defect on a surface of an object.
- Defect detection on a surface of an object is an important aspect of the industrial production quality assurance process and may provide an important guarantee for product quality. Defects may include line defects on a surface of an object. Methods of line defect detection may include applying edge detection to an image of the surface of the object. However, the edges detected in the image may include a plurality of false detections; for example, borders of the image may be mistakenly detected as true line defects.
- Briefly described, aspects of the present invention relate to a method and a system for detecting a line defect on a surface of an object.
- According to an aspect, a method for detecting a line defect on a surface of an object is presented. The method comprises supporting the object on a platform. The method comprises illuminating the surface of the object with a plurality of illumination sources comprising at least one ambient illumination source and at least one dark field illumination source. The method comprises capturing images of the surface of the object under illumination conditions with the illumination sources using an imaging device. The method comprises processing the captured images with a plurality of image operations using an image processor to detect areas of potential defects at locations on the surface of the object. The method comprises cutting the areas of the potential defects from the processed images into sub images using the image processor. The method comprises stitching the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object using the image processor. The method comprises classifying the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects using the image processor. The classifier is trained with training data having characteristics of the true defects. The detected true defects are discrete true defects at the locations on the surface of the object. The method comprises determining whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion. The method comprises generating an output comprising the line defect on the surface of the object.
- According to an aspect, a system for detecting a line defect on a surface of an object is presented. The system comprises a platform for supporting the object. The system comprises a plurality of illumination sources comprising at least one ambient illumination source and at least one dark field illumination source for illuminating the surface of the object. The system comprises an imaging device for capturing images of the surface of the object under illumination conditions with the illumination sources. The system comprises an image processor. The image processor processes the captured images with a plurality of image operations to detect areas of potential defects at locations on the surface of the object. The image processor cuts the areas of the potential defects from the processed images into sub images. The image processor stitches the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object. The image processor classifies the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects. The classifier is trained with training data having characteristics of the true defects. The detected true defects are discrete true defects at the locations on the surface of the object. The image processor determines whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion. The image processor generates an output comprising the line defect on the surface of the object.
- According to an aspect, a computer program executable in a computer for performing a method of detecting a line defect on a surface of an object is presented. The computer stores images of the surface of the object under illumination conditions with illumination sources comprising at least one ambient illumination source and at least one dark field illumination source. The method comprises the step of processing the images with a plurality of image operations to detect areas of potential defects at locations on the surface of the object. The method comprises the step of cutting the areas of the potential defects from the processed images into sub images. The method comprises the step of stitching the sub images at the same location together to generate a set of hypotheses of the potential defects at the locations on the surface of the object. The method comprises the step of classifying the hypotheses in the stitched images with a classifier to determine whether the potential defects are true defects. The classifier is trained with training data having characteristics of the true defects. The detected true defects are discrete true defects at the locations on the surface of the object. The method comprises the step of determining whether the discrete true defects consist of a line defect by refining line segments detected on one of the processed images based on a criterion. The method comprises the step of generating an output comprising the line defect on the surface of the object.
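- The cut, stitch, and classify sequence recited in these aspects can be sketched as follows. The window size, the use of raw pixels as classifier features, and the random forest settings are illustrative assumptions made for this sketch only; the disclosure does not prescribe them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cut_sub_image(image, cx, cy, size=100):
    """Cut a size x size window centered on (cx, cy), clamped to the image bounds."""
    h, w = image.shape
    x0 = max(0, min(cx - size // 2, w - size))
    y0 = max(0, min(cy - size // 2, h - size))
    return image[y0:y0 + size, x0:x0 + size]

def stitch_hypothesis(processed_images, cx, cy, size=100):
    """Stitch the same-location sub images from each processed image side by side."""
    return np.hstack([cut_sub_image(img, cx, cy, size) for img in processed_images])

# Three processed images (stand-ins for image_x, image_y, image_xy) and one
# potential-defect location near the image border
rng = np.random.default_rng(0)
processed = [rng.random((480, 640)) for _ in range(3)]
hypothesis = stitch_hypothesis(processed, cx=620, cy=30)
print(hypothesis.shape)  # (100, 300) even with the window clamped at the border

# Classify hypotheses with a random forest trained on labeled examples
# (toy random training data here; real labels would come from annotated defects)
X_train = rng.random((50, 300 * 100))
y_train = rng.integers(0, 2, 50)  # 1 = true defect, 0 = false alarm
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)
is_true_defect = clf.predict(hypothesis.reshape(1, -1))
```

The clamping in `cut_sub_image` keeps every sub image the same size, so stitched hypotheses can be batched into one feature matrix for the classifier.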
- Various aspects and embodiments of the application as described above and hereinafter may not only be used in the combinations explicitly described, but also in other combinations. Modifications will occur to the skilled person upon reading and understanding of the description.
- Exemplary embodiments of the application are explained in further detail with respect to the accompanying drawings. In the drawings:
-
FIG. 1 illustrates a schematic side view of a system for detecting a defect at a surface of an object according to an embodiment of the invention; -
FIG. 2 illustrates a schematic top view of a system for detecting a defect at a surface of an object according to an embodiment of the invention; -
FIG. 3 illustrates a schematic diagram of a pattern consisting of a bright region and a shadow region of a defect on a surface under a dark field illumination source according to an embodiment of the invention; -
FIG. 4 illustrates a schematic flow chart of a method for detecting a line defect at a surface of an object according to an embodiment of the invention; and -
FIG. 5 illustrates a schematic flow chart of a step for processing images of the method as illustrated in FIG. 4 according to an embodiment of the invention. - To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- A detailed description related to aspects of the present invention is provided hereafter with respect to the accompanying figures.
-
FIGS. 1 and 2 respectively illustrate a schematic side view and top view of a system 100 for detecting a defect at a surface 112 of an object 110 according to an embodiment of the invention. The system 100 may include a platform 120 that supports the object 110. The system 100 may include a motor 122. The platform 120 may be movable along at least one direction by the motor 122. The motor 122 may have a motor controller 124 that controls a movement of the platform 120. - The
system 100 may include a hood 130 arranged above the platform 120. The hood 130 may have a hollow cone shape. The system 100 may have an imaging device 140. The imaging device 140 may be arranged inside the hood 130. The imaging device 140 may be located at the top of the hood 130. The imaging device 140 may include, for example, a camera. The imaging device 140 may include a lens 142. The lens 142 may be arranged at a location relative to the surface 112 of the object 110 such that the imaging device 140 may have a desirable field of view 144 on the surface 112 of the object 110. The imaging device 140 may pan and tilt relative to the surface 112 of the object 110 to achieve a desired field of view 144 on the surface 112 of the object 110. The motor 122 may move the platform 120 along with the object 110 so that the field of view 144 of the imaging device 140 may cover different areas of the surface 112 of the object 110. - The
system 100 may have at least one ambient illumination source 150. The ambient illumination source 150 may be arranged inside the hood 130. The ambient illumination source 150 may be located at the top of the hood 130. The ambient illumination source 150 may provide an ambient illumination condition at the surface 112 of the object 110. The ambient illumination source 150 may include, for example, a light emitting diode (LED) strobe light. The ambient illumination source 150 may have a ring shape. According to an exemplary embodiment as illustrated in FIG. 1, the lens 142 may extend through the ring-shaped ambient illumination source 150. - The
system 100 may include at least one dark field illumination source 160. The dark field illumination source 160 may be arranged at the bottom of the hood 130. The dark field illumination source 160 may provide a dark field illumination condition on the surface 112 of the object 110. The dark field illumination source 160 may include, for example, an LED strobe light. The dark field illumination source 160 may be oriented at a location relative to the surface 112 of the object 110 such that a dark field illumination condition may be provided within the field of view 144 of the lens 142 at the surface 112 of the object 110. Under a dark field illumination condition, a defect may have a predictable pattern consisting of a bright region that is illuminated by the dark field illumination source 160 and a dark or shadow region that is not illuminated by the dark field illumination source 160. - With reference to
FIG. 2, which illustrates a schematic top view of the system 100, four dark field illumination sources 160 are arranged at the bottom of the hood 130. Two dark field illumination sources 160 are arranged along an x-axis in a plane of the field of view 144. The two dark field illumination sources 160 are located at two sides of the field of view 144 respectively, denoted x-positive and x-negative. Two other dark field illumination sources 160 are arranged along a y-axis in the plane of the field of view 144. The two other dark field illumination sources 160 are located at two sides of the field of view 144 respectively, denoted y-positive and y-negative. Different numbers of dark field illumination sources 160 may be arranged at the bottom of the hood 130. - With reference to
FIG. 1, the system 100 may include a trigger controller 170. The trigger controller 170 may functionally connect to the imaging device 140, the ambient illumination source 150 and the dark field illumination sources 160. The trigger controller 170 may trigger the ambient illumination source 150 and the dark field illumination sources 160 in a defined pattern, in sequence, or simultaneously. The trigger controller 170 may trigger the imaging device 140 to capture images of the surface 112 of the object 110 under the triggered illumination conditions respectively. The trigger controller 170 may control configurations of the dark field illumination sources 160. The configurations of the dark field illumination sources 160 may include orientations of the dark field illumination sources 160 relative to the surface 112 of the object 110, illumination intensities of the dark field illumination sources 160, etc. According to an embodiment, the trigger controller 170 may be a computer having a computer program implemented. - The
system 100 may include an image processor 180. The image processor 180 may functionally connect to the imaging device 140. The image processor 180 may process the images captured by the imaging device 140 to detect defects on the surface 112 of the object 110. According to an embodiment, the image processor 180 may be a computer having a computer program implemented. According to an embodiment, the trigger controller 170 and the image processor 180 may be integrated parts of one computer. The system 100 may include a display device 190 functionally connected to the image processor 180. The display device 190 may display the captured images. The display device 190 may display the processed images. The display device 190 may display an output including information of a detected defect. The display device 190 may be a monitor. The display device 190 may be an integrated part of the image processor 180. - According to an embodiment, a shape of a potential defect, such as the size or length of the defect, may have a specific and predictable pattern consisting of a bright region and a shadow region in an image under different configurations of the dark field illumination sources 160.
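- The bright-region/shadow-region pattern described above can be made concrete with a small sketch: a signed directional kernel responds strongly where a bright region borders a shadow region. The kernel values and the synthetic image below are illustrative assumptions for this sketch, not the kernels defined later in the disclosure.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain valid-mode 2-D correlation with a small kernel (NumPy only)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative directional kernel: +1 where the bright region is expected,
# -1 where the shadow is expected, for a defect lit from one side along x.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

# Synthetic patch: bright column at x=2 next to a shadow column at x=4
img = np.zeros((5, 7))
img[:, 2] = 1.0
img[:, 4] = -1.0

response = convolve2d_valid(img, kernel)
print(response.max())  # 6.0, peaking where the bright/shadow pair aligns with the kernel
```

Flipping the kernel horizontally would instead respond to a defect lit from the opposite side, which is why a separate kernel is paired with each dark field source position.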
FIG. 3 illustrates a schematic diagram of a pattern consisting of a bright region 115 and a shadow region 117 of a defect 113 on a surface 112 under a dark field illumination source 160 according to an embodiment. As illustrated in FIG. 3, a surface 112 may have a V-shaped defect 113. A first portion 114 of the V-shaped defect 113 is illuminated by the dark field illumination source 160, which forms a bright region 115 in an image. A second portion 116 of the V-shaped defect 113 is not illuminated by the dark field illumination source 160, which forms a shadow region 117 in the image. The bright region 115 and the shadow region 117 are within a field of view 144 of an imaging device 140. The V-shaped defect 113 may be a micro defect; for example, the scale of the V-shaped defect 113 may be as small as micrometers. -
FIG. 4 illustrates a schematic flow chart of a method 200 for detecting a line defect at a surface 112 of an object 110 using an image processor 180 according to an embodiment of the invention. In step 210, the imaging device 140 may be calibrated relative to the surface 112 of the object 110 to be inspected. The calibration may estimate parameters of the lens 142 of the imaging device 140 to correct distortion of the lens 142 of the imaging device 140 when capturing images of the surface 112 of the object 110. - In
step 220, the ambient illumination source 150 and the dark field illumination sources 160 may be triggered by the trigger controller 170 to illuminate the surface 112 of the object 110 with an ambient illumination condition and dark field illumination conditions. The imaging device 140 may be triggered by the trigger controller 170 to capture images of the surface 112 of the object 110 under the ambient illumination condition and the dark field illumination conditions respectively. Each image captures a field of view 144 of the surface 112 of the object 110 under different illumination conditions. Each image may contain potential defects having specific shapes on the surface 112 of the object 110. Shapes of the potential defects have predictable patterns consisting of a bright region and a shadow region based on configurations of the dark field illumination sources 160. - In
step 230, the captured images of the surface 112 of the object 110 are processed by the image processor 180. The processing step 230 may implement a plurality of image operations on the captured images to detect areas of potential defects at locations on the surface 112 of the object 110. A plurality of areas may be detected having potential defects. Each area may have a potential defect. The locations of the plurality of areas may be represented by, for example, x and y locations on a plane of the surface 112. The potential defects may be detected by patterns consisting of a bright region and a shadow region in the processed images. The plurality of image operations may enhance shapes of potential defects and reduce the false detection rate. - In
step 240, the areas showing the potential defects at locations on the surface 112 of the object 110 are cut from the processed images into sub images. The size of each area to be cut may be small enough to detect a micro defect. For example, the size of each area may be less than 100 by 100 pixels, depending on the resolution of the image. Areas that do not show indications of potential defects may be pruned out and do not need further processing. - In
step 250, sub images having potential defects at the same location on the surface 112 of the object 110 are stitched together to generate a set of hypotheses of the potential defects at the locations on the surface 112 of the object 110. - In
step 260, the stitched images are classified with a classifier to determine whether the hypotheses of the potential defects are true defects on the surface 112 of the object 110. The classifier may be trained with training data having characteristics of a true defect. The classification outputs a plurality of discrete true defects at the locations on the surface 112 of the object 110. According to an embodiment, a random forest classifier may be used to classify the potential defects. The random forest classifier may classify hypotheses with high efficiency and scalability in large scale applications. - In
step 270, it is determined whether the discrete true defects constitute a line defect on the surface 112 of the object 110 by refining line segments detected on one of the processed images. The line segments are refined based on certain criteria. - Line segments may be detected on one of the processed images by step 242 and step 244. In step 242, edges are detected on one of the processed images by applying edge detection. The edges may be detected, for example, by a Canny edge detector. In step 244, line segments may be detected from the edges by applying a Hough transform. A probabilistic Hough transform may be used for detecting the line segments. - According to an embodiment, a criterion for refining the line segments may include that a distance from each of the discrete true defects to each of the line segments is less than a threshold value. The threshold value may be, for example, 5 pixels, or 8 pixels, or 10 pixels. The distance may be a perpendicular distance from a center of the discrete true defect to each of the line segments. A criterion for refining the line segments may include that a difference between a slope of each of the discrete true defects and a slope of each of the line segments is less than a threshold value. The threshold value may be, for example, in a range of −7 to 7 degrees, or in a range of −5 to 5 degrees, or in a range of −3 to 3 degrees. The slope of each of the discrete true defects may be obtained by the Hough transform. The threshold values for distance and slope may be defined depending on the resolution of the image. The line defect is detected by iteratively removing the line segments that do not satisfy the criteria. The line defect may consist of a plurality of connected line segments. The line defect may have a curved line shape.
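- The two refinement criteria above (perpendicular distance below a pixel threshold, and slope difference below a degree threshold) can be sketched as a predicate over one defect and one segment. The function name and the 5-pixel/5-degree defaults are illustrative choices within the stated ranges, and the slope comparison is simplified: it does not handle angle wrap-around near 180 degrees.

```python
import math

def keep_segment(defect_center, defect_slope_deg, segment,
                 dist_thresh=5.0, slope_thresh=5.0):
    """Keep a line segment only if the discrete true defect lies within
    dist_thresh pixels of it and their slopes differ by < slope_thresh degrees."""
    (x1, y1), (x2, y2) = segment
    px, py = defect_center
    dx, dy = x2 - x1, y2 - y1
    # Perpendicular distance from the defect center to the line through the segment
    dist = abs(dy * (px - x1) - dx * (py - y1)) / math.hypot(dx, dy)
    slope_deg = math.degrees(math.atan2(dy, dx))
    return dist < dist_thresh and abs(slope_deg - defect_slope_deg) < slope_thresh

# A horizontal defect at (50, 20) against two horizontal segments
print(keep_segment((50, 20), 0.0, ((0, 22), (100, 22))))  # True: 2 px away, same slope
print(keep_segment((50, 20), 0.0, ((0, 80), (100, 80))))  # False: 60 px away
```

Iteratively dropping every segment for which no defect satisfies this predicate leaves the connected segments that make up the line defect.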
- In step 280, an output is generated. The output may include the detected line defect on the surface 112 of the object 110. The output may be in the form of a report. The output may be an image with the detected true defects marked at the locations. The image may be one of the captured images or one of the processed images. The output may be stored in the image processor 180, displayed on a display device 190, or printed out by a printer. -
FIG. 5 illustrates a schematic flow chart of a step 230 for processing images of the method 200 as illustrated in FIG. 4 according to an embodiment of the invention. Referring to FIG. 2 and step 220 of FIG. 4, the trigger controller 170 may sequentially turn the ambient illumination source 150 and the dark field illumination sources 160 on and off. The trigger controller 170 may trigger the imaging device 140 to sequentially capture images of the surface 112 of the object 110 under the ambient illumination condition and dark field illumination conditions respectively. The dark field illumination sources 160 may be sequentially turned on and off by the trigger controller 170 so that images are sequentially captured by the imaging device 140 under sequential dark field illumination conditions. For example, the image captured with the ambient illumination source 150 turned on is denoted as image_amb. The image captured with the dark field illumination source 160 at the x-positive position turned on is denoted as image_xpos. The image captured with the dark field illumination source 160 at the x-negative position turned on is denoted as image_xneg. The image captured with the dark field illumination source 160 at the y-positive location turned on is denoted as image_ypos. The image captured with the dark field illumination source 160 at the y-negative location turned on is denoted as image_yneg. - In
step 231 of step 230, convolution operations are implemented on the captured images with corresponding kernels. Convolution operations may filter noise in the captured images. Kernels are defined corresponding to predefined configurations of the dark field illumination sources 160 to enhance detection of a potential defect in the captured images based on a pattern consisting of a bright region and a shadow region under the predefined configurations of the dark field illumination sources 160. The shape of a potential defect, such as the size or length of the potential defect, may have a specific and predictable pattern consisting of a bright region and a shadow region in an image under different configurations of the dark field illumination sources 160. - Different kernels affect the output filtered images. For example, assuming the color code black=1, white=0, grey=-1, the kernels for the five different images captured with certain predefined configurations of the dark field illumination sources 160 may be defined as follows: -
- The kernels may be redefined to detect potential defects in the captured images once the configurations of the dark field illumination sources 160 change, such as orientation, intensities, etc. The kernels are redefined based on the different patterns of potential defects, consisting of a bright region and a shadow region, in the captured images under different configurations of the dark field illumination sources 160. - In
step 232, dilation operations are implemented on the images filtered by the convolutions. Dilation operations are morphological operations that may probe and expand the shapes of potential defects in the filtered images using structuring elements. According to an embodiment, three-by-three flat structuring elements may be used in the dilation operations at step 232. - In
step 233, multiply operations are implemented on the convolved and dilated images. The multiply operations may further filter noise in the images. With reference to FIG. 4, the images captured with the dark field illumination sources 160 located on the x-axis, namely image_xpos and image_xneg, after operations of convolution using kernel_xpos and kernel_xneg respectively and dilation, are multiplied with image_amb, after operations of convolution using kernel_ambx and dilation, into one image. The images captured with the dark field illumination sources 160 located on the y-axis, namely image_ypos and image_yneg, after operations of convolution using kernel_ypos and kernel_yneg respectively and dilation, are multiplied with image_amb, after operations of convolution using kernel_amby and dilation, into another image. - In step 234, median filtering operations are implemented on the multiplied images to further filter them. Median filtering operations may preserve potential defects in the image while removing noise. The output images after the median filtering operations may be denoted as image_x and image_y and may be output as two output images of the image processing step 230. - In
step 235, a magnitude operation is implemented on the two images image_x and image_y after the median filtering operations. The magnitude operation may maximize the signal-to-noise ratio. The output image after the magnitude operation is multiplied with image_amb, after operations of convolution using kernel_ambx_y and dilation, into one image denoted as image_xy. - In
step 236, three processed images of the image processing step 230 are output. The three output processed images may include image_x, image_y, and image_xy. The three output processed images are enhanced from the captured images to detect areas of potential defects at locations on the surface 112 of the object 110. Areas of potential defects in each of the three output processed images are cut into sub images in step 240. Image_xy may be used in step 242 for edge detection, followed by line segment detection in step 244. - According to an aspect, the proposed
system 100 and method use a plurality of image processing techniques, such as image enhancement, morphological operations, and machine learning tools including hypothesis generation and classification, to accurately detect and quantify defects on any type of surface 112 of any object 110 without relying on strong assumptions about the characteristics of the defects. The proposed system 100 and method iteratively prune false defects and detect true defects by focusing on smaller areas on a surface 112 of an object 110. Micro defects may be detected on a surface 112 of an object 110. A micro defect may be a micro crack. The micro defects may be as small as micrometers. The micro defects are further processed to detect significant line defects on a surface 112 of an object 110 by iteratively refining line segments detected on the enhanced image. The proposed system 100 and method may be used in the power generation industry to accurately detect and quantify line defects on surfaces of generator wedges. - According to an aspect, the proposed
system 100 and method may be automatically operated by a computer to detect line defects on a surface 112 of an object 110. The proposed system 100 and method may provide efficient automated line defect detection on a surface 112 of an object 110. The proposed system 100 and method may provide a plurality of advantages in detecting line defects on a surface 112 of an object 110, such as higher detection accuracy, cost reduction, and consistent detection performance. - Although various embodiments that incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. The invention is not limited in its application to the exemplary embodiment details of construction and the arrangement of components set forth in the description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless specified or limited otherwise, the terms "mounted," "connected," "supported," and "coupled" and variations thereof are used broadly and encompass direct and indirect mountings, connections, supports, and couplings. Further, "connected" and "coupled" are not restricted to physical or mechanical connections or couplings.
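- The step 230 enhancement chain described above (convolution, dilation, multiplication with the processed ambient image, median filtering, and the magnitude operation) can be sketched end to end in NumPy. The kernels below are illustrative placeholders for the disclosure's per-source kernels, and the final multiplication of the magnitude image with the processed ambient image is omitted for brevity.

```python
import numpy as np

def conv2(image, kernel):
    """Valid-mode 2-D correlation."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    return np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(ow)] for i in range(oh)])

def dilate3(image):
    """Grayscale dilation with a 3x3 flat structuring element (max filter)."""
    p = np.pad(image, 1, mode="edge")
    return np.array([[p[i:i + 3, j:j + 3].max() for j in range(image.shape[1])]
                     for i in range(image.shape[0])])

def median3(image):
    """3x3 median filter: suppresses isolated noise pixels."""
    p = np.pad(image, 1)
    return np.array([[np.median(p[i:i + 3, j:j + 3]) for j in range(image.shape[1])]
                     for i in range(image.shape[0])])

# Illustrative kernels only; the patent defines one kernel per light source.
k_x = np.array([[1.0, 0.0, -1.0]] * 3)   # responds to bright/shadow pairs along x
k_y = k_x.T                              # responds along y
k_amb = np.full((3, 3), 1.0 / 9)         # smooths the ambient image

def enhance(img_pos, img_neg, img_amb, kernel, kernel_amb):
    """One axis of step 230: convolve + dilate each dark field image,
    multiply with the processed ambient image, then median filter."""
    a = dilate3(conv2(img_pos, kernel))
    b = dilate3(conv2(img_neg, kernel))
    amb = dilate3(conv2(img_amb, kernel_amb))
    return median3(a * b * amb)

rng = np.random.default_rng(1)
shape = (12, 12)
img_xpos, img_xneg = rng.random(shape), rng.random(shape)
img_ypos, img_yneg, img_amb = rng.random(shape), rng.random(shape), rng.random(shape)

image_x = enhance(img_xpos, img_xneg, img_amb, k_x, k_amb)
image_y = enhance(img_ypos, img_yneg, img_amb, k_y, k_amb)
image_xy = np.hypot(image_x, image_y)    # magnitude of the two axis responses
print(image_x.shape, image_y.shape, image_xy.shape)  # (10, 10) (10, 10) (10, 10)
```

The three outputs play the roles of image_x, image_y, and image_xy: the first two feed the sub image cutting of step 240, and the magnitude image feeds the edge and line segment detection of steps 242 and 244.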
-
- 100: System
- 110: Object
- 112: Surface of the Object
- 113: V-shaped Defect
- 114: First Portion of the V-shaped Defect
- 115: Bright Region
- 116: Second Portion of the V-shaped Defect
- 117: Shadow Region
- 120: Platform
- 122: Motor
- 124: Motor Controller
- 130: Hood
- 140: Imaging Device
- 142: Lens
- 144: Field of View
- 150: Ambient Illumination Source
- 160: Dark Field Illumination Source
- 170: Trigger Controller
- 180: Image Processor
- 190: Display Device
- 200: Method
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/678,310 US10192301B1 (en) | 2017-08-16 | 2017-08-16 | Method and system for detecting line defects on surface of object |
DE102018211704.7A DE102018211704B4 (en) | 2017-08-16 | 2018-07-13 | Method and system for detecting line defects on a surface of an object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/678,310 US10192301B1 (en) | 2017-08-16 | 2017-08-16 | Method and system for detecting line defects on surface of object |
Publications (2)
Publication Number | Publication Date |
---|---|
US10192301B1 US10192301B1 (en) | 2019-01-29 |
US20190057498A1 true US20190057498A1 (en) | 2019-02-21 |
Family
ID=65032184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/678,310 Active 2037-08-19 US10192301B1 (en) | 2017-08-16 | 2017-08-16 | Method and system for detecting line defects on surface of object |
Country Status (2)
Country | Link |
---|---|
US (1) | US10192301B1 (en) |
DE (1) | DE102018211704B4 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109765245A (en) * | 2019-02-25 | 2019-05-17 | 武汉精立电子技术有限公司 | Large scale display screen defects detection localization method |
CN109975308A (en) * | 2019-03-15 | 2019-07-05 | 维库(厦门)信息技术有限公司 | A kind of surface inspecting method based on deep learning |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6807459B2 (en) * | 2017-07-25 | 2021-01-06 | 富士フイルム株式会社 | Damage diagram creation method, damage diagram creation device, damage diagram creation system, and recording medium |
EP3511122B1 (en) * | 2017-11-07 | 2020-04-29 | Dalian University of Technology | Monocular vision six-dimensional measurement method for high-dynamic large-range arbitrary contouring error of cnc machine tool |
US11587233B2 (en) * | 2019-10-17 | 2023-02-21 | International Business Machines Corporation | Display panel defect enhancement |
CN111008655A (en) * | 2019-11-28 | 2020-04-14 | 上海识装信息科技有限公司 | Method and device for assisting in identifying authenticity of physical commodity brand and electronic equipment |
CN111292228B (en) * | 2020-01-16 | 2023-08-11 | 宁波舜宇仪器有限公司 | Lens defect detection method |
CN111339220B (en) * | 2020-05-21 | 2020-08-28 | 深圳新视智科技术有限公司 | Defect mapping method |
DE102021200676A1 (en) | 2021-01-26 | 2022-07-28 | Robert Bosch Gmbh | Method for detecting and displaying surface changes on an object, device and system made up of multiple devices for carrying out the method |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5917935A (en) * | 1995-06-13 | 1999-06-29 | Photon Dynamics, Inc. | Mura detection apparatus and method |
US5828778A (en) * | 1995-07-13 | 1998-10-27 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for analyzing failure of semiconductor wafer |
US6780656B2 (en) * | 2000-09-18 | 2004-08-24 | Hpl Technologies, Inc. | Correction of overlay offset between inspection layers |
US7525659B2 (en) * | 2003-01-15 | 2009-04-28 | Negevtech Ltd. | System for detection of wafer defects |
US20060192949A1 (en) * | 2004-12-19 | 2006-08-31 | Bills Richard E | System and method for inspecting a workpiece surface by analyzing scattered light in a back quartersphere region above the workpiece |
KR101587176B1 (en) * | 2007-04-18 | 2016-01-20 | 마이크로닉 마이데이타 에이비 | Method and apparatus for mura detection and metrology |
JP5579588B2 (en) * | 2010-12-16 | 2014-08-27 | 株式会社日立ハイテクノロジーズ | Defect observation method and apparatus |
US9885934B2 (en) * | 2011-09-14 | 2018-02-06 | View, Inc. | Portable defect mitigators for electrochromic windows |
US9838583B2 (en) | 2015-09-21 | 2017-12-05 | Siemens Energy, Inc. | Method and apparatus for verifying lighting setup used for visual inspection |
US10234402B2 (en) * | 2017-01-05 | 2019-03-19 | Kla-Tencor Corporation | Systems and methods for defect material classification |
2017
- 2017-08-16: US application US15/678,310, granted as patent US10192301B1, status Active
2018
- 2018-07-13: DE application DE102018211704.7A, granted as patent DE102018211704B4, status Active
Also Published As
Publication number | Publication date |
---|---|
US10192301B1 (en) | 2019-01-29 |
DE102018211704B4 (en) | 2023-11-16 |
DE102018211704A1 (en) | 2019-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10192301B1 (en) | Method and system for detecting line defects on surface of object | |
US10215714B1 (en) | Method and system for detecting defects on surface of object | |
EP3243166B1 (en) | Structural masking for progressive health monitoring | |
EP3531114B1 (en) | Visual inspection device and illumination condition setting method of visual inspection device | |
EP1970839B1 (en) | Apparatus, method, and program for face feature point detection | |
US10089753B1 (en) | Method, system and computer-readable medium for camera calibration | |
WO2017067342A1 (en) | Board card position detection method and apparatus | |
JP6665550B2 (en) | Tire contact surface analysis device, tire contact surface analysis system, and tire contact surface analysis method | |
KR102308437B1 (en) | Apparatus and method for optimizing external examination of a subject | |
US20180232876A1 (en) | Contact lens inspection in a plastic shell | |
CN114136975A (en) | Intelligent detection system and method for surface defects of microwave bare chip | |
CN111239142A (en) | Paste appearance defect detection device and method | |
JP2008170256A (en) | Flaw detection method, flaw detection program and inspection device | |
CN110596118A (en) | Print pattern detection method and print pattern detection device | |
JP2019200775A (en) | Surface defect inspection device and surface defect inspection method | |
CN113139943A (en) | Method and system for detecting appearance defects of open circular ring workpiece and computer storage medium | |
CN116237266A (en) | Flange size measuring method and device | |
JP2009116419A (en) | Outline detection method and outline detection device | |
US20090129632A1 (en) | Method of object detection | |
JP2019066222A (en) | Visual inspection device and visual inspection method | |
KR102117697B1 (en) | Apparatus and method for surface inspection | |
US20200198784A1 (en) | Avoiding dazzling of persons by a light source | |
Yakimov | Preprocessing digital images for quickly and reliably detecting road signs | |
JP2003329428A (en) | Device and method for surface inspection | |
JP2006145228A (en) | Unevenness defect detecting method and unevenness defect detector |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SIEMENS CORPORATION, NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PANDA, RAMESWAR;WU, ZIYAN;ERNST, JAN;SIGNING DATES FROM 20170804 TO 20170807;REEL/FRAME:043593/0038

Owner name: SIEMENS ENERGY, INC., FLORIDA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BAILEY, KEVIN P.;REEL/FRAME:043592/0814
Effective date: 20170809

Owner name: SIEMENS ENERGY, INC., FLORIDA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS CORPORATION;REEL/FRAME:043593/0158
Effective date: 20170828 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |