WO2023097637A1 - Method and system for defect detection - Google Patents

Method and system for defect detection

Info

Publication number
WO2023097637A1
WO2023097637A1 (PCT/CN2021/135264)
Authority
WO
WIPO (PCT)
Prior art keywords
defect
mask
segmented
level
picture
Prior art date
Application number
PCT/CN2021/135264
Other languages
English (en)
French (fr)
Inventor
束岸楠
袁超
韩丽丽
Original Assignee
宁德时代新能源科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 宁德时代新能源科技股份有限公司 filed Critical 宁德时代新能源科技股份有限公司
Priority to PCT/CN2021/135264 priority Critical patent/WO2023097637A1/zh
Priority to CN202180066474.4A priority patent/CN116547713A/zh
Priority to EP21962758.5A priority patent/EP4227900A4/en
Priority to US18/196,900 priority patent/US11922617B2/en
Publication of WO2023097637A1 publication Critical patent/WO2023097637A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0006Industrial image inspection using a design-rule based approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30152Solder

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method and system for defect detection.
  • The present application provides a method and system for defect detection, which can detect defects when the object to be inspected is very small, significantly reduce the detection cost, and greatly improve quality-inspection efficiency.
  • The present application provides a method for defect detection, comprising: acquiring a two-dimensional (2D) picture of an object to be inspected; inputting the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  • The method further includes: acquiring a three-dimensional (3D) picture of the object to be inspected; preprocessing the acquired 3D picture to obtain an image with depth information of the object to be inspected; inputting the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  • Depth-aware segmentation results tailored to actual needs are thereby obtained, making the detection results more accurate; when a defect is not obvious in its 2D appearance, fusing the 2D and 3D detection results achieves zero missed detections and greatly reduces the probability of overkill.
  • Performing the fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules further includes: using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; filling the segmented 2D defect mask and 3D defect mask into the aligned picture; and performing the fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result.
  • The multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, wherein the IoU threshold for positive and negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level.
  • The IoU threshold is set to 0.3 in the first level, to 0.4 in the second level, and to 0.5 in the third level.
  • The defect mask output by the multi-level feature extraction instance segmentation network is obtained based on a weighted average of the defect masks output by the instance segmentation network at each level.
  • Judging the segmented defect mask based on a predefined defect rule to output a defect detection result further includes: outputting the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region.
  • The defect type includes a pit defect and a protrusion defect.
  • Judging the segmented defect mask based on a predefined defect rule to output a defect detection result further includes: in the case where the defect type of the defect region is a pit defect, outputting the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, in the case where the defect type of the defect region is a protrusion defect, outputting the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
  • The present application provides a system for defect detection, the system comprising: an image acquisition module configured to acquire a two-dimensional (2D) picture of an object to be inspected; a defect segmentation module configured to input the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and a defect judgment module configured to judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  • The image acquisition module is further configured to acquire a three-dimensional (3D) picture of the object to be inspected; the defect segmentation module is further configured to: preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected; and input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and the defect judgment module is further configured to perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  • The defect judgment module is further configured to: use the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; fill the segmented 2D defect mask and 3D defect mask into the aligned picture; and perform a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result.
  • The defect judgment module is further configured to: output the defect class as the defect detection result when the size or depth of the segmented defect region is greater than a predefined threshold for the defect type of the defect region.
  • The defect judgment module is further configured to: in the case where the defect type of the defect region is a pit defect, output the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, in the case where the defect type of the defect region is a protrusion defect, output the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
  • The present application provides an apparatus for defect detection, the apparatus comprising: a memory storing computer-executable instructions; and at least one processor, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to perform the following operations: acquire a two-dimensional (2D) picture of the object to be inspected; input the acquired 2D picture into the trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  • The computer-executable instructions, when executed, further cause the at least one processor to perform the following operations: acquire a three-dimensional (3D) picture of the object to be inspected; preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected; input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  • The computer-executable instructions, when executed, further cause the at least one processor to perform the following operations: use the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; fill the segmented 2D defect mask and 3D defect mask into the aligned picture; and perform a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result.
  • The present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a computing device, cause the computing device to implement the method for defect detection of any of the preceding aspects.
  • FIG. 1 is an example flowchart of a method for defect detection according to an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an instance segmentation network based on a multi-level feature extraction architecture according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a 2D defect segmentation result according to an embodiment of the present application;
  • FIG. 4 is an example flowchart of a method for defect detection according to another embodiment of the present application;
  • FIG. 5 is a schematic diagram of a pseudo-color image according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a 3D defect segmentation result according to an embodiment of the present application;
  • FIG. 7 is an example flowchart of fusion processing of 2D and 3D segmentation results according to an embodiment of the present application;
  • FIG. 8 is a schematic architecture diagram of a system for defect detection according to an embodiment of the present application;
  • FIG. 9 is a schematic architecture diagram of an apparatus for defect detection according to another embodiment of the present application;
  • defect detection system 800, image acquisition module 801, defect segmentation module 802, defect judgment module 803;
  • apparatus 900, memory 901, processor 902.
  • Existing solutions use only two-dimensional (2D) pictures for defect detection, but the pits and protrusions that occur in laser welding are defects caused by depth problems, and their 2D appearance may be very inconspicuous; for defects with depth, detection using only 2D pictures is therefore highly prone to misjudgment.
  • The inventors conducted in-depth research and designed a multi-level backbone network structure that extracts defect features more accurately, as well as a defect detection algorithm that fuses the results of 2D and 3D instance segmentation algorithms.
  • This application can still achieve good segmentation and detection results in segmentation models for which positive samples are particularly hard to obtain or particularly few.
  • The present application combines the results of 2D and 3D inspection, fuses those results in a customizable way, and obtains depth-related model results tailored to actual needs, making the detection of defects with depth more accurate.
  • The present application reduces the missed detection of defects such as seal nail pits and melted beads to 0% and reduces overkill to within 0.02%.
  • The present application also allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
  • The present application can be applied to the field of defect detection combined with artificial intelligence (AI); the method and system for defect detection disclosed in the embodiments of the present application can be used, without limitation, for defect detection of seal nail weld beads, and also for defect detection of various other products in modern industrial manufacturing.
  • In the following, defect detection for the weld bead of a sealing nail is taken as an example for description.
  • FIG. 1 is an example flowchart of a method 100 for defect detection according to an embodiment of the present application.
  • Refer to FIG. 1, and further to FIG. 2 and FIG. 3, where FIG. 2 is a schematic structural diagram of an instance segmentation network based on a multi-level feature extraction architecture according to an embodiment of the present application, and FIG. 3 is a schematic diagram of a 2D defect segmentation result according to an embodiment of the present application.
  • The method 100 starts at step 101: acquiring a two-dimensional (2D) picture of an object to be inspected.
  • At step 102, the acquired 2D picture is input into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and the segmented 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region.
  • At step 103, the segmented 2D defect mask is judged based on predefined defect rules to output a defect detection result.
  • The network architecture of the defect segmentation model in step 102 is shown in FIG. 2; it is obtained by cascading three levels of instance segmentation networks.
  • During training, the IoU between each candidate box (proposal) and the ground-truth box is computed and, using a manually set IoU threshold (typically 0.5), the candidate boxes are divided into positive samples (foreground) and negative samples (background); these positive and negative samples are sampled so that their ratio is as close to 1:3 as possible (their total is usually 128), after which the candidate boxes (e.g., usually 128) are fed into the ROI (region of interest) pooling layer, and finally category classification and bounding-box regression are performed.
  • With the instance segmentation network based on the multi-level feature extraction architecture, good segmentation and detection results can still be achieved in instance segmentation models for which positive samples are particularly hard to obtain or extremely few: there are no missed detections, and the probability of overkill is greatly reduced.
  • FIG. 6 is a schematic diagram of a 3D defect segmentation result according to an embodiment of the present application.
  • Steps 401 and 402 in method 400 are the same as steps 101 and 102 in method 100 and will not be repeated here.
  • At step 403, a three-dimensional (3D) picture of the object to be inspected is acquired.
  • At step 404, the acquired 3D picture is preprocessed to obtain an image with depth information of the object to be inspected.
  • A 3D camera (for example, a depth camera) is typically used to acquire the 3D image, which can be saved in the form of a depth image, wherein the gray value of the depth image represents the depth information in the Z direction.
  • The depth image and the 2D image generally have positional consistency; in other words, the pixels of the depth image correspond one-to-one to the pixels of the 2D image.
  • The captured depth image may be rendered to obtain a pseudo-color image, as shown in FIG. 5, for example using a predefined colormap (e.g., applyColorMap, the pseudo-color function in OpenCV).
  • The 3D defect mask output by the defect segmentation model of FIG. 2 is shown in FIG. 6, which shows the 3D defect mask segmented during inspection of a sealing nail weld bead superimposed on the original 3D picture; the gray area in the upper left corner is the contour of the segmented mask, indicating the depth-map-based segmentation information of the relevant defect, from which the type, area, and depth of the defect can be judged directly.
  • Depth-aware segmentation results are obtained according to actual needs, making the detection results more accurate; when a defect is not obvious in its 2D appearance (for example, a pit that differs little from a normal weld bead in a 2D image), fusing the 2D and 3D inspection results achieves zero missed detections and greatly reduces the probability of overkill.
  • FIG. 7 is an example flowchart of fusion processing 700 of 2D and 3D segmentation results according to an embodiment of the present application.
  • Post-processing of the 2D and 3D segmentation results starts at block 701: using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture.
  • At block 702, the segmented 2D defect mask and 3D defect mask are filled into the aligned picture.
  • At block 703, a fusion judgment is performed on the segmented 2D defect mask and 3D defect mask on the aligned picture based on the predefined defect rules to output a defect detection result.
  • The acquired 2D and 3D pictures are first preprocessed separately: the 2D picture is converted to grayscale to obtain a 2D grayscale picture, and a brightness picture is separated from the 3D picture.
  • In the obtained 2D grayscale picture and 3D brightness picture, three identical sealing nail weld positions are selected in advance, a spatial transformation is performed (for example, using OpenCV), and the coordinate transformation equations are solved to obtain the coordinate transformation matrix between the 2D picture and the 3D picture; using this matrix, the 2D picture and the 3D picture are transformed and aligned at pixel level to obtain an aligned, superimposed picture.
  • The segmented 2D defect mask and 3D defect mask can then be filled into the corresponding positions of the aligned superimposed picture.
  • A fusion judgment can then be performed on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  • The 2D defect mask and the 3D defect mask include the same defect region (the gray marked area), and the type of the defect region is a pit; this indicates that the pit in the 2D detection result has been re-detected in the 3D detection, so the defect class is output as the defect detection result.
  • The defect segmentation model used in the present application for defect segmentation is trained by a multi-level feature extraction instance segmentation network, which is obtained by cascading three levels of instance segmentation networks, where the IoU threshold for positive and negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level.
  • The extraction of candidate boxes is divided into three stages: a lower IoU threshold is set in the first stage, for example 0.2-0.4, which yields more positive samples than directly setting the IoU threshold to 0.5 at the cost of some precision, and can be understood as a preliminary screening.
  • Sampling in the second stage continues on the basis of the previously extracted candidate boxes.
  • The IoU threshold of this sampling can be set to 0.3-0.45, yielding a finer sampling on top of the existing one.
  • In the third stage, the IoU threshold for sampling can be set to 0.5-0.7.
  • The result of the third level is output directly to obtain the final instance segmentation result.
  • The IoU threshold for positive and negative sample sampling is set to 0.3 in the first level, to 0.4 in the second level, and to 0.5 in the third level.
  • The defect mask output by the multi-level feature extraction instance segmentation network may be obtained based on a weighted average of the defect masks output by the instance segmentation network at each level.
  • Judging the segmented defect mask based on a predefined defect rule to output a defect detection result further includes: outputting the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region.
  • The defect type includes a pit defect and a protrusion defect.
  • Judging the segmented defect mask based on a predefined defect rule to output a defect detection result further includes: in the case where the defect type of the defect region is a pit defect, outputting the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, in the case where the defect type of the defect region is a protrusion defect, outputting the defect class as the defect detection result if the segmented 2D defect mask and 3D defect mask both include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
  • For a pit, if the size or depth of the defect region exceeds the threshold requirement for pits in the specification, it is judged to be a defect (also known as NG).
  • For a protrusion-type defect detected in the 2D inspection, it is judged to be a defect if, in the 3D inspection, the size and depth of the defect region exceed the threshold requirements for protrusions in the specification.
  • The thresholds of the defect specifications used during detection can be adjusted in a customized manner, making the detection algorithm more flexible.
  • FIG. 8 is a schematic architecture diagram of a system 800 for defect detection according to an embodiment of the present application.
  • The system 800 includes at least an image acquisition module 801, a defect segmentation module 802, and a defect judgment module 803.
  • The image acquisition module 801 can be used to acquire two-dimensional (2D) pictures of the object to be inspected.
  • The defect segmentation module 802 can be used to input the acquired 2D pictures into a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region.
  • The defect judgment module 803 can be used to judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  • The system for defect detection uses an instance segmentation network based on a multi-level feature extraction architecture and can still achieve good segmentation and detection results in instance segmentation models for which positive samples are particularly hard to obtain or extremely few, realizing zero missed detections and greatly reducing the probability of overkill.
  • The image acquisition module 801 may be further configured to acquire a three-dimensional (3D) picture of the object to be inspected.
  • The defect segmentation module 802 may be further configured to: preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected, and input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, where the 3D defect mask includes information about the defect depth of the segmented defect region.
  • The defect judgment module 803 may be further configured to perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  • FIG. 9 is a schematic architecture diagram of an apparatus 900 for defect detection according to another embodiment of the present application.
  • The apparatus 900 may include a memory 901 and at least one processor 902.
  • The memory 901 may store computer-executable instructions.
  • The computer-executable instructions, when executed by the at least one processor 902, cause the apparatus 900 to perform the following operations: acquire a two-dimensional (2D) picture of the object to be inspected; input the acquired 2D picture into the trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network with a level-by-level increase in the intersection-over-union (IoU) threshold, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  • The memory 901 may include RAM, ROM, or a combination thereof. In some cases, memory 901 may contain, among other things, a BIOS that may control basic hardware or software operations, such as interaction with peripheral components or devices.
  • The processor 902 may comprise an intelligent hardware device (e.g., a general-purpose processor, DSP, CPU, microcontroller, ASIC, FPGA, programmable logic device, discrete gate or transistor logic components, discrete hardware components, or any combination thereof).
  • The apparatus for defect detection uses an instance segmentation network based on a multi-level feature extraction architecture and can still achieve good segmentation and detection results in instance segmentation models for which positive samples are particularly hard to obtain or extremely few, realizing zero missed detections and greatly reducing the probability of overkill.
  • The computer-executable instructions, when executed by the at least one processor 902, cause the at least one processor 902 to perform the various operations described above with reference to FIGS. 1-7.
  • A general-purpose processor can be a microprocessor, but in the alternative, the processor can be any conventional processor, controller, microcontroller, or state machine.
  • A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
  • The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Other examples and implementations are within the scope of the disclosure and the following claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination thereof. Features implementing functions may also be physically located at various locations, including being distributed such that portions of functions are implemented at different physical locations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and system for defect detection. The method includes: acquiring a 2D picture of an object to be inspected; inputting the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union threshold increases level by level, and the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.

Description

Method and system for defect detection
Technical Field
This application relates to the field of artificial intelligence, and in particular to a method and system for defect detection.
Background Art
In modern industrial manufacturing, defect detection of industrial products is a key part of product quality inspection and is very important for improving product processes and raising production-line yield.
In traditional industrial manufacturing, however, defect detection of industrial products is usually performed by manual observation, which makes it difficult to observe defects directly when the object to be inspected is very small, and suffers from high detection cost and low quality-inspection efficiency.
Summary of the Invention
In view of the above problems, this application provides a method and system for defect detection that can detect defects even when the object to be inspected is very small, while significantly reducing the detection cost and greatly improving quality-inspection efficiency.
In a first aspect, this application provides a method for defect detection, comprising: acquiring a two-dimensional (2D) picture of an object to be inspected; inputting the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
In the technical solutions of the embodiments of this application, by designing an instance segmentation network based on a multi-level feature extraction architecture, good segmentation and detection results can still be achieved in segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill.
In some embodiments, the method further comprises: acquiring a three-dimensional (3D) picture of the object to be inspected; preprocessing the acquired 3D picture to obtain an image with depth information of the object to be inspected; inputting the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result. By combining the results of 2D and 3D inspection, depth-aware segmentation results tailored to actual needs are obtained, making the detection results more accurate; and when a defect is not obvious in its 2D appearance, fusing the 2D and 3D detection results achieves zero missed detections while greatly reducing the probability of overkill.
In some embodiments, performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules further comprises: using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; filling the segmented 2D defect mask and 3D defect mask into the aligned picture; and performing a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result. Aligning the pictures and filling in the corresponding masks when fusing the 2D and 3D segmentation results allows defects to be detected more intuitively and accurately.
In some embodiments, the multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, wherein the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level. Setting a lower IoU threshold in the first stage of positive/negative sample sampling effectively avoids the overfitting caused by the imbalance between positive and negative samples, while raising the IoU threshold level by level performs progressively finer sampling, yielding higher feature-extraction precision and more accurate defect detection results.
In some embodiments, the IoU threshold is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level. Setting the per-level IoU thresholds of the multi-level feature extraction instance segmentation network to 0.3, 0.4, and 0.5 allows the positive and negative samples to be sampled ever more finely level by level, making the defect detection results more accurate.
In some embodiments, the defect mask output by the multi-level feature extraction instance segmentation network is obtained as a weighted average of the defect masks output by the instance segmentation network at each level. Taking a weighted average of the defect masks output by the network at each level makes the defect detection results more accurate and greatly reduces the probability of overkill.
In some embodiments, judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises: outputting the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region. Applying different defect rules to different types of defects allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
In some embodiments, the defect types include pit defects and protrusion defects, and judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises: when the defect type of the defect region is a pit defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, when the defect type of the defect region is a protrusion defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects. Applying different defect rules to the different categories of pit defects and protrusion defects allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
In a second aspect, this application provides a system for defect detection, the system comprising: an image acquisition module configured to acquire a two-dimensional (2D) picture of an object to be inspected; a defect segmentation module configured to input the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and a defect judgment module configured to judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
In the technical solutions of the embodiments of this application, by designing an instance segmentation network based on a multi-level feature extraction architecture, good segmentation and detection results can still be achieved in segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill.
In some embodiments, the image acquisition module is further configured to acquire a three-dimensional (3D) picture of the object to be inspected; the defect segmentation module is further configured to: preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected; and input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and the defect judgment module is further configured to perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result. By combining the results of 2D and 3D inspection, depth-aware segmentation results tailored to actual needs are obtained, making the detection results more accurate; and when a defect is not obvious in its 2D appearance, fusing the 2D and 3D detection results achieves zero missed detections while greatly reducing the probability of overkill.
In some embodiments, the defect judgment module is further configured to: use the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; fill the segmented 2D defect mask and 3D defect mask into the aligned picture; and perform a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result. Aligning the pictures and filling in the corresponding masks when fusing the 2D and 3D segmentation results allows defects to be detected more intuitively and accurately.
In some embodiments, the multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, wherein the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level. Setting a lower IoU threshold in the first stage of positive/negative sample sampling effectively avoids the overfitting caused by the imbalance between positive and negative samples, while raising the IoU threshold level by level performs progressively finer sampling, yielding higher feature-extraction precision and more accurate defect detection results.
In some embodiments, the IoU threshold is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level. Setting the per-level IoU thresholds of the multi-level feature extraction instance segmentation network to 0.3, 0.4, and 0.5 allows the positive and negative samples to be sampled ever more finely level by level, making the defect detection results more accurate.
In some embodiments, the defect mask output by the multi-level feature extraction instance segmentation network is obtained as a weighted average of the defect masks output by the instance segmentation network at each level. Taking a weighted average of the defect masks output by the network at each level makes the defect detection results more accurate and greatly reduces the probability of overkill.
In some embodiments, the defect judgment module is further configured to: output the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region. Applying different defect rules to different types of defects allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
In some embodiments, the defect types include pit defects and protrusion defects, and the defect judgment module is further configured to: when the defect type of the defect region is a pit defect, output the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, when the defect type of the defect region is a protrusion defect, output the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects. Applying different defect rules to the different categories of pit defects and protrusion defects allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
In a third aspect, this application provides an apparatus for defect detection, the apparatus comprising: a memory storing computer-executable instructions; and at least one processor, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to perform the following operations: acquiring a two-dimensional (2D) picture of the object to be inspected; inputting the acquired 2D picture into the trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
In the technical solutions of the embodiments of this application, by designing an instance segmentation network based on a multi-level feature extraction architecture, good segmentation and detection results can still be achieved in segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill.
In some embodiments, the computer-executable instructions, when executed, further cause the at least one processor to perform the following operations: acquiring a three-dimensional (3D) picture of the object to be inspected; preprocessing the acquired 3D picture to obtain an image with depth information of the object to be inspected; inputting the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result. By combining the results of 2D and 3D inspection, depth-aware segmentation results tailored to actual needs are obtained, making the detection results more accurate; and when a defect is not obvious in its 2D appearance, fusing the 2D and 3D detection results achieves zero missed detections while greatly reducing the probability of overkill.
In some embodiments, the computer-executable instructions, when executed, further cause the at least one processor to perform the following operations: using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture; filling the segmented 2D defect mask and 3D defect mask into the aligned picture; and performing a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result. Aligning the pictures and filling in the corresponding masks when fusing the 2D and 3D segmentation results allows defects to be detected more intuitively and accurately.
In some embodiments, the multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, wherein the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level. Setting a lower IoU threshold in the first stage of positive/negative sample sampling effectively avoids the overfitting caused by the imbalance between positive and negative samples, while raising the IoU threshold level by level performs progressively finer sampling, yielding higher feature-extraction precision and more accurate defect detection results.
In some embodiments, the IoU threshold is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level. Setting the per-level IoU thresholds of the multi-level feature extraction instance segmentation network to 0.3, 0.4, and 0.5 allows the positive and negative samples to be sampled ever more finely level by level, making the defect detection results more accurate.
In a fourth aspect, this application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a computing device, cause the computing device to implement the method for defect detection of any of the preceding aspects.
The foregoing is merely an overview of the technical solutions of this application. To enable a clearer understanding of the technical means of this application so that it can be implemented in accordance with the contents of the specification, and to make the above and other objects, features, and advantages of this application more readily apparent, specific embodiments of this application are set forth below.
Brief Description of the Drawings
So that the manner in which the above features of this application are achieved can be understood in detail, the content briefly summarized above may be described more specifically with reference to the embodiments, some of whose aspects are illustrated in the accompanying drawings. It should be noted, however, that the drawings illustrate only certain typical aspects of this application and should therefore not be regarded as limiting its scope, since the description admits other equally effective aspects.
FIG. 1 is an example flowchart of a method for defect detection according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of an instance segmentation network based on a multi-level feature extraction architecture according to an embodiment of this application;
FIG. 3 is a schematic diagram of a 2D defect segmentation result according to an embodiment of this application;
FIG. 4 is an example flowchart of a method for defect detection according to another embodiment of this application;
FIG. 5 is a schematic diagram of a pseudo-color image according to an embodiment of this application;
FIG. 6 is a schematic diagram of a 3D defect segmentation result according to an embodiment of this application;
FIG. 7 is an example flowchart of fusion processing of 2D and 3D segmentation results according to an embodiment of this application;
FIG. 8 is a schematic architecture diagram of a system for defect detection according to an embodiment of this application;
FIG. 9 is a schematic architecture diagram of an apparatus for defect detection according to another embodiment of this application.
The reference numerals used in the detailed description are as follows:
defect detection system 800, image acquisition module 801, defect segmentation module 802, defect judgment module 803;
apparatus 900, memory 901, processor 902.
Detailed Description
Embodiments of the technical solutions of this application are described in detail below with reference to the accompanying drawings. The following embodiments are intended only to illustrate the technical solutions of this application more clearly; they therefore serve merely as examples and cannot be used to limit the scope of protection of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field to which this application belongs; the terms used herein are intended only to describe specific embodiments and not to limit this application; the terms "comprising" and "having" in the specification and claims of this application and in the above description of the drawings, as well as any variations thereof, are intended to cover non-exclusive inclusion.
In the description of the embodiments of this application, "a plurality of" means two or more, unless expressly and specifically defined otherwise. Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of this application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
At present, judging from the development of the market, batteries are being applied ever more widely. Energy-storage batteries are used in energy-storage power-supply systems such as hydroelectric, thermal, wind, and solar power stations, and traction batteries are also widely used in electric vehicles such as electric bicycles, electric motorcycles, and electric cars, as well as in many fields such as military equipment and aerospace. As the application fields of traction batteries keep expanding, market demand for them also keeps growing. Seal nail welding is an indispensable step in traction battery production, and whether the seal nail weld meets the standard directly affects battery safety. The seal nail welding region is called the weld bead; owing to variations in temperature, environment, laser angle, and so on during welding, defects such as burst lines (pits) and melted beads often occur on the weld bead.
At present, with the development of machine vision and industrial automation, methods exist for detecting defects automatically based on artificial intelligence; however, when the object to be inspected is very small or the defect is hard to distinguish, conventional feature extraction networks cannot detect defects well, and missed detections or overkill readily occur. For example, an existing solution uses Res2Net as the feature extraction module, but because some seal nail defects are very inconspicuous in shape and may be very small, training with Res2Net alone easily produces such an imbalance between positive and negative samples that the model fails to converge adequately, limiting its ability to detect hard samples. In addition, existing solutions use only two-dimensional (2D) pictures for defect detection, but the pits and protrusions occurring in laser welding are defects caused by depth problems, and their 2D appearance may be very inconspicuous; for defects with depth, detection using only 2D pictures is therefore highly prone to misjudgment.
Based on the above considerations, in order to solve the problem that inconspicuous defects and depth-only defects are prone to missed detection or overkill in defect detection, the inventors conducted in-depth research and designed a multi-level backbone network structure that extracts defect features more accurately, together with a defect detection algorithm that fuses the results of 2D and 3D instance segmentation algorithms. By using an instance segmentation network based on a multi-level feature extraction architecture, this application can still achieve good segmentation and detection results in segmentation models for which positive samples are particularly hard to obtain or particularly few. In addition, by combining the results of 2D and 3D inspection and fusing them in a customizable way, this application obtains depth-related model results tailored to actual needs, making the detection of defects with depth more accurate. Compared with previous algorithms, this application reduces the missed detection of defects such as seal nail pits and melted beads to 0% and reduces overkill to within 0.02%. Moreover, this application allows the thresholds of the defect specifications used during detection to be adjusted in a customized way, making the detection algorithm more flexible.
It will be appreciated that this application can be applied to the field of defect detection combined with artificial intelligence (AI); the method and system for defect detection disclosed in the embodiments of this application can be used, without limitation, for defect detection of seal nail weld beads and also for defect detection of various other products in modern industrial manufacturing.
For ease of explanation, the following embodiments are described using defect detection of seal nail weld beads as an example.
FIG. 1 is an example flowchart of a method 100 for defect detection according to an embodiment of this application. According to an embodiment of this application, refer to FIG. 1, and further to FIG. 2 and FIG. 3, where FIG. 2 is a schematic structural diagram of an instance segmentation network based on a multi-level feature extraction architecture according to an embodiment of this application, and FIG. 3 is a schematic diagram of a 2D defect segmentation result according to an embodiment of this application. The method 100 starts at step 101: acquiring a two-dimensional (2D) picture of the object to be inspected. At step 102, the acquired 2D picture is input into a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and the segmented 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region. At step 103, the segmented 2D defect mask is judged based on predefined defect rules to output a defect detection result.
The network architecture of the defect segmentation model in step 102 is shown in FIG. 2; it is obtained by cascading three levels of instance segmentation networks. Usually, during the training phase of an instance segmentation network, the intersection-over-union (IoU) between each candidate box (proposal) and the ground-truth box (gt) is first computed; with a manually set IoU threshold (for example, typically 0.5), the candidate boxes are divided into positive samples (foreground) and negative samples (background), and these positive and negative samples are sampled so that their ratio is as close to 1:3 as possible (their total is usually 128). These candidate boxes (e.g., usually 128) are then fed into the ROI (region of interest) pooling layer, and finally category classification and bounding-box regression are performed. In the present application scenario, however, because pit features are very inconspicuous, directly setting the IoU threshold to 0.5 causes two problems: 1) very few candidate boxes satisfy this threshold condition, which easily leads to overfitting; and 2) a severe mismatch problem, which is inherent in the current segmentation structure and becomes even more serious when the IoU threshold is set high. Both problems degrade the performance of defect feature extraction. Therefore, in this application the extraction of candidate boxes is divided into three stages, with a lower IoU threshold set in the first stage and the threshold raised stage by stage. The 2D defect mask output by the defect segmentation model of FIG. 2 is shown in FIG. 3, which for clarity shows the 2D defect mask segmented during inspection of a seal nail weld bead superimposed on the original 2D picture; the gray region at the lower right is the colored form of the segmented mask and indicates the detected pit defect, and the segmentation result is fairly accurate.
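For illustration, the conventional single-threshold sampling step described above can be sketched minimally in Python. This is an assumption-laden sketch, not code from the application: boxes are assumed to be (x1, y1, x2, y2) arrays, and the helper names are hypothetical.

    # Hypothetical sketch of single-threshold positive/negative sampling.
    import numpy as np

    def iou_matrix(boxes, gt):
        # pairwise IoU between N candidate boxes and M ground-truth boxes
        x1 = np.maximum(boxes[:, None, 0], gt[None, :, 0])
        y1 = np.maximum(boxes[:, None, 1], gt[None, :, 1])
        x2 = np.minimum(boxes[:, None, 2], gt[None, :, 2])
        y2 = np.minimum(boxes[:, None, 3], gt[None, :, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_p = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
        return inter / (area_p[:, None] + area_g[None, :] - inter)

    def sample_proposals(boxes, gt, iou_thr=0.5, total=128, pos_fraction=0.25):
        # label each candidate box by its best IoU against any ground-truth box
        best = iou_matrix(boxes, gt).max(axis=1)
        pos = np.flatnonzero(best >= iou_thr)   # positive samples (foreground)
        neg = np.flatnonzero(best < iou_thr)    # negative samples (background)
        # sample so that positives:negatives is roughly 1:3, 128 boxes in total
        n_pos = min(len(pos), int(total * pos_fraction))
        n_neg = min(len(neg), total - n_pos)
        keep_pos = np.random.choice(pos, n_pos, replace=False)
        keep_neg = np.random.choice(neg, n_neg, replace=False)
        return np.concatenate([keep_pos, keep_neg])

With a single threshold of 0.5, the positive set can be nearly empty for inconspicuous pits, which is exactly the overfitting risk that the staged thresholds described above are meant to address.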
Thus, in defect detection, by using an instance segmentation network based on a multi-level feature extraction architecture, good segmentation and detection results can still be achieved in instance segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill.
According to an embodiment of this application, optionally, refer to FIG. 4 to FIG. 6: FIG. 4 is an example flowchart of a method 400 for defect detection according to another embodiment of this application, FIG. 5 is a schematic diagram of a rendered pseudo-color image according to an embodiment of this application, and FIG. 6 is a schematic diagram of a 3D defect segmentation result according to an embodiment of this application. Steps 401 and 402 of the method 400 are the same as steps 101 and 102 of the method 100 and are not repeated here. Further, at step 403, a three-dimensional (3D) picture of the object to be inspected is acquired. At step 404, the acquired 3D picture is preprocessed to obtain an image with depth information of the object to be inspected. At step 405, the obtained image with depth information is input into the trained defect segmentation model to obtain a segmented 3D defect mask, where the 3D defect mask includes information about the defect depth of the segmented defect region. At step 406, a fusion judgment is performed on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
A 3D camera (for example, a depth camera) is usually used to acquire the 3D image of the object to be inspected, and the acquired 3D image may be saved in the form of a depth image, in which the gray value represents the depth information in the Z direction. The depth image and the 2D image generally have positional consistency; in other words, the pixels of the depth image correspond one-to-one to the pixels of the 2D image. So that the depth information can be distinguished more easily by humans and machines, the acquired depth image usually needs to be preprocessed to obtain an image with depth information. In some cases, the captured depth image may be rendered to obtain a pseudo-color image, as shown in FIG. 5. For example, a predefined colormap may be used (for example, applyColorMap, the pseudo-color function in OpenCV) to pseudo-colorize the grayscale image so that the depth information is easier for humans and machines to perceive. The 3D defect mask output by the defect segmentation model of FIG. 2 is shown in FIG. 6, which for clarity shows the 3D defect mask segmented during inspection of a seal nail weld bead superimposed on the original 3D picture; the gray region in the upper left corner is the contour of the segmented mask, indicating the depth-map-based segmentation information of the relevant defect, from which the type, area, and depth of the defect can be judged directly.
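As a concrete illustration of this pseudo-colorization step, the following minimal OpenCV sketch renders a single-channel depth map with applyColorMap; the file names and the choice of COLORMAP_JET are assumptions made for the example, not values from the application.

    # Hypothetical sketch: pseudo-color rendering of a depth map with OpenCV.
    import cv2
    import numpy as np

    depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # e.g. a 16-bit depth map
    # stretch the depth range to 0-255 so the whole colormap is used
    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    pseudo = cv2.applyColorMap(depth_8u, cv2.COLORMAP_JET)  # 3-channel pseudo-color image
    cv2.imwrite("depth_pseudocolor.png", pseudo)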
By combining the results of 2D and 3D inspection, depth-aware segmentation results tailored to actual needs are obtained, making the detection results more accurate; and when a defect is not obvious in its 2D appearance (for example, a pit whose 2D appearance differs little from a normal weld bead), fusing the 2D and 3D detection results achieves zero missed detections while greatly reducing the probability of overkill.
According to an embodiment of this application, optionally, continue to refer to FIGS. 3, 4, and 6, and further refer to FIG. 7, which is an example flowchart of a fusion process 700 for 2D and 3D segmentation results according to an embodiment of this application. The post-processing of the 2D and 3D segmentation results starts at block 701: using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture. At block 702, the segmented 2D defect mask and 3D defect mask are filled into the aligned picture. At block 703, a fusion judgment is performed on the segmented 2D defect mask and 3D defect mask on the aligned picture based on the predefined defect rules to output a defect detection result.
Because a fusion judgment has to be performed on the segmented 2D and 3D defect masks, the originally acquired 2D and 3D pictures must be aligned at pixel level to locate the defect position, so that during the fusion judgment it can be determined whether the defect is detected in both the 2D and the 3D picture. Specifically, the acquired 2D and 3D pictures are first preprocessed separately: the 2D picture is converted to grayscale to obtain a 2D grayscale picture, and a brightness picture is separated from the 3D picture. Then, three identical seal nail weld positions are selected in advance in the resulting 2D grayscale picture and 3D brightness picture, a spatial transformation is performed (for example, using OpenCV), and the coordinate transformation equations are solved to obtain the coordinate transformation matrix between the 2D picture and the 3D picture; using this matrix, the 2D and 3D pictures are transformed and aligned at pixel level to obtain an aligned, superimposed picture. After alignment, the segmented 2D defect mask and 3D defect mask can be filled into the corresponding positions of the aligned superimposed picture. A fusion judgment can then be performed on the segmented 2D defect mask and 3D defect mask based on the predefined defect rules to output a defect detection result. As shown in FIG. 7, the 2D defect mask and the 3D defect mask include the same defect region (the region marked in gray), and the type of this defect region is a pit; this indicates that the pit in the 2D detection result has been re-detected in the 3D detection, so the output defect detection result is the defect class.
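A minimal sketch of this alignment step in OpenCV, assuming three manually selected corresponding points (all coordinates and file names below are placeholders), might look as follows:

    # Hypothetical sketch: solve the 2D-to-3D coordinate transformation matrix
    # from three corresponding points and align the pictures at pixel level.
    import cv2
    import numpy as np

    pts_2d = np.float32([[120, 80], [410, 95], [260, 330]])  # weld points in the 2D grayscale picture
    pts_3d = np.float32([[132, 71], [422, 88], [274, 325]])  # the same positions in the 3D brightness picture

    M = cv2.getAffineTransform(pts_2d, pts_3d)  # 2x3 coordinate transformation matrix

    gray_2d = cv2.imread("weld_2d.png", cv2.IMREAD_GRAYSCALE)
    lum_3d = cv2.imread("weld_3d_luminance.png", cv2.IMREAD_GRAYSCALE)
    h, w = lum_3d.shape
    aligned_2d = cv2.warpAffine(gray_2d, M, (w, h))  # pixel-level alignment

    # the 2D defect mask can be warped with the same matrix before being
    # filled into the aligned overlay picture alongside the 3D mask
    overlay = cv2.addWeighted(aligned_2d, 0.5, lum_3d, 0.5, 0)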
It can thus be seen that by aligning the pictures and filling in the corresponding masks when fusing the 2D and 3D segmentation results, the defect region can be located simultaneously in the 2D and 3D pictures, so that defects can be detected more intuitively and accurately.
According to an embodiment of this application, optionally, continuing to refer to FIG. 2, the defect segmentation model used for defect segmentation in this application is trained by a multi-level feature extraction instance segmentation network, which is obtained by cascading three levels of instance segmentation networks, where the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level.
As mentioned above, in this application the extraction of candidate boxes is divided into three stages. In the first stage a lower IoU threshold is set, for example 0.2-0.4; at this stage more positive samples are obtained than by directly setting the IoU threshold to 0.5, at the cost of some precision, so this stage can be understood as a preliminary screening. In the second stage, sampling continues on the basis of the previously extracted candidate boxes, with the sampling IoU threshold set to 0.3-0.45, yielding a finer sampling on top of the existing one. In the third stage, sampling continues on the results of the second stage, with the sampling IoU threshold set to 0.5-0.7. Finally, the result of the third level is output directly to obtain the final instance segmentation result.
According to an embodiment of this application, optionally, continuing to refer to FIG. 2, the IoU threshold used for positive/negative sample sampling when training the defect segmentation model is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level.
By setting a lower IoU threshold in the first stage of positive/negative sample sampling, the overfitting caused by the imbalance between positive and negative samples is effectively avoided, while raising the IoU threshold level by level performs progressively finer sampling, yielding higher feature-extraction precision.
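Sketched in Python under the same assumptions as the sampling sketch above (reusing its hypothetical sample_proposals helper, with a stand-in refine function for the per-stage box head), the level-by-level threshold schedule could look like this:

    # Hypothetical sketch: each stage re-samples the candidate boxes surviving
    # the previous stage with a stricter IoU threshold (one embodiment above
    # uses 0.3 / 0.4 / 0.5).
    IOU_SCHEDULE = [0.3, 0.4, 0.5]  # level 1, level 2, level 3

    def cascade_proposals(proposals, gt, refine, schedule=IOU_SCHEDULE):
        for thr in schedule:
            keep = sample_proposals(proposals, gt, iou_thr=thr)  # sketch above
            proposals = refine(proposals[keep])  # progressively finer candidates
        return proposals  # the third level's output is used as the final result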
According to an embodiment of this application, optionally, continuing to refer to FIG. 2, the defect mask output by the multi-level feature extraction instance segmentation network may be obtained as a weighted average of the defect masks output by the instance segmentation network at each level.
Taking a weighted average of the defect masks output by the network at each level makes the defect detection results more accurate and greatly reduces the probability of overkill.
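A minimal sketch of such a fusion, with illustrative weights rather than values taken from this application:

    # Hypothetical sketch: fuse per-level mask probabilities by weighted
    # average before thresholding into the final binary defect mask.
    import numpy as np

    def fuse_masks(level_masks, weights=(0.2, 0.3, 0.5), thr=0.5):
        # level_masks: three HxW probability maps, one per cascade level
        w = np.asarray(weights, dtype=np.float32)
        w = w / w.sum()  # normalize so the result stays a probability map
        fused = sum(wi * m for wi, m in zip(w, level_masks))
        return fused >= thr  # final binary defect mask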
According to an embodiment of this application, optionally, after defect segmentation, judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises: outputting the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of that defect region.
In some examples, the predefined defect rules may include identifying the inspected object as the defect class only when the defect is detected in both the 2D and 3D detection results and the size or depth of its defect region is greater than a predefined threshold, and otherwise identifying it as the normal class. In other examples, the predefined defect rules may include identifying the inspected object as the defect class when a defect from the 2D detection result is re-detected in the 3D detection, or when the depth of the defect is greater than a predefined threshold, and otherwise identifying it as the normal class. The predefined defect rules may differ for different defect types.
According to an embodiment of this application, optionally, the defect types include pit defects and protrusion defects, and judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises: when the defect type of the defect region is a pit defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or, when the defect type of the defect region is a protrusion defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
For example, in seal nail weld bead defect detection, for a pit-type defect, once the pit from the 2D detection result is re-detected in the 3D detection, or the size or depth of the defect region in the 3D detection exceeds the threshold requirement for pits in the specification, it is judged to be the defect class (also referred to as NG). For a protrusion-type defect, after it is detected in the 2D detection, it is judged to be the defect class if, in the 3D detection, both the size and the depth of the defect region exceed the threshold requirements for protrusions in the specification.
By applying different defect rules to different types of defects, the thresholds of the defect specifications used during detection can be adjusted in a customized way, making the detection algorithm more flexible.
FIG. 8 is a schematic architecture diagram of a system 800 for defect detection according to an embodiment of this application. According to an embodiment of this application, referring to FIG. 8, the system 800 includes at least an image acquisition module 801, a defect segmentation module 802, and a defect judgment module 803. The image acquisition module 801 may be used to acquire a two-dimensional (2D) picture of the object to be inspected. The defect segmentation module 802 may be used to input the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region. The defect judgment module 803 may be used to judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
Corresponding to the method 100 for defect detection described above, the system for defect detection according to this application, by using an instance segmentation network based on a multi-level feature extraction architecture, can still achieve good segmentation and detection results in instance segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill.
According to an embodiment of this application, optionally, the image acquisition module 801 may be further configured to acquire a three-dimensional (3D) picture of the object to be inspected. The defect segmentation module 802 may be further configured to: preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected, and input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, where the 3D defect mask includes information about the defect depth of the segmented defect region. The defect judgment module 803 may be further configured to perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
Thus, when a defect is not obvious in its 2D appearance, fusing the 2D and 3D detection results achieves zero missed detections while greatly reducing the probability of overkill. For specific details of the operations performed by the modules of the system for defect detection according to this application, see the explanations given above with respect to FIG. 1 to FIG. 7, which are not repeated here for brevity.
Those skilled in the art will understand that the system of this disclosure and its modules may be implemented in hardware or in software, and that the modules may be merged or combined in any suitable manner.
FIG. 9 is a schematic architecture diagram of an apparatus 900 for defect detection according to another embodiment of this application. According to an embodiment of this application, referring to FIG. 9, the apparatus 900 may include a memory 901 and at least one processor 902. The memory 901 may store computer-executable instructions. The computer-executable instructions, when executed by the at least one processor 902, cause the apparatus 900 to perform the following operations: acquiring a two-dimensional (2D) picture of the object to be inspected; inputting the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, where the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
The memory 901 may include RAM, ROM, or a combination thereof. In some cases, the memory 901 may in particular contain a BIOS that can control basic hardware or software operations, such as interaction with peripheral components or devices. The processor 902 may include an intelligent hardware device (for example, a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, discrete gate or transistor logic components, discrete hardware components, or any combination thereof).
Thus, corresponding to the method 100 for defect detection described above, the apparatus for defect detection according to this application, by using an instance segmentation network based on a multi-level feature extraction architecture, can still achieve good segmentation and detection results in instance segmentation models for which positive samples are particularly hard to obtain or particularly few, realizing zero missed detections while greatly reducing the probability of overkill. Here, the computer-executable instructions, when executed by the at least one processor 902, cause the at least one processor 902 to perform the various operations described above with reference to FIGS. 1-7, which are not repeated here for brevity.
The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The functions described herein may be implemented in hardware, in software executed by a processor, in firmware, or in any combination thereof. If implemented in software executed by a processor, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Other examples and implementations fall within the scope of this disclosure and the appended claims. For example, owing to the nature of software, the functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination thereof. Features implementing functions may also be physically located at various positions, including being distributed so that portions of the functions are implemented at different physical locations.
Although this application has been described with reference to preferred embodiments, various improvements can be made to it and components thereof can be replaced with equivalents without departing from the scope of this application. In particular, as long as there is no structural conflict, the technical features mentioned in the various embodiments can be combined in any manner. This application is not limited to the specific embodiments disclosed herein, but includes all technical solutions falling within the scope of the claims.

Claims (18)

  1. A method for defect detection, the method comprising:
    acquiring a two-dimensional (2D) picture of an object to be inspected;
    inputting the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and
    judging the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  2. The method of claim 1, wherein the method further comprises:
    acquiring a three-dimensional (3D) picture of the object to be inspected;
    preprocessing the acquired 3D picture to obtain an image with depth information of the object to be inspected;
    inputting the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and
    performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  3. The method of claim 2, wherein performing a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules further comprises:
    using the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture;
    filling the segmented 2D defect mask and 3D defect mask into the aligned picture; and
    performing a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result.
  4. The method of any one of claims 1-3, wherein the multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, and wherein the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level.
  5. The method of claim 4, wherein the IoU threshold is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level.
  6. The method of any one of claims 1-5, wherein the defect mask output by the multi-level feature extraction instance segmentation network is obtained as a weighted average of the defect masks output by the instance segmentation network at each level.
  7. The method of any one of claims 1-6, wherein judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises:
    outputting the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region.
  8. The method of any one of claims 1-7, wherein the defect type includes a pit defect and a protrusion defect, and judging the segmented defect mask based on predefined defect rules to output a defect detection result further comprises:
    when the defect type of the defect region is a pit defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or
    when the defect type of the defect region is a protrusion defect, outputting the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
  9. A system for defect detection, the system comprising:
    an image acquisition module configured to acquire a two-dimensional (2D) picture of an object to be inspected;
    a defect segmentation module configured to input the acquired 2D picture into a trained defect segmentation model to obtain a segmented 2D defect mask, wherein the defect segmentation model is trained on a multi-level feature extraction instance segmentation network whose intersection-over-union (IoU) threshold increases level by level, and wherein the 2D defect mask includes information about the defect type, defect size, and defect position of the segmented defect region; and
    a defect judgment module configured to judge the segmented 2D defect mask based on predefined defect rules to output a defect detection result.
  10. The system of claim 9, wherein
    the image acquisition module is further configured to acquire a three-dimensional (3D) picture of the object to be inspected;
    the defect segmentation module is further configured to:
    preprocess the acquired 3D picture to obtain an image with depth information of the object to be inspected; and
    input the obtained image with depth information into the trained defect segmentation model to obtain a segmented 3D defect mask, wherein the 3D defect mask includes information about the defect depth of the segmented defect region; and
    the defect judgment module is further configured to perform a fusion judgment on the segmented 2D defect mask and 3D defect mask based on predefined defect rules to output a defect detection result.
  11. The system of claim 10, wherein the defect judgment module is further configured to:
    use the coordinate transformation matrix between the acquired 2D picture and 3D picture to align the 2D picture with the 3D picture at pixel level to obtain an aligned picture;
    fill the segmented 2D defect mask and 3D defect mask into the aligned picture; and
    perform a fusion judgment on the segmented 2D defect mask and 3D defect mask on the aligned picture based on predefined defect rules to output a defect detection result.
  12. The system of any one of claims 9-11, wherein the multi-level feature extraction instance segmentation network is obtained by cascading three levels of instance segmentation networks, and wherein the IoU threshold used for positive/negative sample sampling is set to 0.2-0.4 in the first level, 0.3-0.45 in the second level, and 0.5-0.7 in the third level.
  13. The system of claim 12, wherein the IoU threshold is set to 0.3 in the first level, 0.4 in the second level, and 0.5 in the third level.
  14. The system of any one of claims 9-13, wherein the defect mask output by the multi-level feature extraction instance segmentation network is obtained as a weighted average of the defect masks output by the instance segmentation network at each level.
  15. The system of any one of claims 9-14, wherein the defect judgment module is further configured to:
    output the defect class as the defect detection result when the size or depth of the segmented defect region is greater than the predefined threshold for the defect type of the defect region.
  16. The system of any one of claims 9-15, wherein the defect type includes a pit defect and a protrusion defect, and the defect judgment module is further configured to:
    when the defect type of the defect region is a pit defect, output the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region, or if the depth of the defect region is greater than the predefined threshold for pit defects; or
    when the defect type of the defect region is a protrusion defect, output the defect class as the defect detection result if both the segmented 2D defect mask and 3D defect mask include the defect region and the size and depth of the defect region are greater than the predefined threshold for protrusion defects.
  17. An apparatus for defect detection, the apparatus comprising:
    a memory storing computer-executable instructions; and
    at least one processor, wherein the computer-executable instructions, when executed by the at least one processor, cause the apparatus to implement the method for defect detection of any one of claims 1-8.
  18. A computer-readable storage medium storing computer-executable instructions which, when executed by a computing device, cause the computing device to implement the method for defect detection of any one of claims 1-8.
PCT/CN2021/135264 2021-12-03 2021-12-03 Method and system for defect detection WO2023097637A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2021/135264 WO2023097637A1 (zh) 2021-12-03 2021-12-03 Method and system for defect detection
CN202180066474.4A CN116547713A (zh) 2021-12-03 2021-12-03 Method and system for defect detection
EP21962758.5A EP4227900A4 (en) 2021-12-03 2021-12-03 Defect detection method and system
US18/196,900 US11922617B2 (en) 2021-12-03 2023-05-12 Method and system for defect detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/135264 WO2023097637A1 (zh) 2021-12-03 2021-12-03 Method and system for defect detection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/196,900 Continuation US11922617B2 (en) 2021-12-03 2023-05-12 Method and system for defect detection

Publications (1)

Publication Number Publication Date
WO2023097637A1 true WO2023097637A1 (zh) 2023-06-08

Family

ID=86611311

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135264 WO2023097637A1 (zh) 2021-12-03 2021-12-03 一种用于缺陷检测的方法和系统

Country Status (4)

Country Link
US (1) US11922617B2 (zh)
EP (1) EP4227900A4 (zh)
CN (1) CN116547713A (zh)
WO (1) WO2023097637A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721098A (zh) * 2023-08-09 2023-09-08 常州微亿智造科技有限公司 Defect detection method and defect detection apparatus in industrial inspection

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011293A (zh) * 2023-09-28 2023-11-07 山洋自动化设备(苏州)有限公司 Product packaging quality detection method and system
CN117541591B (zh) * 2024-01-10 2024-03-26 深圳市恒义建筑技术有限公司 Non-destructive detection method for weld defects of steel structures and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179229A (zh) * 2019-12-17 2020-05-19 中信重工机械股份有限公司 Industrial CT defect detection method based on deep learning
CN111768365A (zh) * 2020-05-20 2020-10-13 太原科技大学 Solar cell defect detection method based on multi-feature fusion of convolutional neural networks
CN112767345A (zh) * 2021-01-16 2021-05-07 北京工业大学 Method for detecting and segmenting eutectic defects in DD6 single-crystal superalloy
US20210150227A1 (en) * 2019-11-15 2021-05-20 Argo AI, LLC Geometry-aware instance segmentation in stereo image capture processes
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
CN113658206A (zh) * 2021-08-13 2021-11-16 江南大学 Plant leaf segmentation method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345911B (zh) * 2018-04-16 2021-06-29 东北大学 Steel plate surface defect detection method based on multi-level features of convolutional neural networks
WO2021173138A1 (en) * 2020-02-27 2021-09-02 Lm Wind Power A/S System and method for monitoring wind turbine rotor blades using infrared imaging and machine learning
US11449977B2 (en) * 2020-07-29 2022-09-20 Applied Materials Israel Ltd. Generating training data usable for examination of a semiconductor specimen
EP3971556A1 (en) * 2020-09-17 2022-03-23 Evonik Operations GmbH Qualitative or quantitative characterization of a coating surface
CN112766184B (zh) * 2021-01-22 2024-04-16 东南大学 Remote sensing target detection method based on multi-level feature selection convolutional neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210150227A1 (en) * 2019-11-15 2021-05-20 Argo AI, LLC Geometry-aware instance segmentation in stereo image capture processes
CN111179229A (zh) * 2019-12-17 2020-05-19 中信重工机械股份有限公司 Industrial CT defect detection method based on deep learning
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
CN111768365A (zh) * 2020-05-20 2020-10-13 太原科技大学 Solar cell defect detection method based on multi-feature fusion of convolutional neural networks
CN112767345A (zh) * 2021-01-16 2021-05-07 北京工业大学 Method for detecting and segmenting eutectic defects in DD6 single-crystal superalloy
CN113658206A (zh) * 2021-08-13 2021-11-16 江南大学 Plant leaf segmentation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4227900A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721098A (zh) * 2023-08-09 2023-09-08 常州微亿智造科技有限公司 Defect detection method and defect detection apparatus in industrial inspection
CN116721098B (zh) * 2023-08-09 2023-11-14 常州微亿智造科技有限公司 Defect detection method and defect detection apparatus in industrial inspection

Also Published As

Publication number Publication date
EP4227900A1 (en) 2023-08-16
EP4227900A4 (en) 2024-01-24
CN116547713A (zh) 2023-08-04
US20230281785A1 (en) 2023-09-07
US11922617B2 (en) 2024-03-05

Similar Documents

Publication Publication Date Title
WO2023097637A1 (zh) Method and system for defect detection
US10997438B2 (en) Obstacle detection method and apparatus
CN107578418B Indoor scene contour detection method fusing color and depth information
CN111222478A Construction site safety protection detection method and system
TWI759651B Machine-learning-based object recognition system and method
CN110309765B Efficient detection method for moving targets in video
CN111738336A Image detection method based on multi-scale feature fusion
CN109101932A Deep learning algorithm fusing multi-task and proximity information based on object detection
CN111461036A Real-time pedestrian detection method using background modeling to augment data
CN107507140B Feature-fusion-based method for suppressing vehicle shadow interference in open highway scenes
CN115410039A Coal foreign-object detection system and method based on the improved YOLOv5 algorithm
CN116630264A Detection method for seal nail welding defects, storage medium, and electronic device
CN117078603A Semiconductor laser chip damage detection method and system based on an improved YOLO model
CN110956616B Stereo-vision-based target detection method and system
CN111274872B Template-matching-based method for discriminating dynamic irregular multiple supervision regions in video surveillance
CN116645568A Target detection method and apparatus, electronic device, and storage medium
Dong et al. Moving object and shadow detection based on RGB color space and edge ratio
CN107169440A Road detection method based on a graph model
KR101669447B1 Image-based driver drowsiness recognition system and recognition method
KR20140062334A Obstacle detection apparatus and method
CN115311625A Monitoring method for judging whether a target contacts a power transmission line
KR101501531B1 Stereo-vision-based pedestrian detection system and method
CN115205651A Detection method for low-visibility road targets based on dual-modal fusion
CN109359518A Moving object recognition method and system for infrared video, and alarm device
CN113469980A Flange recognition method based on image processing

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180066474.4

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2021962758

Country of ref document: EP

Effective date: 20230510