WO2022252348A1 - Film scratch defect detection method based on rotated targets and an improved attention mechanism - Google Patents
Film scratch defect detection method based on rotated targets and an improved attention mechanism
- Publication number: WO2022252348A1
- Application number: PCT/CN2021/105737
- Authority: WO (WIPO (PCT))
- Prior art keywords: film, centernet, detection, loss, target
- Prior art date: 2021-06-01
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/94: Dynamic range modification of images based on local image properties, e.g. for local contrast enhancement
- G06T2207/10004: Image acquisition modality; still image, photographic image
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/20221: Image fusion; image merging
- G06T2207/30108: Industrial image inspection
- Y02P90/30: Computing systems specially adapted for manufacturing
Abstract
The invention discloses a film scratch defect detection method based on rotated targets and an improved attention mechanism. The method first reads a film image and then feeds the image into a network model for inference; the network structure infers the defects in the film image and marks them. By improving CenterNet for small-target and rotated-target detection, improving the backbone network for detecting small targets, adding an attention mechanism to the network structure, and adding an angle-prediction branch for defect targets, the invention raises the accuracy of defect detection, in particular for scratch-type targets, without affecting the real-time detection speed. Used for industrial film defect inspection, the invention can effectively improve the quality of film products, requires no manual inspection, saves labor and time costs, and meets the requirements of modern production.
Description
The invention belongs to the field of image processing and pattern recognition in computer vision, and relates to a film scratch defect detection method based on deep learning.
As the demand for film keeps growing, the film manufacturing industry has been developing rapidly, and film producers have adopted wider webs and faster production lines to raise production efficiency. At the same time, the modern film industry imposes increasingly strict requirements on film quality, and more and more enterprises pay attention to quality control during film manufacturing. Owing to the manufacturing technology and environment, various defects may appear on the film surface. Scratches are among the most common defects; they affect the appearance and quality of the film and cause unnecessary problems for producers. Because scratches are usually fine, irregular in shape and inclined at varying angles, they are very difficult to detect, are easily missed, and their size is hard to measure. Most algorithms currently in use are traditional image-processing methods and do not take into account that scratches usually have a certain rotation angle, so their detection accuracy is low. To date, no technical solution has applied rotated-target detection to film defect detection.
In recent years, with the widespread application of deep learning in computer vision and the rapid development of GPUs, attention has increasingly shifted to deep learning, which has been widely applied in many computer-vision tasks and has become the current mainstream approach. The invention considers that scratch defects on film are usually irregular in shape, highly uncertain in location, rotated by a certain angle, and relatively complex in their features; a rotated-target detection method based on deep learning is therefore used for film scratch defect detection. It offers higher speed, higher detection accuracy and more precise localization, and can meet the requirements of industrial film defect inspection.
Summary of the Invention
The invention provides a film scratch defect detection method based on rotated targets and an improved attention mechanism. The method optimizes the network structure of CenterNet: it removes the first down-sampling, performs cross-layer fusion on the backbone network to strengthen the extraction of small-target features and enhance detail information, and adds a rotation-angle branch that detects the angle of the target, effectively improving the accuracy of film scratch defect detection.
The invention comprises the following steps:
Step 1. Use an industrial camera to collect film images, manually annotate the film defects, and obtain a film data set;
Step 2. Train the CenterNet network on the COCO large-scale object detection data set to obtain a CenterNet pre-trained network model;
Step 3. Modify the structure of the CenterNet pre-trained network model. Remove the first down-sampling layer in CenterNet's backbone network ResNet50 to preserve the low-level detail information of the image. Use cross-layer fusion: the output pixels of the third module (Layer3) and the fourth module (Layer4) of the ResNet50 backbone are similarity-weighted to enhance the output of the third module (Layer3); a channel-wise attention operation is then applied to the third module (Layer3), and the result is weighted with the output of the first module (Layer1) for enhancement. Replace the up-sampling layers after CenterNet's backbone with sub-pixel convolution to remove deconvolution artifacts. To estimate the scratch direction and locate defects precisely, add a rotation-angle branch to the backbone output to detect the angle of the target;
Step 4. On the film data set, feed the film data into the modified CenterNet network and retrain the modified network model. Heatmap prediction uses Focal Loss, while width-height, center-point offset and angle prediction all use L1 loss. The losses are fused by assigning different weights:

Loss = L_hm + λ_size·L_size + λ_off·L_off + λ_ang·L_ang

where L_hm is the heatmap loss, L_size the width-height loss, L_off the center-point offset loss, L_ang the angle prediction loss, λ_size the width-height loss weight, λ_off the center-point offset loss weight and λ_ang the angle prediction loss weight; all weights are set to 0.1. Training yields a new network model, namely the target network model;
Step 5. Load the target network model into the film real-time detection system, and feed the real-time film data collected by the camera into the system for film scratch defect detection.
The beneficial effects of the technical solution provided by the invention are as follows. The invention mainly addresses the scarcity of film scratch defect data and the fact that the targets are small with inconspicuous features, which greatly increases the difficulty of recognition and detection with deep-learning methods. The optimized structure proposed by the invention modifies the backbone part of the CenterNet network: one down-sampling operation is removed from the CenterNet backbone so that it is more sensitive to detail features; an attention module weights the features that deserve particular attention; and cross-layer features are fused, so that the features extracted by the network are more complete and richer, their expressiveness is increased, and the network is better suited to the film scratch defect detection proposed by the invention. Adding a rotation-angle prediction branch for scratch defects makes the detection of targets with a large aspect ratio converge better and improves film scratch detection.
Fig. 1 is a schematic diagram of the defect detection process of the invention;
Fig. 2 is a schematic diagram of the improved detection network of the invention.
To describe the invention more concretely, the technical solution of the invention is described in detail below with reference to the drawings and specific embodiments.
The invention provides a film defect detection method based on rotated targets and an improved attention mechanism. The workflow of the film detection system is shown in Fig. 1, and its steps are as follows:
(1) The film detection system reads film images in real time;
(2) The film image is fed into the network model for forward inference;
(3) The network judges whether there is a defect in the film image; if there is a defect, go to step (4), otherwise go to step (5);
(4) The detection system marks the defects and indicates that the image contains defects;
(5) If there are still unread images, return to step (1); otherwise end this detection run.
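As an illustration of this workflow, the following is a minimal sketch of the real-time detection loop; the camera interface, the model output format and the confidence threshold are assumptions and not part of the patent.

```python
import torch

def run_film_inspection(model_path, camera, score_thresh=0.3):
    """Minimal sketch of the workflow in Fig. 1, steps (1)-(5).

    `camera.read_frame()` is assumed to return a preprocessed image tensor of
    shape (1, 3, H, W), or None once no unread images remain; `model_path`,
    the detection output format and the 0.3 threshold are likewise assumptions.
    """
    model = torch.load(model_path, map_location="cpu")  # target network model from step 4
    model.eval()

    while True:
        frame = camera.read_frame()      # (1) read a film image in real time
        if frame is None:                # (5) no unread images left: end detection
            break
        with torch.no_grad():
            detections = model(frame)    # (2) forward inference
        defects = [d for d in detections if d["score"] > score_thresh]
        if defects:                      # (3) is there a defect?
            for d in defects:            # (4) mark defects and report them
                print("defect at", d["box"], "angle", d["angle"], "score", d["score"])
```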
The network model is obtained as follows:
Step 1. Use an industrial camera to collect film images, manually annotate the film defects, and obtain a film data set;
Step 2. Train the CenterNet network on the COCO large-scale object detection data set to obtain a CenterNet pre-trained network model;
Step 3. Modify the structure of the CenterNet pre-trained network model, as shown in Fig. 2. Remove the first down-sampling layer in CenterNet's backbone network ResNet50 to preserve the low-level detail information of the image. Perform cross-layer fusion on the output pixels of Layer3 and Layer4 of the ResNet50 backbone, using similarity weighting to enhance the Layer3 output; this integrates global information and highlights the regions of the feature map that deserve attention, while adding little computation and maintaining the detection speed. Then apply a channel-wise attention operation to the fused Layer3 result and perform weighted enhancement with Layer1. Replace the up-sampling layers after CenterNet's backbone with sub-pixel convolution, which keeps the detail information of the data free from interference and removes deconvolution artifacts. Add a rotation-angle branch to the backbone output to detect the angle of the target;
Step 4. On the film data set, feed the film data into the modified CenterNet network and retrain the modified network model. Heatmap prediction uses Focal Loss, while width-height, center-point offset and angle prediction all use L1 Loss. The losses are fused by assigning different weights:

Loss = L_hm + λ_size·L_size + λ_off·L_off + λ_ang·L_ang

where L_hm is the heatmap loss, L_size the width-height loss, L_off the center-point offset loss, L_ang the angle prediction loss, λ_size the width-height loss weight, λ_off the center-point offset loss weight and λ_ang the angle prediction loss weight; all weights are set to 0.1. Training yields a new network model, namely the target network model.
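For clarity, the following is a minimal sketch of the weighted loss fusion described in step 4 (Focal Loss on the heatmap, L1 loss for size, offset and angle, all weights 0.1). The penalty-reduced focal-loss form with α=2, β=4, the tensor layouts and the omission of center-point masking are assumptions, not details given in the source.

```python
import torch
import torch.nn.functional as F

def heatmap_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Penalty-reduced pixel-wise focal loss on the heatmap (L_hm).
    pred, gt: (B, C, H, W); gt is assumed to be a Gaussian ground-truth heatmap."""
    pred = pred.clamp(eps, 1 - eps)
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = -((1 - pred) ** alpha) * torch.log(pred) * pos
    neg_loss = -((1 - gt) ** beta) * (pred ** alpha) * torch.log(1 - pred) * neg
    num_pos = pos.sum().clamp(min=1)
    return (pos_loss.sum() + neg_loss.sum()) / num_pos

def total_loss(out, target, lam_size=0.1, lam_off=0.1, lam_ang=0.1):
    """Loss = L_hm + lam_size*L_size + lam_off*L_off + lam_ang*L_ang, all lambdas 0.1.
    For brevity the L1 terms are taken over whole maps; in practice they would be
    restricted to the annotated object centers."""
    l_hm = heatmap_focal_loss(out["heatmap"], target["heatmap"])
    l_size = F.l1_loss(out["size"], target["size"])      # width-height loss
    l_off = F.l1_loss(out["offset"], target["offset"])   # center-point offset loss
    l_ang = F.l1_loss(out["angle"], target["angle"])     # angle prediction loss
    return l_hm + lam_size * l_size + lam_off * l_off + lam_ang * l_ang
```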
The structure of the CenterNet network model is improved, and the features in the improved network structure are processed according to the following steps:
(1) The input image X first passes through CenterNet's backbone. The backbone ResNet50 is mainly divided into four Layer modules that extract the features of the image. The improved part removes the residual shortcut edge in Layer2 and changes the convolution stride to 1, so that no down-sampling is performed in the Layer2 module. After passing through the four Layer feature-extraction modules, the image X yields the pixel matrices X1, X2, X3 and X4 in sequence.
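A minimal sketch of this backbone change, assuming a torchvision-style ResNet50: the stride in Layer2 is set to 1 so that no down-sampling occurs there. How the removed residual edge maps onto this implementation is an assumption; the 1×1 projection shortcut is kept here only because torchvision needs it to match channel counts.

```python
import torchvision

def layer2_without_downsampling(resnet50):
    """Sketch: keep full resolution through Layer2 (stride 2 -> 1)."""
    block = resnet50.layer2[0]               # first Bottleneck of Layer2
    block.conv2.stride = (1, 1)              # convolution stride changed to 1
    if block.downsample is not None:
        block.downsample[0].stride = (1, 1)  # keep the shortcut spatially in sync
    return resnet50

backbone = layer2_without_downsampling(torchvision.models.resnet50())
```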
(2) Down-sample the pixel matrix X3 with a 3×3 convolution to obtain the pixel matrix X3', so that X3' and the pixel matrix X4 have the same feature size. Compute the similarity between X3' and X4 to obtain weights, normalize the weights with a softmax function, and take the weighted sum of the weights and X3 to obtain X'. Down-sample the pixel matrix X1 with a 5×5 convolution to obtain X1'. Apply a channel attention operation on X' and weight it with X1' to obtain the final output X'' of the backbone, where f_1×1(·) denotes a 1×1 convolution operation.
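The following is a minimal sketch of the fusion in (2). The text leaves several details open, so the choices below are assumptions: the similarity is a channel-wise dot product, the softmax runs over spatial positions, the channel attention is a squeeze-and-excitation-style gate, and the channel counts (256/1024/2048) follow ResNet50.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerFusion(nn.Module):
    """Similarity-weighted fusion of the Layer3/Layer4 outputs, channel attention
    on the result, and weighting with a down-sampled Layer1 output (step (2))."""

    def __init__(self, c1=256, c3=1024, c4=2048):
        super().__init__()
        self.down3 = nn.Conv2d(c3, c4, kernel_size=3, stride=2, padding=1)  # X3 -> X3'
        self.down1 = nn.Conv2d(c1, c3, kernel_size=5, stride=2, padding=2)  # X1 -> X1'
        self.se = nn.Sequential(                                            # channel attention on X'
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c3, c3 // 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3 // 16, c3, 1), nn.Sigmoid(),
        )

    def forward(self, x1, x3, x4):
        x3p = self.down3(x3)                                    # match the X4 feature size
        sim = (x3p * x4).sum(dim=1, keepdim=True)               # per-position similarity
        w = torch.softmax(sim.flatten(2), dim=-1).view_as(sim)  # normalised weights
        w = F.interpolate(w, size=x3.shape[2:], mode="bilinear", align_corners=False)
        x_enh = x3 + w * x3                                     # weighted enhancement of X3 -> X'
        x1p = F.interpolate(self.down1(x1), size=x3.shape[2:],
                            mode="bilinear", align_corners=False)
        return self.se(x_enh) * x_enh + x1p                     # X'' = attention(X') weighted with X1'
```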
(3) Use sub-pixel convolution instead of deconvolution for up-sampling. The feature maps are processed by two convolution layers to obtain r² feature channels for each output channel, where r is the up-sampling factor; the low-resolution features in the r² channels of each pixel are periodically rearranged into an r×r region to obtain a high-resolution image:

I = PS(f(X''))

where PS is the periodic pixel rearrangement that reshapes H×W×C·r² into rH×rW×C.
(4) The result up-sampled by the sub-pixel convolution is fed into four branches, each using a 3×3 convolution and a 1×1 convolution for prediction, to predict the heatmap, the width-height size, the center-point offset and the rotation angle of the target.
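A sketch of the four prediction branches in (4), each a 3×3 convolution followed by a 1×1 convolution. The intermediate channel width, the number of classes and the sigmoid on the heatmap are assumptions.

```python
import torch.nn as nn

def make_head(in_ch, out_ch, mid_ch=64):
    """One prediction branch: 3x3 convolution followed by 1x1 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1),
    )

class DetectionHeads(nn.Module):
    """Four branches on the up-sampled feature map: heatmap, width-height size,
    center-point offset and rotation angle."""

    def __init__(self, in_ch=64, num_classes=1):
        super().__init__()
        self.heatmap = make_head(in_ch, num_classes)
        self.size = make_head(in_ch, 2)    # width and height
        self.offset = make_head(in_ch, 2)  # center-point offset
        self.angle = make_head(in_ch, 1)   # rotation-angle branch

    def forward(self, feat):
        return {
            "heatmap": self.heatmap(feat).sigmoid(),
            "size": self.size(feat),
            "offset": self.offset(feat),
            "angle": self.angle(feat),
        }
```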
Claims (3)
- A film scratch defect detection method based on rotated targets and an improved attention mechanism, characterized in that the method comprises the following steps: Step 1, use an industrial camera to collect film images, manually annotate the film defects, and obtain a film data set; Step 2, train the CenterNet network on the COCO large-scale object detection data set to obtain a CenterNet pre-trained network model; Step 3, modify the structure of the CenterNet pre-trained network model, specifically: remove the first down-sampling layer in CenterNet's backbone network ResNet50 to preserve the low-level detail information of the image; use cross-layer fusion in which the output pixels of the third module Layer3 and the fourth module Layer4 of the backbone ResNet50 are similarity-weighted to enhance the output of the third module Layer3, then apply a channel-wise attention operation to the third module Layer3, and then perform weighted enhancement with the output of the first module Layer1; replace the up-sampling layers after CenterNet's backbone with sub-pixel convolution; add a rotation-angle branch to the output of CenterNet's backbone for detecting the angle of the target; Step 4, on the film data set, feed the film data into the modified CenterNet network model and retrain the modified network model to obtain the target network model; Step 5, load the target network model into the film real-time detection system, and feed the real-time film data collected by the camera into the system for film scratch defect detection.
- The film scratch defect detection method based on rotated targets and an improved attention mechanism according to claim 1, characterized in that: in step 3, sub-pixel convolution is used instead of deconvolution for up-sampling; the feature maps are processed by two convolution layers to obtain r² feature channels for each output channel, and the low-resolution features in the r² channels of each pixel are periodically rearranged into an r×r region to obtain a high-resolution image, where r is the up-sampling factor.
- The film defect detection method based on an improved attention mechanism according to claim 1, characterized in that: in step 4, heatmap prediction uses Focal Loss, while width-height, center-point offset and angle prediction all use L1 Loss; the resulting heatmap loss, center-point offset loss, width-height loss and angle prediction loss are assigned different weights and fused by weighting.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/806,719 US11619593B2 (en) | 2021-06-01 | 2022-06-13 | Methods and systems for detecting a defect of a film |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110609602.7A CN113284123B (zh) | 2021-06-01 | 2021-06-01 | Film scratch defect detection method based on rotated targets and an improved attention mechanism |
CN202110609602.7 | 2021-06-01 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/806,719 Continuation US11619593B2 (en) | 2021-06-01 | 2022-06-13 | Methods and systems for detecting a defect of a film |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022252348A1 (zh) | 2022-12-08 |
Family
ID=77282952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/105737 WO2022252348A1 (zh) | 2021-07-12 | Film scratch defect detection method based on rotated targets and an improved attention mechanism |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113284123B (zh) |
WO (1) | WO2022252348A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114066825B (zh) * | 2021-10-29 | 2024-05-28 | 浙江工商大学 | Improved defect detection method for complex-texture images based on deep learning |
CN117132870B (zh) * | 2023-10-25 | 2024-01-26 | 西南石油大学 | Wing icing detection method fusing CenterNet with hybrid attention |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004354251A (ja) * | 2003-05-29 | 2004-12-16 | Nidek Co Ltd | Defect inspection apparatus |
CN112132828A (zh) * | 2020-10-15 | 2020-12-25 | 浙江工商大学 | Film defect detection method based on deep learning |
CN112233090A (zh) * | 2020-10-15 | 2021-01-15 | 浙江工商大学 | Film defect detection method based on an improved attention mechanism |
CN112614101A (zh) * | 2020-12-17 | 2021-04-06 | 广东道氏技术股份有限公司 | Polished tile defect detection method based on multi-layer feature extraction, and related device |
CN112861803A (zh) * | 2021-03-16 | 2021-05-28 | 厦门博海中天信息科技有限公司 | Image recognition method, apparatus, server and computer-readable storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016208606A1 (ja) * | 2015-06-25 | 2016-12-29 | Jfeスチール株式会社 | Surface defect detection device, surface defect detection method, and steel material manufacturing method |
JP7159624B2 (ja) * | 2018-06-04 | 2022-10-25 | 日本製鉄株式会社 | Surface texture inspection method and surface texture inspection apparatus |
CN109191511B (zh) * | 2018-07-27 | 2021-04-13 | 杭州电子科技大学 | Binocular stereo matching method based on a convolutional neural network |
CN110348282B (zh) * | 2019-04-30 | 2022-06-07 | 贵州大学 | Method and device for person re-identification |
CN110175993A (zh) * | 2019-05-27 | 2019-08-27 | 西安交通大学医学院第一附属医院 | FPN-based Faster R-CNN pulmonary tuberculosis sign detection system and method |
CN111402254B (zh) * | 2020-04-03 | 2023-05-16 | 杭州华卓信息科技有限公司 | High-performance automatic detection method and device for pulmonary nodules in CT images |
CN111524135B (zh) * | 2020-05-11 | 2023-12-26 | 安徽继远软件有限公司 | Detection method and system for small fitting defects on transmission lines based on image enhancement |
- 2021-06-01: CN application CN202110609602.7A, patent CN113284123B (zh), status: Active
- 2021-07-12: WO application PCT/CN2021/105737, publication WO2022252348A1 (zh), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN113284123A (zh) | 2021-08-20 |
CN113284123B (zh) | 2022-07-12 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21943705; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21943705; Country of ref document: EP; Kind code of ref document: A1 |