WO2022236876A1 - Cellophane defect recognition method, system and apparatus, and storage medium - Google Patents

Cellophane defect recognition method, system and apparatus, and storage medium

Info

Publication number
WO2022236876A1
WO2022236876A1 · PCT/CN2021/095962 · CN2021095962W
Authority
WO
WIPO (PCT)
Prior art keywords
cellophane
defect
semantic segmentation
network
set data
Prior art date
Application number
PCT/CN2021/095962
Other languages
English (en)
Chinese (zh)
Inventor
刘宇迅
田丰
罗立浩
陈小旋
黄建
Original Assignee
广州广电运通金融电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州广电运通金融电子股份有限公司 filed Critical 广州广电运通金融电子股份有限公司
Publication of WO2022236876A1 publication Critical patent/WO2022236876A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • the invention relates to the technical field of cellophane detection, in particular to a cellophane defect identification method, system, device and storage medium.
  • the cellophane industry is an important contributor to economic development.
  • at present, the main method for detecting cellophane defects is manual visual inspection. Subtle defects are easily missed during manual inspection, and because cellophane is produced at high speed, naked-eye inspection cannot keep up with the production line; this makes detection very inefficient and seriously delays cellophane production.
  • one of the objects of the present invention is to provide a cellophane defect identification method, which improves the efficiency and accuracy of cellophane defect detection.
  • the second object of the present invention is to provide a cellophane defect identification system that implements the above method.
  • the third object of the present invention is to provide a cellophane defect identification device that implements the above method.
  • the fourth object of the present invention is to provide a storage medium for performing the above method.
  • a cellophane defect identification method comprising:
  • Step S1: collect cellophane surface images and establish test set data;
  • Step S2: import the test set data into a semantic segmentation network model based on the optimized UNET network for semantic segmentation;
  • Step S3: obtain the output signal of the semantic segmentation network model and post-process it to obtain the defect recognition result.
  • construction method of the semantic segmentation network includes:
  • the acquisition method of the tag data is:
  • import the surface image of the cellophane, select the corresponding defect area as required, and assign a label to the selected area to obtain the label data.
  • the UNET network uses a group convolution module to divide the input feature maps into groups and then convolves each group separately with a convolution module; one of the convolution modules uses a combined convolution of 1*1, 1*3 and 3*1 kernels.
  • the loss function is composed of a cross-entropy loss function and a dice loss measurement function.
  • the post-processing method is:
  • in step S1, an industrial camera is used to photograph the surface of the cellophane to obtain an image of the cellophane surface.
  • a cellophane defect identification system based on the UNET network including:
  • the collection module is used to collect cellophane surface images and establish test set data
  • the model analysis module is used to import the test set data into the constructed semantic segmentation network model for semantic segmentation
  • the post-processing module is used to obtain the output signal of the model analysis module, and perform post-processing on the output signal to obtain the defect identification result.
  • a cellophane defect identification device characterized in that it comprises:
  • a memory for storing the program
  • the processor is configured to load the program to execute the method for identifying cellophane defects as described above.
  • a storage medium stores a program, and when the program is executed by a processor, the above cellophane defect identification method is implemented.
  • the present invention uses a traditional image processing method combined with the semantic segmentation network as the basis for automatic classification, which effectively improves the robustness of cellophane defect recognition compared to relying solely on traditional image processing and analysis; at the same time, the semantic segmentation model uses the optimized UNET network, which can quickly detect cellophane defects, reduce the time required for actual detection and analysis, improve the efficiency and accuracy of cellophane defect detection, and reduce detection costs.
  • Fig. 1 is a schematic flow chart of the cellophane defect identification method of the present invention.
  • Fig. 2 is the overall flowchart of the testing and training steps of the cellophane defect recognition method of the present invention.
  • Fig. 3 is a structural diagram of the UNET model of the present invention.
  • Fig. 4 is a schematic diagram of group convolution in the present invention.
  • Fig. 5 is a schematic diagram of 1x1, 3x1, 1x3 combined convolution of the present invention.
  • Fig. 6 is a diagram of the defect prediction result of a solid-color paper roll according to the present invention.
  • Fig. 7 is a diagram of the defect prediction results of colored paper rolls of the present invention.
  • Fig. 8 is a schematic block diagram of modules of the cellophane defect identification system of the present invention.
  • This embodiment provides a cellophane defect identification method, which can replace the existing manual detection and improve the efficiency and accuracy of cellophane defect detection.
  • the cellophane defect identification method of this embodiment specifically includes the following steps:
  • Step S1: collect cellophane surface images and establish test set data;
  • Step S2: import the test set data into a semantic segmentation network model based on the optimized UNET network for semantic segmentation;
  • Step S3: obtain the output signal of the semantic segmentation network model and post-process it to obtain the defect recognition result.
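  • The three steps above can be sketched end to end as follows. This is an illustrative Python sketch only, not the patented implementation: the `segment` function stands in for the trained UNET model, and all function names and parameter values are invented for illustration.

```python
# Illustrative sketch of the three-step pipeline (S1-S3) described above.
import numpy as np

def collect_test_set(num_images=2, size=(8, 8), seed=0):
    """S1: stand-in for industrial-camera capture, using random grayscale images."""
    rng = np.random.default_rng(seed)
    return [rng.random(size) for _ in range(num_images)]

def segment(image):
    """S2: stand-in for the semantic segmentation model; returns a defect-probability map."""
    return image  # a trained UNET would map image -> per-pixel defect probability

def post_process(prob_map, threshold=0.9):
    """S3: binarize the output signal and report whether any defect pixels remain."""
    mask = prob_map > threshold
    return {"defective": bool(mask.any()), "defect_pixels": int(mask.sum())}

results = [post_process(segment(img)) for img in collect_test_set()]
print(results)
```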
  • in step S1, an industrial camera photographs the surface of the cellophane after manufacture to obtain surface images, and a data set of original images is established. The data set is divided into training set data and test set data: the training set consists of a number of images photographed from the cellophane surface and serves as the training basis of the semantic segmentation network model, from which a complete semantic segmentation network model is constructed; the test set is imported into the established semantic segmentation network model to detect whether defects are present in its images and to obtain the cellophane defect recognition result.
  • the lightweight UNET network is optimized on the basis of the existing UNET network;
  • the network structure of the existing UNET network includes two parts, an encoder and a decoder, wherein the encoder performs the down-sampling process and the decoder performs the up-sampling process;
  • the downsampling part consists of multiple Down Blocks, which function as feature extractors.
  • Each Down Block contains two convolutional layers and a pooling layer; the pooling layer uses the Max Pooling operation, and each time the feature map passes through a Down Block structure, its spatial size is halved.
  • the upsampling part of the existing UNET network is composed of Up Blocks. Each Up Block structure contains two convolutions and one upsampling layer, which restores the size of the feature map layer by layer.
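  • The halving and restoring of feature-map sizes described above can be traced with a small sketch (the depth and input size below are illustrative assumptions, not values from the patent):

```python
# Trace how feature-map spatial size evolves through the encoder
# (Down Blocks, halving) and decoder (Up Blocks, doubling) of a
# UNET-style network.
def trace_shapes(h, w, depth):
    shapes = [(h, w)]
    for _ in range(depth):            # each Down Block halves the spatial size
        h, w = h // 2, w // 2
        shapes.append((h, w))
    for _ in range(depth):            # each Up Block restores it layer by layer
        h, w = h * 2, w * 2
        shapes.append((h, w))
    return shapes

shapes = trace_shapes(256, 256, 4)
print(shapes)  # encoder: 256 -> 128 -> 64 -> 32 -> 16, decoder back up to 256
```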
  • Fig. 3 shows the structure of the lightweight UNET network of the present embodiment. As shown in Fig. 3, the lightweight UNET network of the present embodiment improves on the structure of the existing UNET network.
  • each Down Block structure in the existing UNET network uses two 3*3 convolutions to complete the down-sampling process; this embodiment improves the Down Block structure to one 3*3 convolution and one combined convolution, adopts the group convolution method (the combined convolution being composed of 1*1, 1*3 and 3*1 kernels), and compresses the number of down-sampling and up-sampling stages, thereby reducing the number of network parameters.
  • the cellophane image is processed by convolution kernels to obtain feature maps of the image. In group convolution, the input feature maps are divided into G groups and each group is convolved separately. Assume that the size of the input feature map is C*H*W and that the number of output feature maps is N:
  • the number of input feature maps for each group is C/G;
  • the number of output feature maps for each group is N/G;
  • the size of each convolution kernel is C/G*K*K;
  • the total number of convolution kernels is still N;
  • the number of convolution kernels in each group is N/G;
  • each convolution kernel convolves only the input maps of its own group;
  • the total parameter quantity of the convolution kernels is N*C/G*K*K, so the total parameter quantity is reduced to 1/G of the original.
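  • The 1/G reduction stated above can be checked with a short calculation (the channel counts below are illustrative, not values from the patent):

```python
# A standard K*K convolution from C input maps to N output maps has
# N*C*K*K weights; with G groups each kernel sees only C/G input maps,
# giving N*(C/G)*K*K weights - exactly 1/G of the original.
def conv_params(c, n, k, groups=1):
    assert c % groups == 0 and n % groups == 0
    return n * (c // groups) * k * k

C, N, K, G = 64, 128, 3, 4
standard = conv_params(C, N, K)
grouped = conv_params(C, N, K, groups=G)
print(standard, grouped, standard // grouped)
```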
  • the existing 3*3 convolution is decomposed into a combined convolution of 1*1, 1*3 and 3*1 kernels; the parameter quantity after decomposition is about 45% of that before decomposition, which greatly reduces the number of network parameters and makes the existing UNET network lightweight, thereby improving the efficiency of cellophane defect identification.
  • three more activation functions are used after decomposition, which increases the non-linear capability of the network.
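  • The kernel-level effect of this decomposition can be sketched as follows. With channel dimensions held fixed for simplicity, a 3*3 kernel has 9 weights per output position while the 1*1 + 1*3 + 3*1 combination has 1 + 3 + 3 = 7; the exact 45% figure stated above will additionally depend on how channel counts are split across the three stages, which the text does not spell out, so this sketch only illustrates the direction of the saving:

```python
# Per-output-position weight count of a single 3*3 kernel versus the
# 1*1 + 1*3 + 3*1 combined convolution mentioned above.
def kernel_weights(shapes):
    return sum(kh * kw for kh, kw in shapes)

single_3x3 = kernel_weights([(3, 3)])                # 9 weights
combined = kernel_weights([(1, 1), (1, 3), (3, 1)])  # 1 + 3 + 3 = 7 weights
print(single_3x3, combined)
```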
  • the training set data is imported into the optimized lightweight UNET network of the above structure, and the semantic segmentation network model can be constructed using the training set data.
  • the construction method of the semantic segmentation network of the present embodiment is as follows: import the training set data into the optimized UNET network. After the images in the training set are processed by the lightweight UNET network, semantically segmented images are obtained, in which all features of the cellophane surface can be seen, including the pattern on the cellophane and the defects on its surface. The output data of the UNET network and the label data are then passed through the loss function to calculate the loss value, and the network parameters are updated through backpropagation. If the model converges, that is, the loss value stabilizes at its lowest value, the model is tested and the latest network parameters are saved for subsequent testing; otherwise, the training set continues to be fed into the network, and the network parameters are not saved until the network converges. Finally, a trained semantic segmentation network model is obtained.
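  • The train-until-convergence loop described above can be sketched with a toy example. This is a hedged illustration only: a single scalar parameter and a squared-error loss stand in for the UNET and its cross-entropy/dice loss, and "convergence" is taken as the loss change dropping below a small tolerance.

```python
# Minimal sketch of the training loop: forward pass, loss, gradient
# update, stop when the loss has stabilized at its lowest value.
def train(target=3.0, lr=0.1, tol=1e-8, max_steps=1000):
    w, prev_loss = 0.0, float("inf")
    for step in range(max_steps):
        loss = (w - target) ** 2          # forward pass + loss value
        grad = 2 * (w - target)           # "backpropagation" for the toy loss
        w -= lr * grad                    # update the network parameter
        if abs(prev_loss - loss) < tol:   # loss stabilized: save and stop
            return w, step
        prev_loss = loss
    return w, max_steps

w, steps = train()
print(round(w, 4), steps)
```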
  • the label data is obtained by segmenting and labeling the defect positions in the training set data; that is, when the training set data is obtained, the defect areas of the pre-imported cellophane defect images are calibrated and each selected area is assigned a label. The above process is repeated until labeling is complete, and the corresponding labels are saved to obtain the label data.
  • the labelme tool can be used to perform image labeling, that is, import the cellophane image into the labelme tool, select a defective region in the cellophane image, and assign values to image pixels in the defective region to obtain label data.
  • the label data and the signal output by the lightweight UNET network are used to calculate the loss value through the loss function.
  • the loss function of this embodiment is a combination of the cross-entropy loss function and the dice loss metric function.
  • the cross-entropy is derived from the Kullback-Leibler (KL) divergence, which is a measure of the difference between two distributions.
  • since the distribution of the data is given by the training set, minimizing the KL divergence is equivalent to minimizing the cross-entropy.
  • Cross entropy is defined as: CE = -(1/N) Σ_i Σ_c y_(i,c) · log(p_(i,c))
  • where N is the number of samples, y_(i,c) is the binary indicator that is 1 if label c is the correct classification for pixel i and 0 otherwise, and p_(i,c) is the corresponding predicted probability.
  • the Dice loss metric function aims to minimize the area where the ground truth G and the predicted segmentation area S do not match, or to maximize the area where G and S overlap.
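  • A numpy sketch of the combined loss described above is given below. The equal weighting of the two terms is an assumption for illustration; the text does not specify how the cross-entropy and dice terms are weighted.

```python
# Combined loss sketch: pixel-wise binary cross-entropy plus a Dice term
# that maximizes the overlap between ground truth G and prediction S.
import numpy as np

def cross_entropy(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def dice_loss(p, y, eps=1e-7):
    inter = np.sum(p * y)                 # overlap between G and S
    return 1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def combined_loss(p, y):
    return cross_entropy(p, y) + dice_loss(p, y)

y = np.array([[0, 1], [1, 0]], dtype=float)   # ground-truth defect mask
perfect = combined_loss(y, y)                 # prediction matches exactly
poor = combined_loss(1 - y, y)                # prediction is fully wrong
print(perfect, poor)
```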
  • after obtaining the trained semantic segmentation network model through the above method, the trained model is used to perform semantic segmentation on the cellophane surface images in the test set, and traditional methods are then used to post-process the network output signal to determine whether defects are present.
  • the method of post-processing the network output signal is: set a fixed threshold to binarize the output signal, then perform a contour search and combine the contour size features to identify the defect positions on the cellophane surface, thereby judging whether the cellophane surface image is flawed.
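  • The post-processing step just described can be sketched as follows. A production system would typically use an OpenCV-style contour search; the pure-numpy flood fill below stands in for it, and the threshold and minimum-area values are illustrative assumptions.

```python
# Binarize the model output with a fixed threshold, then find connected
# regions and keep those whose size exceeds a minimum area, treating
# each surviving region as a defect.
import numpy as np

def find_defects(prob_map, threshold=0.5, min_area=2):
    mask = prob_map > threshold
    seen = np.zeros_like(mask, dtype=bool)
    defects = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, region = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:                      # flood-fill one region
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_area:       # size feature filters noise
                    defects.append(region)
    return defects

out = np.zeros((6, 6))
out[1:3, 1:4] = 0.9     # one 6-pixel defect
out[4, 5] = 0.9         # single-pixel noise, filtered by min_area
regions = find_defects(out)
print(len(regions), len(regions[0]))
```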
  • the method of this embodiment is used to identify defects on the surface of cellophane of different colors: whether the roll is a solid-color paper roll or a colored paper roll, the positions of defects on the cellophane surface can be accurately detected. The method improves the efficiency and accuracy of cellophane defect detection and reduces the detection cost.
  • This embodiment provides a cellophane defect recognition system based on the UNET network, which implements the cellophane defect recognition method described in Embodiment 1; as shown in Figure 8, the recognition system of this embodiment specifically includes the following modules:
  • the collection module is used to collect cellophane surface images and establish test set data
  • the model analysis module is used to import the test set data into the constructed semantic segmentation network model for semantic segmentation
  • the post-processing module is used to obtain the output signal of the model analysis module, and perform post-processing on the output signal to obtain the defect identification result.
  • This embodiment optimizes the existing UNET network, which can reduce the amount of network parameters, thereby achieving the effect of a lightweight UNET network, thereby increasing the image processing speed, quickly detecting cellophane defects, and significantly reducing the time required for actual detection and analysis;
  • the semantic segmentation network is used to replace the original manual detection method, which eliminates the subjectivity of manual detection and manual analysis, improves the efficiency and accuracy of cellophane defect detection, and reduces the detection cost; in addition, this embodiment uses the combination of traditional image processing methods and the semantic segmentation network as the basis for automatic classification. Compared with relying solely on traditional image processing and analysis, the accuracy of the detection method in this embodiment can reach 98%, and the robustness of cellophane defect recognition is effectively improved.
  • This embodiment discloses a cellophane defect identification device, including:
  • a memory for storing the program
  • the processor is configured to load the program to execute the cellophane defect identification method described in Embodiment 1.
  • This embodiment discloses a storage medium, which stores a program, and when the program is executed by a processor, the method for identifying cellophane defects is realized.
  • the device and storage medium in this embodiment are two aspects of the same inventive concept as the method in the previous embodiment.
  • since the implementation process of the method has been described in detail above, those skilled in the art can clearly understand the structure and implementation process of the device in this embodiment from the foregoing description; for the sake of brevity, details are not repeated here.
  • the above-mentioned embodiment is only a preferred embodiment of the present invention and cannot be used to limit the protection scope of the present invention. Any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope of protection claimed by the present invention.

Abstract

A cellophane defect recognition method, system and apparatus, and a storage medium are disclosed. The cellophane defect recognition method specifically comprises the following steps: step S1, collecting cellophane surface images to establish test set data; step S2, importing the test set data into a semantic segmentation network model based on an optimized UNET network for semantic segmentation; and step S3, acquiring an output signal of the semantic segmentation network model and post-processing the output signal to obtain a defect recognition result. According to the present invention, a traditional image processing method and a semantic segmentation network are combined as the basis for automatic classification, which effectively improves the robustness of cellophane defect recognition compared with traditional image processing and analysis alone; moreover, the semantic segmentation model uses the optimized UNET network, which makes it possible to detect cellophane defects quickly, reduce the time required for actual detection and analysis, improve the efficiency and accuracy of cellophane defect detection, and reduce detection costs.
PCT/CN2021/095962 2021-05-14 2021-05-26 Cellophane defect recognition method, system and apparatus, and storage medium WO2022236876A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110528838.8 2021-05-14
CN202110528838.8A CN113239930B (zh) 2021-05-14 2021-05-14 一种玻璃纸缺陷识别方法、系统、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2022236876A1 true WO2022236876A1 (fr) 2022-11-17

Family

ID=77134417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095962 WO2022236876A1 (fr) 2021-05-26 Cellophane defect recognition method, system and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN113239930B (fr)
WO (1) WO2022236876A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152807A (zh) * 2023-04-14 2023-05-23 广东工业大学 一种基于U-Net网络的工业缺陷语义分割方法及存储介质
CN116664846A (zh) * 2023-07-31 2023-08-29 华东交通大学 基于语义分割实现3d打印桥面施工质量监测方法及系统
CN116664586A (zh) * 2023-08-02 2023-08-29 长沙韶光芯材科技有限公司 一种基于多模态特征融合的玻璃缺陷检测方法及系统
CN116703834A (zh) * 2023-05-22 2023-09-05 浙江大学 基于机器视觉的烧结点火强度过高判断、分级方法及装置
CN117011300A (zh) * 2023-10-07 2023-11-07 山东特检科技有限公司 一种实例分割与二次分类相结合的微小缺陷检测方法
CN117237361A (zh) * 2023-11-15 2023-12-15 苏州拓坤光电科技有限公司 基于驻留时间算法的研磨控制方法及系统

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114782387A (zh) * 2022-04-29 2022-07-22 苏州威达智电子科技有限公司 一种表面缺陷检测系统

Citations (6)

Publication number Priority date Publication date Assignee Title
CN110555831A (zh) * 2019-08-29 2019-12-10 天津大学 一种基于深度学习的排水管道缺陷分割方法
CN110738660A (zh) * 2019-09-09 2020-01-31 五邑大学 基于改进U-net的脊椎CT图像分割方法及装置
CN111325713A (zh) * 2020-01-21 2020-06-23 浙江省北大信息技术高等研究院 基于神经网络的木材缺陷检测方法、系统及存储介质
CN111612789A (zh) * 2020-06-30 2020-09-01 征图新视(江苏)科技股份有限公司 一种基于改进的U-net网络的缺陷检测方法
CN112686261A (zh) * 2020-12-24 2021-04-20 广西慧云信息技术有限公司 一种基于改进U-Net的葡萄根系图像分割方法
CN112766110A (zh) * 2021-01-08 2021-05-07 重庆创通联智物联网有限公司 物体缺陷识别模型的训练方法、物体缺陷识别方法及装置

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
FR2793886B1 (fr) * 1999-05-19 2001-06-22 Arjo Wiggins Sa Substrat comportant un marquage magnetique, procede de fabrication dudit substrat et dispositif l'utilisant
DE102013109005A1 (de) * 2013-08-20 2015-02-26 Khs Gmbh Vorrichtung und Verfahren zur Identifikation von Codierungen unter Folie
CN111507343B (zh) * 2019-01-30 2021-05-18 广州市百果园信息技术有限公司 语义分割网络的训练及其图像处理方法、装置
CN110992317B (zh) * 2019-11-19 2023-09-22 佛山市南海区广工大数控装备协同创新研究院 一种基于语义分割的pcb板缺陷检测方法
CN110910368B (zh) * 2019-11-20 2022-05-13 佛山市南海区广工大数控装备协同创新研究院 基于语义分割的注射器缺陷检测方法
CN111127416A (zh) * 2019-12-19 2020-05-08 武汉珈鹰智能科技有限公司 基于计算机视觉的混凝土结构表面缺陷自动检测方法
CN111369550B (zh) * 2020-03-11 2022-09-30 创新奇智(成都)科技有限公司 图像配准与缺陷检测方法、模型、训练方法、装置及设备
CN111932501A (zh) * 2020-07-13 2020-11-13 太仓中科信息技术研究院 一种基于语义分割的密封圈表面缺陷检测方法
CN112215803B (zh) * 2020-09-15 2022-07-12 昆明理工大学 一种基于改进生成对抗网络的铝板电涡流检测图像缺陷分割方法
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field


Cited By (12)

Publication number Priority date Publication date Assignee Title
CN116152807A (zh) * 2023-04-14 2023-05-23 广东工业大学 一种基于U-Net网络的工业缺陷语义分割方法及存储介质
CN116152807B (zh) * 2023-04-14 2023-09-05 广东工业大学 一种基于U-Net网络的工业缺陷语义分割方法及存储介质
CN116703834A (zh) * 2023-05-22 2023-09-05 浙江大学 基于机器视觉的烧结点火强度过高判断、分级方法及装置
CN116703834B (zh) * 2023-05-22 2024-01-23 浙江大学 基于机器视觉的烧结点火强度过高判断、分级方法及装置
CN116664846A (zh) * 2023-07-31 2023-08-29 华东交通大学 基于语义分割实现3d打印桥面施工质量监测方法及系统
CN116664846B (zh) * 2023-07-31 2023-10-13 华东交通大学 基于语义分割实现3d打印桥面施工质量监测方法及系统
CN116664586A (zh) * 2023-08-02 2023-08-29 长沙韶光芯材科技有限公司 一种基于多模态特征融合的玻璃缺陷检测方法及系统
CN116664586B (zh) * 2023-08-02 2023-10-03 长沙韶光芯材科技有限公司 一种基于多模态特征融合的玻璃缺陷检测方法及系统
CN117011300A (zh) * 2023-10-07 2023-11-07 山东特检科技有限公司 一种实例分割与二次分类相结合的微小缺陷检测方法
CN117011300B (zh) * 2023-10-07 2023-12-12 山东特检科技有限公司 一种实例分割与二次分类相结合的微小缺陷检测方法
CN117237361A (zh) * 2023-11-15 2023-12-15 苏州拓坤光电科技有限公司 基于驻留时间算法的研磨控制方法及系统
CN117237361B (zh) * 2023-11-15 2024-02-02 苏州拓坤光电科技有限公司 基于驻留时间算法的研磨控制方法及系统

Also Published As

Publication number Publication date
CN113239930A (zh) 2021-08-10
CN113239930B (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
WO2022236876A1 (fr) Cellophane defect recognition method, system and apparatus, and storage medium
CN108562589B (zh) 一种对磁路材料表面缺陷进行检测的方法
CN108520274B (zh) 基于图像处理及神经网络分类的高反光表面缺陷检测方法
CN113643268B (zh) 基于深度学习的工业制品缺陷质检方法、装置及存储介质
CN111815564B (zh) 一种检测丝锭的方法、装置及丝锭分拣系统
CN112070727B (zh) 一种基于机器学习的金属表面缺陷检测方法
CN114581782B (zh) 一种基于由粗到精检测策略的细微缺陷检测方法
CN112819748B (zh) 一种带钢表面缺陷识别模型的训练方法及装置
CN113222982A (zh) 基于改进的yolo网络的晶圆表面缺陷检测方法及系统
CN113177924A (zh) 一种工业流水线产品瑕疵检测方法
CN112132196A (zh) 一种结合深度学习和图像处理的烟盒缺陷识别方法
CN111445471A (zh) 基于深度学习和机器视觉的产品表面缺陷检测方法及装置
CN115272204A (zh) 一种基于机器视觉的轴承表面划痕检测方法
CN115829995A (zh) 基于像素级的多尺度特征融合的布匹瑕疵检测方法及系统
CN116012291A (zh) 工业零件图像缺陷检测方法及系统、电子设备和存储介质
CN114331961A (zh) 用于对象的缺陷检测的方法
CN112884741B (zh) 一种基于图像相似性对比的印刷表观缺陷检测方法
CN111161228B (zh) 一种基于迁移学习的纽扣表面缺陷检测方法
CN117214178A (zh) 一种用于包装生产线上包装件外观缺陷智能识别方法
CN110136098B (zh) 一种基于深度学习的线缆顺序检测方法
CN115601357B (zh) 一种基于小样本的冲压件表面缺陷检测方法
CN115661126A (zh) 一种基于改进YOLOv5算法的带钢表面缺陷检测方法
CN112200762A (zh) 二极管玻壳缺陷检测方法
CN114898148B (zh) 一种基于深度学习的鸡蛋污损检测方法及系统
CN117078608B (zh) 一种基于双掩码引导的高反光皮革表面缺陷检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21941445

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE